CN108022279B - Video special effect adding method and device and intelligent mobile terminal - Google Patents


Info

Publication number
CN108022279B
CN108022279B (application CN201711242163.0A)
Authority
CN
China
Prior art keywords
video
special effect
frame
editing
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711242163.0A
Other languages
Chinese (zh)
Other versions
CN108022279A (en)
Inventor
周宇涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bigo Technology Pte Ltd
Original Assignee
Guangzhou Baiguoyuan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Baiguoyuan Information Technology Co Ltd filed Critical Guangzhou Baiguoyuan Information Technology Co Ltd
Priority to CN201711242163.0A priority Critical patent/CN108022279B/en
Publication of CN108022279A publication Critical patent/CN108022279A/en
Priority to PCT/CN2018/118370 priority patent/WO2019105438A1/en
Application granted granted Critical
Publication of CN108022279B publication Critical patent/CN108022279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiment of the invention discloses a video special effect adding method and device and an intelligent mobile terminal. The method comprises the following steps: acquiring a click instruction or a sliding instruction of a user in a video editing state; acquiring, according to the click instruction or the sliding instruction, a position coordinate specified by the user in an editing frame picture, and taking the position coordinate as the drop point coordinate of a key frame picture in the special effect animation; and synthesizing the editing video and the special effect animation so that the key frame picture covers the position coordinate specified by the user in the editing frame picture. During dual-video editing and synthesis, a key frame picture is designated in the preset special effect animation; the drop point position of this picture determines where the whole special effect animation is superimposed on the editing video during synthesis, so the user can freely set the on-screen position of the special effect animation in the editing video by setting the drop point coordinate.

Description

Video special effect adding method and device and intelligent mobile terminal
Technical Field
The embodiment of the invention relates to the field of live broadcast, in particular to a video special effect adding method and device and an intelligent mobile terminal.
Background
Video editing traditionally refers to the process of first shooting the desired footage with a camera and then producing a finished work from it with video editing software on a computer. However, as the processing capability of intelligent mobile terminals has improved, instant video editing has become a development demand, and editing shot short videos directly on the intelligent mobile terminal has become a new requirement.
In the prior art, video editing on an intelligent mobile terminal is still limited to simpler operations, such as cutting and splicing a video, or changing the look of a video by adjusting its color and brightness or adding a color mask over it.
The inventor found in research that prior-art video editing on an intelligent mobile terminal can only realize simple splicing and color adjustment, and video splicing merely places several videos in sequence on the same timeline for playback. Therefore, when using the editing function, the user's operation space is limited, and the user cannot edit with a high degree of freedom according to his own needs. The editing function of the intelligent mobile terminal thus offers a poor user experience and is difficult to popularize.
Disclosure of Invention
The embodiment of the invention provides a high-freedom video editing method and device, and an intelligent mobile terminal, capable of determining the drop point coordinate of a key frame picture in a special effect animation according to a user instruction.
In order to solve the above technical problem, the embodiment of the present invention adopts a technical solution that: a video special effect adding method is provided, which comprises the following steps:
acquiring a click instruction or a sliding instruction of a user in a video editing state;
acquiring, according to the click instruction or the sliding instruction, a position coordinate specified by the user in an editing frame picture, and taking the position coordinate as the drop point coordinate of a key frame picture in the special effect animation;
and synthesizing the editing video and the special effect animation so that the key frame picture covers the position coordinate specified by the user in the editing frame picture.
Optionally, generating an anchor point for calibrating the drop point coordinate in the editing frame picture in the video editing state;
before the step of acquiring the click instruction or the sliding instruction of the user in the video editing state, the method further comprises the following steps:
acquiring a first click instruction of the user, and calculating the coordinate of the first click instruction;
checking whether the coordinate of the first click instruction falls within the area of the anchor point coordinate;
and when the coordinate of the first click instruction falls within the area of the anchor point coordinate, updating the position of the anchor point along with the sliding instruction of the user so as to update the drop point coordinate.
Optionally, the editing area in the video editing state includes: a first editing region and a frame progress bar; the first editing area displays a frame picture image represented by the edited video at the stop moment of the frame progress bar;
before the step of obtaining the click command or the slide command of the user in the video editing state, the method further comprises the following steps:
acquiring a click or sliding instruction of a user within a frame progress bar range;
determining the stopping time of the frame progress bar according to the clicking or sliding instruction in the range of the frame progress bar;
and retrieving the frame picture image represented by the frame progress bar stop time as the editing frame picture.
Optionally, a sliding bar marking the special effect animation duration is arranged on the frame progress bar, and an instruction bar representing the position of the key frame picture is arranged on the sliding bar;
before the step of obtaining the click command or the slide command of the user in the video editing state, the method further comprises the following steps:
acquiring a sliding instruction applied by the user within the range of the sliding bar, so that the sliding bar slides along the frame progress bar with the sliding instruction;
determining the stop time of the frame progress bar pointed to by the instruction bar according to the sliding instruction applied by the user within the range of the sliding bar;
and the first editing area displays a frame picture image represented by the frame progress bar stop time pointed by the instruction bar.
Optionally, after the step of acquiring, according to the click instruction or the sliding instruction, the position coordinate specified by the user in the editing frame picture and taking the position coordinate as the drop point coordinate of a key frame picture in the special effect animation, the method further comprises the following steps:
respectively placing the edited video and the special effect animation on two parallel time tracks;
and when the edited video is played to the start time of the special effect animation, playing the special effect animation synchronously and displaying it on the upper layer of the edited video.
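The two parallel time tracks described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and variable names are assumed. The special effect track contributes a frame only once playback reaches its start time, and that frame is drawn on the layer above the video frame.

```python
# Sketch of two parallel time tracks: the edited video plays continuously,
# while the special effect animation becomes active at effect_start seconds
# and overlays the video frame for its own duration.

def frame_at(t, video_frames, effect_frames, effect_start, fps=30):
    """Return (video_frame, effect_frame_or_None) for playback time t (s)."""
    vi = int(t * fps)
    video_frame = video_frames[min(vi, len(video_frames) - 1)]
    ei = int((t - effect_start) * fps)   # index into the effect track
    if 0 <= ei < len(effect_frames):
        return video_frame, effect_frames[ei]  # effect overlays the video
    return video_frame, None                   # effect not active at t
```

Because the tracks are separate, the effect can be previewed (or later removed) without re-encoding the underlying video.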
Optionally, after the step of synchronously playing the special effect animation and displaying it on the upper layer of the edited video when the edited video is played to the start time of the special effect animation, the method further comprises the following steps:
acquiring a revocation instruction of the user;
and deleting the temporarily stored special effect animation in a stack (last-in, first-out) manner.
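Deleting temporarily stored effects in last-in-first-out order is exactly a stack. A minimal sketch (class and method names are illustrative, not from the patent):

```python
# Hedged sketch: each added special effect animation is pushed onto a stack;
# a revocation instruction pops the most recently added one.

class EffectEditSession:
    def __init__(self):
        self._applied = []            # stack of temporarily stored effects

    def add_effect(self, effect):
        self._applied.append(effect)  # push

    def undo(self):
        """Remove and return the most recently added effect, if any."""
        if self._applied:
            return self._applied.pop()
        return None                   # nothing left to revoke
```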
Optionally, after the step of acquiring, according to the click instruction or the sliding instruction, the position coordinate specified by the user in the editing frame picture and taking the position coordinate as the drop point coordinate of a key frame picture in the special effect animation, the method further comprises the following steps:
acquiring preset position relationship information between each frame picture in the special effect animation and the key frame picture;
calculating the overlay coordinate of each frame picture in the special effect animation according to the drop point coordinate and the position relationship information;
and determining the overlay position of each frame picture in the special effect animation according to the overlay coordinates.
To solve the foregoing technical problem, an embodiment of the present invention further provides a video special effect adding apparatus, including:
the acquisition module is used for acquiring a click instruction or a sliding instruction of a user in a video editing state;
the processing module is used for acquiring, according to the click instruction or the sliding instruction, a position coordinate specified by the user in an editing frame picture, and taking the position coordinate as the drop point coordinate of a key frame picture in the special effect animation;
and the synthesis module is used for synthesizing the edited video and the special effect animation so as to enable the key frame picture to cover the position coordinate designated by the user in the edited frame picture.
Optionally, generating an anchor point for calibrating the drop point coordinate in the editing frame picture in the video editing state;
the video special effect adding device further comprises:
the first acquisition submodule is used for acquiring a first click instruction of a user and calculating the coordinate of the first click instruction;
the first comparison sub-module is used for comparing whether the coordinate of the first click command is in the area of the anchor point coordinate;
and the first updating submodule is used for updating the position of the anchor point along with the sliding instruction of the user when the coordinate of the first click instruction is in the area of the anchor point coordinate so as to update the drop point coordinate.
Optionally, the editing area in the video editing state includes: a first editing region and a frame progress bar; the first editing area displays a frame picture image represented by the edited video at the stop moment of the frame progress bar;
the video special effect adding device further comprises:
the second obtaining submodule is used for obtaining a click or sliding instruction of a user in the range of the frame progress bar;
the first calculation submodule is used for determining the stopping time of the frame progress bar according to the clicking or sliding instruction in the range of the frame progress bar;
and the first calling submodule is used for calling the frame picture image represented by the stop moment of the frame progress bar as the editing frame picture.
Optionally, a sliding bar marking the special effect animation duration is arranged on the frame progress bar, and an instruction bar representing the position of the key frame picture is arranged on the sliding bar;
the video special effect adding device further comprises:
the third obtaining submodule is used for obtaining a sliding instruction acted by a user in the range of the sliding bar so as to enable the sliding bar to slide along the frame progress bar along with the sliding instruction;
the second calculation submodule is used for determining the stop time of the frame progress bar pointed by the instruction bar according to the sliding instruction acted by the user in the range of the sliding bar;
and the first display sub-module is used for displaying the frame picture image represented by the frame progress bar stop time pointed by the instruction bar in the first editing area.
Optionally, the video special effect adding apparatus further includes:
the first setting submodule is used for respectively placing the edited video and the special effect animation on two parallel time tracks;
and the first preview submodule is used for synchronously playing the special effect animation when the edited video reaches the starting time of the special effect animation, and the special effect animation is positioned on the upper layer of the edited video for displaying.
Optionally, the video special effect adding apparatus further includes:
the fourth acquisition submodule is used for acquiring a revocation instruction of the user;
and the first revocation submodule is used for deleting the temporarily stored special effect animation in a stack (last-in, first-out) manner.
Optionally, the video special effect adding apparatus further includes:
a fifth acquisition submodule, configured to acquire preset position relationship information between each frame picture in the special effect animation and the key frame picture;
a third calculation submodule, configured to calculate the overlay coordinate of each frame picture in the special effect animation according to the drop point coordinate and the position relationship information;
and a first determination submodule, configured to determine the overlay position of each frame picture in the special effect animation according to the overlay coordinates.
In order to solve the above technical problem, an embodiment of the present invention further provides an intelligent mobile terminal, including:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the video special effect adding method described above.
The embodiment of the invention has the following beneficial effects: during dual-video editing and synthesis, a key frame picture is designated in the preset special effect animation; the drop point position of this picture determines where the whole special effect animation is superimposed on the editing video during synthesis, so the user can freely set the on-screen position of the special effect animation in the editing video by setting the drop point coordinate. In this way, the user can freely control the view position of the special effect animation in the synthesized video, which improves the degree of freedom of the user in the video editing process and the entertainment value of video editing, and delivers a better user experience and market prospect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic basic flow chart of a video special effect adding method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a process of determining positions of other frames in a special-effect animation according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a display manner of generating anchor points in an edited frame according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of determining a coordinate of a drop point by an anchor point according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a first editing region and a display region of a frame progress bar according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating an embodiment of selecting an edited frame according to the present invention;
FIG. 7 is a schematic diagram of a display area having a slider bar and an indicator bar according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating another embodiment of selecting an edited frame according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a method for previewing results of an edited video according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating an editing effect revocation process according to an embodiment of the present invention;
fig. 11 is a schematic view of a basic structure of a video special effect adding apparatus according to an embodiment of the present invention;
fig. 12 is a block diagram of a basic structure of an intelligent mobile terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
Some of the flows described in the specification, claims, and drawings above include operations that occur in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein or in parallel. Operation numbers such as 101 and 102 are merely used to distinguish different operations; the numbers themselves do not imply any execution order. In addition, the flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not imply a sequential order, nor do they require "first" and "second" to be of different types.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
Referring to fig. 1, fig. 1 is a basic flow chart of a video special effect adding method according to the present embodiment.
As shown in fig. 1, a video special effect adding method includes the following steps:
s1100, acquiring a click instruction or a sliding instruction of a user in a video editing state;
the user uses the intelligent mobile terminal to edit the shot or locally stored video, and after entering an editing state, the user receives a click or sliding instruction sent by a finger or a touch pen.
S1200, acquiring a position coordinate appointed by a user in an editing frame picture according to the click command or the sliding command, and taking the position coordinate as a falling point coordinate of a key frame picture in the special-effect animation;
and acquiring the position coordinates of a click instruction or a sliding instruction of the user, wherein when the user sends the click instruction, the coordinate position of the intelligent mobile terminal display area clicked by the user is acquired, and the coordinate position is used as the position coordinates designated by the user. And when the instruction sent by the user is a sliding instruction, acquiring the position coordinate of the last point of the sliding track of the user, and taking the coordinate as the position coordinate specified by the user.
In the editing state, the display area of the intelligent mobile terminal displays one frame of the editing video selected by the user as the editing frame picture. All editing operations take place on the editing frame picture while the user edits the video, and after editing of the editing frame picture is finished, some of the editing operations can be propagated to the other frame pictures of the video. The coordinate position the user specifies in the display area of the intelligent mobile terminal is therefore a coordinate position on the editing frame picture.
After the position coordinate specified by the user is obtained, it is taken as the drop point coordinate of the special effect animation's key frame picture at that position. It should be pointed out that the area of the key frame picture is smaller than that of the editing frame picture, so during editing a coordinate in the editing frame picture must be specified as the drop point of the key frame picture in the editing frame picture; this drop point is the coordinate position specified by the user.
In this embodiment, the video editing is to edit a special effect animation on a video, and in this embodiment, the special effect animation refers to a video clip with a certain action scene (such as meteorite falling or cannonball explosion, etc.), or a subtitle with a certain motion change, etc., but is not limited thereto, and the special effect animation in this embodiment may be any video data with a video format.
In this embodiment, a key frame picture is set in the special effect animation. The key frame picture is selected in advance, usually as the frame with the greatest dramatic tension or a scenario turning point in the special effect animation (for example, the moment a shell lands and explodes when the special effect animation is artillery fire, the instant of impact when the special effect animation is a meteorite strike, or the moment the characters line up when the special effect animation is a multi-character flying subtitle). However, the key frame picture is not limited to this; depending on the application scene, any frame picture in the special effect animation can be designated as the key frame picture.
S1300, synthesizing the editing video and the special effect animation so that the key frame picture covers the position coordinate specified by the user in the editing frame picture.
After the drop point coordinate of the key frame picture in the editing frame picture is determined, the key frame picture is overlaid at the position coordinate specified by the user in the editing frame picture, thereby completing the synthesis of the editing video and the special effect animation.
In the above embodiment, in the double video editing and composition, a key frame picture is set in the preset special effect animation coordinates, the drop point position of the picture determines the position of the whole special effect animation in the editing video superposition during video composition, and the user can freely set the picture position of the special effect animation in the editing video according to the mode of setting the drop point coordinates. By the method, the user can freely control the view position of the special effect animation in the synthesized video, the degree of freedom of the user in the video editing process is improved, the entertainment of the video editing is improved, and the method has better customer experience and market prospect.
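The covering step can be sketched as a simple pixel overlay. This is an illustrative sketch only: the patent does not fix which point of the key frame picture the drop point refers to, so the code assumes the top-left corner, and frames are modeled as 2-D lists of pixel values.

```python
# Sketch: paste the key frame picture over the editing frame picture with
# its top-left corner at the drop point coordinate (assumed convention).

def overlay_key_frame(edit_frame, key_frame, drop_point):
    """Return a copy of edit_frame with key_frame covering it at drop_point."""
    x0, y0 = drop_point
    out = [row[:] for row in edit_frame]       # don't mutate the input frame
    for j, row in enumerate(key_frame):
        for i, pixel in enumerate(row):
            if 0 <= y0 + j < len(out) and 0 <= x0 + i < len(out[0]):
                out[y0 + j][x0 + i] = pixel    # key frame covers the video
    return out
```

A real implementation would blend with the effect's alpha channel rather than overwrite pixels outright.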
The drop point coordinate is transferable: the special effect animation is composed of a plurality of frame pictures, and the position of the key frame picture determines the overlay positions of the other frame pictures of the special effect animation in the edited video. Referring to fig. 2, fig. 2 is a schematic diagram illustrating the process of determining the positions of the other frames in the special effect animation according to the embodiment.
As shown in fig. 2, the following steps are further included after step S1200:
s1211, obtaining position relation information of each frame of picture and the key frame of picture in the preset special-effect animation;
the special effect animation is composed of multiple frames of pictures, the center point of the selected key frame picture is used as a coordinate origin, and when the special effect animation is constructed, the position relation of other frames of pictures in the special effect animation relative to the coordinate origin and the relation of the coordinates of other frames of pictures relative to the coordinates of the origin are calculated. For example, if the coordinates of the frame adjacent to the key frame are [2,2], the key frame is moved two units, and after moving two units upwards, the position of the key frame is the position of the key frame.
After the drop point coordinate of the key frame picture is determined, the storage location of the position information in the special effect animation is accessed, and the pre-stored position relationship information can be obtained.
S1212, calculating the overlay coordinate of each frame picture in the special effect animation according to the drop point coordinate and the position relationship information;
since the size of each frame in the edited video is the same, the coordinates of the other frame calculated from the coordinates of the drop point can be directly applied to the other frame of the edited video.
Therefore, the covering coordinate of each frame of picture in the special effect animation is calculated according to the coordinate of the drop point and the position relation information. For example, the coordinates of the frame picture adjacent to the key frame picture are [2,2], the coordinates of the key frame picture in the original setting are [0,0], and the coordinates of the determined key frame picture are [100, 200], and the corresponding overlay coordinates of the frame picture are [102,202 ].
S1213, determining the overlay position of each frame picture in the special effect animation according to the overlay coordinates.
At the positions covered by the special effect animation during video editing, each frame picture of the special effect animation corresponds to one frame picture of the edited video, and every frame picture of the edited video has the same area, so the calculated overlay coordinate can be used directly in the edited-video frame picture corresponding to a given special-effect frame picture. The overlay position of each frame picture of the special effect video is then determined by its overlay coordinate and its area.
In this embodiment, the overlay position of every frame picture of the whole special effect animation is calculated from the drop point coordinate of the key frame picture, so the overlay position of the entire special effect animation is controlled through that single drop point coordinate; this also reduces editing complexity and makes the operation easier for the user.
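Steps S1211 to S1213 reduce to adding each frame's stored offset to the drop point. A minimal sketch reproducing the worked example above (the function name and offset representation are assumptions for illustration):

```python
# Sketch of S1211-S1213: each frame picture stores its offset relative to
# the key frame picture (whose own offset is [0, 0]); once the drop point
# is known, every frame's overlay coordinate is drop point + offset.

def overlay_coordinates(drop_point, frame_offsets):
    """Map per-frame offsets (relative to the key frame picture) to
    absolute overlay coordinates in the edited video's frame pictures."""
    dx, dy = drop_point
    return [(dx + ox, dy + oy) for ox, oy in frame_offsets]
```

With the document's numbers: a drop point of [100, 200] and an adjacent-frame offset of [2, 2] yield an overlay coordinate of [102, 202].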
In some embodiments, in order to let the user determine the drop point coordinate more intuitively, an anchor point for calibrating the drop point coordinate is generated in the editing frame picture in the video editing state. Refer to fig. 3 and fig. 4: fig. 3 is a schematic view of the display manner of the anchor point generated in the editing frame picture in this embodiment; fig. 4 is a schematic flow chart of determining the drop point coordinate through the anchor point in this embodiment.
As shown in fig. 4, the following steps are further included before step S1100:
s1011, acquiring a first click command of a user, and calculating a coordinate of the first click command;
As shown in fig. 3, in the video editing state, an anchor point for calibrating the drop point coordinate is generated in the editing frame picture. The anchor point is specifically designed as a sniper-scope-style anchor, i.e., an outer ring representing the range of the anchor point with a dot at its exact center. However, the form of the anchor point is not limited to this: depending on the application scene, the anchor point can be designed as (without limitation) a circle, a ring, a triangle, or another polygon, or replaced by a cartoon pattern or another silhouette pattern.
In the anchor point generation state, the intelligent mobile terminal obtains a first click command of the user and calculates a coordinate specified by the click command of the user.
S1012, comparing whether the coordinate of the first click command is in the area of the anchor point coordinate;
the coordinates of the shape of the anchor point are a set of all coordinates located inside the outer circle of the anchor point. And after the coordinates of the click command of the user are obtained, determining whether the coordinates specified by the user are in the anchor point coordinate set or not through comparison. If not, it means that the user has not issued the command for changing and adjusting the coordinates of the landing point, and if it is, it means that the user has issued the command for adjusting the coordinates of the landing point, the process continues to step S1013.
S1013, when the coordinate of the first click instruction is in the area of the anchor point coordinates, updating the position of the anchor point along with the sliding instruction of the user so as to update the drop-point coordinates.
When the coordinate of the first click instruction is within the anchor point's coordinate region, it is determined that the user intends to adjust the drop-point coordinates. The anchor point then moves along the user's sliding track; after the sliding instruction ends, the coordinate of the anchor center at its new position is acquired, and this position of the anchor center is the updated drop-point coordinate.
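The hit test and drag update in steps S1012–S1013 can be sketched as below. This is an illustrative approximation, not the patent's code: the anchor region is modeled as a simple circle around the anchor center, and all names are hypothetical.

```python
import math

def inside_anchor(click, anchor_center, outer_radius):
    """True when the click lands inside the anchor's outer ring, i.e. within
    the set of coordinates that make up the anchor region (step S1012)."""
    return math.hypot(click[0] - anchor_center[0],
                      click[1] - anchor_center[1]) <= outer_radius

def update_drop_point(click, anchor_center, outer_radius, slide_end):
    """Step S1013: only a click inside the anchor region starts a drag;
    otherwise the drop point stays put so other editing gestures still work."""
    if inside_anchor(click, anchor_center, outer_radius):
        return slide_end          # anchor follows the slide; new drop point
    return anchor_center          # no adjustment instruction was issued
```

A click outside the ring leaves the drop point untouched, which is exactly why the user can still perform other operations on the video while the anchor is displayed.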
With this implementation, the user can adjust the anchor point more intuitively. At the same time, the instruction-confirmation step solves the problem that the user could not perform other editing activities while setting the drop-point coordinates: the drop-point coordinates are adjusted only by clicking inside the anchor region, so when the anchor is not clicked the user can perform other operations on the video, which facilitates editing.
In some embodiments, the user needs to determine one of the multiple frame pictures of the edited video as the editing frame picture. Referring to fig. 5 and fig. 6, fig. 5 is a schematic diagram of the first editing region and the display region of the frame progress bar according to this embodiment; fig. 6 is a flowchart illustrating an embodiment of selecting the editing frame picture according to this embodiment.
As shown in fig. 5, the editing area in the video editing state includes: a first editing region and a frame progress bar; and the first editing area displays a frame picture image represented by the edited video at the stop moment of the frame progress bar.
Specifically, the first editing region is located above the frame progress bar and is a frame in which the display region is scaled. The frame progress bar is the time axis for the edited video, formed by arranging a number of frame-picture thumbnails along a timeline. The first editing region displays the frame picture that the edited video represents at the stop moment of the frame progress bar. For example, if the frame progress bar stops at the 03:35 position, the first editing region displays the frame picture of the edited video at that moment as the editing frame picture.
As shown in fig. 6, the following steps are also included before step S1100:
S1021, acquiring a click or sliding instruction of a user within the frame progress bar range;
In the anchor point display state, the intelligent mobile terminal acquires the click or sliding instruction of the user.
S1022, determining the stopping time of the frame progress bar according to the clicking or sliding instruction in the range of the frame progress bar;
the coordinates of the range of the frame progress bar are a set consisting of all the coordinates located within the area of the frame progress bar. And after the coordinates of the click command or the sliding command of the user are obtained, determining whether the coordinates specified by the user are in the set of the frame progress bar coordinates through comparison. If not, the user does not issue an instruction for changing the editing frame picture, and if so, the user issues an instruction for adjusting the editing frame picture.
After receiving a click or sliding instruction of the user acting on the frame progress bar, the stop moment of the user on the frame progress bar is determined according to that instruction; the frame picture of the edited video represented at that moment is the editing frame picture selected by the user.
S1023, calling the frame picture image represented by the stop moment of the frame progress bar as the editing frame picture.
The stop moment of the user's instruction on the frame progress bar is confirmed, the frame picture represented by that moment is called, and it is displayed in the first editing region.
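The mapping from a stop position on the frame progress bar to a timestamp and a frame can be sketched as follows. This is a hedged illustration under simple assumptions (a linear bar, a fixed frame rate); the names and the coordinate convention are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: translate a stop position on the frame progress bar
# into the stop moment, then into the frame index to call for display.

def stop_time(bar_x, bar_width, video_duration):
    """Linear mapping: position along the bar -> timestamp in the video."""
    return (bar_x / bar_width) * video_duration

def frame_index(t, fps):
    """Frame to fetch as the editing frame picture at timestamp t."""
    return int(t * fps)

t = stop_time(300, 600, 20.0)   # stopping halfway along a 20 s video
idx = frame_index(t, 30)        # the thumbnail/frame to show at 30 fps
```

The frame at `idx` is then decoded and shown in the first editing region as the editing frame picture.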
As shown in fig. 5, in some alternative embodiments, an anchor point can be displayed in the first editing region while the frame progress bar is set.
In some embodiments, after the special effect animation is added, a sliding bar for representing the duration of the special effect animation is added on the frame progress bar, and an instruction bar for representing the position of the key frame picture is arranged on the sliding bar. Referring to fig. 7 and 8, fig. 7 is a schematic view of a display area provided with a slide bar and an indication bar according to the present embodiment; fig. 8 is a flowchart illustrating another embodiment of selecting an edited frame in this embodiment.
As shown in fig. 7, a slide bar indicating the duration of the special-effect animation is disposed on the frame progress bar, and an instruction bar representing the position of the key frame picture is disposed on the slide bar.
The slide bar is a frame body representing the duration of the special-effect animation; its length corresponds to the special-effect animation's proportion of the frame progress bar of the edited video. For example: if the special-effect animation lasts 5 s and the edited video lasts 20 s, the slide bar occupies one quarter of the total length of the frame progress bar; if the special-effect animation lasts 5 s and the edited video lasts 45 s, the slide bar occupies one ninth of the total length of the frame progress bar.
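The proportion described above is simple arithmetic; a one-line sketch (with hypothetical names) makes the two worked examples concrete:

```python
# Length of the slide bar on the frame progress bar, proportional to the
# special-effect animation's share of the edited video's duration.

def slide_bar_length(effect_duration, video_duration, bar_total_length):
    return bar_total_length * (effect_duration / video_duration)

slide_bar_length(5, 20, 600)   # 5 s effect in a 20 s video: one quarter of the bar
slide_bar_length(5, 45, 600)   # 5 s effect in a 45 s video: one ninth of the bar
```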
The instruction bar is arranged on the slide bar and indicates the position of the key frame picture within the special-effect animation. The instruction bar is designed with a mark pointing at the key frame picture, such as (without limitation) a crosshair anchor or a triangular arrow.
As shown in fig. 8, the following steps are further included before step S1100:
S1031, acquiring a sliding instruction applied by the user within the slide bar range, so that the slide bar slides along the frame progress bar with the sliding instruction;
A sliding instruction applied by the user within the slide bar range is acquired, so that the slide bar slides along with the user's sliding instruction.
S1032, determining the frame progress bar stopping time pointed by the instruction bar according to the sliding instruction acted by the user in the range of the sliding bar;
and after receiving a click or sliding instruction of a user acting on the frame progress bar, determining the stop moment of the user on the frame progress bar according to the instruction, wherein the frame picture of the edited video represented at the moment is the edited frame picture selected by the user.
S1033, displaying, in the first editing region, the frame picture image represented by the stop moment of the frame progress bar pointed to by the instruction bar.
After the user stops sliding the slide bar, the frame picture represented by the moment of the frame progress bar pointed to by the instruction bar is called as the editing frame picture; that is, the first editing region always displays the image that the frame progress bar represents at the moment aligned with the instruction bar.
By adopting this method, the user can intuitively adjust the position of the whole special-effect animation on the frame progress bar and can also intuitively see the playing position of the key frame picture on the frame progress bar, which improves the intuitiveness of the operation.
In some embodiments, it is necessary to preview the editing effect after the editing is completed, specifically referring to fig. 9, and fig. 9 is a flowchart illustrating a method for previewing the video editing result according to this embodiment.
As shown in fig. 9, the following steps are further included after step S1200:
S1221, respectively placing the edited video and the special-effect animation on two parallel time tracks;
During preview, the edited video and the special-effect animation are placed on two parallel time tracks, with the time track of the special-effect animation always above the time track of the edited video, so that the special-effect animation is always displayed on the upper layer of the edited video.
S1222, when the edited video is played to the start moment of the special-effect animation, playing the special-effect animation synchronously and displaying it on the upper layer of the edited video.
When the special-effect animation needs to be played during playback, the frame pictures of the video and of the special-effect animation at the same moment are read simultaneously, rendered together, and placed in the video memory of the intelligent mobile terminal. At display time, the superposed, rendered frame picture is called for display, thereby completing the presentation of the two superposed videos.
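The two-track synchronized playback can be sketched as below. This is a simplified illustration under stated assumptions: frames are represented as opaque tokens, `composite` stands in for the actual per-frame superposed rendering into video memory, and `effect_start` (a frame index) is a hypothetical parameter.

```python
# Hypothetical sketch of the preview loop: the base video and the effect
# animation sit on two parallel tracks; once playback reaches the effect's
# start, the two frames at the same moment are composited, effect on top.

def composite(base, overlay):
    # Placeholder for superposed rendering (e.g. alpha blending) of the
    # effect frame over the video frame in the frame buffer.
    return f"{base}+{overlay}"

def render_preview(video_frames, effect_frames, effect_start):
    out = []
    for i, base in enumerate(video_frames):
        j = i - effect_start          # position within the effect track
        if 0 <= j < len(effect_frames):
            out.append(composite(base, effect_frames[j]))
        else:
            out.append(base)          # outside the effect's duration
    return out
```

Outside the effect's time span the base video plays unmodified; inside it, each displayed frame is the superposition of the two same-moment frames.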
By calling the frame pictures of the two videos at the same moment for superposed rendering when preview playback reaches the start moment of the special-effect animation, the user can preview the editing effect. This facilitates review by the user and enhances the video editing result.
In some embodiments, editing effects can be quickly deleted in the preview state. Referring to fig. 10, fig. 10 is a schematic diagram illustrating an editing effect revocation process according to the present embodiment.
As shown in fig. 10, the following steps are further included after step S1222:
S1231, acquiring a revocation instruction of a user;
In the preview state, a revocation instruction of the user is acquired; the user issues the revocation instruction by clicking in a specific area (the revocation button) of the display region of the intelligent mobile terminal.
And S1232, deleting the temporarily stored special effect animation in a stacking mode.
When the intelligent mobile terminal stores the special-effect animations added to the edited video, it stores them in a stack, characterized by first in, last out. Because several special-effect animations can be set on the same edited video, they are stored by pushing onto the stack; when a revocation is issued, the temporarily stored special-effect animations are deleted in stack order, that is, the special-effect animation that entered the temporary storage space last is deleted first, and the one that entered first is deleted last.
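The stack (last-in, first-out) storage and revocation described above can be sketched in a few lines; the names are illustrative, not from the patent:

```python
# Hypothetical sketch of stack-mode temporary storage: effects are pushed
# in the order they are added; revocation pops the most recent one first.

effects_stack = []

def add_effect(effect):
    effects_stack.append(effect)        # push onto the temporary storage

def undo_effect():
    # LIFO deletion: the effect stored last is deleted first.
    return effects_stack.pop() if effects_stack else None

add_effect("sparkle")
add_effect("explosion")
undo_effect()   # removes "explosion", the most recently added effect
```

This is why repeatedly pressing the revocation button peels effects off in reverse order of addition.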
In order to solve the above technical problem, an embodiment of the present invention further provides a video special effect adding device. Referring to fig. 11, fig. 11 is a block diagram of a basic structure of the video special effect adding apparatus according to the present embodiment.
As shown in fig. 11, a video special effect adding apparatus includes: an acquisition module 2100, a processing module 2200, and a composition module 2300. The obtaining module 2100 is configured to obtain a click instruction or a slide instruction of a user in a video editing state; the processing module 2200 is configured to obtain a position coordinate specified by the user in the editing frame picture according to the click instruction or the slide instruction, and use the position coordinate as a drop point coordinate of a key frame picture in the special-effect animation; the composition module 2300 is configured to compose the edited video and the special effect animation such that the key frame picture is overlaid at the position coordinates designated by the user in the edited frame picture.
In the video special effect adding device, a key frame picture is set at a preset coordinate in the special-effect animation during dual-video editing and synthesis. During editing and video superposition, the drop-point position of this picture determines the position of the whole special-effect animation in the synthesized video, and the user freely sets the picture position of the special-effect animation within the edited video by setting the drop-point coordinates. In this way, the user can freely control the view position of the special-effect animation in the synthesized video, which increases the user's freedom during video editing, makes editing more entertaining, and offers better user experience and market prospects.
In some embodiments, an anchor point for calibrating the coordinates of the drop point is generated in the editing frame picture in the video editing state. The video special effect adding device further comprises: the device comprises a first obtaining submodule, a first comparison submodule and a first updating submodule. The first acquisition submodule is used for acquiring a first click instruction of a user and calculating the coordinate of the first click instruction; the first comparison sub-module is used for comparing whether the coordinate of the first click command is in the area of the anchor point coordinate; and the first updating submodule is used for updating the position of the anchor point along with the sliding instruction of the user when the coordinate of the first click instruction is in the area of the anchor point coordinate so as to update the drop point coordinate.
In some implementations, editing the area in the video editing state includes: a first editing region and a frame progress bar; the first editing area displays a frame picture image represented by the edited video at the stop moment of the frame progress bar. The video special effect adding device further comprises: the device comprises a second obtaining submodule, a first calculating submodule and a first calling submodule. The second obtaining submodule is used for obtaining a click or sliding instruction of a user in the range of the frame progress bar; the first calculation submodule is used for determining the stopping time of the frame progress bar according to a click or sliding instruction in the range of the frame progress bar; the first calling submodule is used for calling the frame picture image represented by the stop time of the frame progress bar as an editing frame picture.
In some embodiments, a sliding bar indicating the duration of the special-effect animation is disposed on the frame progress bar, and an instruction bar representing the position of the key frame picture is disposed on the sliding bar. The video special effect adding device further comprises: the device comprises a third acquisition submodule, a second calculation submodule and a first display submodule. The third obtaining submodule is used for obtaining a sliding instruction acted by a user in the range of the sliding bar so as to enable the sliding bar to slide along the frame progress bar along with the sliding instruction; the second calculation submodule is used for determining the frame progress bar stopping time pointed by the instruction bar according to the sliding instruction acted by the user in the range of the sliding bar; the first display sub-module is used for displaying the frame picture image represented by the frame progress bar stop time pointed by the instruction bar in the first editing area.
In some embodiments, the video special effects adding apparatus further comprises: a first setting sub-module and a first preview sub-module. The first setting submodule is used for respectively placing the edited video and the special effect animation on two parallel time tracks; the first preview submodule is used for synchronously playing the special effect animation when the edited video reaches the starting time of the special effect animation, and the special effect animation is positioned on the upper layer of the edited video for displaying.
In some embodiments, the video special effects adding apparatus further comprises: a fourth acquisition submodule and a first revocation submodule. The fourth obtaining submodule is used for obtaining a revocation instruction of the user; the first revocation submodule is used for deleting the temporarily stored special effect animation in a stacking mode.
In some embodiments, the video special effects adding apparatus further comprises: the device comprises a fifth acquisition submodule, a third calculation submodule and a first determination submodule. The fifth obtaining submodule is used for obtaining the position relation information of each frame of picture and the key frame picture in the preset special-effect animation; the third calculation submodule is used for calculating the coverage coordinate of each frame of picture in the special-effect animation according to the relation information of the coordinates of the drop points and the positions; the first determining submodule is used for determining the covering position of each frame of picture in the special effect animation according to the covering coordinates.
The embodiment also provides the intelligent mobile terminal. Referring to fig. 12, fig. 12 is a schematic diagram of a basic structure of the intelligent mobile terminal according to the embodiment.
It should be noted that in this embodiment, all programs for implementing the video special effect adding method in this embodiment are stored in the memory 1520 of the smart mobile terminal, and the processor 1580 can call the programs in the memory 1520 to execute all functions listed in the video special effect adding method. As the functions realized by the intelligent mobile terminal are described in detail in the video special effect adding method in this embodiment, no further description is given here.
In the intelligent mobile terminal, a key frame picture is set at a preset coordinate in the special-effect animation during dual-video editing and synthesis. The drop-point position of this picture determines the position of the whole special-effect animation in the synthesized video, and the user freely sets the picture position of the special-effect animation within the edited video by setting the drop-point coordinates. In this way, the user can freely control the view position of the special-effect animation in the synthesized video, which increases the user's freedom during video editing, makes editing more entertaining, and offers better user experience and market prospects.
An intelligent mobile terminal is further provided in the embodiment of the present invention. As shown in fig. 12, for convenience of description, only the part related to the embodiment of the present invention is shown; for undisclosed technical details, please refer to the method part of the embodiment of the present invention. The terminal may be any terminal device including an intelligent mobile terminal, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like; the following takes an intelligent mobile terminal as an example:
fig. 12 is a block diagram illustrating a partial structure of an intelligent mobile terminal related to a terminal provided by an embodiment of the present invention. Referring to fig. 12, the smart mobile terminal includes: Radio Frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (Wi-Fi) module 1570, processor 1580, and power supply 1590. Those skilled in the art will appreciate that the intelligent mobile terminal architecture shown in fig. 12 is not intended to limit the intelligent mobile terminal, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The following describes each component of the intelligent mobile terminal in detail with reference to fig. 12:
the RF circuit 1510 may be configured to receive and transmit signals during information transmission and reception or during a call, and in particular, receive downlink information of a base station and then process the received downlink information to the processor 1580; in addition, the data for designing uplink is transmitted to the base station. In general, RF circuit 1510 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 1510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs the various functional applications and data processing of the intelligent mobile terminal by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the smart mobile terminal, and the like. Further, the memory 1520 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 1530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the intelligent mobile terminal. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 1531 using any suitable object or accessory such as a finger or a stylus) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1580, and can receive and execute commands sent by the processor 1580. In addition, the touch panel 1531 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1540 may be used to display information input by the user or information provided to the user and the various menus of the intelligent mobile terminal. The display unit 1540 may include a display panel 1541; optionally, the display panel 1541 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541; when the touch panel 1531 detects a touch operation on or near it, the touch operation is transmitted to the processor 1580 to determine the type of touch event, and the processor 1580 then provides a corresponding visual output on the display panel 1541 according to the type of touch event. Although the touch panel 1531 and the display panel 1541 are shown as two separate components in fig. 12 to implement the input and output functions of the smart mobile terminal, in some embodiments the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the smart mobile terminal.
The smart mobile terminal may also include at least one sensor 1550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 1541 according to the brightness of ambient light and a proximity sensor that may turn off the display panel 1541 and/or backlight when the smart mobile terminal is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration) for recognizing the attitude of the intelligent mobile terminal, and related functions (such as pedometer and tapping) for vibration recognition; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the intelligent mobile terminal, further description is omitted here.
The audio circuit 1560, speaker 1561, and microphone 1562 may provide an audio interface between the user and the intelligent mobile terminal. The audio circuit 1560 may transmit the electrical signal converted from the received audio data to the speaker 1561, which converts the electrical signal into a sound signal for output; on the other hand, the microphone 1562 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1560 and converted into audio data. The audio data is then processed by the processor 1580 and either transmitted through the RF circuit 1510 to, for example, another intelligent mobile terminal, or output to the memory 1520 for further processing.
Wi-Fi belongs to short-distance wireless transmission technology. Through the Wi-Fi module 1570, the intelligent mobile terminal can help the user receive and send e-mail, browse webpages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although fig. 12 shows the Wi-Fi module 1570, it is understood that it is not an essential part of the intelligent mobile terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 1580 is a control center of the smart mobile terminal, connects various parts of the entire smart mobile terminal using various interfaces and lines, and performs various functions of the smart mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 1520 and calling data stored in the memory 1520, thereby integrally monitoring the smart mobile terminal. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor may not be integrated into the processor 1580.
The smart mobile terminal further includes a power source 1590 (e.g., a battery) for supplying power to various components, and preferably, the power source may be logically connected to the processor 1580 through a power management system, so that functions of managing charging, discharging, and power consumption management, etc. are implemented through the power management system.
Although not shown, the smart mobile terminal may further include a camera, a bluetooth module, and the like, which are not described herein again.
It should be noted that the description of the present invention and the accompanying drawings illustrate preferred embodiments of the present invention, but the invention may be embodied in many different forms and is not limited to the embodiments described in this specification; these embodiments are provided not as additional limitations on the present invention but to make the understanding of the disclosure more thorough and complete. Moreover, the above technical features may be combined with one another to form various embodiments not listed above, all of which are regarded as within the scope of the invention described in the specification. Further, modifications and variations will occur to those skilled in the art in light of the foregoing description, and all such modifications and variations are intended to fall within the scope of the appended claims.

Claims (8)

1. A method for adding a special effect to a video, comprising the steps of:
acquiring a click or slide instruction issued by a user within the range of a frame progress bar; determining the stop time of the frame progress bar according to the click or slide instruction; retrieving the frame image represented by the stop time of the frame progress bar as an editing frame image; wherein the editing region in the video editing state comprises a first editing region and the frame progress bar, and the first editing region displays the frame image of the edited video at the stop time of the frame progress bar;
acquiring a slide instruction applied by the user within the range of a slide bar, so that the slide bar slides along the frame progress bar with the slide instruction; determining the stop time of the frame progress bar pointed to by an indicator bar according to the slide instruction applied within the range of the slide bar; the first editing region displaying the frame image represented by that stop time; wherein the slide bar, which marks the duration of the special effect animation, is arranged on the frame progress bar, and the indicator bar, which marks the position of the key frame image, is arranged on the slide bar;
acquiring a click instruction or a slide instruction of the user in the video editing state;
acquiring, according to the click instruction or the slide instruction, a position coordinate specified by the user in the editing frame image, and using the position coordinate as the drop point coordinate of the key frame image in the special effect animation;
and synthesizing the edited video and the special effect animation so that the key frame image is overlaid at the position coordinate specified by the user in the editing frame image.
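Outside the claim language, the mapping from a click or slide on the frame progress bar to a stop time and drop point can be sketched as follows. This is a minimal illustration, not the claimed implementation: the function names, the bar geometry, and the clamping behaviour are assumptions.

```python
def progress_bar_stop_time(touch_x, bar_left, bar_width, video_duration):
    """Map the x-coordinate of a click/slide on the frame progress bar
    to the stop time it represents, clamped to the bar's range."""
    ratio = (touch_x - bar_left) / bar_width
    ratio = min(max(ratio, 0.0), 1.0)  # stay inside the bar
    return ratio * video_duration

def drop_point(touch_x, touch_y):
    """Record the position the user specified in the editing frame image
    as the drop point coordinate of the effect's key frame image."""
    return (touch_x, touch_y)

# A tap 3/4 of the way along a 400-px bar over a 20 s clip:
stop_time = progress_bar_stop_time(300, 0, 400, 20.0)  # 15.0 s
```

The same mapping runs in both directions: the indicator bar's position on the slide bar is just this ratio computed relative to the effect's span rather than the whole clip.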
2. The video special effect adding method according to claim 1, wherein an anchor point for calibrating the drop point coordinate is generated in the editing frame image in the video editing state;
before the step of acquiring a click instruction or a slide instruction of the user in the video editing state, the method further comprises the following steps:
acquiring a first click instruction of the user, and calculating the coordinate of the first click instruction;
comparing whether the coordinate of the first click instruction falls within the area of the anchor point coordinate;
and when the coordinate of the first click instruction falls within the area of the anchor point coordinate, updating the position of the anchor point with the user's slide instruction so as to update the drop point coordinate.
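The comparison in this claim amounts to a hit test of the first click against the anchor's region. A hedged sketch follows; the circular region and its radius are assumptions, since the claim does not fix the shape of the anchor's area.

```python
def anchor_hit(click, anchor, radius=24):
    """Return True when the click coordinate falls inside the anchor's
    region, assumed here to be a circle of the given radius in pixels."""
    dx = click[0] - anchor[0]
    dy = click[1] - anchor[1]
    return dx * dx + dy * dy <= radius * radius

def update_anchor(anchor, slide_to, hit):
    """Only a first click that hit the anchor lets the subsequent slide
    move it (and hence update the drop point coordinate)."""
    return slide_to if hit else anchor
```

Using a squared-distance comparison avoids a square root on every touch-move event, which matters when the hit test runs for each slide sample.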
3. The video special effect adding method according to claim 1, wherein after the step of acquiring, according to the click instruction or the slide instruction, a position coordinate specified by the user in the editing frame image and using the position coordinate as the drop point coordinate of the key frame image in the special effect animation, the method further comprises the following steps:
placing the edited video and the special effect animation on two parallel time tracks respectively;
and when the edited video is played to the start time of the special effect animation, playing the special effect animation synchronously and displaying it on the layer above the edited video.
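The two-track playback rule can be modelled as a per-tick decision: given the playhead time on the video track, decide which effect frame, if any, must be composited on the layer above. This is a simplified model; the frame-indexing scheme and the fps value are assumptions.

```python
def effect_frame_index(t, effect_start, effect_duration, fps=30):
    """Return the index of the effect-animation frame to draw above the
    edited video at playhead time t, or None when the playhead lies
    outside the effect's span on its parallel time track."""
    if effect_start <= t < effect_start + effect_duration:
        return int((t - effect_start) * fps)
    return None
```

Keeping the effect on its own track means the video frames are never modified during preview; compositing happens only at render time, which also makes the undo step in claim 4 cheap.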
4. The video special effect adding method according to claim 3, wherein after the step of playing the special effect animation synchronously and displaying it on the layer above the edited video when the edited video is played to the start time of the special effect animation, the method further comprises the following steps:
acquiring an undo instruction of the user;
and deleting the temporarily stored special effect animation in a stack manner.
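"Deleting in a stack manner" suggests last-in, first-out: each applied effect is temporarily stored, and an undo instruction removes the most recent one. A minimal sketch, with illustrative class and method names:

```python
class EffectHistory:
    """Temporarily stores applied special effect animations; an undo
    instruction deletes them in last-in, first-out (stack) order."""

    def __init__(self):
        self._stack = []

    def apply(self, effect):
        self._stack.append(effect)

    def undo(self):
        """Delete and return the most recently applied effect,
        or None when there is nothing left to undo."""
        return self._stack.pop() if self._stack else None
```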
5. The video special effect adding method according to claim 1, wherein after the step of acquiring, according to the click instruction or the slide instruction, a position coordinate specified by the user in the editing frame image and using the position coordinate as the drop point coordinate of the key frame image in the special effect animation, the method further comprises the following steps:
acquiring position relation information between each frame image and the key frame image in the preset special effect animation;
calculating the overlay coordinate of each frame image in the special effect animation according to the drop point coordinate and the position relation information;
and determining the overlay position of each frame image in the special effect animation according to the overlay coordinates.
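The per-frame overlay computation is a translation: each frame of the effect stores its position relative to the key frame, and the drop point anchors the whole animation. A sketch under the assumption that the position relation information is a list of (dx, dy) offsets:

```python
def overlay_coordinates(drop_point, frame_offsets):
    """Given the drop point coordinate of the key frame image and each
    effect frame's offset relative to the key frame, return the overlay
    coordinate of every frame of the special effect animation."""
    x0, y0 = drop_point
    return [(x0 + dx, y0 + dy) for dx, dy in frame_offsets]

# Key frame dropped at (100, 50); second frame drifts 5 px right, 3 px up.
positions = overlay_coordinates((100, 50), [(0, 0), (5, -3)])
```

Storing offsets rather than absolute positions is what lets one preset animation be reused at any drop point the user picks.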
6. A video special effect adding apparatus, comprising:
a first acquisition submodule, used for acquiring a click or slide instruction issued by a user within the range of a frame progress bar; determining the stop time of the frame progress bar according to the click or slide instruction; and retrieving the frame image represented by the stop time of the frame progress bar as an editing frame image; wherein the editing region in the video editing state comprises a first editing region and the frame progress bar, and the first editing region displays the frame image of the edited video at the stop time of the frame progress bar;
the first acquisition submodule being further used for acquiring a slide instruction applied by the user within the range of a slide bar, so that the slide bar slides along the frame progress bar with the slide instruction; and determining the stop time of the frame progress bar pointed to by an indicator bar according to the slide instruction applied within the range of the slide bar; the first editing region displaying the frame image represented by that stop time; wherein the slide bar, which marks the duration of the special effect animation, is arranged on the frame progress bar, and the indicator bar, which marks the position of the key frame image, is arranged on the slide bar;
an acquisition module, used for acquiring a click instruction or a slide instruction of the user in the video editing state;
a processing module, used for acquiring, according to the click instruction or the slide instruction, a position coordinate specified by the user in the editing frame image, and using the position coordinate as the drop point coordinate of the key frame image in the special effect animation;
and a synthesis module, used for synthesizing the edited video and the special effect animation so that the key frame image is overlaid at the position coordinate specified by the user in the editing frame image.
7. The video special effect adding apparatus according to claim 6, wherein an anchor point for calibrating the drop point coordinate is generated in the editing frame image in the video editing state;
the first acquisition submodule is further used for acquiring a first click instruction of the user and calculating the coordinate of the first click instruction;
a first comparison submodule is used for comparing whether the coordinate of the first click instruction falls within the area of the anchor point coordinate;
and a first updating submodule is used for updating the position of the anchor point with the user's slide instruction when the coordinate of the first click instruction falls within the area of the anchor point coordinate, so as to update the drop point coordinate.
8. An intelligent mobile terminal, comprising:
one or more processors;
a memory;
and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the video special effect adding method of any one of claims 1-5.
CN201711242163.0A 2017-11-30 2017-11-30 Video special effect adding method and device and intelligent mobile terminal Active CN108022279B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711242163.0A CN108022279B (en) 2017-11-30 2017-11-30 Video special effect adding method and device and intelligent mobile terminal
PCT/CN2018/118370 WO2019105438A1 (en) 2017-11-30 2018-11-30 Video special effect adding method and apparatus, and smart mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711242163.0A CN108022279B (en) 2017-11-30 2017-11-30 Video special effect adding method and device and intelligent mobile terminal

Publications (2)

Publication Number Publication Date
CN108022279A CN108022279A (en) 2018-05-11
CN108022279B true CN108022279B (en) 2021-07-06

Family

ID=62077714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711242163.0A Active CN108022279B (en) 2017-11-30 2017-11-30 Video special effect adding method and device and intelligent mobile terminal

Country Status (2)

Country Link
CN (1) CN108022279B (en)
WO (1) WO2019105438A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022279B (en) * 2017-11-30 2021-07-06 广州市百果园信息技术有限公司 Video special effect adding method and device and intelligent mobile terminal
CN108734756B (en) * 2018-05-15 2022-03-25 深圳市腾讯网络信息技术有限公司 Animation production method and device, storage medium and electronic device
CN108712661B (en) * 2018-05-28 2022-02-25 广州虎牙信息科技有限公司 Live video processing method, device, equipment and storage medium
CN108958610A (en) * 2018-07-27 2018-12-07 北京微播视界科技有限公司 Special efficacy generation method, device and electronic equipment based on face
CN109040615A (en) * 2018-08-10 2018-12-18 北京微播视界科技有限公司 Special video effect adding method, device, terminal device and computer storage medium
CN110166842B (en) 2018-11-19 2020-10-16 深圳市腾讯信息技术有限公司 Video file operation method and device and storage medium
CN109379631B (en) * 2018-12-13 2020-11-24 广州艾美网络科技有限公司 Method for editing video captions through mobile terminal
CN110213638B (en) * 2019-06-05 2021-10-08 北京达佳互联信息技术有限公司 Animation display method, device, terminal and storage medium
CN110493630B (en) * 2019-09-11 2020-12-01 广州华多网络科技有限公司 Processing method and device for special effect of virtual gift and live broadcast system
CN111050203B (en) * 2019-12-06 2022-06-14 腾讯科技(深圳)有限公司 Video processing method and device, video processing equipment and storage medium
CN113452929B (en) * 2020-03-24 2022-10-04 北京达佳互联信息技术有限公司 Video rendering method and device, electronic equipment and storage medium
CN111739127B (en) * 2020-06-09 2024-08-02 广联达科技股份有限公司 Simulation method and simulation device for associated motion in mechanical linkage process
CN111756952A (en) * 2020-07-23 2020-10-09 北京字节跳动网络技术有限公司 Preview method, device, equipment and storage medium of effect application
CN111897483A (en) * 2020-08-11 2020-11-06 网易(杭州)网络有限公司 Live broadcast interaction processing method, device, equipment and storage medium
CN116437034A (en) * 2020-09-25 2023-07-14 荣耀终端有限公司 Video special effect adding method and device and terminal equipment
CN112199016B (en) * 2020-09-30 2023-02-21 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113038228B (en) * 2021-02-25 2023-05-30 广州方硅信息技术有限公司 Virtual gift transmission and request method, device, equipment and medium thereof
CN116033181A (en) * 2021-10-26 2023-04-28 脸萌有限公司 Video processing method, device, equipment and storage medium
CN114125555B (en) * 2021-11-12 2024-02-09 深圳麦风科技有限公司 Editing data preview method, terminal and storage medium
CN114979704A (en) * 2022-03-28 2022-08-30 阿里云计算有限公司 Video data generation method and system and video playing system
CN115766974A (en) * 2022-10-24 2023-03-07 珠海金山数字网络科技有限公司 Special effect generation method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4001882A (en) * 1975-03-12 1977-01-04 Spectra-Vision Corporation Magnetic tape editing, previewing and animating method and system
CN101217638A (en) * 2007-12-28 2008-07-09 深圳市迅雷网络技术有限公司 A downloading method, system and device of video file fragmentation
CN102385613A (en) * 2011-09-30 2012-03-21 广州市动景计算机科技有限公司 Web page positioning method and system
US8473846B2 (en) * 2006-12-22 2013-06-25 Apple Inc. Anchor point in media
CN103220490A (en) * 2013-03-15 2013-07-24 广东欧珀移动通信有限公司 Special effect implementation method in video communication and video user terminal
CN105609121A (en) * 2014-11-20 2016-05-25 深圳市腾讯计算机系统有限公司 Method and device for controlling multimedia playing progress
CN105844987A (en) * 2016-05-30 2016-08-10 深圳科润视讯技术有限公司 Multimedia teaching interaction operating method and device
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN106792078A (en) * 2016-07-12 2017-05-31 乐视控股(北京)有限公司 Method for processing video frequency and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040071453A1 (en) * 2002-10-08 2004-04-15 Valderas Harold M. Method and system for producing interactive DVD video slides
US7434155B2 (en) * 2005-04-04 2008-10-07 Leitch Technology, Inc. Icon bar display for video editing system
CN104423779A (en) * 2013-08-26 2015-03-18 鸿合科技有限公司 Interactive display implementation method and device
CN104967900B (en) * 2015-05-04 2018-08-07 腾讯科技(深圳)有限公司 A kind of method and apparatus generating video
CN104780338A (en) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 Method and electronic equipment for loading expression effect animation in instant video
KR101707203B1 (en) * 2015-09-04 2017-02-15 주식회사 씨지픽셀스튜디오 Transforming method of computer graphic animation file by applying joint rotating value
CN108022279B (en) * 2017-11-30 2021-07-06 广州市百果园信息技术有限公司 Video special effect adding method and device and intelligent mobile terminal


Also Published As

Publication number Publication date
WO2019105438A1 (en) 2019-06-06
CN108022279A (en) 2018-05-11

Similar Documents

Publication Publication Date Title
CN108022279B (en) Video special effect adding method and device and intelligent mobile terminal
US11100955B2 (en) Method, apparatus and smart mobile terminal for editing video
CN109683847A (en) A kind of volume adjusting method and terminal
US20140380464A1 (en) Electronic device for displaying lock screen and method of controlling the same
CN108024073B (en) Video editing method and device and intelligent mobile terminal
US20200341594A1 (en) User interface display method and apparatus therefor
CN110460907A (en) A kind of video playing control method and terminal
CN110582018A (en) Video file processing method, related device and equipment
CN105187930A (en) Video live broadcasting-based interaction method and device
CN106303733B (en) Method and device for playing live special effect information
CN107193451B (en) Information display method and device, computer equipment and computer readable storage medium
CN108616771B (en) Video playing method and mobile terminal
CN110673770B (en) Message display method and terminal equipment
CN110868633A (en) Video processing method and electronic equipment
KR102186815B1 (en) Method, apparatus and recovering medium for clipping of contents
CN113936699B (en) Audio processing method, device, equipment and storage medium
CN110276723A (en) A kind of head portrait pendant generation method, device and relevant device
CN109800095B (en) Notification message processing method and mobile terminal
CN109104640B (en) Virtual gift presenting method and device and storage equipment
CN111128252B (en) Data processing method and related equipment
CN110958487B (en) Video playing progress positioning method and electronic equipment
CN111049977B (en) Alarm clock reminding method and electronic equipment
CN106408508A (en) Image deformation processing method and apparatus
CN108600823B (en) Video data processing method and mobile terminal
CN111399718B (en) Icon management method and electronic equipment

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211130

Address after: 31a, 15 / F, building 30, maple mall, bangrang Road, Brazil, Singapore

Patentee after: Baiguoyuan Technology (Singapore) Co.,Ltd.

Address before: Building B-1, North District, Wanda Commercial Plaza, Wanbo business district, No. 79, Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU BAIGUOYUAN INFORMATION TECHNOLOGY Co.,Ltd.