CN112929699B - Video processing method, device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN112929699B
CN112929699B
Authority
CN
China
Prior art keywords
picture
video
track
animation frame
motion
Prior art date
Legal status
Active
Application number
CN202110112741.9A
Other languages
Chinese (zh)
Other versions
CN112929699A
Inventor
康谋
王云刚
刘海东
刘灿尧
罗维飞
吴冠龙
Current Assignee
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202110112741.9A
Publication of CN112929699A
Application granted
Publication of CN112929699B
Legal status: Active

Classifications

    • H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/23424: Processing of video elementary streams involving splicing one content stream with another, e.g. for inserting or substituting an advertisement
    • H04N 21/42653: Internal components of the client for processing graphics
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another, e.g. for substituting a video clip
    • H04N 21/654: Transmission of management data by server directed to the client

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the invention provide a video processing method, a video processing device, electronic equipment and a readable storage medium, relating to the technical field of video processing. After at least one picture is added to a video, the motion trail of each picture is obtained and associated with the video, and the associated video and motion trails are sent to the user side. Based on the association between the video and the motion trails, the user can replace a person or a scene in the video with any picture in one step; the operation is simple and the cost is low.

Description

Video processing method, device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video processing method, apparatus, electronic device, and readable storage medium.
Background
In recent years, video has been widely used and spread as a carrier of information. When producing a video, users often like to replace people or certain scenes in it. Such replacement videos have a wide audience and allow rich creativity, but they are difficult and costly to produce, because the user has to adjust the video frame by frame.
Disclosure of Invention
Based on the above findings, the present invention provides a video processing method, apparatus, electronic device, and readable storage medium to address the above problems.
Embodiments of the invention may be implemented as follows:
in a first aspect, the present invention provides a video processing method, applied to a server, where the method includes:
adding at least one picture to the video to obtain a motion trail of each picture;
associating the motion trail of each picture with the video;
and sending the associated video and the motion trail to a user side.
In an optional embodiment, the step of adding at least one picture to the video and obtaining the motion trail of each picture includes:
responding to the operation of adding pictures, and creating at least one picture track, wherein each picture track corresponds to one picture;
adding pictures corresponding to the picture tracks into the video;
for each picture track, editing a picture corresponding to the picture track in at least one video frame of the video to obtain at least one animation frame corresponding to the picture track;
and aiming at each picture track, calculating the motion trail of the picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track.
In an optional embodiment, the step of calculating the motion trail of the picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track includes:
for each animation frame corresponding to the picture track, acquiring state information of a picture in the animation frame and state information of a picture in a previous animation frame adjacent to the animation frame;
generating a sub-motion track from the previous animation frame to the animation frame according to the state information of the picture in the animation frame and the state information of the picture in the previous animation frame;
and splicing all the sub-motion tracks according to the time sequence to obtain the motion track of the picture corresponding to the picture track.
In an alternative embodiment, the step of generating the sub-motion trail from the previous animation frame to the animation frame according to the state information of the picture in the animation frame and the state information of the picture in the previous animation frame includes:
calculating to obtain state change information of the picture in the animation frame and the picture in the previous animation frame according to the state information of the picture in the animation frame and the state information of the picture in the previous animation frame;
acquiring the number of video frames between the animation frame and the previous animation frame;
obtaining target state information of the picture in each video frame according to the number and the state change information;
and generating a sub-motion track from the previous animation frame to the animation frame according to the state information of the picture in the animation frame, the target state information of the picture in each video frame and the state information of the picture in the previous animation frame.
In an alternative embodiment, the state information of the picture includes position information, size information, and rotation angle; the state change information comprises position change information, size change information and angle change information;
the step of obtaining the target state information of the picture in each video frame according to the number and the state change information comprises the following steps:
according to the quantity, the position change information, the size change information and the angle change information, calculating to obtain average position change information, average size change information and average angle change information of the picture in each video frame;
and obtaining target state information of the picture in each video frame according to the average position change information, the average size change information and the average angle change information of the picture in each video frame.
In an optional embodiment, the step of associating the motion trail of each picture with the video includes:
judging, for each picture, whether another picture identical to it exists;
if an identical picture exists, associating the motion trails of the picture and of the identical picture to obtain a first track set;
if no identical picture exists, taking the motion trail of the picture as a second track set;
and associating each first track set and each second track set with the video.
In a second aspect, the present invention provides a video processing method, applied to a user terminal communicatively connected to a server terminal, where the user terminal stores a plurality of videos and a motion trail associated with each video; the method comprises the following steps:
responding to the replacement operation, and determining a target video and a motion trail associated with the target video;
and responding to a picture selection operation, and adding the selected picture into the target video according to the motion trail associated with the target video to obtain a replaced video.
In an alternative embodiment, before adding the selected picture to the target video according to the motion trail associated with the target video, the method further includes:
responding to the switching operation, and displaying the target animation frame of the target video on a preview interface;
displaying all the replaceable contents in the target animation frame; wherein each of the replaceable contents displayed in the target animation frame corresponds to a motion trail of a picture;
the step of adding the selected picture to the target video according to the motion trail associated with the target video comprises the following steps:
and for each replaceable content in the target animation frame, responding to the operation of a user on the replaceable content, adding the selected picture into the target video according to the motion trail corresponding to the replaceable content, and replacing all the replaceable content in the target video.
In an alternative embodiment, before adding the selected picture to the target video according to the motion trail associated with the target video, the method further includes:
responding to the switching operation, and displaying a preview interface, wherein the preview interface comprises an animation display area and a mark display area;
displaying a target animation frame of the target video in the animation display area, and marking and displaying all replaceable contents in the target animation frame; wherein, each mark of the replaceable content is displayed in the mark display area, and each mark corresponds to at least one motion track of a picture;
The step of adding the selected picture to the target video according to the motion trail associated with the target video comprises the following steps:
and for each mark in the mark display area, responding to the operation of a user on the mark, adding the selected picture into the target video according to the motion track corresponding to the mark, and replacing all the replaceable contents corresponding to the mark.
In a third aspect, the present invention provides a video processing apparatus, applied to a server, where the video processing apparatus includes a track processing module, a track association module, and a track transmission module;
the track processing module is used for adding at least one picture to the video and obtaining the motion track of each picture;
the track association module is used for associating the motion track of each picture with the video;
the track transmission module is used for transmitting the associated video and the motion track to the user side.
In a fourth aspect, the present invention provides a video processing apparatus, applied to a user terminal communicatively connected to a server terminal, where the user terminal stores a plurality of videos and motion trajectories associated with each video, and the video processing apparatus includes a video determining module and a picture replacing module;
The video determining module is used for determining a target video and a motion trail associated with the target video in response to the replacing operation;
the picture replacing module is used for responding to picture selecting operation, adding the selected picture into the target video according to the motion trail associated with the target video, and obtaining a replaced video.
In a fifth aspect, the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the video processing method applied to the server side or the video processing method applied to the user side according to any one of the foregoing embodiments.
In a sixth aspect, the present invention provides a readable storage medium having stored therein a computer program which, when executed, implements the video processing method applied to the server side or the video processing method applied to the user side according to any one of the foregoing embodiments.
According to the video processing method, device, electronic equipment and readable storage medium described above, after at least one picture is added to the video, the motion trail of each picture is obtained and associated with the video, and the associated video and motion trails are sent to the user side. Based on the association between the video and the motion trails, the user can replace a person or a scene in the video with any picture in one step; the operation is simple and the cost is low.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a server according to an embodiment of the present invention.
Fig. 2 is a flowchart of a video processing method according to an embodiment of the present invention.
Fig. 3 is a schematic flow chart of another video processing method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an interface at the server side according to an embodiment of the present invention.
Fig. 5 is a second schematic diagram of an interface at the server side according to an embodiment of the present invention.
Fig. 6 is a third schematic diagram of an interface at the server side according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a motion trajectory according to an embodiment of the present invention.
Fig. 8 is a schematic flow chart of a video processing method according to an embodiment of the invention.
Fig. 9 is a schematic diagram of an interface at the user side according to an embodiment of the present invention.
Fig. 10 is a second schematic diagram of an interface at the user side according to an embodiment of the present invention.
Fig. 11 is a third schematic diagram of an interface at the user side according to an embodiment of the present invention.
Fig. 12 is a fourth schematic diagram of an interface at the user side according to an embodiment of the present invention.
Fig. 13 is a schematic diagram of another motion trajectory according to an embodiment of the present invention.
Fig. 14 is a fifth schematic diagram of an interface at the user side according to an embodiment of the present invention.
Fig. 15 is a block diagram of a first video processing apparatus according to an embodiment of the present invention.
Reference numerals: 100 - server side; 10 - first video processing apparatus; 11 - track processing module; 12 - track association module; 13 - track transmission module; 20 - memory; 30 - processor; 40 - input/output unit; 50 - display unit; 60 - communication unit; 70 - peripheral interface.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that terms such as "upper", "lower", "inner" and "outer", if used, indicate orientations or positional relationships based on those shown in the drawings, or the orientation or position in which the inventive product is conventionally placed in use. They are used only for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation; therefore, they should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
In recent years, short videos have been widely used and spread as carriers of information. When producing a short video, users often like to replace people or certain scenes in the video; for example, covering a face in the video with another picture achieves the effect of changing one face into another, and covering any prop, scene or other element in the video with a picture achieves a general replacement effect.
Replacement videos have a wide audience and allow rich creativity, but they are very difficult to produce. The traditional production method requires the user to adjust the video frame by frame, replacing the person or scene in every frame with the desired picture so that the replacement effect appears during playback. The cost of producing such videos is therefore high, which discourages ordinary users from producing this kind of content.
There are also tools on the market that automatically replace faces based on artificial intelligence (AI) face recognition, but they can only replace faces and cannot replace props, animals or other scene elements.
Based on the above analysis, this embodiment provides a video processing method, apparatus, electronic device and readable storage medium. After at least one picture is added to a video, the motion trail of each picture is obtained and associated with the video, and the associated video and motion trails are sent to the user side. Based on the association between the video and the motion trails, the user can replace a person or a scene in the video with any picture in one step; the operation is simple and the cost is low.
Referring to fig. 1, the present embodiment provides a video processing method applied to the server side 100 shown in fig. 1. The server side 100 provided in this embodiment may be a single physical server, or a server group composed of a plurality of physical servers that perform different data processing functions.
As shown in fig. 1, in the present embodiment, the server 100 may include a first video processing apparatus 10, a memory 20, a processor 30, a communication unit 60, an input-output unit 40, a display unit 50, and a peripheral interface 70.
The memory 20, the processor 30, the input/output unit 40, the display unit 50, the communication unit 60 and the peripheral interface 70 are electrically connected to each other, directly or indirectly, to enable data transmission or interaction; for example, these components may be connected to each other through one or more communication buses or signal lines. The memory 20 stores the first video processing apparatus 10, which includes at least one software functional module that may be stored in the memory 20 in the form of software or firmware. The processor 30 executes various functional applications and data processing by running the software programs and modules stored in the memory 20 (for example, the first video processing apparatus 10 in the embodiment of the present invention), thereby implementing the video processing method in the embodiment of the present invention.
The memory 20 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like. The memory is used to store a program, and the processor executes the program after receiving an execution instruction.
The processor 30 may be an integrated circuit chip having data processing capabilities. The processor 30 may be a general-purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The communication unit 60 is configured to establish a communication connection between the server side 100 and the user side through a network, so as to implement data transmission and reception. The network may be, but is not limited to, a wired network or a wireless network.
The input/output unit 40 is configured to receive data input by a user, so as to implement interaction between an operator and the server side 100. The input/output unit 40 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 50 provides an interactive interface (e.g., a user operation interface) between the server side 100 and its operator, and is used for displaying video information. In this embodiment, the display unit 50 may be a liquid crystal display or a touch display. In the case of a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, which means that the touch display can sense touch operations generated at one or more locations on it and pass the sensed touch operations to the processor 30 for computation and processing.
The peripheral interface 70 couples various input/output devices, such as the input output unit 40 and the display unit 50, to the processor 30 and the memory 20. In some embodiments, the peripheral interface 70, the processor 30, and the memory 20 may be implemented in a single chip. In other examples, they may be implemented by separate chips.
It is understood that the configuration shown in fig. 1 is merely illustrative, and that the server 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2 in combination with the implementation architecture of fig. 1, fig. 2 is a flowchart of a video processing method according to the present embodiment, where the method is performed by the server 100 shown in fig. 1, and the flowchart shown in fig. 2 is described in detail below.
Step S10: and adding at least one picture to the video, and obtaining the motion trail of each picture.
For any video, any picture can be added to the video, and the added picture is used for covering the content in the video, i.e. the content in the video is replaced by the added picture.
Step S20: and correlating the motion trail of each picture with the video.
Step S30: and sending the associated video and the motion trail to a user side.
According to the video processing method provided by this embodiment, after at least one picture is added to the video, the motion trail of each picture is obtained and associated with the video, and the associated video and motion trails are sent to the user side. Based on the association between the video and the motion trails, the user can replace a person or a scene in the video with any picture in one step; the operation is simple and the cost is low.
In this embodiment, the motion trail of the added picture characterizes the change of the state information such as position, size, angle, etc. generated when the added picture is played with the video. In order to facilitate obtaining the motion trail of each picture, referring to fig. 3, at least one picture is added to the video, and the step of obtaining the motion trail of each picture may include:
step S11: and responding to the operation of adding the pictures, and creating at least one picture track, wherein each picture track corresponds to one picture.
Step S12: and adding the pictures corresponding to the picture tracks into the video.
Step S13: and editing a picture corresponding to each picture track in at least one video frame of the video to obtain at least one animation frame corresponding to the picture track.
Step S14: and aiming at each picture track, calculating the motion trail of the picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track.
For any video, when the server side plays the video, a button for adding a picture is displayed on the playing interface of the video. When an operator at the server side needs to replace certain content in the video and therefore needs to add a picture, the operator can click this button. After the operator clicks the button, the server side responds to the picture-adding operation by creating a picture track. In this embodiment, each picture track corresponds to one picture; if multiple pictures need to be added, the button can be clicked multiple times to create multiple picture tracks. After a picture track has been created, the picture corresponding to the picture track can be added to each frame of the video and displayed, as shown in fig. 4, which shows one frame of the video.
It should be noted that, in this embodiment, the picture track and the video track of the video share the same time axis; that is, the time points of the video frames contained in the video track coincide with the time points of the picture track. The display start time point and the display end time point of the corresponding picture in the video can therefore be obtained by adjusting the start time point and the end time point of the picture track. For example, if the start time point of a picture track is A1 and its end time point is A2, the picture corresponding to that picture track starts being displayed in the video at time point A1 and stops being displayed at time point A2.
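By way of illustration only, and not as part of the patented implementation, the shared time axis described above could be modelled as in the following Python sketch; the names PictureTrack, start_s, end_s and fps are assumptions introduced here.

```python
# Illustrative sketch only: assumes the picture track and the video track share
# one time axis measured in seconds.
from dataclasses import dataclass

@dataclass
class PictureTrack:
    picture_path: str
    start_s: float  # display start time point (A1) on the shared time axis
    end_s: float    # display end time point (A2) on the shared time axis

def display_frame_range(track: PictureTrack, fps: float) -> range:
    """Video frame indices in which the track's picture is displayed."""
    first = int(round(track.start_s * fps))
    last = int(round(track.end_s * fps))
    return range(first, last + 1)

# Example: a picture shown from 2.0 s to 5.0 s in a 30 fps video
track = PictureTrack("star.png", 2.0, 5.0)
print(min(display_frame_range(track, 30)), max(display_frame_range(track, 30)))  # 60 150
```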
It will be appreciated that the start time point and the end time point of the picture track only represent the display start time point and the display end time point of the picture in the video, which may be the same as the start playing time point of the video and the end playing time point of the video, or may be any time point in the playing process of the video.
Alternatively, in the present embodiment, the start time point and the end time point of the picture track may be adjusted by sliding and/or clipping the picture track. As shown in fig. 5, the two ends of the picture track have clipping regions, and an operator can adjust the start time point and the end time point of the picture track by pulling the clipping regions to stretch or shorten the length of the picture track. Optionally, the starting time point and the ending time point of the picture track can be adjusted by sliding the picture track left and right.
It can be appreciated that, after the picture track is created, the picture is displayed in the video in its initial state. Because the scene in the video changes dynamically, a picture that always keeps its initial state may fail to cover the content to be replaced in the video. The added picture therefore needs to be edited, and the motion trail of the picture is obtained based on the state information of the edited picture.
In this embodiment, since each picture track corresponds to one picture, when editing the picture corresponding to each picture track, the editing mode may be entered by clicking the picture track. After entering the edit mode, the added picture can be edited in any video frame.
In one embodiment, after the operator finishes editing the picture added in the current video frame, the operator can arbitrarily enter the next video frame by moving the time axis of the video track, and edit the picture added in the next video frame.
Alternatively, in the present embodiment, editing of the added picture may be editing operations such as adding, deleting, and modifying, where modifying may be modifying state information such as a position, a size, a rotation angle, and the like of the picture.
In a specific embodiment, as shown in fig. 6, which is a schematic diagram of the picture editing interface, the interface includes buttons for adding an animation frame, removing an animation frame, adding a picture, and deleting a picture. When the picture in a video frame needs to be deleted, the button for removing the animation frame can be clicked, so that the picture in that video frame is deleted. After the picture has been deleted, if a picture needs to be added to the video frame again, the button for adding an animation frame can be clicked. When a new picture needs to be added, the button for adding a picture can be clicked, which creates a new picture track and adds the new picture to the video. When the current picture track and the picture it added to the video need to be deleted, the delete button can be clicked, so that the current picture track and its corresponding picture are removed. When the state information of the picture in a video frame needs to be modified, this can be done by adjusting the editing frame corresponding to the picture: dragging the editing frame modifies the picture's position, scaling it modifies the picture's size, rotating it modifies the picture's rotation angle, and mirroring it mirrors the picture.
In this embodiment, for each video frame, when a picture in the video frame is edited, the state information of the picture changes, so as to trigger generation of an animation frame. Therefore, for each picture track, after editing the picture corresponding to the picture track in the video frame of the video, the animation frame corresponding to the picture track can be obtained.
For example, for a certain picture track, after the picture corresponding to the picture track has been added to every video frame of the video, editing the picture in the first video frame triggers the generation of an animation frame from the first video frame, and then editing the picture in the third video frame triggers the generation of another animation frame from the third video frame. In this way, all the animation frames corresponding to the picture track are obtained.
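Purely as an illustrative sketch (the class and field names below are assumptions, not structures defined by the patent), the data recorded by such an animation frame might look like this:

```python
from dataclasses import dataclass

@dataclass
class PictureState:
    x: float            # position information
    y: float
    width: float        # size information
    height: float
    angle_deg: float    # rotation angle
    mirrored: bool = False

@dataclass
class AnimationFrame:
    frame_index: int    # index of the video frame whose edit triggered this animation frame
    state: PictureState

# Animation frames accumulated for one picture track after editing frames 0 and 12
keyframes = [
    AnimationFrame(0, PictureState(100.0, 80.0, 64.0, 64.0, 0.0)),
    AnimationFrame(12, PictureState(180.0, 80.0, 96.0, 96.0, 15.0)),
]
```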
When the added pictures in the video frames are edited, the state information of the pictures is changed, and the animation frames are triggered to be generated. Therefore, for each picture track, the motion track of the picture corresponding to the picture track can be calculated according to the state information of the picture in the animation frame corresponding to the picture track, namely, the change of the state information such as the position, the size, the angle and the like generated when the corresponding picture is played along with the video is obtained.
In an optional embodiment, the step of calculating the motion trail of the picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track includes:
and acquiring state information of a picture in the animation frame and state information of a picture in a previous animation frame adjacent to the animation frame aiming at each animation frame corresponding to the picture track.
And generating a sub-motion track from the last animation frame to the animation frame according to the state information of the picture in the animation frame and the state information of the picture in the last animation frame.
And splicing all the sub-motion tracks according to the time sequence to obtain the motion track of the picture corresponding to the picture track.
In this embodiment, the state information of the picture includes position information, size information and rotation angle, and can be directly read from the animation frame.
After the state information of the picture in the animation frame and the state information of the picture in the previous animation frame adjacent to the animation frame are acquired for each animation frame corresponding to each picture track, the sub-motion track from the previous animation frame to the animation frame can be generated according to the state information of the picture in the previous animation frame adjacent to the animation frame and the state information of the picture in the animation frame.
As shown in fig. 7, two adjacent animation frames are shown, each containing a star picture and a triangle picture; the star picture and the triangle picture correspond to two different picture tracks. In the first animation frame (the left one), the position information of the star picture is X1 and its state is A1, and the position information of the triangle picture is Y1 and its state is B1; in the second animation frame (the right one), the position information of the star picture is X2 and its state is A2, and the position information of the triangle picture is Y2 and its state is B2.
The star picture in the second animation frame can be obtained from the star picture in the first animation frame by stretching it horizontally, enlarging it and translating it to the right. The sub-motion track of the star picture between the two animation frames therefore runs from X1 to X2 and from A1 to A2, and the motion effect during video playback is horizontal stretching, rightward translation and enlargement. The triangle picture in the second animation frame can be obtained from the triangle picture in the first animation frame by rotating it clockwise, translating it towards the upper left and shrinking it. The sub-motion track of the triangle picture between the two animation frames therefore runs from Y1 to Y2 and from B1 to B2, and the motion effect during video playback is clockwise rotation, upper-left translation and shrinking.
In order to reduce the workload and save cost, not every video frame needs to be edited in this embodiment, so there may be unedited video frames between adjacent animation frames. To improve the accuracy of the motion trail, in this embodiment the step of generating the sub-motion track from the previous animation frame to the animation frame according to the state information of the picture in the animation frame and the state information of the picture in the previous animation frame may include:
and calculating the state change information of the picture in the animation frame and the picture in the previous animation frame according to the state information of the picture in the animation frame and the state information of the picture in the previous animation frame.
The number of video frames between the animation frame and the last animation frame is obtained.
And obtaining the target state information of the picture in each video frame according to the quantity and the state change information.
And generating a sub-motion track from the last animation frame to the animation frame according to the state information of the picture in the animation frame, the target state information of the picture in each video frame and the state information of the picture in the last animation frame.
After the state change information between the picture in the animation frame and the picture in the previous animation frame has been calculated and the number of video frames between the two animation frames has been acquired, the target state information of the picture in each of those video frames is obtained according to that number and the state change information.
In this embodiment, the state information of the picture includes position information, size information and rotation angle, so the state change information of the picture includes position change information, size change information and angle change information, and the step of obtaining the target state information of the picture in each video frame according to the number and the state change information may include:
according to the quantity, the position change information, the size change information and the angle change information, calculating to obtain average position change information, average size change information and average angle change information of the picture in each video frame.
And obtaining the target state information of the picture in each video frame according to the average position change information, the average size change information and the average angle change information of the picture in each video frame.
The position, the size and the rotation angle all change linearly. Therefore, after the number of video frames and the position change information, size change information and angle change information have been obtained, the average position change information, average size change information and average angle change information of the picture in each video frame can be calculated from them. The target state information of the picture in each video frame is then obtained from the average position change information, average size change information and average angle change information of the picture in each video frame.
After the target state information of the picture in each video frame is obtained, the sub-motion track from the last animation frame to the current animation frame can be generated according to the state information of the picture in the current animation frame, the target state information of the picture in each video frame and the state information of the picture in the last animation frame.
For example, suppose there are 3 unedited video frames a1, a2 and a3 between two adjacent animation frames A and B, and the picture has position X1, size Y1 and rotation angle Z1 in animation frame A and position X2, size Y2 and rotation angle Z2 in animation frame B. The position change information is X = X2 - X1, the size change information is Y = Y2 - Y1, and the angle change information is Z = Z2 - Z1, so the average position change, average size change and average angle change of the picture in each video frame are X/4, Y/4 and Z/4 respectively. The target state of the picture is then (X1 + X/4, Y1 + Y/4, Z1 + Z/4) in frame a1, (X1 + 2X/4, Y1 + 2Y/4, Z1 + 2Z/4) in frame a2, and (X1 + 3X/4, Y1 + 3Y/4, Z1 + 3Z/4) in frame a3. The motion trail of the picture between animation frame A and animation frame B is therefore:
the positions are: X1, X1 + X/4, X1 + 2X/4, X1 + 3X/4, X1 + X (i.e. X2);
the sizes are: Y1, Y1 + Y/4, Y1 + 2Y/4, Y1 + 3Y/4, Y1 + Y (i.e. Y2);
the rotation angles are: Z1, Z1 + Z/4, Z1 + 2Z/4, Z1 + 3Z/4, Z1 + Z (i.e. Z2).
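The linear interpolation illustrated by this example can be sketched as follows; this is a simplified illustration that assumes a state is a plain (position, size, angle) tuple, and the concrete numbers are invented to mirror the example above.

```python
def interpolate_states(state_a, state_b, unedited_frames):
    """Target states of the picture in the unedited video frames between two animation frames."""
    steps = unedited_frames + 1                      # 3 unedited frames -> averages are change/4
    deltas = [(b - a) / steps for a, b in zip(state_a, state_b)]
    return [tuple(a + d * k for a, d in zip(state_a, deltas))
            for k in range(1, steps)]                # states for a1, a2, a3 only

# Assumed numbers: position X1=0 -> X2=40, size Y1=10 -> Y2=18, angle Z1=0 -> Z2=8
print(interpolate_states((0, 10, 0), (40, 18, 8), 3))
# [(10.0, 12.0, 2.0), (20.0, 14.0, 4.0), (30.0, 16.0, 6.0)]
```

Under these assumed numbers, the printed states follow the X1 + kX/4 pattern listed above.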
It should be noted that, in this embodiment, the state information of the picture may further include a mirror state. A change of the mirror state is instantaneous, so no per-video-frame change information needs to be calculated for it. For example, if the picture is mirrored only in animation frame B between animation frames A and B, the change information of the video frames between A and B does not need to be calculated; the picture is simply mirrored at the moment of animation frame B, that is, the mirroring of the picture occurs at the moment of animation frame B.
According to the video processing method provided by this embodiment, for each animation frame, the state change information between the picture in the animation frame and the picture in the previous animation frame is calculated, together with the number of video frames between the two animation frames; the sub-motion track from the previous animation frame to the animation frame is then generated according to the state information of the picture in the animation frame, the target state information of the picture in each intermediate video frame, and the state information of the picture in the previous animation frame. The picture therefore does not need to be edited frame by frame, which greatly reduces the workload while improving the accuracy of the motion-track calculation.
For each picture track, after the sub-motion tracks between all of the animation frames corresponding to the picture track have been calculated in the above way, all the sub-motion tracks are spliced in time order to obtain the motion track of the picture corresponding to the picture track. For example, suppose a certain picture track has animation frame 1, animation frame 2 and animation frame 3, arranged in time order; that is, the playing time point of animation frame 1 is earlier than that of animation frame 2, which in turn is earlier than that of animation frame 3. If the sub-motion track from animation frame 1 to animation frame 2 is M1 and the sub-motion track from animation frame 2 to animation frame 3 is M2, the motion track of the picture corresponding to the picture track is M1-M2.
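A minimal sketch of this splicing step is shown below; it reuses the animation-frame shape assumed earlier, and the dictionary of sub-tracks keyed by frame-index pairs is likewise an assumption made for illustration.

```python
def build_motion_trail(keyframes, sub_tracks):
    """keyframes: animation frames of one picture track, each with .frame_index and .state;
    sub_tracks: dict mapping (prev_frame_index, next_frame_index) -> interpolated states."""
    ordered = sorted(keyframes, key=lambda kf: kf.frame_index)   # time order: 1, 2, 3, ...
    trail = [ordered[0].state]
    for prev_kf, next_kf in zip(ordered, ordered[1:]):
        trail.extend(sub_tracks[(prev_kf.frame_index, next_kf.frame_index)])  # e.g. M1, then M2
        trail.append(next_kf.state)
    return trail
```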
After the motion track of the picture corresponding to each picture track has been obtained, the motion tracks are exported and associated with the video, and the associated video and motion tracks are sent to the user side. Based on the association between the video and the motion tracks, the user side can select any picture and add the selected picture to the video according to the motion track associated with the video, thereby replacing a person or a scene in the video; the operation is simple and the cost is low.
It should be noted that, in this embodiment, each picture has a motion track. If N pictures are used in the video editing process, N motion tracks are generated and associated with the video, and the associated video and motion tracks are sent to the user side; the user side then needs N pictures to completely replace the content of the video.
In order to improve the editability of the operation, in this embodiment, the step of associating the motion trail of each picture with the video may include:
for each picture, it is determined whether there is a picture identical to the picture.
And if the picture is the same as the picture, correlating the picture with the motion track of the picture which is the same as the picture to obtain a first track set.
And if the picture which is the same as the picture is not provided, taking the motion track of the picture as a second track set.
Each first track set and each second track set are associated with a video.
When the motion track of each picture is exported, it can first be determined, for each picture, whether an identical picture exists. If an identical picture exists, the motion tracks of the picture and of the identical picture are associated to obtain a first track set; if no identical picture exists, the motion track of the picture is taken as a second track set. In this way, the motion tracks of identical pictures are associated, yielding one set of motion tracks for each distinct picture.
For example, suppose that in a certain video a person A and a person B are both replaced with the same picture a, where the picture a corresponding to person A has motion track a1 and the picture a corresponding to person B has motion track a2. Associating motion track a1 with motion track a2 yields the first track set of picture a, which contains motion track a1 and motion track a2. Suppose further that a person C in the video is replaced on its own with a picture c; picture c has only motion track c1, so motion track c1 is the second track set of picture c, and that set contains only motion track c1.
Alternatively, the same identifier may be used to associate the motion trail of the same picture, that is, the motion trail of the same picture may be associated with the same identifier, and the associated identifier may be used as the set name.
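As an illustrative sketch of this association step (grouping by a content hash of the picture file is an assumption made here, not a requirement of the patent), the first and second track sets could be built as follows:

```python
import hashlib
from collections import defaultdict

def group_trails(picture_trails):
    """picture_trails: list of (picture_bytes, motion_trail) pairs produced during editing."""
    groups = defaultdict(list)
    for picture_bytes, trail in picture_trails:
        key = hashlib.sha256(picture_bytes).hexdigest()  # identical pictures share an identifier
        groups[key].append(trail)
    first_track_sets = {k: v for k, v in groups.items() if len(v) > 1}   # e.g. {a: [a1, a2]}
    second_track_sets = {k: v for k, v in groups.items() if len(v) == 1} # e.g. {c: [c1]}
    return first_track_sets, second_track_sets
```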
According to the video processing method provided by this embodiment, the motion tracks of identical pictures are associated, so that when the associated video and motion tracks have been sent to the user side and the user side performs a replacement, all the associated motion tracks can be replaced with a single picture, which greatly improves the convenience and efficiency of the operation.
It should be noted that, because the motion tracks of identical pictures are associated in this embodiment, the number of pictures the user side needs when replacing the video depends on the number of distinct pictures used by the server side during video editing. If M distinct pictures are used in the video editing process (the total number of picture instances actually used may be greater than or equal to M), the user side can completely replace the content of the video with only M pictures. The number of required pictures is therefore reduced, and the operation becomes more convenient.
According to the video processing method provided by this embodiment, after the pictures have been added to the video, the motion trail of each picture is obtained and associated with the video, and the associated video and motion trails are sent to the user side. Based on the association between the video and the motion trails, a user can replace a person or a scene in the video with any picture in one step. The operation is simple, the server side only needs to edit the video once to benefit thousands of users, and a wide variety of creative results can be produced quickly through one-step replacement, so the cost is low and the effect is significant.
Based on the same inventive concept, the present embodiment further provides a video processing method applied to a user side communicatively connected to the server side. The user side may be, but is not limited to, an electronic device such as a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
In this embodiment, the user side stores a plurality of videos sent by the server side and a motion track associated with each video. Based on the multiple videos sent by the server side and the motion trail associated with each video, the user side can implement the video processing method shown in fig. 8, and the flow chart of the video processing method shown in fig. 8 is described in detail below.
Step S1: and responding to the replacement operation, and determining the target video and the motion trail associated with the target video.
Step S2: and responding to the picture selection operation, adding the selected picture into the target video according to the motion trail associated with the target video, and obtaining the replaced video.
When the user side plays a video, if the video contains any replaceable material, a replace button is displayed on the video interface. If the user wants to perform a replacement, the user can click the replace button; after the user clicks it, the user side responds to the replacement operation by displaying all the video clips that currently support replacement. Optionally, a thumbnail of the first frame of each video clip and the corresponding video duration may be displayed for the user to view.
If the user wants to replace a certain video clip, the user can click that video clip. In response to this operation, the user side takes the clicked video clip as the target video and determines the motion trail associated with the target video.
In an alternative embodiment, when the user side plays a video that supports replacement, a picture track may be displayed on the video interface. If the user wants to perform a replacement, the user can click the picture track; after the user clicks it, the user side responds to this operation by taking the video as the target video and reading the motion trail associated with the target video.
After the target video and the motion trail associated with the target video have been determined, the user side can enter a video editing interface, as shown in fig. 10. The video editing interface includes a picture display area and a picture upload button. The picture display area is used to display pictures preconfigured on the user side, from which the user can select a picture to replace content in the video; the picture upload button is used to upload a user-defined picture, and the content in the video is then replaced according to the user-defined picture uploaded by the user.
When the user selects a picture from the picture display area, the user can click the desired picture; in response to the picture selection operation, the selected picture is added to the target video according to the motion trail associated with the target video, and the replaced video is obtained. When the user chooses to upload a custom picture, the user can click the picture upload button to upload the selected picture; the user side then responds to the picture selection operation by adding the uploaded picture to the target video according to the motion trail associated with the target video, likewise obtaining the replaced video.
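A minimal client-side sketch of applying the selected picture along a motion trail is shown below; it assumes the video frames have already been decoded as PIL images and that the trail provides one (x, y, width, height, angle) state per frame, both of which are simplifying assumptions rather than details from the patent.

```python
from PIL import Image

def apply_replacement(frames, picture_path, trail):
    """frames: decoded video frames as PIL images; trail: one (x, y, w, h, angle) per frame."""
    picture = Image.open(picture_path).convert("RGBA")
    replaced = []
    for frame, (x, y, w, h, angle) in zip(frames, trail):
        overlay = picture.resize((int(w), int(h))).rotate(angle, expand=True)
        composed = frame.convert("RGBA")
        composed.paste(overlay, (int(x), int(y)), overlay)  # cover the replaceable content
        replaced.append(composed)
    return replaced
```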
According to the video processing method provided by this embodiment, because the video is associated with the motion trails, the user can replace a person or a scene in the video with any picture in one step according to the association between the video and the motion trails; the operation is simple and the cost is low.
In practical applications, each video may be replaced with a plurality of pictures, that is, each video may be associated with a plurality of motion trails. To make operation convenient for the user, in this embodiment, before the selected picture is added to the target video according to the motion trails associated with the target video, the method may further include:
in response to a switching operation, displaying the target animation frame of the target video on a preview interface.
Displaying all the replaceable contents in the target animation frame; wherein each of the displayed replaceable contents in the target animation frame corresponds to a motion trail of one picture.
The step of adding the selected picture to the target video according to the motion trail associated with the target video comprises the following steps:
For each replaceable content in the target animation frame, in response to the user's operation on the replaceable content, adding the selected picture to the target video according to the motion trail corresponding to the replaceable content, thereby replacing all of that replaceable content in the target video.
In a specific embodiment, after the target video and its associated motion trail are determined, the user side may enter a video editing interface, as shown in fig. 11. The video editing interface includes a switch button; when the user clicks the switch button, the user side responds to the switching operation by displaying the target animation frame of the target video on the preview interface and displaying all the replaceable contents in the target animation frame. In this embodiment, to facilitate the user's operation, the preview interface may further include an animation display area and a picture upload button, as shown in fig. 11.
Alternatively, in this embodiment, the target animation frame may be the first or the last animation frame of the target video; this may be set according to the actual situation and is not specifically limited here. After the target animation frame of the target video is displayed on the preview interface, the remaining animation frames can be displayed in sequence on the preview interface by sliding the display interface.
For each replaceable content in the animation frame displayed on the preview interface, the server side has added a picture in advance to replace that replaceable content in the video, calculated the motion trail of that picture, and associated the motion trail with the video; the server side then sends the associated video and motion trail to the user side, so the user side holds the motion trail of the picture corresponding to each replaceable content. Therefore, in this embodiment, each replaceable content displayed in the animation frame on the preview interface corresponds to the motion trail of one picture.
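As a rough sketch of how the server side could derive such a motion trail from the edited animation frames, following the per-video-frame averaging of position, size and angle change described in claims 2 to 4 below; the function names and the linear spread of the change between keyframes are assumptions, not the patent's actual implementation.

```typescript
// State of a picture in one frame: position, size and rotation angle.
interface PictureState {
  frameIndex: number;
  x: number;
  y: number;
  width: number;
  height: number;
  angle: number;
}

// For two adjacent edited animation frames (keyframes), spread the state
// change evenly over the video frames between them, matching the averaging
// of position, size and angle change over the frame count in claims 3 and 4.
function subTrail(prev: PictureState, next: PictureState): PictureState[] {
  const gap = next.frameIndex - prev.frameIndex;
  const points: PictureState[] = [];
  for (let i = 1; i <= gap; i++) {
    const t = i / gap;
    points.push({
      frameIndex: prev.frameIndex + i,
      x: prev.x + (next.x - prev.x) * t,
      y: prev.y + (next.y - prev.y) * t,
      width: prev.width + (next.width - prev.width) * t,
      height: prev.height + (next.height - prev.height) * t,
      angle: prev.angle + (next.angle - prev.angle) * t,
    });
  }
  return points;
}

// Splice all sub-trails in time order to obtain the full motion trail (claim 2).
function buildMotionTrail(keyframes: PictureState[]): PictureState[] {
  if (keyframes.length === 0) return [];
  const sorted = [...keyframes].sort((a, b) => a.frameIndex - b.frameIndex);
  const trail: PictureState[] = [sorted[0]];
  for (let i = 1; i < sorted.length; i++) {
    trail.push(...subTrail(sorted[i - 1], sorted[i]));
  }
  return trail;
}
```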
It will be appreciated that the motion trail corresponding to each replaceable content characterizes the state information of that replaceable content in different video frames; fig. 11 only shows its state information in the currently displayed animation frame. Therefore, after all the replaceable contents in the target animation frame are displayed, for each replaceable content in the target animation frame, in response to the user's operation on that replaceable content, the selected picture is added to the target video according to the motion trail corresponding to that replaceable content, that is, all occurrences of that replaceable content in the target video are replaced.
For example, as shown in fig. 11, for each replaceable content in the displayed animation frame, the user may drag the selected picture onto that replaceable content; in response to this operation, the user side adds the selected picture to the target video along the motion trail corresponding to that replaceable content, replacing all of that replaceable content in the target video.
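What "adding the selected picture according to the motion trail" could look like at rendering time is sketched below, assuming a Canvas-style 2D drawing context; the compositing approach is an assumption, since the patent does not specify a rendering pipeline.

```typescript
// One trail point per video frame: centre position, size and rotation angle.
interface TrailPoint {
  frameIndex: number;
  x: number;
  y: number;
  width: number;
  height: number;
  angle: number; // rotation in degrees
}

// Draw the selected picture onto one video frame at the state the trail
// prescribes for that frame; if the trail has no point for the frame,
// the picture is simply not visible there.
function drawPictureOnFrame(
  ctx: CanvasRenderingContext2D,
  picture: CanvasImageSource,
  trail: TrailPoint[],
  frameIndex: number,
): void {
  const p = trail.find(pt => pt.frameIndex === frameIndex);
  if (!p) return;
  ctx.save();
  ctx.translate(p.x, p.y);
  ctx.rotate((p.angle * Math.PI) / 180);
  ctx.drawImage(picture, -p.width / 2, -p.height / 2, p.width, p.height);
  ctx.restore();
}
```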
To facilitate user operation and implement one-key replacement, in an alternative embodiment, before the selected picture is added to the target video according to the motion trail associated with the target video, the method further includes:
in response to the switching operation, displaying a preview interface, where the preview interface includes an animation display area and a mark display area.
Displaying the target animation frame of the target video in the animation display area, and displaying all the replaceable contents in the target animation frame with marks; the mark of each replaceable content is shown in the mark display area, and each mark corresponds to at least one motion trail of a picture.
The step of adding the selected picture to the target video according to the motion trail associated with the target video comprises the following steps:
For each mark in the mark display area, in response to the user's operation on the mark, adding the selected picture to the target video according to the motion trails corresponding to the mark, and replacing all the replaceable contents corresponding to the mark.
After the user side enters the video editing interface and the user clicks the switch button, the user side responds to the switching operation by displaying the preview interface, displaying the target animation frame of the target video in the animation display area of the preview interface, and displaying all the replaceable contents in the target animation frame with marks.
In this embodiment, each replaceable content corresponds to the motion trail of one picture, and different replaceable contents may be replaced with the same picture. The motion trails of all replaceable contents that may be replaced with the same picture have been associated in advance, so all such replaceable contents can be displayed with the same mark; accordingly, in this embodiment, each mark corresponds to at least one motion trail of a picture.
After each replaceable content is displayed with its mark, the mark of each replaceable content is shown in the mark display area. As shown in fig. 12, mark 1 in the mark display area corresponds to the replaceable content marked 1 in the animation frame, and mark 2 corresponds to the replaceable content marked 2 in the animation frame.
For each mark in the mark display area, when the user wants to replace the replaceable content corresponding to that mark, the user can drag the selected picture onto the mark, or select a picture and then click the mark. In response to the user's operation on the mark, the user side adds the selected picture to the target video according to the motion trails corresponding to the mark and replaces all the replaceable contents corresponding to that mark in the target video, thereby achieving one-key replacement.
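A minimal sketch of this one-key replacement by mark, assuming a mark is simply a grouping of the motion trails of all replaceable contents that share the same picture; all names here are hypothetical.

```typescript
interface Mark {
  id: string;          // e.g. "1", "2" as shown in the mark display area
  trailIds: string[];  // every motion trail this mark groups together
}

interface Replacement {
  trailId: string;
  pictureUrl: string;
}

// One-key replacement: dropping (or clicking) a picture on a mark replaces
// every replaceable content whose motion trail the mark groups.
function replaceByMark(mark: Mark, pictureUrl: string): Replacement[] {
  return mark.trailIds.map(trailId => ({ trailId, pictureUrl }));
}
```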
In this embodiment, among all the replaceable contents in a video, the user may choose to replace only some of them or to replace all of them; this embodiment is not specifically limited in this respect.
A specific application scenario is described below with reference to fig. 13 and fig. 14, where fig. 13 is a schematic diagram of the motion trails of the pictures and fig. 14 is a schematic diagram of video playback on the user side. Fig. 13 includes 4 pictures corresponding to 5 motion trails: the square picture corresponds to motion trail a, the circular picture to motion trail b, the star picture to motion trails c and e, and the triangle picture to motion trail d. The pictures first appear at 3 moments: the triangle picture first appears at the 10th second, the square picture first appears at the 12th second, and the circular and star pictures first appear at the 20th second; the square and triangle pictures appear together at the 12th second, and the circular and star pictures appear together at the 20th second. For the user side, therefore, 4 pictures need to be replaced: at the 10th second the triangle picture appears in the video as in fig. 14 (a), at the 12th second the square and triangle pictures appear as in fig. 14 (b), and at the 20th second the circular and star pictures appear as in fig. 14 (c).
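For reference, the fig. 13 example could be captured as a small table of pictures, trails and first-appearance times. The times come from the description above; the identifiers and the assumption that trail e also starts at the 20th second (only the star picture's first appearance is stated) are illustrative.

```typescript
// The fig. 13 example: four pictures, five motion trails.
const fig13Trails = [
  { picture: "square",   trail: "a", firstAppearsAtSec: 12 },
  { picture: "circle",   trail: "b", firstAppearsAtSec: 20 },
  { picture: "star",     trail: "c", firstAppearsAtSec: 20 },
  { picture: "triangle", trail: "d", firstAppearsAtSec: 10 },
  { picture: "star",     trail: "e", firstAppearsAtSec: 20 }, // assumed start
];
```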
In practical applications, the picture selected by the user may be too large to be added to the video normally. To prevent an oversized picture from affecting the replacement, in this embodiment, for each replaceable content, after the user selects a picture to replace that replaceable content, the selected picture can be previewed in the animation frame. If the picture is too large, that is, it exceeds the picture-size threshold included in the motion trail corresponding to that replaceable content, the user is prompted to scale the previewed picture. When the user scales the picture, the user side responds to the scaling operation by scaling the picture into a set range, which can be determined from the picture size included in the motion trail corresponding to the replaceable content. For example, if the picture size in the motion trail is 100×100, the set range may be 90×90 to 120×120; the specific values can be set according to the actual situation and are not limited in this embodiment.
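A sketch of scaling the selected picture into the set range, using the 100×100 trail size and the upper bound of 120×120 from the example above; only the oversize case described in this paragraph is handled, and uniform scaling that preserves the aspect ratio is an assumption, since the description only requires the result to fall within the set range.

```typescript
interface Size {
  width: number;
  height: number;
}

// If the selected picture exceeds the upper bound of the set range derived
// from the trail's picture size, scale it down uniformly so that it fits.
function scaleIntoRange(selected: Size, max: Size): Size {
  const scale = Math.min(1, max.width / selected.width, max.height / selected.height);
  return { width: selected.width * scale, height: selected.height * scale };
}

// Example: trail picture size 100×100, set range up to 120×120.
const result = scaleIntoRange({ width: 300, height: 200 }, { width: 120, height: 120 });
// result is { width: 120, height: 80 }
```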
In practical applications, after replacing the video the user may wish to remove or change the selected picture. To meet this requirement and facilitate operation, in this embodiment the video editing interface may further include a delete button. When the user wants to delete or change the selected picture, the user can click the delete button; in response to the delete operation, the user side removes the picture from the replaced video, and the video is restored to its unreplaced state.
According to the video processing method provided above, because the video is associated with the motion trails of pictures, the user can replace a person or a particular scene in the video with any picture in one click when performing a replacement; the operation is simple and the cost is low.
On the basis of the above, referring to fig. 15 in combination, the present invention provides a first video processing apparatus 10, which is applied to a server, wherein the first video processing apparatus 10 includes a track processing module 11, a track association module 12 and a track transmission module 13.
The track processing module 11 is configured to add at least one picture to the video, and obtain a motion track of each picture.
The track association module 12 is used for associating the motion track of each picture with the video.
The track transmission module 13 is configured to send the associated video and the motion track to the user side.
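A sketch of how the three server-side modules could be expressed as interfaces is given below; the signatures are hypothetical, since the patent does not define a programming interface.

```typescript
interface MotionTrail {
  pictureId: string;
  points: unknown[]; // per-frame position, size and angle, as described above
}

interface Video {
  id: string;
  trails?: MotionTrail[];
}

// Track processing module 11: adds pictures to the video and derives a
// motion trail for each picture (trail computation itself omitted here).
interface TrackProcessingModule {
  addPicturesAndComputeTrails(video: Video, pictureIds: string[]): MotionTrail[];
}

// Track association module 12: associates each motion trail with the video.
interface TrackAssociationModule {
  associate(video: Video, trails: MotionTrail[]): Video;
}

// Track transmission module 13: sends the associated video and trails to the user side.
interface TrackTransmissionModule {
  sendToClient(video: Video, clientId: string): Promise<void>;
}
```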
According to the first video processing apparatus provided by this embodiment, after at least one picture is added to the video, the motion trail of each picture is acquired and associated with the video, and the associated video and motion trails are sent to the user side, so that the user can replace a person or a particular scene in the video with any picture in one click based on the association between the video and the motion trails; the operation is simple and the cost is low.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the first video processing apparatus 10 applied to the server may refer to the working process corresponding to the video processing method applied to the server, and will not be described in detail herein.
On the basis of the above, the invention provides a second video processing apparatus applied to a user side communicatively connected to a server side, where the user side stores a plurality of videos and the motion trail associated with each video. The second video processing apparatus includes a video determining module and a picture replacing module.
The video determining module is used for determining the target video and the motion trail associated with the target video in response to the replacing operation.
The picture replacement module is used for responding to picture selection operation, adding the selected picture into the target video according to the motion trail associated with the target video, and obtaining a replaced video.
According to the second video processing apparatus provided by this embodiment, because the video is associated with the motion trails of pictures, the user can replace a person or a particular scene in the video with any picture in one click when performing a replacement; the operation is simple and the cost is low.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the second video processing apparatus applied to the user side described above may refer to the working process corresponding to the video processing method applied to the user side described above, and will not be described in detail herein.
On the basis of the above, the embodiment also provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the video processing method according to any one of the foregoing embodiments when executing the computer program.
On the basis of the foregoing, the present embodiment provides a readable storage medium having stored therein a computer program which, when executed, implements the video processing method of any one of the foregoing embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the electronic device and the readable storage medium described above may refer to working procedures corresponding to the foregoing method, and will not be described in detail herein.
According to the video processing method, apparatus, electronic device and readable storage medium provided above, after at least one picture is added to the video, the motion trail of each picture is acquired and associated with the video, and the associated video and motion trails are sent to the user side, so that the user can replace a person or a particular scene in the video with any picture in one click based on the association between the video and the motion trails; the operation is simple and the cost is low.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A video processing method, applied to a server, the method comprising:
responding to the operation of adding pictures, and creating at least one picture track, wherein each picture track corresponds to one picture;
adding the pictures corresponding to the picture tracks into the video;
for each picture track, editing a picture corresponding to the picture track in at least one video frame of the video to obtain at least one animation frame corresponding to the picture track;
for each picture track, calculating to obtain a motion track of a picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track; the motion trail characterizes the state change of the position, the size and the angle generated when the added picture is played along with the video;
Associating the motion trail of each picture with the video;
and sending the associated video and the motion trail to a user side, so that the user side can arbitrarily select a picture according to the association relation between the video and the corresponding motion trail, and add the selected picture to the video according to the motion trail associated with the video.
2. The video processing method according to claim 1, wherein the step of calculating a motion trail of the picture corresponding to the picture track according to the state information of the picture in each of the animation frames corresponding to the picture track comprises:
for each animation frame corresponding to the picture track, acquiring state information of a picture in the animation frame and state information of a picture in a previous animation frame adjacent to the animation frame;
generating a sub-motion track from the last animation frame to the animation frame according to the state information of the picture in the animation frame and the state information of the picture in the last animation frame;
and splicing all the sub-motion tracks according to the time sequence to obtain the motion track of the picture corresponding to the picture track.
3. The video processing method according to claim 2, wherein the step of generating a sub-motion trail from the last animation frame to the animation frame based on state information of the picture in the animation frame and state information of the picture in the last animation frame comprises:
Calculating to obtain state change information of the picture in the animation frame and the picture in the previous animation frame according to the state information of the picture in the animation frame and the state information of the picture in the previous animation frame;
acquiring the number of video frames between the animation frame and the previous animation frame;
obtaining target state information of the picture in each video frame according to the number and the state change information;
and generating a sub-motion track from the last animation frame to the animation frame according to the state information of the picture in the animation frame, the target state information of the picture in each video frame and the state information of the picture in the last animation frame.
4. The video processing method according to claim 3, wherein the state information of the picture includes position information, size information, and a rotation angle; the state change information comprises position change information, size change information and angle change information;
the step of obtaining the target state information of the picture in each video frame according to the number and the state change information comprises the following steps:
according to the quantity, the position change information, the size change information and the angle change information, calculating to obtain average position change information, average size change information and average angle change information of the picture in each video frame;
And obtaining target state information of the picture in each video frame according to the average position change information, the average size change information and the average angle change information of the picture in each video frame.
5. The video processing method according to claim 1, wherein the step of associating the motion trail of each of the pictures with the video comprises:
for each picture, judging whether there is another picture identical to the picture;
if there is a picture identical to the picture, associating the motion trail of the picture with the motion trail of the identical picture to obtain a first track set;
if there is no picture identical to the picture, taking the motion trail of the picture as a second track set;
and associating each first track set and each second track set with the video.
6. The video processing method is characterized by being applied to a user side in communication connection with a server side, wherein the user side stores a plurality of videos sent by the server side and motion tracks associated with each video; the method comprises the following steps:
responding to the replacement operation, and determining a target video and a motion trail associated with the target video;
Responding to a picture selection operation, and adding the selected picture into the target video according to a motion trail associated with the target video to obtain a replaced video;
the server obtains the motion trail associated with the target video by the following method:
responding to the operation of adding pictures, and creating at least one picture track, wherein each picture track corresponds to one picture;
adding the pictures corresponding to the picture tracks into the target video;
for each picture track, editing a picture corresponding to the picture track in at least one video frame of the target video to obtain at least one animation frame corresponding to the picture track;
for each picture track, calculating a motion track of a picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track, and associating the motion track of each picture with the target video; the motion trail characterizes the state change of the position, the size and the angle generated by the added picture along with the video playing.
7. The video processing method of claim 6, wherein prior to adding the selected picture to the target video in accordance with the motion profile associated with the target video, the method further comprises:
Responding to the switching operation, and displaying the target animation frame of the target video on a preview interface;
displaying all the replaceable contents in the target animation frame; wherein each of the displayed replaceable contents in the target animation frame corresponds to a motion trail of a picture;
the step of adding the selected picture to the target video according to the motion trail associated with the target video comprises the following steps:
and for each replaceable content in the target animation frame, responding to the operation of a user on the replaceable content, adding the selected picture into the target video according to the motion trail corresponding to the replaceable content, and replacing all the replaceable content in the target video.
8. The video processing method of claim 6, wherein prior to adding the selected picture to the target video in accordance with the motion profile associated with the target video, the method further comprises:
responding to the switching operation, and displaying a preview interface, wherein the preview interface comprises an animation display area and a mark display area;
displaying a target animation frame of the target video in the animation display area, and marking and displaying all replaceable contents in the target animation frame; wherein, each mark of the replaceable content is displayed in the mark display area, and each mark corresponds to at least one motion track of a picture;
The step of adding the selected picture to the target video according to the motion trail associated with the target video comprises the following steps:
and for each mark in the mark display area, responding to the operation of a user on the mark, adding the selected picture into the target video according to the motion track corresponding to the mark, and replacing all the replaceable contents corresponding to the mark.
9. The video processing device is characterized by being applied to a server and comprising a track processing module, a track association module and a track transmission module;
the track processing module is used for:
responding to the operation of adding pictures, and creating at least one picture track, wherein each picture track corresponds to one picture;
adding the pictures corresponding to the picture tracks into the video;
for each picture track, editing a picture corresponding to the picture track in at least one video frame of the video to obtain at least one animation frame corresponding to the picture track;
for each picture track, calculating to obtain a motion track of a picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track; the motion trail characterizes the state change of the position, the size and the angle generated when the added picture is played along with the video;
The track association module is used for associating the motion track of each picture with the video;
the track transmission module is used for transmitting the associated video and the motion track to the user side.
10. The video processing device is characterized by being applied to a user side in communication connection with a server side, wherein the user side stores a plurality of videos sent by the server side and a motion track associated with each video, and comprises a video determining module and a picture replacing module;
the video determining module is used for determining a target video and a motion trail associated with the target video in response to the replacing operation;
the picture replacing module is used for responding to picture selecting operation, adding the selected picture into the target video according to the motion trail associated with the target video, and obtaining a replaced video;
the server obtains the motion trail associated with the target video by the following method:
responding to the operation of adding pictures, and creating at least one picture track, wherein each picture track corresponds to one picture;
adding the pictures corresponding to the picture tracks into the target video;
For each picture track, editing a picture corresponding to the picture track in at least one video frame of the target video to obtain at least one animation frame corresponding to the picture track;
for each picture track, calculating a motion track of a picture corresponding to the picture track according to the state information of the picture in each animation frame corresponding to the picture track, and associating the motion track of each picture with the target video; the motion trail characterizes the state change of the position, the size and the angle generated by the added picture along with the video playing.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the video processing method of any one of claims 1-5 or the video processing method of any one of claims 6-8 when the computer program is executed by the processor.
12. A readable storage medium, characterized in that a computer program is stored in the readable storage medium, which computer program, when executed, implements the video processing method of any one of claims 1-5 or the video processing method of any one of claims 6-8.