CN111654755B - Video editing method and electronic equipment - Google Patents

Info

Publication number
CN111654755B
CN111654755B (application CN202010433634.1A)
Authority
CN
China
Prior art keywords
target
images
image
video
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010433634.1A
Other languages
Chinese (zh)
Other versions
CN111654755A
Inventor
芮元乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010433634.1A
Publication of CN111654755A
Application granted
Publication of CN111654755B
Legal status: Active (current)
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention disclose a video editing method and an electronic device. The method includes: displaying at least some of a plurality of target images on a first video editing interface and, while a target element is displayed on the interface, receiving a first input on the target element; in response to the first input, moving the target element over the displayed target images and, according to the movement track it traces, setting the target element at a target position in each of the plurality of target images; and generating a target video from the plurality of target images, each of which now includes the target element. With the embodiments of the invention, the user does not have to edit the position of the target element in every frame separately and can instead quickly and conveniently edit the element's movement track in the target video, improving the user experience.

Description

Video editing method and electronic equipment
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to a video editing method and an electronic device.
Background
At present, many video editing websites and applications provide video editing services that let users edit videos conveniently.
However, when a user wants to change the movement track of a target element (such as a person or an animal) in a video, each frame of the video must be edited separately to change the element's position in that frame. This makes editing slow and the operation cumbersome.
Disclosure of Invention
Embodiments of the invention provide a video editing method and an electronic device, aiming to solve the problems of slow editing and cumbersome operation when editing the movement track of a target element in a video.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video editing method, which is applied to an electronic device, and the method includes:
displaying at least a portion of a plurality of target images on a first video editing interface, and receiving a first input to a target element if the target element is displayed on the first video editing interface;
moving the target element on the at least partial target image in response to the first input, the target element being set at a target position in each of the plurality of target images according to a movement trajectory of the target element on the at least partial target image;
generating a target video from the plurality of target images respectively including the target element.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
a first input receiving module, configured to display at least some of a plurality of target images on a first video editing interface and, in the case that a target element is displayed on the first video editing interface, receive a first input on the target element;
a first input response module, configured to move the target element over the at least some target images in response to the first input, and to set the target element at a target position in each of the plurality of target images according to the movement track of the target element over the at least some target images; and
a video generation module, configured to generate a target video from the plurality of target images, each of which includes the target element.
In a third aspect, an embodiment of the invention provides an electronic device including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video editing method described above.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the video editing method described above.
In the embodiments of the invention, when at least some of the plurality of target images are displayed on the first video editing interface together with the target element, the user can drag the target element across the displayed images to form a movement track. The target element is then set, in each target image, at the target position determined by that track. The user therefore draws the movement track by dragging the target element instead of editing the element's position frame by frame, which makes editing the track in the target video fast and simple and improves the user experience.
Drawings
Fig. 1 is a schematic flowchart of a video editing method according to an embodiment of the present invention;
fig. 2 is a schematic view of a video editing interface for selecting an editing mode according to an embodiment of the present invention;
fig. 3 is a schematic view of a video editing interface displaying a video to be edited according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a video editing interface displaying at least a portion of a plurality of target images according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another video editing interface displaying at least some of a plurality of target images according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a video editing interface displaying a moving track according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a position of a target element in a target image before and after movement according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating another video editing method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 10 shows a hardware structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart illustrating a video editing method according to an embodiment of the present invention. The video editing method is applied to the electronic equipment. As shown in fig. 1, the video editing method includes:
s101, displaying at least part of the target images in the plurality of target images on the first video editing interface, and receiving a first input to a target element under the condition that the target element is displayed on the first video editing interface.
Optionally, before S101, the video editing method further includes: receiving a second input of the video to be edited displayed on the second video editing interface; in response to a second input, a plurality of frames of video included in the video to be edited are determined as a plurality of target images.
Optionally, before S101, the video editing method further includes: receiving a third input to a third video editing interface to create a new video; in response to a third input, a predetermined template image is determined as a plurality of target images.
It can be seen that there are at least two ways to obtain multiple target images. The following description is given by way of example.
For example, the electronic device displays a video editing interface shown in fig. 2, and the user has two choices on the video editing interface:
1. The user chooses to edit an existing video. The electronic device receives the user's "edit video" input on the video editing interface shown in fig. 2 and displays a video selection interface, on which the user selects the video to be edited. The electronic device then displays the second video editing interface shown in fig. 3, which shows the selected video together with a "start editing video" control. When the device receives a second input on that control, it determines, in response, the multiple video frames of the video to be edited as the plurality of target images. The target element is an element in at least one of these target images. In this way the user can edit the movement track of a target element in an existing video as needed.
2. The user chooses to create a new video. The electronic device receives the user's "create new video" input on the video editing interface shown in fig. 2 (i.e., the third video editing interface) and determines predetermined template images (for example, blank images or images with a pattern) as the plurality of target images. The user can switch between template images as needed. The target element may be an element in at least one of the target images, or a system-default element, and the user can change it as needed. The user can thus create a new video and edit the movement track of a target element within it.
Either of these two ways can be used to obtain the plurality of target images from which the target video is generated, meeting the user's need to edit an existing video or to create a new one.
After the plurality of target images are determined, at least some of them are displayed on the first video editing interface, since the interface's display area is limited. The user may change which target images are shown as needed. At least some of the plurality of target images can be displayed in either of two ways:
the first method is as follows: and displaying part of the plurality of target images on the first video editing interface. Part of the target image may be displayed in a tiled manner.
For example, assuming that the number of the plurality of target images is 100 frames, as shown in fig. 4, 4 target images (a 1 st frame target image, a 31 st frame target image, a 61 st frame target image, and a 91 st frame target image, respectively) are selected from the 100 target images, and the 4 target images are sequentially displayed on the first video editing interface. There is no overlap between two adjacent target images.
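The even frame selection in this example (frames 1, 31, 61, 91, i.e. every 30th frame starting from frame 1) can be sketched as follows. This is an illustrative sketch only; the function name and the fixed stride are assumptions, not code from the patent:

```python
def sample_frames(total, stride, start=1):
    """Return 1-based frame indices taken every `stride` frames,
    as many as fit within `total` frames."""
    return list(range(start, total + 1, stride))

# The four preview frames of Fig. 4, out of a 100-frame video:
print(sample_frames(100, 30))  # [1, 31, 61, 91]
```

A stride of 30 over 100 frames reproduces the four tiled frames of the example exactly; a different stride trades preview density against screen space.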
Mode two: display all of the plurality of target images on the first video editing interface, as thumbnails.
For example, as shown in fig. 5, a plurality of target images are sequentially displayed on the first video editing interface, and there is an overlap between two adjacent target images.
After displaying at least part of the plurality of target images on the first video editing interface, the user can drag the target element to move on at least part of the target images, and the target element generates a moving track on at least part of the target images.
For example, as shown in fig. 6, the user drags the target element 100 to move on the 4-frame target image displayed on the first video editing interface, and a movement trajectory 200 of the target element 100 is generated.
And S102, responding to the first input, moving the target elements on at least part of the target images, and respectively setting the target elements at the target positions in each of the plurality of target images according to the moving tracks of the target elements on at least part of the target images.
For example, as shown in fig. 7, after the user drags the target element 100 to move and generates a movement trajectory, the target element is placed at a target position in each of the plurality of target images.
In fig. 7, because the display area of the first editing interface is limited, only some of the plurality of target images are shown, so fig. 7 shows the moved position of the target element in those images only. In practice, the target element's position is moved in every one of the plurality of target images.
And S103, generating a target video according to a plurality of target images respectively comprising target elements.
Wherein the plurality of target images are synthesized into the target video in a predetermined order of the plurality of target images. In the case where the plurality of target images are a plurality of frames of video included in the video to be edited, the predetermined sequence is a playing sequence of the plurality of target images in the video to be edited. In the case where the plurality of target images are a predetermined plurality of template images, the predetermined order is an order of the plurality of template images which is default by the system.
In the embodiments of the invention, at least some of the plurality of target images, together with the target element, are displayed on the first video editing interface, and the user can drag the target element across them to form a movement track. The target element is then placed at the target position in each target image according to that track. The user therefore draws the movement track by dragging the element instead of editing its position frame by frame, so the track of the target element in the target video is edited quickly and the user experience is improved.
In the related art, a video may be edited using video matting, which is similar to picture matting: key instances in the video are identified and segmented, each frame is processed with pixel-level matting, and the matted frames are assembled into a coherent video. The drawback of this approach is that the target element's position cannot be manipulated in the time dimension: the element is always displayed and played back within a rectangular box at the initial position set by the user, and its movement track cannot be customized.
Embodiments of the invention instead provide editing in the planar dimension along the time axis: after the user locks a target element, the user can drag it across the target images laid out along the video-frame time axis to form the element's movement track. According to this track, the element's position in each target image is moved so that its center point in each image lies close to the track. A target video is then generated from the plurality of target images, each of which includes the target element. When the target video is played, the target element moves along the drawn track over time.
Thus, in the embodiments of the invention, the user can draw the target element's movement track and flexibly control its motion in the target video, steering the element along the desired track in the time dimension. For example, when shooting a basketball scene, the electronic device recognizes the target element "basketball"; the user drags the "basketball" to draw a track from its starting point to the basket, the basketball's position in each target image is moved along that track, and a target video of the basketball being thrown into the basket is generated.
In one or more embodiments of the invention, prior to receiving the first input to the target element, the video editing method further comprises:
identifying an element in each of a plurality of target images;
receiving a fourth input on a target element in a first target image, the first target image being one of the at least some displayed target images (for example, any one of them);
in response to a fourth input, the target element in the selected state is displayed in the first target image.
After the element in each of the plurality of target images is identified, the method may further include: extracting the element from each target image, for example by matting, so that the element is separated from the image.
In the embodiment of the invention, the user can select the target element in the target image according to the requirement of the user and display that the target element is in the selected state, so that the user can draw the moving track by using the target element conveniently.
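The mask-based separation described above can be illustrated with a minimal sketch. Here an image is a plain 2D list of pixel values and the mask is a 2D list of 0/1 flags (such as an instance-segmentation output); the function name and representation are hypothetical, chosen only to show the idea of separating the element from the target image:

```python
def extract_element(image, mask, background=0):
    """Separate the masked element from an image.

    Returns (element, remainder): the element's pixels on a blank
    canvas, and the image with the element's pixels cleared.
    """
    h, w = len(image), len(image[0])
    element = [[image[y][x] if mask[y][x] else background for x in range(w)]
               for y in range(h)]
    remainder = [[background if mask[y][x] else image[y][x] for x in range(w)]
                 for y in range(h)]
    return element, remainder

img = [[5, 5, 0],
       [5, 5, 0],
       [0, 0, 0]]
msk = [[1, 1, 0],
       [1, 1, 0],
       [0, 0, 0]]
elem, rest = extract_element(img, msk)
print(elem[0], rest[0])  # [5, 5, 0] [0, 0, 0]
```

Once separated this way, the element can be repositioned independently of the frame it came from.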
In one or more embodiments of the invention, after receiving a fourth input for a target element in the first target image, the video editing method further comprises:
displaying an image including a target element in the plurality of target images in a first display mode, and displaying an image not including the target element in the plurality of target images in a second display mode; the first display mode is different from the second display mode.
For example, an image including a target element among the plurality of target images is displayed in the effect of a breathing light, and an image not including the target element is kept unchanged in the original display state.
In the embodiment of the invention, the images including the target elements in the plurality of target images are displayed in a distinguishing manner, so that a user can more intuitively see which images include the target elements and which images do not include the target elements.
In one or more embodiments of the present invention, setting a target element at a target position in each of a plurality of target images, respectively, comprises:
determining whether each target image of a plurality of target images includes a target element;
in the event that a second target image of the plurality of target images is determined to include a target element, moving the target element in the second target image to a target position of the second target image;
in a case where it is determined that a third target image of the plurality of target images does not include the target element, the target element is added to a target position of the third target image.
Adding the target element to the target position of the third target image specifically includes: and copying the target element, and moving the copied target element to the target position of the third target image.
Optionally, copying the target element from the fourth target image; wherein the fourth target image is an image whose distance from the third target image among the plurality of target images satisfies a predetermined condition. For example, the fourth target image is an image that is closest to the third target image among the plurality of target images and includes the target element.
In this way, because the target element is copied from a target image close to the third target image, the element's posture in the third target image is close to its posture in that nearby image, and the generated target video looks more natural.
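Choosing the "fourth target image" — the frame nearest to the third target image that still contains the element — can be sketched as a simple search. The function name and the list-of-flags representation are assumptions for illustration:

```python
def nearest_source_frame(has_element, target_idx):
    """Index of the frame closest to `target_idx` that contains the
    element, from which the element can be copied."""
    candidates = [i for i, present in enumerate(has_element) if present]
    if not candidates:
        raise ValueError("no frame contains the target element")
    return min(candidates, key=lambda i: abs(i - target_idx))

# Frames 0-5; the element was detected only in frames 1 and 4.
presence = [False, True, False, False, True, False]
print(nearest_source_frame(presence, 2))  # 1 (one frame away vs. two)
print(nearest_source_frame(presence, 5))  # 4
```

Copying from the nearest containing frame keeps the pasted element's posture close to its neighbors', which is exactly the "more natural" property claimed above.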
In one or more embodiments of the present invention, before the target element is respectively set at the target position in each of the plurality of target images according to the movement trajectory of the target element on at least a part of the target images, the video editing method further includes:
acquiring coordinate information of a plurality of target points on a moving track in a preset coordinate system, wherein the plurality of target points correspond to the plurality of target images one to one;
and determining the coordinate information of the target point corresponding to each target image as the coordinate information of the target position in the target image so as to determine the target position in each target image.
Optionally, before acquiring coordinate information of a plurality of target points on the movement trajectory in a predetermined coordinate system, the video editing method further includes: and respectively allocating a corresponding target point to each target image according to the preset sequence of the target images and the sequence of the target points on the moving track.
When the plurality of target images are multi-frame video frames included in the video to be edited, the predetermined sequence is the playing sequence of the plurality of target images in the video to be edited. In the case where the plurality of target images are a predetermined plurality of template images, the predetermined order is an order of the plurality of template images which is default by the system.
For example, with continued reference to FIG. 6, the plurality of target images are 100 frames of target images. First, 100 target points are taken on the movement trajectory 200, and the 100 target points are evenly distributed on the movement trajectory 200. The 1 st target point is assigned to the 1 st frame target image in the order of the 100 target points on the movement trajectory 200, and then the coordinate information of the 1 st target point is the coordinate information of the target position in the 1 st frame target image, thereby determining the target position in the 1 st frame target image.
Then, the 2 nd target point is assigned to the 2 nd frame target image, and the coordinate information of the 2 nd target point is the coordinate information of the target position in the 2 nd frame target image, thereby determining the target position in the 2 nd frame target image.
And so on, the target position in each frame of target image in the target images of 100 frames is obtained.
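Distributing the target points evenly along the drawn track, one per frame, amounts to resampling the stroke by arc length. A minimal sketch, assuming the track arrives as a polyline of (x, y) points (all names here are hypothetical):

```python
import math

def resample_trajectory(points, n):
    """Return n points evenly spaced by arc length along a polyline
    given as a list of (x, y) points (the drawn movement track)."""
    # Cumulative arc length at each input point.
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    out = []
    seg = 0
    for k in range(n):
        s = total * k / (n - 1)  # arc length of the k-th target point
        while seg < len(points) - 2 and cum[seg + 1] < s:
            seg += 1
        span = cum[seg + 1] - cum[seg]
        t = 0.0 if span == 0 else (s - cum[seg]) / span
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

# A straight stroke from (0, 0) to (99, 0) resampled into 100 target
# points, one per frame of the 100-frame example.
pts = resample_trajectory([(0, 0), (99, 0)], 100)
print(pts[0], pts[50], pts[99])
```

The k-th resampled point then gives the coordinate information of the target position in the k-th frame, exactly as in the walkthrough above.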
In this embodiment, the target position in each target image is determined from the coordinate information of a target point on the movement track, and the target element is placed at that position. The target position is thus determined automatically in every image, sparing the user from editing the element's position in each image individually.
In one or more embodiments of the present invention, in a case where the target element includes a person or an animal, before the target element is set at the target position in each of the plurality of target images respectively according to the movement locus of the target element on at least a part of the target images, the video editing method further includes:
determining the posture of the target element in each of the plurality of target images according to the curvature of the movement track, the speed at which the track was drawn, or the rhythm information of the video's background music. For example, skeleton points of the target element can be detected and the positions of the parts connected to those skeleton points adjusted, thereby setting the element's posture.
The method for respectively setting the target elements at the target positions in each target image in the plurality of target images according to the moving tracks of the target elements on at least part of the target images comprises the following steps:
and respectively setting the target elements in different postures at the target positions in the target images corresponding to the target elements according to the movement tracks.
For example, the posture of the target element in each target image is determined from the curvature of the movement track, so that the track's curvature is positively correlated with the element's movement speed or movement amplitude.
For another example, the posture is determined from the speed at which the track was drawn, so that the drawing speed is positively correlated with the element's movement speed or movement amplitude.
For another example, the posture is determined from the rhythm information of the video's background music, so that when the generated target video is played, the element's posture changes with the music's rhythm; the element and the background music blend together, resonating harmoniously with the beat.
In this embodiment, the posture of the target element in each target image can thus be determined from the track's curvature and/or drawing speed, so the user can adjust those while drawing the track and have the element's posture adjusted automatically, without adjusting it by hand.
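One plausible way to realize the positive correlation between track curvature or drawing speed and the element's pose is a clamped linear mapping. The specific function name and weights below are invented for illustration and are not taken from the patent:

```python
def pose_amplitude(curvature, draw_speed, base=0.2, k_curv=0.5, k_speed=0.3):
    """Map local track curvature and drawing speed (both assumed
    normalized to [0, 1]) to a pose amplitude in [0, 1]; both inputs
    contribute positively, so sharper turns or faster strokes yield
    larger motion of the target element."""
    raw = base + k_curv * curvature + k_speed * draw_speed
    return min(1.0, max(0.0, raw))

print(pose_amplitude(0.0, 0.0))  # gentle pose on a slow, straight stroke
print(pose_amplitude(1.0, 1.0))  # full amplitude (clamped to 1.0)
```

The amplitude would then drive how far the parts attached to the element's skeleton points are displaced in each frame.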
In one or more embodiments of the invention, when the target element moves on at least a part of the target image, the video editing method further includes:
sequentially displaying the target element in the different postures corresponding to the target images, according to a predetermined display order of the plurality of target images.
Alternatively, in the case where the plurality of target images are a plurality of frames of video included in the video to be edited, the predetermined display order is the play order of the plurality of target images in the video to be edited. In the case where the plurality of target images are a predetermined plurality of template images, the predetermined display order is the system-default order of the plurality of template images.
In the embodiment of the invention, during the movement of the target element on at least part of the target images, the target element is sequentially displayed in the different postures corresponding to the plurality of target images, creating the effect of a dynamically changing posture. In this way, the user can see the movement effect of the target element while drawing the movement track and, if unsatisfied with it, redraw the track immediately, instead of only discovering the effect after the target video has been generated. This makes video editing more convenient.
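The predetermined display order above can be sketched as follows (the data layout is an assumption — images are plain dicts with a hypothetical `frame_index` key): video frames are ordered by their play order, while template images keep the system-default list order.

```python
def display_order(images, from_video):
    """Return the target images in the predetermined display order:
    play order for video frames, system-default order for template images."""
    if from_video:
        # Play order of the frames in the video to be edited.
        return sorted(images, key=lambda im: im["frame_index"])
    # Template images: keep the system's default ordering.
    return list(images)
```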
The embodiment of the invention provides another video editing method. Fig. 8 is a flowchart illustrating another video editing method according to an embodiment of the present invention. As shown in fig. 8, the video editing method includes:
S201, start the video track editing function.
S202, determine a plurality of frames of video included in the video to be edited as a plurality of target images, identify the elements in each target image, and mark the elements in each target image with identification information, for example with a mask. At least part of the plurality of target images are tiled along the editing-frame time axis.
S203, receive an input to a target element in a first target image of the plurality of target images.
S204, in response to the input to the target element in the first target image, determine whether the target element is included in each of the at least part of the target images other than the first target image. For the target images determined to contain the target element, perform S205; for the target images determined not to contain the target element, perform S206.
S205, display the target images containing the target element with a breathing effect, to distinguish them from the target images that do not contain the target element. The differentiated display may last for the period during which the user's input selecting the target element is received, for example while the user presses and holds the target element.
S206, keep the display state of the target images that do not contain the target element unchanged.
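Steps S204-S206 can be sketched as follows (a hypothetical in-memory representation, not from the patent: each image dict maps element ids to element data): while the user's selection input is active, images containing the target element get the "breathing" highlight and the rest keep their normal display state.

```python
def highlight_states(images, element_id, selecting):
    """Return the display state for each target image while the user holds
    the target element (S204-S206): 'breathing' if the image contains the
    element and a selection input is active, otherwise 'normal'."""
    return {
        img["name"]: ("breathing" if selecting and element_id in img["elements"]
                      else "normal")
        for img in images
    }
```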
S207, receive an input to the target element.
S208, in response to the input to the target element, move the target element over the at least part of the target images.
S209, determine whether each target image through which the movement track passes contains the target element.
S210, for the target images containing the target element, move the target element to the target position in the image so that the target element follows the movement track.
S211, for the target images that do not contain the target element, copy the target element into the image and then perform S210, that is, set the target element at the target position in the image so that it follows the movement track.
S212, generate the target video from the plurality of target images each including the target element.
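Steps S208-S212 can be sketched as one pass over the target images (the representation and names are assumptions, not the patent's implementation): each image receives the trajectory point assigned to it; the element is moved if already present (S210) and copied in first if absent (S211).

```python
def apply_trajectory(images, element_id, points):
    """Place the target element along the movement track, one point per image."""
    assert len(points) == len(images), "one target point per target image"
    for img, pos in zip(images, points):
        if element_id in img["elements"]:
            # S210: the image already contains the element -- just move it.
            img["elements"][element_id]["pos"] = pos
        else:
            # S211: copy the element into the image, then place it (S210).
            img["elements"][element_id] = {"pos": pos}
    return images  # S212: frames ready to be composed into the target video
```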
Correspondingly, based on the video editing method of the embodiment of the invention, an embodiment of the invention provides an electronic device. Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 9, the electronic device includes:
a first input receiving module 301, configured to display at least a part of a target image in a plurality of target images on a first video editing interface, and receive a first input to a target element in a case where the target element is displayed on the first video editing interface;
a first input response module 302, configured to move a target element on at least a part of the target image in response to a first input, and set a target element at a target position in each of the plurality of target images according to a movement trajectory of the target element on at least a part of the target image;
a video generating module 303, configured to generate a target video according to a plurality of target images respectively including the target element.
In the embodiment of the invention, at least part of the plurality of target images and the target element are displayed on the first video editing interface, and the user can drag the target element over the at least part of the target images to form a movement track. The target element is then placed at the target position in each target image according to the movement track. In this way, the user only needs to drag the target element to draw the movement track, without editing the element's position in each frame of the image. The movement track of the target element in the target video is thus edited quickly, improving the user experience.
In one or more embodiments of the present invention, the electronic device further comprises:
the second input receiving module is used for receiving second input of the video to be edited displayed on the second video editing interface;
and the second input response module is used for responding to the second input and determining a plurality of frames of video included in the video to be edited as a plurality of target images.
In the embodiment of the invention, the user can edit the moving track of the target element in the existing video according to the requirement of the user.
In one or more embodiments of the present invention, the electronic device further comprises:
the third input receiving module is used for receiving third input for creating a new video of a third video editing interface;
and a third input response module for determining the predetermined template image as a plurality of target images in response to a third input.
In the embodiment of the invention, a user can create a new video and edit the moving track of the target element in the new video.
In one or more embodiments of the invention, the electronic device further comprises:
an element identification module to identify an element in each of a plurality of target images;
a fourth input receiving module for receiving a fourth input to the target element in the first target image, the first target image being an image of at least part of the target image;
and the fourth input response module is used for responding to the fourth input and displaying the target element in the selected state in the first target image.
In the embodiment of the invention, the user can select the target element in the target image according to the requirement of the user and display the target element in the selected state, so that the user can draw the moving track by using the target element conveniently.
In one or more embodiments of the invention, the electronic device further comprises:
the first display module is used for displaying an image including a target element in the target images in a first display mode and displaying an image not including the target element in the target images in a second display mode;
the first display mode is different from the second display mode.
In the embodiment of the invention, the images including the target elements in the plurality of target images are displayed in a distinguishing manner, so that a user can more intuitively see which images include the target elements and which images do not include the target elements.
In one or more embodiments of the invention, the first input response module comprises:
an element determination module to determine whether each of a plurality of target images includes a target element;
an element moving module for moving a target element in a second target image in the plurality of target images to a target position of the second target image if it is determined that the second target image includes the target element;
an element adding module for adding the target element to a target position of a third target image in the plurality of target images if it is determined that the third target image does not include the target element.
In one or more embodiments of the invention, the electronic device further comprises:
the coordinate information acquisition module is used for acquiring coordinate information of a plurality of target points on the moving track in a preset coordinate system, and the plurality of target points correspond to the plurality of target images one to one;
and the target position determining module is used for determining the coordinate information of the target point corresponding to each target image as the coordinate information of the target position in the target image so as to determine the target position in each target image.
In the embodiment of the present invention, the target position in each target image is determined according to the coordinate information of the corresponding target point on the movement track, and the target element is placed at that position. The target position in each target image is thus determined automatically, so the user does not need to edit the element's position in each image.
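The one-to-one mapping between target points on the movement track and target images can be sketched by resampling the raw drag samples to exactly one point per image. Linear interpolation over sample indices is a simplification — the patent does not specify the sampling scheme.

```python
def sample_target_points(trajectory, n_images):
    """Resample a drawn trajectory (list of (x, y) samples in the preset
    coordinate system) into exactly n_images target points, one per image."""
    if n_images == 1:
        return [trajectory[0]]
    points = []
    for i in range(n_images):
        # Fractional position along the recorded samples.
        t = i * (len(trajectory) - 1) / (n_images - 1)
        lo = int(t)
        hi = min(lo + 1, len(trajectory) - 1)
        frac = t - lo
        x = trajectory[lo][0] * (1 - frac) + trajectory[hi][0] * frac
        y = trajectory[lo][1] * (1 - frac) + trajectory[hi][1] * frac
        points.append((x, y))
    return points
```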
In one or more embodiments of the present invention, in a case where the target element includes a human or an animal, the electronic device further includes:
the gesture adjusting module is used for determining the gesture of the target element in each target image in the plurality of target images according to the curvature of the moving track, the speed generated by the moving track or the rhythm information of video background music;
the first input response module 302 includes:
and the element placing module is used for respectively setting the target elements with different postures at target positions in the target images corresponding to the target elements according to the moving tracks.
In the embodiment of the invention, the posture of the target element in the target image can be determined according to the curvature of the movement track and/or the speed at which the track is drawn. Therefore, while drawing the movement track, the user can adjust its curvature and/or drawing speed so that the posture of the target element is adjusted automatically, without having to adjust the posture manually. In addition, the posture of the target element can be determined according to the rhythm information of the video background music, so that the posture changes with the rhythm of the music; the target element and the background music then blend together and resonate harmoniously.
In one or more embodiments of the invention, the electronic device further comprises:
and the second display module is used for sequentially displaying the target elements of different postures corresponding to the target images according to the preset display sequence of the target images.
In the embodiment of the invention, during the movement of the target element on at least part of the target images, the target element is sequentially displayed in the different postures corresponding to the plurality of target images, creating the effect of a dynamically changing posture. In this way, the user can see the movement effect of the target element while drawing the movement track and, if unsatisfied with it, redraw the track immediately, instead of only discovering the effect after the target video has been generated. This makes video editing more convenient.
Fig. 10 shows a schematic hardware structure diagram of an electronic device according to an embodiment of the present invention, where the electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 10 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A user input unit 407, configured to display at least a part of the target images in the plurality of target images on the first video editing interface through the display unit 406, and receive a first input to the target element in a case where the target element is displayed on the first video editing interface;
a processor 410, configured to move the target element over at least a portion of the target image in response to the first input, and set the target element at a target position in each of the plurality of target images according to a movement trajectory of the target element over at least a portion of the target image;
the processor 410 is further configured to generate a target video according to the plurality of target images respectively including the target element.
In the embodiment of the invention, at least part of the plurality of target images and the target element are displayed on the first video editing interface, and the user can drag the target element over the at least part of the target images to form a movement track. The target element is then placed at the target position in each target image according to the movement track. In this way, the user only needs to drag the target element to draw the movement track, without editing the element's position in each frame of the image. The movement track of the target element in the target video is thus edited quickly, improving the user experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 406, stored in the memory 409 (or another storage medium), or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 4071 with a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 410, and it also receives and executes commands from the processor 410. The touch panel 4071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick; these are not described further here.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 10, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, and this is not limited herein.
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may further comprise a power supply 411 (e.g. a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
An embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the processes of the video editing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video editing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A video editing method applied to electronic equipment is characterized by comprising the following steps:
displaying at least a portion of a plurality of target images on a first video editing interface, and receiving a first input to a target element if the target element is displayed on the first video editing interface;
moving the target element on the at least partial target image in response to the first input, the target element being set at a target position in each of the plurality of target images according to a movement trajectory of the target element on the at least partial target image;
generating a target video from the plurality of target images respectively including the target element;
in a case where the target element includes a person or an animal, before the target element is respectively set at the target position in each of the plurality of target images according to the movement locus of the target element on the at least part of the target image, the method further includes:
determining the posture of the target element in each target image in the plurality of target images according to the curvature of the moving track or the speed generated by the moving track;
the setting the target element at the target position in each target image of the plurality of target images according to the moving track of the target element on at least part of the target images comprises:
according to the movement track, the target elements in different postures are respectively arranged at target positions in the target image corresponding to the target elements;
prior to the receiving the first input to the target element, the method further comprises:
identifying an element in each of the plurality of target images;
receiving a fourth input to the target element in a first target image, the first target image being an image of the at least part of the target image;
in response to the fourth input, displaying the target element in a selected state in the first target image;
after said receiving a fourth input to the target element in the first target image, the method further comprises:
displaying images including the target elements in the plurality of target images in a first display mode, and displaying images not including the target elements in the plurality of target images in a second display mode;
wherein the first display mode is different from the second display mode.
2. The method of claim 1, wherein prior to displaying at least a portion of the plurality of target images on the first video editing interface and receiving the first input to the target element while the target element is displayed on the first video editing interface, the method further comprises:
receiving a second input of a video to be edited displayed on a second video editing interface;
in response to the second input, determining a plurality of frames of video included in the video to be edited as the plurality of target images.
3. The method of claim 1, wherein prior to displaying at least a portion of the plurality of target images on the first video editing interface and receiving the first input to the target element while the target element is displayed on the first video editing interface, the method further comprises:
receiving a third input to a third video editing interface to create a new video;
in response to the third input, a predetermined template image is determined as the plurality of target images.
4. The method of claim 1, wherein the setting the target element at the target position in each of the plurality of target images comprises:
determining whether each target image of the plurality of target images includes the target element;
in an instance in which it is determined that a second target image of the plurality of target images includes the target element, moving the target element in the second target image to a target position of the second target image;
in an instance in which it is determined that a third target image of the plurality of target images does not include the target element, adding the target element to a target location of the third target image.
5. The method according to claim 1, wherein before the target element is respectively set at the target position in each of the plurality of target images according to the movement track of the target element on the at least part of the target images, the method further comprises:
acquiring coordinate information of a plurality of target points on the moving track in a preset coordinate system, wherein the target points correspond to the target images one by one;
and determining the coordinate information of the target point corresponding to each target image as the coordinate information of the target position in the target image so as to determine the target position in each target image.
6. The method of claim 1, wherein, when the target element is moved over the at least part of the target images, the method further comprises:
and sequentially displaying the target elements in different postures corresponding to the target images according to the preset display sequence of the target images.
7. An electronic device, comprising:
a first input receiving module, configured to display at least part of a plurality of target images on a first video editing interface and, in a case where a target element is displayed on the first video editing interface, receive a first input on the target element;
a first input response module, configured to move the target element on the at least part of the target images in response to the first input, and to set the target element at a target position in each of the plurality of target images according to a movement track of the target element on the at least part of the target images; and
a video generation module, configured to generate a target video from the plurality of target images each including the target element;
wherein, in a case where the target element includes a human or an animal, the electronic device further comprises:
a posture adjustment module, configured to determine the posture of the target element in each of the plurality of target images according to the curvature of the movement track or the speed at which the movement track is generated;
wherein the first input response module comprises:
an element placement module, configured to set the target element, in its different postures, at the target position in the corresponding target image according to the movement track;
wherein the electronic device further comprises:
an element identification module, configured to identify an element in each of the plurality of target images;
a fourth input receiving module, configured to receive a fourth input on the target element in a first target image, the first target image being one of the at least part of the target images;
a fourth input response module, configured to display the target element in a selected state in the first target image in response to the fourth input; and
a first display module, configured to display, in a first display mode, the images of the plurality of target images that include the target element, and to display, in a second display mode, the images that do not include the target element;
wherein the first display mode is different from the second display mode.
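As an illustrative sketch of the posture adjustment module in claim 7 (not the patent's actual implementation), a pose label per frame might be derived from the local turning angle of the track, a simple curvature proxy, together with the drawing speed; the labels and thresholds below are invented for the example:

```python
import math

def pose_from_track(p_prev, p, p_next, dt):
    """Derive a pose label for one frame from the local turning angle
    (a curvature proxy) and the speed of the drawn movement track."""
    v1 = (p[0] - p_prev[0], p[1] - p_prev[1])
    v2 = (p_next[0] - p[0], p_next[1] - p[1])
    turn = abs(math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0]))
    speed = math.hypot(*v2) / dt          # track units per second
    if turn > math.pi / 4:                # sharp bend: show a turning pose
        return "turning"
    return "running" if speed > 10 else "walking"
```

A production version would also handle angle wrap-around at ±π and smooth the curvature over several track points.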
CN202010433634.1A 2020-05-21 2020-05-21 Video editing method and electronic equipment Active CN111654755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010433634.1A CN111654755B (en) 2020-05-21 2020-05-21 Video editing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN111654755A CN111654755A (en) 2020-09-11
CN111654755B true CN111654755B (en) 2023-04-18

Family

ID=72349181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010433634.1A Active CN111654755B (en) 2020-05-21 2020-05-21 Video editing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111654755B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339073A (en) * 2022-01-04 2022-04-12 维沃移动通信有限公司 Video generation method and video generation device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8290298B2 (en) * 2009-01-20 2012-10-16 Mitsubishi Electric Research Laboratories, Inc. Method for temporally editing videos
CN106385591B (en) * 2016-10-17 2020-05-15 腾讯科技(上海)有限公司 Video processing method and video processing device
CN110874859A (en) * 2018-08-30 2020-03-10 三星电子(中国)研发中心 Method and equipment for generating animation


Similar Documents

Publication Publication Date Title
CN108668083B (en) Photographing method and terminal
CN108471498B (en) Shooting preview method and terminal
CN107817939B (en) Image processing method and mobile terminal
CN107943390B (en) Character copying method and mobile terminal
CN108737904B (en) Video data processing method and mobile terminal
CN110174993B (en) Display control method, terminal equipment and computer readable storage medium
CN111050070B (en) Video shooting method and device, electronic equipment and medium
CN109474787B (en) Photographing method, terminal device and storage medium
US11165950B2 (en) Method and apparatus for shooting video, and storage medium
CN111142769A (en) Split screen display method and electronic equipment
CN109102555B (en) Image editing method and terminal
CN109840060A (en) A kind of display control method and terminal device
CN110764675A (en) Control method and electronic equipment
CN111026305A (en) Audio processing method and electronic equipment
CN108804628B (en) Picture display method and terminal
CN108898555A (en) A kind of image processing method and terminal device
CN108174110B (en) Photographing method and flexible screen terminal
CN110650367A (en) Video processing method, electronic device, and medium
CN108174109B (en) Photographing method and mobile terminal
CN110913261A (en) Multimedia file generation method and electronic equipment
CN110941378B (en) Video content display method and electronic equipment
CN110413363B (en) Screenshot method and terminal equipment
CN110908517A (en) Image editing method, image editing device, electronic equipment and medium
CN108984062B (en) Content display method and terminal
CN110780751A (en) Information processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant