CN109246474B - Video file editing method and mobile terminal - Google Patents


Info

Publication number
CN109246474B
CN109246474B (application CN201811205238.2A)
Authority
CN
China
Prior art keywords
video
file
change curve
determining
video file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811205238.2A
Other languages
Chinese (zh)
Other versions
CN109246474A (en)
Inventor
李才莲
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN201811205238.2A
Publication of CN109246474A
Application granted
Publication of CN109246474B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/439: Processing of audio elementary streams
    • H04N 21/4394: Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiment of the invention discloses a video file editing method and a mobile terminal, wherein the method comprises the following steps: analyzing a video file to be edited to generate a video scene change curve; respectively determining a volume change curve corresponding to each candidate music file; respectively determining the similarity of each volume change curve and the video scene change curve; determining a target music file according to each similarity; and synthesizing the video file and the target music file to generate an edited target video file. According to the video file editing method disclosed by the embodiment of the invention, the user does not need to manually select the target music file or manually merge the video file to be edited with the target music file, so the use experience of the user is improved.

Description

Video file editing method and mobile terminal
Technical Field
The embodiment of the invention relates to the technical field of mobile terminals, in particular to a video file editing method and a mobile terminal.
Background
Mobile terminals such as mobile phones have become a necessity in daily life, and users can capture images or video files with them. Furthermore, the user can edit background music into a captured video file.
The existing method for editing background music into a video file mainly comprises the following steps: the user manually selects a piece of music from the music library of the mobile terminal, and then manually plays and adjusts the music to set it as the background music of the video file. Because the whole process of editing background music requires manual operation, it is inconvenient for the user.
Disclosure of Invention
The embodiment of the invention provides a video file editing method, which aims to solve the problem that in the prior art, a user needs to manually edit background music for a video file, so that the operation of the user is inconvenient.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for editing a video file, where the method includes: analyzing a video file to be edited to generate a video scene change curve; respectively determining a volume change curve corresponding to each candidate music file; respectively determining the similarity of each volume change curve and the video scene change curve; determining a target music file according to each similarity; and synthesizing the video file and the target music file to generate an edited target video file.
In a second aspect, an embodiment of the present invention provides a mobile terminal, where the mobile terminal includes: the generating module is used for analyzing the video file to be edited and generating a video scene change curve; the determining module is used for respectively determining the volume change curves corresponding to the candidate music files; the similarity determining module is used for respectively determining the similarity between each volume change curve and the video scene change curve; the target music file determining module is used for determining a target music file according to each similarity; and the editing module is used for synthesizing the video file and the target music file to generate an edited target video file.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of any one of the video file editing methods described in the embodiments of the present invention.
In a fourth aspect, the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores thereon a computer program, and the computer program, when executed by a processor, implements the steps of any one of the video file editing methods described in the embodiments of the present invention.
In the embodiment of the invention, a video scene change curve is generated by analyzing a video file to be edited; respectively determining a volume change curve corresponding to each candidate music file; respectively determining the similarity of each volume change curve and a video scene change curve; determining a target music file according to each similarity; the video file and the target music file are synthesized to generate the edited target video file, the target music file can be intelligently selected for the video file to be edited, the video file to be edited and the target music file are automatically merged, the user does not need to manually execute the operation of selecting the target music file and the operation of merging the video file to be edited and the target music file, and the use experience of the user can be improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating steps of a video file editing method according to a first embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a video file editing method according to a second embodiment of the present invention;
fig. 3 is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a mobile terminal according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a video file editing method according to a first embodiment of the present invention is shown.
The video file editing method of the embodiment of the invention comprises the following steps:
step 101: and analyzing the video file to be edited to generate a video scene variable quantity curve.
The video file to be edited may be a video file shot by the user, or may be a video file downloaded by the user or a video file locally stored in the mobile terminal.
The video file comprises a plurality of frames of images; one or more frames of images can form a scene, and a scene change amount exists between different scenes. The trend of the scene change amount over time can be reflected by the video scene change curve. The scene change amount may be represented by the YUV change amount between image frames, where "Y" represents luminance and "U" and "V" represent the chrominance components.
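As an illustrative sketch (not part of the patent), the inter-frame YUV change amount described above could be computed as a mean absolute difference between the YUV values of adjacent frames; the flat-list frame representation and the mean-absolute-difference metric are assumptions:

```python
def scene_change_amounts(frames):
    """Mean absolute YUV difference between each pair of adjacent frames.

    Each frame is a flat sequence of YUV component values; the
    mean-absolute-difference metric is an illustrative assumption.
    """
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        diffs.append(sum(abs(c - p) for p, c in zip(prev, curr)) / len(prev))
    return diffs
```

Identical consecutive frames yield a change amount of zero, while large YUV jumps (for example, a scene cut) yield correspondingly large values.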
Step 102: and respectively determining the volume change curve corresponding to each candidate music file.
The candidate music files may be music files stored locally at the mobile terminal. In a specific implementation process, the volume change curves corresponding to the candidate music files can be created in advance, the volume change curves corresponding to the candidate music files are stored in advance, and when the volume change curves corresponding to the candidate music files are determined, the volume change curves are directly extracted.
Of course, when determining the volume change curve corresponding to each candidate music file, a corresponding volume change curve may be generated for each candidate music file.
In a specific implementation process, the method is not limited to generating a volume change curve for each candidate music file: a corresponding frequency change curve may instead be generated for each candidate music file, the similarity between each such audio change curve and the video scene change curve calculated, and the target music file determined according to the similarities.
Step 103: and respectively determining the similarity of each volume change curve and the video scene change curve.
One optional implementation is: calculating the similarity between the video scene change curve and each volume change curve through a similarity calculation method, which may include, but is not limited to: cosine similarity algorithms, minkowski distance algorithms, and the like. After the similarity calculation, each volume change curve corresponds to a similarity value, and each volume change curve corresponds to a music file.
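A minimal sketch of the cosine-similarity option mentioned above, assuming both curves have already been resampled to the same number of points (that resampling step is an assumption and is not shown here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length curves, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # degenerate flat curve: define similarity as 0
    return dot / (norm_a * norm_b)
```

A Minkowski distance could be substituted for the scoring function without changing the surrounding flow; only the "higher is better" convention would flip to "lower is better".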
Step 104: and determining the target music file according to the similarity.
When determining the target music file, the mobile terminal may, according to the similarity values corresponding to the volume change curves, recommend a preset number of music files from which the user selects the target music file; alternatively, the mobile terminal may automatically determine a target music file according to the similarity values, for example by determining the music file corresponding to the volume change curve with the highest similarity value as the target music file.
Step 105: and synthesizing the video file and the target music file to generate an edited target video file.
As for the specific mode of synthesizing the video file and the target music file, reference may be made to related technologies, which is not specifically limited in the embodiment of the present invention, and the target music file in the edited target video file is presented as the background music of the video file.
According to the video file editing method provided by the embodiment of the invention, a video scene change curve is generated by analyzing a video file to be edited; respectively determining a volume change curve corresponding to each candidate music file; respectively determining the similarity of each volume change curve and a video scene change curve; determining a target music file according to each similarity; the video file and the target music file are synthesized to generate the edited target video file, the target music file can be intelligently selected for the video file to be edited, the video file to be edited and the target music file are automatically merged, the user does not need to manually execute the operation of selecting the target music file and the operation of merging the video file to be edited and the target music file, and the use experience of the user can be improved.
Example two
Referring to fig. 2, a flowchart illustrating steps of a video file editing method according to a second embodiment of the present invention is shown.
The video file editing method of the embodiment of the invention comprises the following steps:
step 201: and analyzing the video file to be edited to generate a video scene change curve.
A manner of optionally parsing a video file to be edited to generate a video scene change curve is as follows:
firstly, respectively determining the scene change amounts between every two adjacent frames of images in the video file;
The video file is composed of frames of images, and the changes between those images form the dynamic video content. Each frame of image is composed of YUV data, and the YUV difference between adjacent frames represents the video scene change amount. For example, if the YUV of one frame of image is represented by P and the video scene change amount by N, the change amounts between adjacent frames can be expressed as N1 = P2 - P1, N2 = P3 - P2, and so on; a series of video scene change amounts N can be obtained in this way.
secondly, normalizing each scene change amount;
After normalization, a series of scene change amounts between 0 and 1 is obtained.
and finally, generating the video scene change curve according to the playback frame rate of the video file and the scene change amounts.
The video scene change curve is used for representing the change trend of the scene change quantity along with the time.
The playback frame rate is generally in the range of 24 to 30 frames per second, and through it the video file can be converted from a frame-indexed representation to a time-indexed representation. For example, a video file comprising 30000 frames at a frame rate of 30 frames per second converts to a video file with a duration of 1000 seconds.
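The normalization and frame-to-time conversion of this step can be sketched as follows; min-max normalization and the placement of change amount i at frame boundary i + 1 are assumptions for illustration, not details from the patent:

```python
def scene_change_curve(change_amounts, fps=30):
    """Build a video scene change curve: (time in seconds, value in [0, 1]).

    Min-max normalization maps the change amounts into [0, 1]; the
    playback frame rate converts frame indices to time stamps.
    """
    lo, hi = min(change_amounts), max(change_amounts)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat curve
    return [((i + 1) / fps, (v - lo) / span)
            for i, v in enumerate(change_amounts)]
```

At 30 frames per second, a 30000-frame video thus maps to 1000 seconds of curve.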
Step 202: and respectively determining the volume change curve corresponding to each candidate music file.
One way to determine the volume change curve corresponding to each candidate music file is as follows:
firstly, respectively determining the volume change amount between every two adjacent time points in each candidate music file; secondly, normalizing each volume change amount; and finally, ordering the volume change amounts according to the sequence of the corresponding time points to generate a volume change curve.
The volume change curve is used for representing the change trend of the volume change quantity along with time. The setting of the granularity of the time point can be set by those skilled in the art according to actual needs, for example: the time point granularity is set to 1 second, 500 milliseconds, 300 milliseconds, or the like.
In a specific implementation process, a corresponding volume change curve may be generated for each candidate music file in the above manner each time this step is executed. Alternatively, after the volume change curves have been generated once, they may be stored locally on the mobile terminal together with the candidate music files, so that the next time the video file editing process is executed, the stored curves are retrieved directly. In addition, the volume change curves may be generated when the candidate music files are first stored, so that no time needs to be spent generating them when the video file editing process is executed for the first time; this saves editing time and improves video file editing efficiency.
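The three sub-steps of this step (per-point volume deltas, normalization, ordering by time) can be sketched as follows; the one-reading-per-time-point input format and min-max normalization are illustrative assumptions:

```python
def volume_change_curve(volumes, step_s=1.0):
    """Volume change curve: (time in seconds, normalized change in [0, 1]).

    `volumes` holds one volume reading per time point, spaced `step_s`
    seconds apart (the time-point granularity mentioned above).
    """
    deltas = [abs(b - a) for a, b in zip(volumes, volumes[1:])]
    lo, hi = min(deltas), max(deltas)
    span = (hi - lo) or 1.0  # flat music: avoid division by zero
    return [((i + 1) * step_s, (d - lo) / span)
            for i, d in enumerate(deltas)]
```

Because both this curve and the video scene change curve are normalized to [0, 1] over a time axis, the two can be compared directly by the similarity step that follows.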
Step 203: and respectively determining the similarity of each volume change curve and the video scene change curve.
One candidate music corresponds to one volume change curve, and one volume change curve corresponds to one similarity.
After the similarity between each volume change curve and the video scene change curve is determined, the target music file is determined according to the similarities; the specific determination process is shown in steps 204 to 207.
Step 204: and sequencing the similarity from high to low.
In a specific implementation process, the similarities may instead be sorted from low to high; in that case, the candidate music files corresponding to the preset number of similarities at the end of the ordering are selected.
Step 205: and displaying the candidate music files corresponding to the similarity of the preset number sorted at the top.
The preset number can be set by a person skilled in the art according to actual requirements, and is not particularly limited in the embodiment of the present invention. For example: the preset number is set to 3, 4 or 5, etc.
Step 206: and receiving a selection operation of a target music file in the displayed candidate music files.
When each candidate music file is displayed, the name of the candidate music file can be displayed; the name of the candidate music and the information of the singer can also be displayed, and the specific display mode is not particularly limited in the embodiment of the invention. The selection operation for the target music file may be a double-click operation, a single-click operation, a long-press operation, or the like for the target music file.
Step 207: and determining the target music file according to the selection operation.
The selected candidate music file is determined as the target music file.
Steps 204 to 207 are ways of determining the target music file according to the user's needs, which can satisfy the user's personalized needs. In addition, the target music file can be automatically determined by the mobile terminal, manual operation of a user is not needed, and the use experience of the user can be improved. The manner in which the target music file is automatically determined by the mobile terminal may be set as: and determining the candidate music corresponding to the highest similarity in the similarities as the target music file.
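Steps 204 and 205 amount to a sort-and-take-top-N over the per-file similarities; the dictionary input format and the file names in the assertions are hypothetical:

```python
def recommend_music(similarities, top_n=3):
    """Rank candidate music files by similarity (high to low), keep top N.

    `similarities` maps a candidate file name to its similarity value;
    the returned list is what steps 204-205 would display to the user.
    """
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_n]]
```

The fully automatic variant described above corresponds to `top_n=1` with no user confirmation step.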
Step 208: and synthesizing the video file and the target music file to generate an edited target video file.
When the first duration of the target music file is greater than the second duration of the video file: a volume change curve segment of the second duration with the highest matching degree with the video scene change curve of the video file is intercepted from the volume change curve corresponding to the target music file; the music piece corresponding to that volume change curve segment is determined in the target music file; and the video file and the music piece are synthesized to generate the edited target video file.
When the first duration of the target music file is less than the second duration of the video file: a video scene change curve segment of the first duration with the highest matching degree with the volume change curve of the target music file is determined from the video scene change curve corresponding to the video file; the video clip corresponding to that video scene change curve segment is determined in the video file; and the target music file is synthesized into that video clip to generate the edited target video file.
When the first duration of the target music file is equal to the second duration of the video file, the target music file and the video file are synthesized directly from the starting time point, and neither file needs to be cut.
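The "highest matching degree" segment search used in the two unequal-duration cases above can be sketched as a sliding-window comparison over sampled curve values; the mean-absolute-error score is an assumption, since the patent does not specify the matching metric:

```python
def best_matching_offset(long_curve, short_curve):
    """Offset (in samples) at which `short_curve` best aligns with
    `long_curve`, scored by mean absolute error (lower is better).
    """
    window = len(short_curve)
    best_off, best_err = 0, float("inf")
    for off in range(len(long_curve) - window + 1):
        segment = long_curve[off:off + window]
        err = sum(abs(a - b) for a, b in zip(segment, short_curve)) / window
        if err < best_err:
            best_off, best_err = off, err
    return best_off
```

When the music is longer than the video, `long_curve` is the music's volume change curve and the offset marks the music piece to cut; when the video is longer, the roles are swapped and the offset marks the video clip to score.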
According to the video file editing method provided by the embodiment of the invention, a video scene change curve is generated by analyzing a video file to be edited; a volume change curve corresponding to each candidate music file is respectively determined; the similarity of each volume change curve and the video scene change curve is respectively determined; a target music file is determined according to each similarity; and the video file and the target music file are synthesized to generate the edited target video file. The target music file can be intelligently selected for the video file to be edited and automatically merged with it, so the user does not need to manually select the target music file or manually merge the two files, and the use experience of the user can be improved. In addition, the video file editing method provided by the embodiment of the invention can display a preset number of preliminarily screened music files to the user, and the user can select the target music file from among them.
EXAMPLE III
Referring to fig. 3, a block diagram of a mobile terminal according to a third embodiment of the present invention is shown.
The mobile terminal of the embodiment of the invention comprises: the generating module 301 is configured to analyze a video file to be edited, and generate a video scene change curve; a determining module 302, configured to determine volume change curves corresponding to the candidate music files respectively; a similarity determining module 303, configured to determine similarity between each volume change curve and the video scene change curve respectively; a target music file determining module 304, configured to determine a target music file according to each of the similarities; and an editing module 305, configured to synthesize the video file and the target music file, and generate an edited target video file.
Preferably, the generating module 301 may include: the first determining submodule 3011 is configured to determine scene changes of two adjacent frames of images in the video file respectively; a first normalization submodule 3012, configured to perform normalization processing on each scene variation; the first curve generating module 3013 is configured to generate a video scene change curve according to the playing frame rate of the video file and each of the scene change amounts, where the video scene change curve is used to represent a change trend of the scene change amount with time.
Preferably, the determining module 302 may include: a second determining submodule 3021, configured to determine, for each candidate music file, volume change amounts of two adjacent time points in the candidate music file respectively; a second normalization submodule 3022, configured to perform normalization processing on each volume variation; a second curve generating module 3023, configured to sort the volume change amounts according to the sequence of the corresponding time points, and generate a volume change curve, where the volume change curve is used to represent a change trend of the volume change amount with time.
Preferably, the target music file determining module 304 may include: a third determining submodule 3041, configured to determine, as a target music file, a candidate music corresponding to the highest similarity among the similarities; wherein, a candidate music corresponds to a volume change curve, and a volume change curve corresponds to a similarity; or, the sorting submodule 3042 is configured to sort the similarities from high to low; a display sub-module 3043, configured to display candidate music files corresponding to the similarity in the top preset number; a receiving submodule 3044 for receiving a selection operation of a target music file among the displayed candidate music files; a fourth determining sub-module 3045 for determining the target music file according to the selecting operation.
Preferably, the editing module 305 may include: the intercepting submodule 3051 is configured to intercept, from the volume change curve corresponding to the target music file, a volume change curve segment of the second duration with the highest matching degree with the video scene change curve of the video file when the first duration of the target music file is longer than the second duration of the video file; the music piece determining sub-module 3052 is configured to determine, in the target music file, a music piece corresponding to the volume change curve segment; the first synthesizing sub-module 3053 is configured to synthesize the video file and the music piece, and generate an edited target video file.
Preferably, the editing module 305 may include: the curve segment determining sub-module 3054 is configured to, when the first duration of the target music file is smaller than the second duration of the video file, determine, from video scene change curves corresponding to the video file, a video scene change curve segment of the first duration, where the degree of matching with the volume change curve of the target music file is highest; the video clip determining sub-module 3055 is configured to determine, in the video file, the video clip corresponding to the video scene change curve segment; the second synthesizing sub-module 3056 is configured to correspondingly synthesize the target music file into the video clips in the video file, so as to generate an edited target video file.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
The mobile terminal provided by the embodiment of the invention generates a video scene change curve by analyzing a video file to be edited; respectively determining a volume change curve corresponding to each candidate music file; respectively determining the similarity of each volume change curve and a video scene change curve; determining a target music file according to each similarity; the video file and the target music file are synthesized to generate the edited target video file, the target music file can be intelligently selected for the video file to be edited, and the video file to be edited and the target music file are automatically merged, so that the user does not need to manually perform the selection operation of the target music file and the merging operation of the video file to be edited and the target music file, the operation is convenient, and the use experience of the user can be improved.
Example four
Referring to fig. 4, a block diagram of a mobile terminal according to a fourth embodiment of the present invention is shown.
Fig. 4 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention. The mobile terminal 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 4 is not intended to be limiting; a mobile terminal may include more or fewer components than shown, combine some components, or arrange the components differently. In the embodiments of the present invention, mobile terminals include, but are not limited to, mobile phones, tablet computers, notebook computers, palm computers, vehicle-mounted terminals, wearable devices, pedometers, and the like.
The processor 610 is configured to analyze a video file to be edited, and generate a video scene change curve; respectively determining a volume change curve corresponding to each candidate music file; respectively determining the similarity of each volume change curve and the video scene change curve; determining a target music file according to each similarity; and synthesizing the video file and the target music file to generate an edited target video file.
The mobile terminal provided by the embodiment of the invention generates a video scene change curve by analyzing a video file to be edited; respectively determining a volume change curve corresponding to each candidate music file; respectively determining the similarity of each volume change curve and a video scene change curve; determining a target music file according to each similarity; the video file and the target music file are synthesized to generate the edited target video file, the target music file can be intelligently selected for the video file to be edited, the video file to be edited and the target music file are automatically merged, the user does not need to manually execute the operation of selecting the target music file and the operation of merging the video file to be edited and the target music file, and the use experience of the user can be improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 602, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602, or stored in the memory 609, into an audio signal and output it as sound. The audio output unit 603 may also provide audio output related to a specific function performed by the mobile terminal 600 (e.g., a call signal reception sound or a message reception sound). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042. The graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601.
The mobile terminal 600 also includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 6061 and/or the backlight when the mobile terminal 600 is moved to the ear. In some embodiments, the display panel 6061 is a flexible display screen that includes a screen base, a liftable module array, and a flexible screen stacked in sequence. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tapping); the sensor 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 6071 using a finger, a stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position the user touches, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 6071, the user input unit 607 may include other input devices 6072. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 6071 can be overlaid on the display panel 6061. When the touch panel 6071 detects a touch operation on or near it, the operation is transmitted to the processor 610 to determine the type of the touch event, and the processor 610 then provides a corresponding visual output on the display panel 6061 according to that type. Although the touch panel 6071 and the display panel 6061 are shown in Fig. 4 as two separate components implementing the input and output functions of the mobile terminal, in some embodiments they may be integrated to implement these functions; this is not limited herein.
The interface unit 608 is an interface through which an external device is connected to the mobile terminal 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 600 or may be used to transmit data between the mobile terminal 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook) and the like. Further, the memory 609 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 610 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby monitoring the mobile terminal as a whole. The processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 610.
The mobile terminal 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 is logically connected to the processor 610 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program, when executed by the processor 610, implements each process of the above-mentioned video file editing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned video file editing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A video file editing method is applied to a mobile terminal, and is characterized by comprising the following steps:
analyzing a video file to be edited to generate a video scene change curve;
respectively determining a volume change curve corresponding to each candidate music file;
respectively determining the similarity of each volume change curve and the video scene change curve;
determining a target music file according to each similarity;
synthesizing the video file and the target music file to generate an edited target video file;
the video scene change curve represents the trend over time of the scene change amount in the video file, and the scene change amount includes the change amounts of brightness, chroma, and color saturation between image frames; the volume change curve represents the trend over time of the volume change amount between adjacent time points in the music file.
2. The method according to claim 1, wherein the step of parsing the video file to be edited to generate the video scene change curve comprises:
respectively determining scene variation of two adjacent frames of images in the video file;
normalizing the scene variable quantities;
and generating a video scene change curve according to the playing frame rate of the video file and each scene change amount, wherein the video scene change curve is used for representing the change trend of the scene change amount along with time.
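Claim 2's steps — per-pair scene deltas, normalization, and a curve tied to the playing frame rate — can be sketched as follows. Using raw YUV plane differences as the brightness/chroma change amount is an assumption; the claim does not specify the exact measure.

```python
import numpy as np

def frame_delta(prev, curr):
    """Scene change between two adjacent frames: mean absolute difference
    over the frame planes. The patent names brightness, chroma, and color
    saturation; averaging over YUV planes is used here as a stand-in."""
    return float(np.abs(curr.astype(float) - prev.astype(float)).mean())

def scene_change_curve(yuv_frames, fps):
    """Normalize the adjacent-frame deltas to [0, 1] and pair each with its
    timestamp (frame index / fps), giving scene change as a function of time."""
    deltas = np.array([frame_delta(a, b)
                       for a, b in zip(yuv_frames, yuv_frames[1:])])
    span = deltas.max() - deltas.min()
    norm = (deltas - deltas.min()) / span if span else np.zeros_like(deltas)
    times = np.arange(1, len(yuv_frames)) / fps
    return times, norm
```

Normalizing to [0, 1] makes the video curve directly comparable with the similarly normalized volume curves of claim 3.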
3. The method of claim 1, wherein the step of determining the volume change curve corresponding to each candidate music file comprises:
respectively determining the volume variation of two adjacent time points in each candidate music file;
normalizing each volume variation;
and sequencing the volume variable quantities according to the sequence of the corresponding time points to generate a volume change curve, wherein the volume change curve is used for representing the change trend of the volume variable quantities along with time.
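Claim 3's volume change curve can be sketched similarly. Deriving the volume at each time point as windowed RMS loudness is an assumption (the claim only speaks of the volume at adjacent time points), and the hop size is arbitrary.

```python
import numpy as np

def volume_change_curve(samples, sample_rate, hop_s=0.05):
    """Volume change curve for one candidate track: RMS loudness per hop,
    absolute difference between adjacent hops, normalized to [0, 1].
    hop_s (seconds between 'adjacent time points') is an assumed value."""
    hop = max(1, int(sample_rate * hop_s))
    n = len(samples) // hop
    rms = np.array([np.sqrt(np.mean(np.square(samples[i * hop:(i + 1) * hop])))
                    for i in range(n)])
    deltas = np.abs(np.diff(rms))          # change between adjacent points
    span = deltas.max() - deltas.min()
    return (deltas - deltas.min()) / span if span else np.zeros_like(deltas)
```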
4. The method of claim 1, wherein the step of determining the target music file according to each of the similarities comprises:
determining the candidate music file corresponding to the highest of the similarities as the target music file; wherein each candidate music file corresponds to one volume change curve, and each volume change curve corresponds to one similarity;
or,
sequencing the similarity from high to low;
displaying the candidate music files corresponding to a preset number of the highest-ranked similarities;
receiving a selection operation of a target music file in the displayed candidate music files;
and determining a target music file according to the selection operation.
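The two branches of claim 4 — automatic selection of the single best match, or showing a preset number of top matches for the user to choose from — reduce to a sort over the similarity scores (the helper names below are illustrative):

```python
def auto_pick(similarities):
    """Branch one: the candidate with the highest similarity wins."""
    return max(range(len(similarities)), key=lambda i: similarities[i])

def rank_candidates(similarities, top_n=5):
    """Branch two: candidate indices ordered from most to least similar,
    truncated to a preset number for display; the user then selects one."""
    order = sorted(range(len(similarities)),
                   key=lambda i: similarities[i], reverse=True)
    return order[:top_n]
```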
5. The method of claim 1, wherein the step of synthesizing the video file with the target music file to generate an edited target video file comprises:
in the case that the first duration of the target music file is longer than the second duration of the video file, intercepting, from the volume change curve corresponding to the target music file, a volume change curve segment of the second duration that has the highest matching degree with the video scene change curve of the video file;
determining a music segment corresponding to the volume change curve segment in the target music file;
and synthesizing the video file and the music fragment to generate an edited target video file.
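Claim 5's search for the segment with the "highest matching degree" can be read as a sliding-window comparison over the music's volume curve; interpreting matching degree as the smallest mean absolute difference is an assumption, not the patent's stated metric.

```python
import numpy as np

def best_music_segment(volume_curve, video_curve):
    """When the music outlasts the video, slide a window of the video
    curve's length over the music's volume curve and keep the offset whose
    window differs least from the video's scene change curve. The music
    segment to synthesize starts at that offset and matches the video's
    length."""
    m = np.asarray(volume_curve, dtype=float)
    v = np.asarray(video_curve, dtype=float)
    best_off, best_err = 0, float("inf")
    for off in range(len(m) - len(v) + 1):
        err = np.abs(m[off:off + len(v)] - v).mean()
        if err < best_err:
            best_off, best_err = off, err
    return best_off
```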
6. The method of claim 1, wherein the step of synthesizing the video file with the target music file to generate an edited target video file comprises:
in the case that the first duration of the target music file is shorter than the second duration of the video file, determining, from the video scene change curve corresponding to the video file, a video scene change curve segment of the first duration that has the highest matching degree with the volume change curve of the target music file;
determining the video clip corresponding to the video scene change curve segment in the video file;
and correspondingly synthesizing the target music file into the video clips in the video file to generate an edited target video file.
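Claim 6 mirrors claim 5 with the roles swapped: the music is shorter, so the window slides over the video's scene change curve instead. Under the same assumed matching measure (smallest mean absolute difference):

```python
import numpy as np

def best_video_segment(video_curve, volume_curve):
    """When the video outlasts the music, slide a window of the music
    curve's length over the video's scene change curve and return the
    offset of the best-matching video clip; the music is then synthesized
    into that clip."""
    v = np.asarray(video_curve, dtype=float)
    m = np.asarray(volume_curve, dtype=float)
    offsets = range(len(v) - len(m) + 1)
    return min(offsets, key=lambda off: np.abs(v[off:off + len(m)] - m).mean())
```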
7. A mobile terminal, characterized in that the mobile terminal comprises:
the generating module is used for analyzing the video file to be edited and generating a video scene change curve;
the determining module is used for respectively determining the volume change curves corresponding to the candidate music files;
the similarity determining module is used for respectively determining the similarity between each volume change curve and the video scene change curve;
the target music file determining module is used for determining a target music file according to each similarity;
the editing module is used for synthesizing the video file and the target music file to generate an edited target video file;
the video scene change curve represents the trend over time of the scene change amount in the video file, and the scene change amount includes the change amounts of brightness, chroma, and color saturation between image frames; the volume change curve represents the trend over time of the volume change amount between adjacent time points in the music file.
8. The mobile terminal of claim 7, wherein the generating module comprises:
the first determining submodule is used for respectively determining scene variation of two adjacent frames of images in the video file;
the first normalization submodule is used for performing normalization processing on the scene variable quantity;
and the first curve generation module is used for generating a video scene change curve according to the playing frame rate of the video file and each scene change amount, wherein the video scene change curve is used for representing the change trend of the scene change amount along with time.
9. The mobile terminal of claim 7, wherein the determining module comprises:
the second determining submodule is used for respectively determining the volume change of two adjacent time points in each candidate music file;
the second normalization submodule is used for performing normalization processing on each volume variable quantity;
and the second curve generation module is used for sequencing the volume change quantities according to the sequence of the corresponding time points to generate a volume change curve, wherein the volume change curve is used for representing the change trend of the volume change quantities along with the time.
10. The mobile terminal of claim 7, wherein the target music file determining module comprises:
the third determining submodule is used for determining the candidate music file corresponding to the highest of the similarities as the target music file; wherein each candidate music file corresponds to one volume change curve, and each volume change curve corresponds to one similarity;
or,
the sequencing submodule is used for sequencing the similarity from high to low;
the display sub-module is used for displaying the candidate music files corresponding to a preset number of the highest-ranked similarities;
the receiving submodule is used for receiving the selection operation of the target music file in each displayed candidate music file;
and the fourth determining submodule is used for determining the target music file according to the selection operation.
11. The mobile terminal of claim 7, wherein the editing module comprises:
the intercepting submodule is used for, in the case that the first duration of the target music file is longer than the second duration of the video file, intercepting, from the volume change curve corresponding to the target music file, a volume change curve segment of the second duration that has the highest matching degree with the video scene change curve of the video file;
the music piece determining submodule is used for determining the music piece corresponding to the volume change curve segment in the target music file;
and the first synthesis submodule is used for synthesizing the video file and the music fragment to generate an edited target video file.
12. The mobile terminal of claim 7, wherein the editing module comprises:
the curve segment determining submodule is used for, in the case that the first duration of the target music file is shorter than the second duration of the video file, determining, from the video scene change curve corresponding to the video file, a video scene change curve segment of the first duration that has the highest matching degree with the volume change curve of the target music file;
the video clip determining submodule is used for determining the video clip corresponding to the video scene change curve segment in the video file;
and the second synthesis submodule is used for correspondingly synthesizing the target music file into the video clips in the video file to generate an edited target video file.
13. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the video file editing method according to any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video file editing method according to any one of claims 1 to 6.
CN201811205238.2A 2018-10-16 2018-10-16 Video file editing method and mobile terminal Active CN109246474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811205238.2A CN109246474B (en) 2018-10-16 2018-10-16 Video file editing method and mobile terminal


Publications (2)

Publication Number Publication Date
CN109246474A CN109246474A (en) 2019-01-18
CN109246474B true CN109246474B (en) 2021-03-02

Family

ID=65053742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811205238.2A Active CN109246474B (en) 2018-10-16 2018-10-16 Video file editing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN109246474B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110278484B (en) * 2019-05-15 2022-01-25 北京达佳互联信息技术有限公司 Video dubbing method and device, electronic equipment and storage medium
TWI716033B (en) * 2019-07-15 2021-01-11 李姿慧 Video Score Intelligent System
CN111491211B (en) * 2020-04-17 2022-01-28 维沃移动通信有限公司 Video processing method, video processing device and electronic equipment
CN112153460B (en) * 2020-09-22 2023-03-28 北京字节跳动网络技术有限公司 Video dubbing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101141603A (en) * 2006-09-06 2008-03-12 富士胶片株式会社 Method, program and apparatus for generating scenario for music-and-image-synchronized motion picture
CN101727943A (en) * 2009-12-03 2010-06-09 北京中星微电子有限公司 Method and device for dubbing music in image and image display device
CN102314917A (en) * 2010-07-01 2012-01-11 北京中星微电子有限公司 Method and device for playing video and audio files
CN103793446A (en) * 2012-10-29 2014-05-14 汤晓鸥 Music video generation method and system
CN104778238A (en) * 2015-04-03 2015-07-15 中国农业大学 Video saliency analysis method and video saliency analysis device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002258842A (en) * 2000-12-27 2002-09-11 Sony Computer Entertainment Inc Device, method, and program for sound control, computer- readable storage medium with stored sound control program, and program for executing device executing the sound control program




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant