CN111491211B - Video processing method, video processing device and electronic equipment

Video processing method, video processing device and electronic equipment

Info

Publication number
CN111491211B
CN111491211B (application CN202010305884.7A)
Authority
CN
China
Prior art keywords
music
target
video
piece
input
Prior art date
Legal status
Active
Application number
CN202010305884.7A
Other languages
Chinese (zh)
Other versions
CN111491211A (en)
Inventor
孙鑫
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010305884.7A priority Critical patent/CN111491211B/en
Publication of CN111491211A publication Critical patent/CN111491211A/en
Application granted granted Critical
Publication of CN111491211B publication Critical patent/CN111491211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8106Monomedia components thereof involving special audio data, e.g. different tracks for different languages
    • H04N21/8113Monomedia components thereof involving special audio data, e.g. different tracks for different languages comprising music, e.g. song in MP3 format

Abstract

The invention provides a video processing method, a video processing device and an electronic device. The method includes the following steps: displaying a music identifier, where the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by a microphone of the electronic device; receiving a first input of the music identifier by a user; and, in response to the first input, adding a target music piece associated with the target music to a target video. In this way, the operations of obtaining and adding background music during video production can be simplified.

Description

Video processing method, video processing device and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a video processing method, a video processing apparatus, and an electronic device.
Background
At present, background music is usually added when making multimedia content such as videos and planning proposals, so as to improve the playing effect of the multimedia content. However, when adding background music, the user usually has to first search a local database for background music of interest; if none is found, the background music of interest is downloaded from a network database and finally added to the multimedia content. Therefore, in the existing video production process, adding background music involves cumbersome operations.
Disclosure of Invention
Embodiments of the invention provide a video processing method, a video processing device and an electronic device, which can solve the problem of cumbersome operations when adding background music in the existing video production process.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video processing method applied to an electronic device, including:
displaying a music identifier, wherein the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by a microphone of the electronic equipment;
receiving a first input of the music identifier by a user;
in response to the first input, adding a target music piece associated with the target music to a target video.
In a second aspect, an embodiment of the present invention further provides a video processing apparatus, including:
the first display module is used for displaying a music identifier, wherein the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by a microphone of the electronic equipment;
the first receiving module is used for receiving a first input of the music identifier by a user;
an adding module, configured to add, in response to the first input, a target music piece associated with the target music to a target video.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video processing method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the video processing method are implemented.
In the embodiment of the invention, during video production, when the microphone of the electronic device collects audio, the target music corresponding to that audio can be automatically identified from the collected audio and visually presented in the form of a music identifier, prompting the user that background music is available. By receiving the user's first input on the music identifier, the target music segment associated with the target music can be added to the target video, thereby adding background music to the target video. Compared with requiring the user to enter search keywords and look for background music of interest, this effectively simplifies the operations of obtaining and adding background music during video production.
Drawings
Fig. 1 is a flow chart of a video processing method provided by an embodiment of the invention;
FIG. 2a is a schematic diagram of a production interface for a target video provided by an embodiment of the invention;
FIG. 2b is a first schematic diagram of a music editing interface according to an embodiment of the present invention;
FIG. 2c is a second schematic diagram of a music editing interface according to an embodiment of the present invention;
fig. 3 is a block diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, an embodiment of the present invention provides a video processing method, which can be applied to an electronic device such as a mobile phone or a tablet computer so that a user can add background music while making multimedia content. The method includes the following steps:
step 101, displaying a music identifier.
In this step, the music identifier is used to indicate the target music, the target music is identified based on the target audio, and the target audio is audio collected by a microphone of the electronic device.
For example, during video processing, when a microphone of the electronic device detects an audio signal, the audio signal is collected, audio data corresponding to the audio signal is obtained, and the corresponding music is identified from the collected audio data, thereby collecting and acquiring the music automatically.
If the detected audio data includes audio data other than music audio data, the other audio data is filtered out and only the music audio data is extracted; the corresponding music is then identified from the extracted music audio data, which reduces the interference of the other audio data with the identification of the target music.
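By way of a non-limiting illustration (the patent does not specify a particular separation technique), the filtering step above might be sketched as follows in Kotlin; the MusicClassifier interface, its frame-level probabilities and the threshold are assumptions introduced only for this example.

```kotlin
// Illustrative sketch: keep only frames that look like music before recognition.
// MusicClassifier and its probability scores are hypothetical, not from the patent.
interface MusicClassifier {
    /** Returns a probability in [0.0, 1.0] that the frame contains music. */
    fun musicProbability(frame: ShortArray): Double
}

fun extractMusicFrames(
    pcm: ShortArray,            // audio data collected by the microphone
    frameSize: Int,             // samples per analysis frame
    classifier: MusicClassifier,
    threshold: Double = 0.6     // assumed cutoff between music and other audio
): ShortArray {
    val kept = ArrayList<Short>()
    var offset = 0
    while (offset + frameSize <= pcm.size) {
        val frame = pcm.copyOfRange(offset, offset + frameSize)
        // Drop speech/noise frames so they do not interfere with identifying the target music.
        if (classifier.musicProbability(frame) >= threshold) {
            frame.forEach { kept.add(it) }
        }
        offset += frameSize
    }
    return kept.toShortArray()
}
```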
Moreover, when the target music is identified, a music identifier may be displayed on the video processing interface to indicate to the user that the electronic device has collected music information that can be used as background music for the video.
The music identifier may be a control displayed in the video processing interface, and the control may be displayed in different shapes or colors according to the type of music. For example, if the collected music is pop music, an identifier in a pop style is displayed; if the collected music is electronic music, an identifier in an electronic style is displayed, making the music style easier to recognize.
Furthermore, the degree to which the identified music matches the video currently being made can be analyzed and the identified music scored; music with a low score can be filtered out directly, that is, no music identifier is displayed for low-scoring music, which reduces the number of recommendations and improves the user's experience during video production.
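As a non-limiting sketch of this scoring-and-filtering idea (the patent does not define the scoring model, so the data classes, score function and cutoff below are assumptions):

```kotlin
// Illustrative sketch: only candidates scoring above a cutoff get a music identifier shown.
data class MusicCandidate(val title: String, val artist: String, val genre: String)
data class VideoProfile(val mood: String, val durationSec: Int)

fun candidatesToDisplay(
    candidates: List<MusicCandidate>,
    video: VideoProfile,
    score: (MusicCandidate, VideoProfile) -> Double,  // hypothetical match score in [0, 1]
    cutoff: Double = 0.5
): List<MusicCandidate> =
    candidates
        .map { it to score(it, video) }
        .filter { (_, s) -> s >= cutoff }        // low-scoring music: no identifier displayed
        .sortedByDescending { (_, s) -> s }      // best matches recommended first
        .map { (candidate, _) -> candidate }
```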
In this embodiment, the corresponding target music is obtained from the audio collected by the microphone of the electronic device, so the user does not need to actively search for background music that matches the current video, which effectively simplifies obtaining background music.
In particular, music signals around the user/electronic device may be automatically detected based on a voiceprint recognition algorithm of the electronic device. For example, in making a video, it is possible to identify music played by shops on the street, or to identify a song hummed by passers-by.
When the microphone of the electronic device collects audio data, music identification information for the collected audio data can be determined from audio data stored in a local database or a network database. For example, the song title, singer, lyrics, album and similar information of the audio data collected by the microphone are identified.
After the music identification information of the audio data collected by the microphone is determined, the audio segment associated with the audio data can be obtained from the network database according to the music identification information; that is, the music associated with the audio data is identified, and a music identifier is displayed to prompt the user that music usable as video background music has been identified. By identifying the surrounding audio data, available background music is recommended; compared with requiring the user to enter search keywords and look for background music of interest, this effectively simplifies the operations of obtaining and adding background music during video production.
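A minimal sketch of this lookup order, local database first and network database as a fallback, is given below; the repository interface and the fields of MusicIdentification are assumptions for illustration only.

```kotlin
// Illustrative sketch: determine music identification information for captured audio.
data class MusicIdentification(
    val title: String,
    val artist: String,
    val album: String?,
    val lyrics: String?
)

interface MusicRepository {
    /** Returns identification info for an audio fingerprint, or null if unknown. */
    fun identify(fingerprint: ByteArray): MusicIdentification?
}

fun identifyCapturedAudio(
    fingerprint: ByteArray,      // fingerprint derived from the microphone audio
    local: MusicRepository,      // audio data stored in the local database
    network: MusicRepository     // audio data stored in the network database
): MusicIdentification? =
    local.identify(fingerprint) ?: network.identify(fingerprint)
```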
It should be noted that if no audio data is collected, the music identifier is not displayed; if audio data is collected but its identification information cannot be determined and the music segment corresponding to the audio data cannot be acquired, the music identifier is likewise not displayed.
If multiple pieces of audio data are identified at the same time, the audio segments associated with the identified audio data are displayed on the video production interface so that the user can select the background music.
The music identifier may be displayed in a floating manner on the video production interface. The music identifier may also show song title information, singer information or the like associated with the audio data, so that by looking at the music identifier the user can decide whether to use the audio segment associated with the audio data as the background music of the video.
And 102, receiving a first input of the music identification by a user.
In this step, the first input may be a press operation, a touch operation or the like performed by the user on the music identifier.
Step 103, responding to the first input, and adding a target music segment associated with the target music into a target video.
In this step, the target music segment associated with the target music is added to the target video in response to the first input, so that background music is added to the video.
The target music piece may be the piece of the target music contained in the target audio, that is, the currently identified piece of the target music in the target audio; it may also be a piece selected from at least two music pieces of the target music.
Moreover, when the target music piece is the music piece of the target music included in the target audio, the target music piece can be directly added to the target video, so that the selection operation of a user is omitted, and the convenience of adding the background music of the target video is further improved.
In this embodiment, during video production, when the microphone of the electronic device collects audio, the target music corresponding to that audio can be automatically identified from the collected audio and visually presented in the form of a music identifier, prompting the user that background music is available. By receiving the user's first input on the music identifier, the target music segment associated with the target music can be added to the target video, thereby adding background music to the target video. Compared with requiring the user to enter search keywords and look for background music of interest, this effectively simplifies the operations of obtaining and adding background music during video production.
Optionally, before adding the target music segment associated with the target music to the target video, the method further includes: displaying music identification information of the target music and T music pieces of the target music; wherein T is a positive integer.
In this embodiment, in the background music adding interface of the video, the music identification information of the target music may be displayed, so that the user can know the specific information of the target music, such as song style information, singer information, playing time, album information, and the like; t music pieces of the target music can also be displayed, so that the user selects the target music piece from the T music pieces as background music of the target video.
Specifically, a first music piece of the T music pieces may be taken as the target music piece and added to the target video by receiving a user selection input for the first music piece. By offering more music pieces to choose from, the interaction between the user and the electronic device during background-music addition is enhanced and the human-machine interaction experience is improved.
The T pieces of music may include a currently identified piece of music, or a piece of music with a preset music characteristic, such as a climax piece of the target music, and may also include an entire piece of music of the target music.
As shown in FIG. 2a, when target music associated with the audio collected by the microphone of the electronic device has been acquired, a selection input on the music identifier 10 may be received and, in response to the selection input, the music editing interface 20 shown in FIG. 2b may be displayed. The currently identified segment 21 of the target music, the climax segment 22 of the target music and the entire piece 23 of the target music may be displayed on the music editing interface 20, broadening the choice of background music so that the user can select a suitable audio segment from those displayed on the music editing interface as the background music of the target video.
As shown in fig. 2b, a video track interface 24 may also be displayed below the available music pieces, the video track interface displaying a plurality of video frame thumbnails and a video slide bar, and the addition position of the target music piece may be set by dragging the video slide bar.
The target music piece may be determined by receiving a user selection input for a first music piece among the displayed T music pieces. Specifically, a selection box may be displayed next to each music piece, and when a selection input on a selection box is received, the music piece corresponding to that selection box is added to the target video.
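The selectable pieces of FIG. 2b and the selection-box behavior described above could be modeled as in the following sketch; the enum, the piece fields and the selection helper are assumptions, not part of the claimed method.

```kotlin
// Illustrative model of the T selectable pieces on the music editing interface:
// the currently identified segment (21), the climax segment (22) and the entire piece (23).
enum class PieceKind { CURRENTLY_IDENTIFIED, CLIMAX, ENTIRE }

data class MusicPiece(
    val kind: PieceKind,
    val startSec: Double,          // offset of the piece within the full song
    val durationSec: Double,
    var selected: Boolean = false  // state of the selection box next to the piece
)

/** Marks the chosen piece's selection box and returns it as the target music piece. */
fun selectTargetPiece(pieces: List<MusicPiece>, chosen: PieceKind): MusicPiece? {
    pieces.forEach { it.selected = (it.kind == chosen) }
    return pieces.firstOrNull { it.selected }
}
```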
For example, during the production process of the video multimedia content, a key frame picture of the video can be displayed on the music editing interface, so that the user can determine the adding position of the background music. And a video slide bar can be arranged at the adding position of the background music so that the user can check the adding position of the background music.
After the background music is added to the first position of the target video, the adding position of the background music can be adjusted by dragging the video slide bar, for example, from the first position to the second position.
Sliding operations by the user on the key frame pictures of the target video, such as left and right slides, can also be received, allowing the user to browse the key frame pictures of the video.
For example, a video track of the target video may be displayed, where the video track includes P video frame thumbnails and a video slide bar. By receiving a fifth input from the user on the video slide bar, such as a drag or slide input, the video slide bar is moved from a first position to a second position and the selected target music piece is added to a target video frame, where the target video frame is the video frame corresponding to the video frame thumbnail at the second position. Here P is a positive integer, and the T music pieces include a first piece identified from the target music, a second piece with preset music characteristics, and so on; specifically, the T music pieces may include the currently identified segment 21 of the target music, the climax segment 22 of the target music, and the entire piece 23 of the target music shown in FIG. 2b.
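The placement step can be sketched as below; it assumes, purely for illustration, that each of the P thumbnails on the video track stands for an equal share of the video's duration, which the patent does not state.

```kotlin
// Illustrative sketch: map the slide bar's second position (a thumbnail index)
// to a timestamp in the target video and record where the piece should start.
data class MusicPlacement(val pieceId: String, val startInVideoSec: Double)

fun placeAtSlider(
    pieceId: String,
    sliderThumbnailIndex: Int,   // index of the thumbnail under the slide bar (0-based)
    thumbnailCount: Int,         // P video frame thumbnails on the track
    videoDurationSec: Double
): MusicPlacement {
    require(thumbnailCount > 0 && sliderThumbnailIndex in 0 until thumbnailCount)
    val startSec = videoDurationSec * sliderThumbnailIndex / thumbnailCount
    return MusicPlacement(pieceId, startSec)
}
```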
For each music segment displayed on the music editing interface, its playing length is also displayed, and the playing length can be adjusted by receiving the user's drag operation on it. Moreover, to improve the precision of this adjustment, a long-press operation by the user on a music segment can be received; in response to the long-press operation, an adjustment interface for the playing length of the music segment is displayed, in which the music segment can be trimmed in units of seconds. For example, for a music segment whose playing time is 20 seconds, the numbers 5 and 15 may be entered, so that the 5-15 second portion of the segment is extracted.
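The second-level trimming can be sketched as follows; the PCM layout (sample rate, single channel) is an assumption made only so the example is concrete.

```kotlin
// Illustrative sketch: keep only the requested seconds of a music segment.
fun trimPiece(pcm: ShortArray, sampleRate: Int, fromSec: Int, toSec: Int): ShortArray {
    require(fromSec >= 0 && toSec > fromSec)
    val from = (fromSec * sampleRate).coerceAtMost(pcm.size)
    val to = (toSec * sampleRate).coerceAtMost(pcm.size)
    return pcm.copyOfRange(from, to)
}

// Example: for a 20-second segment, trimPiece(pcm, 44100, 5, 15) keeps the 5-15 second portion.
```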
Optionally, when the first playing duration of the target music segment is shorter than the second playing duration of the target video, the playing mode of the target music segment in the target video may be set to loop playback, so that background music is present throughout the playback of the target video, which improves the playing effect of the target video.
Optionally, when the first playing duration of the target music segment is longer than the second playing duration of the target video, the target music segment may be trimmed according to the second playing duration and the trimmed target music segment added to the target video, improving the match between the target music segment and the target video and thereby the playing effect of the target video's background music.
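Taken together, these two cases amount to the following decision; the PlaybackPlan type is an assumption used only to make the branching explicit.

```kotlin
// Illustrative sketch: loop the piece when it is shorter than the video, trim it when longer.
sealed class PlaybackPlan {
    data class Loop(val pieceDurationSec: Double, val repeatUntilSec: Double) : PlaybackPlan()
    data class Trim(val keepSec: Double) : PlaybackPlan()
    object AsIs : PlaybackPlan()
}

fun planBackgroundMusic(pieceDurationSec: Double, videoDurationSec: Double): PlaybackPlan =
    when {
        // First playing duration < second playing duration: set loop playback.
        pieceDurationSec < videoDurationSec ->
            PlaybackPlan.Loop(pieceDurationSec, repeatUntilSec = videoDurationSec)
        // First playing duration > second playing duration: trim to the video length.
        pieceDurationSec > videoDurationSec ->
            PlaybackPlan.Trim(keepSec = videoDurationSec)
        else -> PlaybackPlan.AsIs
    }
```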
Further, if the user wants to add multiple pieces of background music to the target video, for example adding the currently identified segment of the target music to the first two minutes of the target video and the climax segment of the target music to the last two minutes, the video slide bar is first positioned at the start of the target video (that is, at the first frame picture) and the currently identified segment of the target music is selected; the video slide bar is then moved to the middle of the target video and the climax segment of the target music is selected.
If the playing duration of an added music segment is less than two minutes, the segment is looped within its corresponding playing interval, which improves the playing effect of the target video.
For example, if the currently identified segment of the target music is shorter than two minutes, it is looped within the first two minutes of the target video; once its playing time reaches two minutes, its playback stops automatically. Playback then enters the next two-minute interval of the target video, in which the climax segment of the target music is used as the background music.
Optionally, the music identification information includes a music name tag and a music singer tag. After the music identification information of the target music and the T music pieces of the target music are displayed, and before the target music piece associated with the target music is added to the target video, the method further includes: receiving a third input from the user on the music singer tag; and, in response to the third input, updating the tag content of the music singer tag from the first singer to the second singer and updating the T music pieces to S music pieces; where the T music pieces are music pieces of the target music in a first singing version, the target music in the first singing version being sung by the first singer; the S music pieces are music pieces of the target music in a second singing version, the target music in the second singing version being sung by the second singer; and S is a positive integer.
Further optionally, after the tag contents of the music singer tag are updated from the first singer to the second singer and the T music pieces are updated to S music pieces, before the target music piece associated with the target music is added to the target video, the method further includes: receiving a fourth input of the user to a second music piece of the S music pieces; in response to the fourth input, determining the second piece of music as a target piece of music.
In this embodiment, music pieces in different versions can be provided for the user to choose from, which increases the flexibility of background-music selection and improves the effect of the added background music.
As shown in fig. 2c, a music version label 25 (i.e. a music singer label) may also be displayed on the music editing interface 20 to provide singing versions of different singers, and may also provide singing versions of the same singer on different occasions to increase flexibility and variety of selection of background music.
The fourth input is a version-switching operation on the song information associated with the target music. In response to the fourth input, the target music piece added to the target video may be updated to the music piece of the target version (that is, the version sung by the second singer), so that the user can select a preferred version as the background music of the target video.
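The version switch can be sketched as follows; how versions are stored and keyed is not specified in the patent, so the classes and fields below are assumptions for illustration.

```kotlin
// Illustrative sketch: tapping the music singer tag replaces the first singer's
// T pieces with the second singer's S pieces as the displayed options.
data class VersionedPieces(val singer: String, val pieces: List<String>)

class MusicVersionSelector(private val versions: List<VersionedPieces>) {
    var current: VersionedPieces = versions.first()
        private set

    /** Third/fourth input flow: switch the singer tag and return that version's pieces. */
    fun switchTo(singer: String): List<String> {
        current = versions.firstOrNull { it.singer == singer } ?: current
        return current.pieces  // the S pieces of the newly selected version
    }
}
```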
According to the video processing method, a music identifier is displayed, where the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by the microphone of the electronic device; a first input on the music identifier is received from the user; and, in response to the first input, a target music piece associated with the target music is added to a target video. The operations of obtaining and adding background music during video production are thus simplified.
As shown in fig. 3, an embodiment of the present invention further provides a video processing apparatus, where the video processing apparatus 300 includes:
the first display module 301 is configured to display a music identifier, where the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by a microphone of an electronic device;
a first receiving module 302, configured to receive a first input of the music identifier from a user;
an adding module 303, configured to add, in response to the first input, a target music piece associated with the target music to a target video.
Optionally, the target music segment is a music segment of the target music included in the target audio, or the target music segment is one of at least two music segments of the target music.
Optionally, the video processing apparatus 300 further includes:
the second display module is used for displaying the music identification information of the target music and the T music pieces of the target music;
wherein T is a positive integer.
Optionally, the video processing apparatus 300 further includes:
the second receiving module is used for receiving a second input of the user to a first music piece in the T music pieces;
a second determining module for determining the first music piece as a target music piece in response to the second input.
Optionally, the music identification information includes a music name tag and a music singer tag;
the video processing apparatus 300 further comprises:
a third receiving module, configured to receive a third input to the music singer tag from the user;
an update module, configured to update the tag contents of the music singer tags from the first singer to the second singer and update the T music pieces to S music pieces in response to the third input;
the T music pieces are music pieces of the target music of a first singing version, and the target music of the first singing version is sung by the first singer; the S music pieces are music pieces of the target music of a second singing version, and the target music of the second singing version is sung by the second singer; and S is a positive integer.
Optionally, the video processing apparatus 300 further includes:
a fourth receiving module, configured to receive a fourth input of the second music piece of the S music pieces from the user;
a second determining module, configured to determine the second piece of music as a target piece of music in response to the fourth input.
Optionally, the adding module 303 includes:
the display unit is used for displaying a video track of the target video, the target video comprises P video frame thumbnails and a video slide bar, and fifth input of a user to the video slide bar is received;
a first adding unit, configured to, in response to the fifth input, move the video slide bar from a first position to a second position, and add a target music piece to a target video frame, where the target video frame is a video frame corresponding to a video frame thumbnail at the second position;
wherein P is a positive integer, and the T music pieces comprise the identified first piece of the target music and a second piece of preset music characteristics.
Optionally, the adding module 303 includes:
a second adding unit, configured to add the target music segment to the target video and set a playing mode of the target music segment to a loop playing mode in a case where a first playing time length of the target music segment is smaller than a second playing time length of the target video;
and the third adding unit is used for intercepting the target music segment according to the second playing time length under the condition that the first playing time length of the target music segment is greater than the second playing time length of the target video, and adding the intercepted target music segment into the target video.
The video processing apparatus 300 can implement each process implemented by the electronic device in the method embodiments of fig. 1 and fig. 2c, and is not described herein again to avoid repetition.
As shown in fig. 4, an embodiment of the present invention further provides an electronic device, where the electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The display unit 406 is configured to display a music identifier, where the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by a microphone of the electronic device; a user input unit 407, configured to receive a first input of the music identifier by a user; a processor 410 for adding a target music piece associated with the target music to a target video in response to the first input.
Optionally, the target music segment is a music segment of the target music included in the target audio, or the target music segment is one of at least two music segments of the target music.
Optionally, the display unit 406 is configured to display music identification information of the target music and T pieces of music of the target music; wherein T is a positive integer.
Optionally, the user input unit 407 is configured to receive a second input of the first music piece of the T music pieces by the user; a processor 410 for determining the first piece of music as a target piece of music in response to the second input.
Optionally, the music identification information includes a music name tag and a music singer tag; a user input unit 407 for receiving a third input of the music singer tag by a user; a processor 410, configured to update the tag content of said music singer tags from the first singer to the second singer and update said T music segments to S music segments in response to said third input; the T music pieces are music pieces of the target music of a first singing version, and the target music of the first singing version is sung by the first singer; the S music pieces are music pieces of the target music of a second singing version, and the target music of the second singing version is sung by the second singer; and S is a positive integer.
Optionally, the user input unit 407 is configured to receive a fourth input of the second music piece of the S music pieces from the user; a processor 410, configured to determine the second piece of music as a target piece of music in response to the fourth input.
Optionally, the display unit 406 is configured to display a video track of the target video, where the target video includes P video frame thumbnails and a video slide bar, and receive a fifth input to the video slide bar from the user; a processor 410, configured to respond to the fifth input, move the video slide bar from the first position to the second position, and add a target music piece to a target video frame, where the target video frame is a video frame corresponding to the video frame thumbnail at the second position; wherein P is a positive integer, and the T music pieces comprise the identified first piece of the target music and a second piece of preset music characteristics.
Optionally, the processor 410 is configured to, when the first playing duration of the target music piece is shorter than the second playing duration of the target video, add the target music piece to the target video and set the playing mode of the target music piece to the loop playback mode; the processor 410 is further configured to, when the first playing duration of the target music piece is longer than the second playing duration of the target video, trim the target music piece according to the second playing duration and add the trimmed target music piece to the target video.
The electronic device 400 can implement the processes implemented by the electronic device in the foregoing embodiments, and in order to avoid repetition, the detailed description is omitted here.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and sends the received downlink data to the processor 410 for processing, and in addition transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and may be capable of processing such sound into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. Touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 410, receives a command from the processor 410, and executes the command. In addition, the touch panel 4071 can be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 4, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 410, a memory 409, and a computer program that is stored in the memory 409 and can be run on the processor 410, and when being executed by the processor 410, the computer program implements each process of the above-mentioned video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A video processing method applied to an electronic device is characterized by comprising the following steps:
displaying a music identifier, wherein the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by a microphone of the electronic equipment;
receiving a first input of the music identifier by a user;
in response to the first input, adding a target music piece associated with the target music to a target video.
2. The method according to claim 1, wherein the target music piece is a music piece of the target music included in the target audio, or wherein the target music piece is one of at least two music pieces of the target music.
3. The method of claim 1, wherein prior to adding the target music segment associated with the target music to the target video, the method further comprises:
displaying music identification information of the target music and T music pieces of the target music;
wherein T is a positive integer.
4. The method according to claim 3, wherein after displaying the music identification information of the target music and the T pieces of music of the target music and before adding the target piece of music associated with the target music to the target video, the method further comprises:
receiving a second input of a user to a first music piece in the T music pieces;
in response to the second input, determining the first piece of music as a target piece of music.
5. The method of claim 3, wherein the music identification information includes a music title tag and a music singer tag;
after the displaying the music identification information of the target music and the T pieces of music of the target music, and before the adding the target piece of music associated with the target music into the target video, the method further includes:
receiving a third input from the user to the music singer tag;
updating the tag contents of said music singer tags from the first singer to the second singer and said T music pieces to S music pieces in response to said third input;
the T music pieces are music pieces of the target music of a first singing version, and the target music of the first singing version is singed by the first singer; the S music pieces are music pieces of the target music of a second singing version, and the target music of the second singing version is singed by the second singer; and S is a positive integer.
6. The method of claim 5, wherein after updating the tag content of the music singer tag from the first singer to the second singer and updating the T music pieces to S music pieces, before adding the target music piece associated with the target music to the target video, the method further comprises:
receiving a fourth input of the user to a second music piece of the S music pieces;
in response to the fourth input, determining the second piece of music as a target piece of music.
7. The method according to any one of claims 3 to 6, wherein the adding the target music piece associated with the target music to the target video comprises:
displaying a video track of the target video, wherein the target video comprises P video frame thumbnails and a video slide bar, and receiving a fifth input of a user to the video slide bar;
in response to the fifth input, moving the video slide bar from a first position to a second position and adding a target music piece to a target video frame, the target video frame being a video frame corresponding to a video frame thumbnail at the second position;
wherein P is a positive integer, and the T music pieces comprise the identified first piece of the target music and a second piece of preset music characteristics.
8. The method according to any one of claims 1 to 6, wherein the adding the target music piece associated with the target music to the target video comprises:
under the condition that the first playing time length of the target music segment is smaller than the second playing time length of the target video, adding the target music segment into the target video, and setting the playing mode of the target music segment to be a circular playing mode;
and under the condition that the first playing time length of the target music segment is longer than the second playing time length of the target video, intercepting the target music segment according to the second playing time length, and adding the intercepted target music segment into the target video.
9. A video processing apparatus, comprising:
the first display module is used for displaying a music identifier, wherein the music identifier indicates target music, the target music is identified based on target audio, and the target audio is audio collected by a microphone of the electronic equipment;
the first receiving module is used for receiving a first input of the music identifier by a user;
an adding module, configured to add, in response to the first input, a target music piece associated with the target music to a target video.
10. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video processing method according to any one of claims 1 to 8.
CN202010305884.7A 2020-04-17 2020-04-17 Video processing method, video processing device and electronic equipment Active CN111491211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010305884.7A CN111491211B (en) 2020-04-17 2020-04-17 Video processing method, video processing device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010305884.7A CN111491211B (en) 2020-04-17 2020-04-17 Video processing method, video processing device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111491211A CN111491211A (en) 2020-08-04
CN111491211B true CN111491211B (en) 2022-01-28

Family

ID=71812851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010305884.7A Active CN111491211B (en) 2020-04-17 2020-04-17 Video processing method, video processing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111491211B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153307A (en) 2020-08-28 2020-12-29 北京达佳互联信息技术有限公司 Method and device for adding lyrics in short video, electronic equipment and storage medium
CN113038014A (en) * 2021-03-17 2021-06-25 北京字跳网络技术有限公司 Video processing method of application program and electronic equipment
CN113573161B (en) * 2021-09-22 2022-02-08 腾讯科技(深圳)有限公司 Multimedia data processing method, device, equipment and storage medium
CN114329223A (en) * 2022-01-04 2022-04-12 北京字节跳动网络技术有限公司 Media content searching method, device, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102053998A (en) * 2009-11-04 2011-05-11 周明全 Method and system device for retrieving songs based on voice modes
CN103914238B (en) * 2012-12-30 2017-02-08 杭州网易云音乐科技有限公司 Method and device for achieving integration of controls in interface

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1524667A1 (en) * 2003-10-16 2005-04-20 Magix Ag System and method for improved video editing
CN105070283A (en) * 2015-08-27 2015-11-18 百度在线网络技术(北京)有限公司 Singing voice scoring method and apparatus
CN107707828A (en) * 2017-09-26 2018-02-16 维沃移动通信有限公司 A kind of method for processing video frequency and mobile terminal
CN107959873A (en) * 2017-11-02 2018-04-24 深圳天珑无线科技有限公司 Method, apparatus, terminal and the storage medium of background music are implanted into video
CN109246474A (en) * 2018-10-16 2019-01-18 维沃移动通信(杭州)有限公司 A kind of video file edit methods and mobile terminal
CN109547847A (en) * 2018-11-22 2019-03-29 广州酷狗计算机科技有限公司 Add the method, apparatus and computer readable storage medium of video information
CN110222224A (en) * 2019-06-06 2019-09-10 广州酷狗计算机科技有限公司 Identify the methods, devices and systems of song information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Audio Processing in Content-Based Video Retrieval; Feng Zhe; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2005-03-15 (No. 01); I138-32 *

Also Published As

Publication number Publication date
CN111491211A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN108470041B (en) Information searching method and mobile terminal
CN111491211B (en) Video processing method, video processing device and electronic equipment
CN110557683B (en) Video playing control method and electronic equipment
CN110248251B (en) Multimedia playing method and terminal equipment
CN108763316B (en) Audio list management method and mobile terminal
CN109561211B (en) Information display method and mobile terminal
CN112689201B (en) Barrage information identification method, barrage information display method, server and electronic equipment
CN108334272B (en) Control method and mobile terminal
CN110618969B (en) Icon display method and electronic equipment
EP3699743B1 (en) Image viewing method and mobile terminal
CN110866038A (en) Information recommendation method and terminal equipment
CN111445927B (en) Audio processing method and electronic equipment
CN110990679A (en) Information searching method and electronic equipment
CN110958485A (en) Video playing method, electronic equipment and computer readable storage medium
CN109246474B (en) Video file editing method and mobile terminal
CN111372029A (en) Video display method and device and electronic equipment
CN111212316B (en) Video generation method and electronic equipment
CN111143614A (en) Video display method and electronic equipment
CN111491205A (en) Video processing method and device and electronic equipment
CN109324999B (en) Method and electronic equipment for executing operation based on download instruction
CN108710521B (en) Note generation method and terminal equipment
CN110032320B (en) Page rolling control method and device and terminal
CN111445929A (en) Voice information processing method and electronic equipment
CN110932964A (en) Information processing method and device
CN110928616A (en) Shortcut icon management method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant