WO2018184488A1 - Video dubbing method and apparatus - Google Patents

Video dubbing method and apparatus

Info

Publication number
WO2018184488A1
WO2018184488A1 (PCT/CN2018/080657)
Authority
WO
WIPO (PCT)
Prior art keywords
video
video segment
dubbing
time
terminal
Prior art date
Application number
PCT/CN2018/080657
Other languages
English (en)
French (fr)
Inventor
黄思军
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Publication of WO2018184488A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/233: Processing of audio elementary streams
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439: Processing of audio elementary streams
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Definitions

  • embodiments of the present application relate to the field of video editing technologies, and in particular to a video dubbing method and apparatus.
  • the current video dubbing method is typically: the terminal plays a fixed-length video; during playback, the recording function is turned on and a dubbing file is recorded; afterwards, the fixed-length video and the dubbing file are synthesized to obtain the dubbed video.
  • because the video may contain segments the user does not need to dub, the dubbed video obtained by the above video dubbing method may include redundant information.
  • the embodiment of the present application provides a video dubbing method and device, which can solve the problems in the related art.
  • the technical solutions are as follows:
  • a video dubbing method, used in a terminal, comprising: receiving a dubbing request during video playback; determining a start time and an end time of a video segment to be dubbed; playing the video segment between the start time and the end time; recording, during playback of the video segment, a dubbing file corresponding to the video segment; intercepting the video segment from the video; and synthesizing a target video segment based on the video segment and the dubbing file.
  • a video dubbing apparatus comprising:
  • a first receiving module configured to receive a dubbing request during video playback;
  • a determining module configured to determine a start time and an end time of the video segment to be dubbed;
  • a playing module configured to play the video segment between the start time and the end time determined by the determining module;
  • a recording module configured to record a dubbing file corresponding to the video segment while the playing module plays the video segment;
  • an intercepting module configured to intercept the video segment from the video;
  • a synthesizing module configured to synthesize the target video segment from the video segment and the dubbing file.
  • a terminal comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the video dubbing method described above.
  • a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the video dubbing method described above.
  • after receiving the dubbing request, the terminal determines the start time and the end time of the video segment to be dubbed, plays the video segment between the start time and the end time while recording the dubbing file corresponding to the video segment, intercepts the video segment, and then generates a target video segment from the dubbing file and the intercepted video segment.
  • this solves the problem in the related art that the terminal can only dub an existing video in full, so that when the existing video includes segments the user does not need to dub, the dubbed video contains redundant information; the terminal can dub only the desired video segment, reducing redundancy.
  • FIG. 1 is a schematic diagram of an implementation environment involved in an embodiment of the present application.
  • FIG. 2 is a flowchart of a video dubbing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a user triggering a dubbing option according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a user setting start tag provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a terminal preview video frame provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a user stopping dubbing provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a user starting dubbing and canceling dubbing provided by an embodiment of the present application.
  • FIG. 8 is a flowchart of downloading a video segment from a background server by a terminal according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a terminal preview target video segment provided by an embodiment of the present application.
  • FIG. 10 is a flowchart of sharing a target video segment according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a sharing target video segment provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a video dubbing apparatus according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a terminal provided by an embodiment of the present application.
  • the video dubbing method provided by each of the following embodiments is applied to a terminal having an audio collection capability.
  • the terminal can be a smart phone, a tablet computer, an e-reader, a desktop computer connected to a microphone, etc., and is not limited thereto.
  • a video player for playing video is installed in the terminal, and the video player may be a player that is provided by the terminal, or a player that is actively downloaded and installed by the user, which is not limited.
  • the video dubbed in the following embodiments may be a video saved locally by the terminal or a video played online.
  • the video saved locally by the terminal may be a pre-recorded video of the terminal, or may be a video that is downloaded and saved by the terminal from the background server in advance, which is not limited.
  • the implementation scenario includes a terminal 110 (with a video player 111 installed) and a background server 120.
  • the terminal 110 is the above-mentioned terminal, and the terminal 110 can be connected to the background server 120 through a wired or wireless network.
  • the background server 120 is a background server corresponding to the video player 111.
  • the background server 120 may be a server or a server cluster composed of multiple servers, which is not limited.
  • FIG. 2 is a flowchart of a method for video dubbing provided by an embodiment of the present application.
  • the video dubbing method may include:
  • Step 201 Receive a dubbing request during video playback.
  • When the user plays a video using the video player in the terminal and wants to dub a certain video segment, the user can apply a dubbing request in the terminal; correspondingly, the terminal can receive the dubbing request.
  • For example, when the user wants to dub, referring to part (1) of FIG. 3, the user can tap anywhere in the video playback interface; after receiving the tap signal, the terminal displays the dubbing option 31 shown in part (2) of FIG. 3. The user can then tap the dubbing option 31, and the tap signal the terminal receives constitutes the dubbing request.
  • the terminal may display other options, such as “selection set”, “barrage”, and “screen shot”, etc., and details are not described herein again.
  • Step 202 Determine a start time and an end time of a video segment to be dubbed.
  • After receiving the dubbing request, the terminal can determine the start time and the end time of the video segment to be dubbed.
  • Optionally, after receiving the dubbing request, the terminal may display a start tag and an end tag on the video's playback progress bar, and the user may select the video segment to be dubbed by dragging the two tags; correspondingly, the processing of step 202 can be as follows:
  • First, after the dubbing request is received, the start tag is displayed at a first preset position on the video's playback progress bar, and the end tag is displayed at a second preset position on the progress bar.
  • the terminal After receiving the dubbing request, the terminal can display the start tag and the end tag in the play progress bar.
  • the start tag is used to indicate the starting position of the video segment to be dubbed in the video
  • the end tag is used to indicate the end position of the video segment to be dubbed in the video.
  • Optionally, the first preset position may be a default position on the playback progress bar, such as the start position of the video, the playback position when the dubbing request is received, or the middle of the video, which is not limited.
  • The second preset position may be a position separated from the start tag by a predetermined time interval.
  • The predetermined time interval may be an interval set by the system in the video player or an interval customized by the user in advance, which is not limited; in actual implementation, it may be 30 s. It should be noted that if the time interval between the start tag's position and the end position of the video is less than the predetermined interval, the end tag may sit at the end position of the video, which is likewise not limited.
  • Of course, the above takes the first preset position as the default position merely as an example; in actual implementation, the second preset position may instead be the default position and the first preset position may be the position a predetermined time interval before the end tag, which is not limited.
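  • The position arithmetic described above is simple; the following is a minimal sketch in Java under stated assumptions (millisecond positions and a 30 s default interval; the names are illustrative, not from the patent):

    // Sketch: place the end tag a predetermined interval after the start tag,
    // clamping to the end of the video when the remainder is shorter.
    static long defaultEndTagPositionMs(long startTagMs, long videoDurationMs) {
        final long PREDETERMINED_INTERVAL_MS = 30_000L; // assumed 30 s default
        return Math.min(startTagMs + PREDETERMINED_INTERVAL_MS, videoDurationMs);
    }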
  • Second, a first sliding signal for sliding the start tag is received, and the start tag is slid.
  • The first sliding signal may be a sliding signal corresponding to sliding the start tag left or right; the distance the start tag slides equals the sliding distance of the first sliding signal, which is not described again here.
  • After the terminal displays the start tag and the end tag, if the start tag is not at the intercept position the user desires, then, referring to part (1) of FIG. 4, the user can apply a first sliding signal to slide the start tag 41; correspondingly, the terminal receives the first sliding signal and can slide the start tag 41 accordingly. For example, referring to part (2) of FIG. 4, after receiving the first sliding signal, the terminal can slide the start tag 41 from position A to position B.
  • Third, a second sliding signal for sliding the end tag is received. Similarly to the second step, the user can apply a second sliding signal to slide the end tag; correspondingly, the terminal can receive the second sliding signal.
  • It should be noted that the second and third steps are optional: if the initially displayed positions of the start tag and the end tag are already the positions the user desires to intercept, these two steps need not be performed, which is not limited.
  • Fourth, the time corresponding to the start tag is obtained as the start time of the video segment to be dubbed.
  • The terminal may determine the time corresponding to the start tag as the start time of the video segment to be dubbed. For example, if the start tag is located at 23'30" of the film "Crouching Tiger, Hidden Dragon", the start time is 23'30"; if the start tag's position after sliding is 28'37", the start time is 28'37".
  • Where the user determines the start time by sliding the start tag, the user may drag the tag multiple times. In that case, each time the start tag is slid, the terminal obtains the time corresponding to its new position and determines the most recently obtained time as the start time of the video segment to be dubbed; in other words, the terminal updates the start time of the video segment to be dubbed according to how the start tag is slid.
  • Fifth, the time corresponding to the end tag is obtained as the end time of the video segment to be dubbed.
  • Similarly, the terminal may determine the time corresponding to the end tag as the end time of the video segment to be dubbed. Where the user determines the end time by sliding the end tag, the user may drag the tag multiple times; each time the end tag is slid, the terminal obtains the time corresponding to its new position and determines the most recently obtained time as the end time of the video segment to be dubbed, that is, the terminal updates the end time according to how the end tag is slid.
  • It should be added that, in this embodiment, if the terminal performs the second step above, then after the start tag is slid, the terminal may obtain the video frame corresponding to the start tag's position and display it.
  • In actual implementation, the terminal may display the video frame in a window anchored to the start tag, or display it at a preset size at the center of the video playback interface. For example, parts (1) and (2) of FIG. 5 show two possible display modes. Of course, the terminal may preview the video frame in other manners, which is not limited. Similarly, if the terminal performs the third step above, the video frame at the end tag is previewed after the end tag is slid.
  • By displaying the video frame at the corresponding position after a tag is slid, the user can intuitively see the start and end positions of the intercepted video segment and thus obtain exactly the segment needed.
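  • On Android, one plausible way to fetch such a preview frame is MediaMetadataRetriever; the following is a hedged sketch under that assumption (the patent does not name the API, and the method name is illustrative):

    import android.graphics.Bitmap;
    import android.media.MediaMetadataRetriever;
    import java.io.IOException;

    // Sketch: obtain the video frame at the position a dragged tag points to,
    // so it can be shown in a preview window as in FIG. 5.
    static Bitmap frameAtTag(String videoPath, long tagPositionMs) throws IOException {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(videoPath);
            // OPTION_CLOSEST_SYNC is fast; OPTION_CLOSEST returns the exact frame.
            return retriever.getFrameAtTime(tagPositionMs * 1000,
                    MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
        } finally {
            retriever.release();
        }
    }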
  • The foregoing is merely one way for the terminal to determine the start time and the end time. Optionally, as another possible implementation, determining the start time and the end time may include:
  • First, determine the start time. This step includes: using a preset time in the video as the start time of the video segment to be dubbed.
  • the preset time may be a start time of the video, an intermediate time, or a time when the voiceover request is received, and the like.
  • the moment when the dubbing request is received is the time corresponding to the play progress bar of the video when the dubbing request is received. For example, when a dubbing request is received and the video is played to 34'48", the terminal can determine 34'48" as the starting time.
  • Second, determine the end time. This step has the following possible implementations.
  • In a first possible implementation, the time obtained by delaying the start time by a preset duration is determined as the end time. The preset duration may be a duration set by the system in the video player or a duration customized by the user in advance, which is not limited; for example, the preset duration is 30 seconds.
  • In a second possible implementation, a stop-dubbing request is received, and the time at which the stop-dubbing request is received is taken as the end time. That time is the time indicated by the video's playback progress bar when the stop-dubbing request is received.
  • Specifically, after receiving the dubbing request, the terminal may replace the dubbing option in the current interface with a stop-dubbing option. For example, referring to FIG. 6, the terminal may display the stop-dubbing option 61. The user can then apply a tap signal on the stop-dubbing option 61, and the tap signal the terminal receives is the stop-dubbing request.
  • the terminal may determine the start time and the end time in other manners, which is not limited in this embodiment.
  • Step 203 Play a video segment between the start time and the end time in the video.
  • In actual implementation, after the terminal determines the start time and the end time, it can play the video segment, which is not limited.
  • the terminal may display a start option and a cancel option in the play interface.
  • the start option is used to trigger the start of the dubbing
  • the cancel option is used to trigger the de-dubbing.
  • the terminal can display a start option 71 and a cancel option 72.
  • the user wants to start dubbing, the user can apply a selection signal to select the start option 71, and accordingly, the terminal can receive the selection signal and play the video clip after receiving the selection signal.
  • If the user wants to cancel the dubbing, the user can apply a selection signal for the cancel option 72; correspondingly, after receiving the selection signal, the terminal jumps back to the video playback interface.
  • For the case where the current interface includes the start option, the terminal may also, upon receiving the selection signal for the start option, obtain the time corresponding to the start tag and determine it as the start time of the video segment to be dubbed, and obtain the time corresponding to the end tag and determine it as the end time of the video segment to be dubbed.
  • the video clip between the start time and the end time in the video can be played.
  • Optionally, for the case where a preset duration is pre-configured in the terminal, the processing of step 203 may be as follows: if the time difference between the end time and the start time of the video segment to be dubbed does not reach the preset duration, the video segment between the start time and the end time is played.
  • the preset duration may be pre-stored in the terminal, where the preset duration is used to limit the playing duration of the video clip selected by the user.
  • In this case, after the terminal determines the start time and the end time of the video segment to be dubbed and before it plays the segment, it may first calculate the time difference between the end time and the start time. If the difference is less than or equal to the preset duration, the video segment between the start time and the end time can be played; otherwise, the terminal may display a prompt indicating that playback failed, together with the reason, so that the user can re-determine the start time and the end time of the video segment to be dubbed.
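  • As a minimal sketch of this check (the names and the example value are illustrative assumptions):

    // Sketch: play only if the selected span is positive and does not reach
    // the preset maximum duration, e.g. canPlaySegment(start, end, 30_000L).
    static boolean canPlaySegment(long startMs, long endMs, long presetMaxMs) {
        long span = endMs - startMs;
        return span > 0 && span <= presetMaxMs;
    }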
  • Step 204 Record a dubbing file corresponding to the video segment during the process of playing the video segment.
  • the terminal can turn on the microphone, and in the process of playing the video clip, the terminal can collect the dubbing file through the microphone.
  • the terminal can initiate a voice recording thread through which the voice collected by the microphone is written into the cache directory. After the recording is finished, the terminal can save it as a dubbing file.
  • the format of the recorded dubbing file may be the default format provided by the system in the terminal, which is not limited.
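  • A hedged sketch of such a recording thread on Android follows; the output format and encoder choices are assumptions (the patent only says the system default format may be used), and the file name is illustrative:

    import android.media.MediaRecorder;
    import java.io.File;
    import java.io.IOException;

    // Sketch: record the dubbing track from the microphone into the app cache
    // directory while the video segment plays.
    public final class DubbingRecorder {
        private MediaRecorder recorder;

        public File start(File cacheDir) throws IOException {
            File out = new File(cacheDir, "dubbing.m4a"); // illustrative name
            recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
            recorder.setOutputFile(out.getAbsolutePath());
            recorder.prepare();
            recorder.start(); // call when segment playback starts
            return out;
        }

        public void stop() {
            recorder.stop();    // call when playback reaches the end time
            recorder.release();
            recorder = null;
        }
    }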
  • It should be noted that, in actual implementation, the original audio in the video is usually information the user does not want while dubbing; therefore, to avoid interference from the original audio when playing the video segment, the terminal may play only the image information in the video segment and not its audio information, which is not limited.
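  • A hedged sketch of playing only the picture of the segment on Android (assuming a prepared MediaPlayer; a production player would track the playback position more precisely than a one-shot delay):

    import android.media.MediaPlayer;
    import android.os.Handler;
    import android.os.Looper;

    // Sketch: play from startMs, mute the original audio track, and pause
    // when the end time is reached, mirroring steps 203-204.
    static void playSegmentMuted(MediaPlayer player, int startMs, int endMs) {
        player.setVolume(0f, 0f);   // suppress the video's original audio
        player.seekTo(startMs);
        player.start();
        new Handler(Looper.getMainLooper()).postDelayed(() -> {
            if (player.isPlaying()) player.pause(); // reached the end time
        }, endMs - startMs);
    }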
  • Step 205: Intercept the video segment from the video.
  • After recording the dubbing file, the terminal can intercept the video segment from the video.
  • different intercept methods can be used depending on whether the video is a locally saved video. Specifically, if the video is a video saved locally by the terminal, the terminal may directly intercept the video segment between the start time and the end time in the locally saved video.
  • If the video is played online, the terminal may continuously cache the segment's content while playing it and thereby finally obtain the intercepted video segment; optionally, the terminal may instead, after determining the start time and the end time, send a download request to the background server. After receiving the download request, the background server may return the video segment to the terminal; correspondingly, the terminal may receive the video segment returned by the background server.
  • the download request may include a start time, an end time, and a video identifier, or the download request may include a start time, a target duration, and a video identifier, where the target duration is a time difference between the end time and the start time, and Or, for the case where the start time is the preset time and the duration of the video segment is the preset duration, the download request may include a video identifier.
  • the background server may generate a video segment according to the start time and the end time or the start time and the target duration, and feed back the download address to the terminal.
  • the terminal may start the download thread and download the video clip from the download address through the download thread.
  • Figure 8 shows the complete download process.
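  • A hedged sketch of such a download thread (plain HttpURLConnection; the URL is whatever download address the background server returned, and error handling is reduced to a print for brevity):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch: fetch the server-generated segment from the returned download
    // address into a local file, then run a completion callback.
    static Thread downloadSegment(String downloadAddress, File dest, Runnable done) {
        Thread t = new Thread(() -> {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(downloadAddress).openConnection();
                try (InputStream in = conn.getInputStream();
                     FileOutputStream out = new FileOutputStream(dest)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                } finally {
                    conn.disconnect();
                }
                done.run();
            } catch (Exception e) {
                e.printStackTrace(); // real code would surface the failure to the UI
            }
        });
        t.start();
        return t;
    }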
  • The terminal may allocate a block of memory in advance according to the size of the video segment and, after the segment is intercepted, read the video segment into that memory.
  • Step 206 Synthesize the target video segment according to the video segment and the dubbing file.
  • This step can include:
  • First, the image information in the video segment is extracted.
  • Optionally, the terminal can read the contents of the memory through a streaming-media interface.
  • Because the video segment is content intercepted from the original video, it contains both audio and images at the same time, and the audio and the images are two independent media streams; the terminal can therefore separate the audio and the images in the video segment and store them respectively in an audio memory area and an image memory area in the memory. In this case, the terminal can read the image information that the video segment stores in the image memory area, thereby obtaining the image information in the video segment.
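  • On Android, this separation step could be done with MediaExtractor; a hedged sketch follows (the patent speaks only of a generic streaming-media interface):

    import android.media.MediaExtractor;
    import android.media.MediaFormat;
    import java.io.IOException;

    // Sketch: locate and select the image (video) track inside the intercepted
    // clip; the clip carries audio and video as two independent streams.
    static int selectVideoTrack(MediaExtractor extractor, String clipPath)
            throws IOException {
        extractor.setDataSource(clipPath);
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                extractor.selectTrack(i);
                return i;
            }
        }
        throw new IOException("no video track in " + clipPath);
    }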
  • Second, the image information is synthesized with the voice information in the dubbing file to obtain the target video segment. The terminal can write the obtained image information and the voice information in the recorded dubbing file into one video file at the same time to obtain the target video segment.
  • Optionally, the terminal can compress the image information and the voice information in the dubbing file into a memory area through the system's streaming-media interface, and then write the contents of that memory area into a video file through the streaming-media interface; the written video file is the target video segment.
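  • One way to realize this synthesis on Android is MediaMuxer, remuxing the clip's video samples with the dubbing file's audio samples without re-encoding. This is a hedged sketch, not necessarily the embodiment's streaming-media interface; it reuses selectVideoTrack() from the sketch above, assumes an analogous selectAudioTrack(), and assumes both source codecs are MP4-compatible:

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaMuxer;
    import java.nio.ByteBuffer;

    // Sketch of step 206: copy the clip's video track and the dubbing file's
    // audio track into one MP4 file, which becomes the target video segment.
    static void synthesizeTarget(String clipPath, String dubPath, String outPath)
            throws Exception {
        MediaMuxer muxer =
                new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        MediaExtractor video = new MediaExtractor();
        int vIn = selectVideoTrack(video, clipPath);
        MediaExtractor audio = new MediaExtractor();
        int aIn = selectAudioTrack(audio, dubPath); // assumed "audio/" analogue
        int vOut = muxer.addTrack(video.getTrackFormat(vIn));
        int aOut = muxer.addTrack(audio.getTrackFormat(aIn));
        muxer.start();
        copySamples(video, muxer, vOut);
        copySamples(audio, muxer, aOut);
        muxer.stop();
        muxer.release();
        video.release();
        audio.release();
    }

    static void copySamples(MediaExtractor ex, MediaMuxer muxer, int outTrack) {
        ByteBuffer buf = ByteBuffer.allocate(1 << 20); // 1 MiB sample buffer
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int size;
        while ((size = ex.readSampleData(buf, 0)) >= 0) {
            info.offset = 0;
            info.size = size;
            info.presentationTimeUs = ex.getSampleTime();
            info.flags = ex.getSampleFlags(); // sync-sample flag carries over
            muxer.writeSampleData(outTrack, buf, info);
            ex.advance();
        }
    }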
  • the terminal can automatically play the target video segment.
  • Optionally, in the process of playing the video segment, when playback reaches the end time of the segment, the terminal may jump to a preset interface and synthesize the image information in the video segment with the voice information in the dubbing file.
  • After obtaining the target video segment, the terminal can automatically play it in a preview window of the preset interface. For example, referring to FIG. 9, the terminal can automatically preview the target video segment in the window 91.
  • It should be noted that when playback reaches the end time, the terminal still needs a certain amount of time to synthesize the target video segment; during that period, the terminal may display a "Loading" prompt in the preview window, which is not limited.
  • Alternatively, after obtaining the target video segment, the terminal may jump to an interface containing a preview option; the user can tap the preview option, whereupon the terminal receives the selection instruction for the preview option and can begin playing the target video segment. The specific implementation is not limited in this embodiment.
  • In addition, after previewing the target video segment, the user may trigger saving the target video segment if satisfied, or trigger cancelling this dubbing if not, which is not limited in this embodiment either.
  • It should be added that, after obtaining the target video segment, the terminal can share it; that is, referring to FIG. 10, the video dubbing method may further include the following steps:
  • Step 1001 Receive a sharing request for sharing a target video segment, where the sharing request includes a sharing method.
  • the sharing method may be a method of sharing to a target friend through a target communication method or sharing to a target platform.
  • For example, referring to FIG. 11, when the user wants to share the target video segment to Weibo, the user can apply a tap signal on the Weibo icon 111; correspondingly, the terminal can receive the tap signal, and the tap signal is the sharing request.
  • Step 1002 After receiving the sharing request, share the target video segment according to the sharing method.
  • After receiving the sharing request, the terminal can share the target video segment according to the sharing manner in the request. For example, with reference to FIG. 11, after the terminal receives the tap on the Weibo icon 111, it can invoke the Weibo interface and share the target video segment to Weibo through the invoked interface.
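  • For platforms without a dedicated SDK, a generic Android share sheet is one plausible stand-in for the invoked sharing interface; this is a hedged sketch (the FileProvider authority string is a placeholder for the host app's own):

    import android.content.Context;
    import android.content.Intent;
    import android.net.Uri;
    import androidx.core.content.FileProvider;
    import java.io.File;

    // Sketch: hand the dubbed target segment to a sharing target chosen by the
    // user. A real integration would call the chosen platform's SDK, as the
    // embodiment does with the Weibo interface.
    static void shareTargetSegment(Context ctx, File targetSegment) {
        Uri uri = FileProvider.getUriForFile(
                ctx, "com.example.fileprovider", targetSegment); // placeholder
        Intent send = new Intent(Intent.ACTION_SEND)
                .setType("video/mp4")
                .putExtra(Intent.EXTRA_STREAM, uri)
                .addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
        ctx.startActivity(Intent.createChooser(send, "Share dubbed video"));
    }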
  • In summary, in the video dubbing method provided by this embodiment, after a dubbing request is received, the start time and the end time of the video segment to be dubbed are determined; while the video segment between the start time and the end time plays, the dubbing file corresponding to the segment is recorded; the video segment is intercepted; and the target video segment is then generated from the dubbing file and the intercepted video segment. This solves the problem in the related art that the terminal can only dub an existing video in full, so that when the existing video includes segments the user does not need to dub, the dubbed video contains redundant information; the terminal can dub only the desired video segment, reducing redundancy. In addition, because the user can freely pair a video segment of a chosen length with the voice he or she needs, the method adds enjoyment and improves the user experience.
  • FIG. 12 is a schematic structural diagram of a video dubbing apparatus according to an embodiment of the present disclosure.
  • As shown in FIG. 12, the video dubbing apparatus may include: a first receiving module 1210, a determining module 1220, a playing module 1230, a recording module 1240, an intercepting module 1250, and a synthesizing module 1260.
  • the first receiving module 1210 is configured to receive a dubbing request during video playback;
  • the determining module 1220 is configured to determine a start time and an end time of the video segment to be dubbed;
  • the playing module 1230 is configured to play the video segment between the start time and the end time determined by the determining module 1220;
  • the recording module 1240 is configured to record the dubbing file corresponding to the video segment while the playing module plays the video segment;
  • the intercepting module 1250 is configured to intercept the video segment from the video;
  • the synthesizing module 1260 is configured to synthesize the target video segment from the video segment and the dubbing file.
  • In summary, the video dubbing apparatus provided by this embodiment determines, after receiving a dubbing request, the start time and the end time of the video segment to be dubbed; records the dubbing file corresponding to the segment while the video segment between those times plays; intercepts the video segment; and generates the target video segment from the dubbing file and the intercepted video segment. This solves the problem in the related art that the terminal can only dub an existing video in full, so that when the existing video includes segments the user does not need to dub, the dubbed video contains redundant information; the terminal can dub only the desired video segment, reducing redundancy. In addition, because the user can freely pair a video segment of a chosen length with the voice he or she needs, the apparatus adds enjoyment and improves the user experience.
  • the determining module 1220 includes:
  • a display unit configured to display, after the dubbing request is received, a start tag at a first preset position on the video's playback progress bar and an end tag at a second preset position on the progress bar;
  • the acquiring unit is configured to obtain a starting moment of the video segment to be dubbed, and obtain a ending time of the video segment to be dubbed.
  • the determining module 1220 further includes:
  • a processing unit configured to receive a first sliding signal for sliding the start tag and slide the start tag; and/or to receive a second sliding signal for sliding the end tag and slide the end tag.
  • the device further includes:
  • a preview module configured to display, upon receiving the first sliding signal and after the start tag is slid, the video frame at the position corresponding to the start tag; or to display, upon receiving the second sliding signal and after the end tag is slid, the video frame at the position corresponding to the end tag.
  • the playing module 1230 includes:
  • a receiving unit configured to receive a start dubbing request
  • a playing unit configured to play a video segment between the start time and the end time after the receiving unit receives the start dubbing request.
  • the synthesizing module 1260 includes:
  • An extracting unit configured to extract image information in the video segment
  • a synthesizing unit configured to synthesize the image information and the voice information in the dubbing file, and obtain the target video segment.
  • the device further includes:
  • a second receiving module configured to receive a sharing request for sharing the target video segment, where the sharing request includes a sharing mode
  • a sharing module configured to share the target video segment according to the sharing manner after the second receiving module receives the sharing request.
  • Optionally, the playing module 1230 is configured to:
  • play the video segment between the start time and the end time if the time difference between the end time and the start time of the video segment to be dubbed does not reach the preset duration.
  • An embodiment of the present application further provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the foregoing embodiment, or may exist separately without being assembled into the terminal. The computer-readable storage medium stores one or more programs, and the one or more programs are used by one or more processors to perform the video dubbing method described above.
  • FIG. 13 is a block diagram of a terminal 1300 according to an embodiment of the present invention.
  • The terminal may include a radio frequency (RF) circuit 1301, a memory 1302 including one or more computer-readable storage media, an input unit 1303, a display unit 1304, a sensor 1305, an audio circuit 1306, a wireless fidelity (WiFi) module 1307, a processor 1308 including one or more processing cores, a power supply 1309, and other components.
  • A person skilled in the art can understand that the terminal structure shown in FIG. 13 does not constitute a limitation on the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. Specifically:
  • The RF circuit 1301 can be used to receive and send signals in the course of sending and receiving information or during a call; in particular, after receiving downlink information from a base station, it delivers the information to one or more processors 1308 for processing, and it also sends uplink data to the base station.
  • the RF circuit 1301 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA, Low Noise Amplifier), duplexer, etc.
  • the RF circuit 1301 can also communicate with the network and other devices through wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), and Code Division Multiple Access (CDMA). , Code Division Multiple Access), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the memory 1302 can be used to store software programs and modules, and the processor 1308 executes various functional applications and data processing by running software programs and modules stored in the memory 1302.
  • the memory 1302 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the terminal (such as audio data, phone book, etc.).
  • In addition, the memory 1302 can include high-speed random access memory and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Correspondingly, the memory 1302 may also include a memory controller to provide the processor 1308 and the input unit 1303 with access to the memory 1302.
  • the input unit 1303 can be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 1303 can include a touch-sensitive surface as well as other input devices.
  • The touch-sensitive surface, also known as a touch display screen or touchpad, collects the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch-sensitive surface) and drives the corresponding connection apparatus according to a preset program.
  • the touch sensitive surface may include two parts of a touch detection device and a touch controller.
  • The touch detection apparatus detects the user's touch position, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1308, and receives and executes commands sent by the processor 1308.
  • touch-sensitive surfaces can be implemented in a variety of types, including resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 1303 may also include other input devices. Specifically, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • Display unit 1304 can be used to display information entered by the user or information provided to the user as well as various graphical user interfaces of the terminal, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 1304 may include a display panel.
  • the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • Further, the touch-sensitive surface can cover the display panel; when the touch-sensitive surface detects a touch operation on or near itself, it passes the operation to the processor 1308 to determine the type of touch event, and the processor 1308 then provides a corresponding visual output on the display panel according to the type of the touch event.
  • Although in FIG. 13 the touch-sensitive surface and the display panel are implemented as two independent components to perform input and output functions, in some embodiments the touch-sensitive surface can be integrated with the display panel to implement input and output functions.
  • the terminal may also include at least one type of sensor 1305, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may close the display panel and/or the backlight when the terminal moves to the ear.
  • As one kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used in applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer and tap detection).
  • The terminal can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described again here.
  • An audio circuit 1306, a speaker, and a microphone provide an audio interface between the user and the terminal.
  • The audio circuit 1306 can convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which the audio circuit 1306 receives and converts into audio data. After the audio data is output to the processor 1308 for processing, it is sent via the RF circuit 1301 to, for example, another terminal, or the audio data is output to the memory 1302 for further processing.
  • the audio circuit 1306 may also include an earbud jack to provide communication between the peripheral earphone and the terminal.
  • WiFi is a short-range wireless transmission technology.
  • the terminal can help users to send and receive emails, browse web pages and access streaming media through the WiFi module 1307, which provides users with wireless broadband Internet access.
  • FIG. 13 shows the WiFi module 1307, it can be understood that it does not belong to the necessary configuration of the terminal, and may be omitted as needed within the scope of not changing the essence of the invention.
  • The processor 1308 is the control center of the terminal; it connects all parts of the entire handset using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 1302 and invoking the data stored in the memory 1302, thereby monitoring the handset as a whole.
  • Optionally, the processor 1308 may include one or more processing cores; preferably, the processor 1308 may integrate an application processor and a modem processor, where the application processor mainly processes the operating system, the user interface, applications, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 1308.
  • the terminal also includes a power source 1309 (such as a battery) for powering various components.
  • Preferably, the power source can be logically connected to the processor 1308 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
  • the power supply 1309 can also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the processor 1308 in the terminal runs one or more program instructions stored in the memory 1302, thereby implementing the video dubbing method provided in the foregoing various method embodiments.
  • a person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be instructed by a program to execute related hardware, and the program may be stored in a computer readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.

Abstract

The present application discloses a video dubbing method and apparatus, belonging to the field of video editing technologies. The method includes: receiving a dubbing request during video playback; determining a start time and an end time of a video segment to be dubbed; playing the video segment between the start time and the end time; recording, during playback of the video segment, a dubbing file corresponding to the video segment; intercepting the video segment from the video; and synthesizing a target video segment from the video segment and the dubbing file. This solves the problem in the related art that a terminal can only dub an existing video in full, so that when the existing video includes segments the user does not need to dub, the dubbed video contains redundant information; the terminal can dub only the desired video segment, reducing redundancy.

Description

Video dubbing method and apparatus
This application claims priority to Chinese Patent Application No. 201710220247.8, filed with the State Intellectual Property Office of China on April 6, 2017 and entitled "Video dubbing method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to the field of video editing technologies, and in particular to a video dubbing method and apparatus.
Background
With the development of electronic technologies and video editing technologies, a wide variety of terminals have come into broad use, and the applications on these terminals have grown ever more numerous and feature-rich. The video player is one very commonly used application.
A user can watch videos through a video player and, while watching, can also dub the video. The current video dubbing method is usually: the terminal plays a fixed-length video; during playback, the recording function is turned on and a dubbing file is recorded; afterwards, the fixed-length video and the dubbing file are synthesized to obtain the dubbed video.
Because the video may contain segments that the user does not need to dub, the dubbed video obtained by the above video dubbing method may contain redundant information.
Summary
Embodiments of the present application provide a video dubbing method and apparatus, which can solve the problems in the related art. The technical solutions are as follows:
In one aspect, a video dubbing method is provided. The method is used in a terminal and includes:
receiving a dubbing request during video playback;
determining a start time and an end time of a video segment to be dubbed;
playing the video segment between the start time and the end time;
recording, during playback of the video segment, a dubbing file corresponding to the video segment;
intercepting the video segment from the video;
synthesizing a target video segment from the video segment and the dubbing file.
In another aspect, a video dubbing apparatus is provided. The apparatus includes:
a first receiving module configured to receive a dubbing request during video playback;
a determining module configured to determine a start time and an end time of a video segment to be dubbed;
a playing module configured to play the video segment between the start time and the end time determined by the determining module;
a recording module configured to record, while the playing module plays the video segment, a dubbing file corresponding to the video segment;
an intercepting module configured to intercept the video segment from the video;
a synthesizing module configured to synthesize a target video segment from the video segment and the dubbing file.
In still another aspect, a terminal is provided. The terminal includes a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the video dubbing method described above.
In yet another aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the video dubbing method described above.
The beneficial effects of the technical solutions provided by the embodiments of the present application include:
After a dubbing request is received, the start time and the end time of the video segment to be dubbed are determined; while the video segment between the start time and the end time plays, the dubbing file corresponding to the segment is recorded; the video segment is intercepted; and the target video segment is then generated from the dubbing file and the intercepted video segment. This solves the problem in the related art that the terminal can only dub an existing video in full, so that when the existing video includes segments the user does not need to dub, the dubbed video contains redundant information; the terminal can dub only the desired video segment, reducing redundancy.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the implementation environment involved in embodiments of the present application;
FIG. 2 is a flowchart of a video dubbing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a user triggering a dubbing option according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a user setting a start tag according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a terminal previewing a video frame according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a user stopping dubbing according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a user starting dubbing and cancelling dubbing according to an embodiment of the present application;
FIG. 8 is a flowchart of a terminal downloading a video segment from a background server according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a terminal previewing a target video segment according to an embodiment of the present application;
FIG. 10 is a flowchart of sharing a target video segment according to an embodiment of the present application;
FIG. 11 is a schematic diagram of sharing a target video segment according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a video dubbing apparatus according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the following further describes implementations of the present application in detail with reference to the accompanying drawings.
The video dubbing method provided by each of the following embodiments is applied to a terminal that has an audio collection capability. For example, the terminal may be a smartphone, a tablet computer, an e-reader, a desktop computer connected to a microphone, or the like, which is not limited. In actual implementation, a video player for playing videos is installed in the terminal; the video player may be one that ships with the terminal or one that the user downloads and installs, which is not limited either.
The video dubbed in the following embodiments may be a video saved locally on the terminal or a video played online. A video saved locally may be one the terminal recorded in advance or one the terminal downloaded from the background server and saved in advance, which is not limited.
Moreover, when the video is played online, the video dubbing method can be applied in the implementation scenario shown in FIG. 1. The scenario includes a terminal 110 (with a video player 111 installed) and a background server 120. The terminal 110 is the terminal described above and may connect to the background server 120 through a wired or wireless network. The background server 120 is the background server corresponding to the video player 111; it may be one server or a server cluster composed of multiple servers, which is not limited.
Please refer to FIG. 2, which shows a flowchart of a video dubbing method provided by an embodiment of the present application. As shown in FIG. 2, the video dubbing method may include:
Step 201: Receive a dubbing request during video playback.
When the user plays a video with the video player in the terminal and wants to dub a certain segment of it, the user can apply a dubbing request in the terminal; correspondingly, the terminal can receive the dubbing request.
For example, when the user wants to dub, referring to part (1) of FIG. 3, the user can tap anywhere in the video playback interface; after receiving the tap signal, the terminal displays the dubbing option 31 shown in part (2) of FIG. 3. The user can then tap the dubbing option 31, and the tap signal the terminal receives constitutes the dubbing request. In actual implementation, as shown in part (2) of FIG. 3, the terminal may also display other options after receiving the tap, such as "episodes", "bullet comments", and "screenshot", which are not described again here.
Step 202: Determine a start time and an end time of the video segment to be dubbed.
After receiving the dubbing request, the terminal can determine the start time and the end time of the video segment to be dubbed.
Optionally, after receiving the dubbing request, the terminal may display a start tag and an end tag on the video's playback progress bar, and the user may select the video segment to be dubbed by dragging the two tags. The processing of step 202 may then be as follows:
First, after the dubbing request is received, a start tag is displayed at a first preset position on the video's playback progress bar, and an end tag is displayed at a second preset position on the progress bar.
After receiving the dubbing request, the terminal can display the start tag and the end tag on the progress bar. The start tag indicates the start position, within the video, of the segment to be dubbed, and the end tag indicates its end position.
Optionally, the first preset position may be a default position on the progress bar, such as the start position of the video, the playback position when the dubbing request is received, or the middle of the video, which is not limited. The second preset position may be a position separated from the start tag by a predetermined time interval; the interval may be one set by the video player's system or one customized by the user in advance, which is not limited, and in actual implementation it may be 30 s. It should be noted that if the interval between the start tag's position and the end position of the video is smaller than the predetermined interval, the end tag may sit at the end position of the video, which is likewise not limited. Of course, the above takes the first preset position as the default position merely as an example; in actual implementation, the second preset position may instead be the default position and the first preset position may be the position a predetermined time interval before the end tag, which is not limited.
Second, a first sliding signal for sliding the start tag is received, and the start tag is slid.
The first sliding signal may be a sliding signal corresponding to sliding the start tag left or right; the distance the start tag slides equals the sliding distance of the first sliding signal, which is not described again here.
After the terminal displays the start tag and the end tag, if the start tag is not at the intercept position the user desires, then, referring to part (1) of FIG. 4, the user can apply a first sliding signal to slide the start tag 41; correspondingly, the terminal receives the first sliding signal and can slide the start tag 41 accordingly. For example, referring to part (2) of FIG. 4, after receiving the first sliding signal, the terminal can slide the start tag 41 from position A to position B.
Third, a second sliding signal for sliding the end tag is received.
Similarly to the second step, the user can apply a second sliding signal to slide the end tag; correspondingly, the terminal can receive the second sliding signal.
It should be noted that the second and third steps are optional: if the initially displayed positions of the start tag and the end tag are already the positions the user desires to intercept, these two steps need not be performed, which is not limited.
Fourth, the time corresponding to the start tag is obtained as the start time of the video segment to be dubbed.
The terminal may determine the time corresponding to the start tag as the start time of the video segment to be dubbed. For example, if the start tag is located at 23'30" of the film "Crouching Tiger, Hidden Dragon", the start time is 23'30"; if the start tag's position after sliding is 28'37", the start time is 28'37". Where the user determines the start time by sliding the start tag, the user may drag the tag multiple times; in that case, each time the start tag is slid, the terminal obtains the time corresponding to its new position and determines the most recently obtained time as the start time of the video segment to be dubbed, that is, the terminal updates the start time according to how the start tag is slid.
Fifth, the time corresponding to the end tag is obtained as the end time of the video segment to be dubbed.
Similarly, the terminal may determine the time corresponding to the end tag as the end time of the video segment to be dubbed. Where the user determines the end time by sliding the end tag, the user may drag the tag multiple times; each time the end tag is slid, the terminal obtains the time corresponding to its new position and determines the most recently obtained time as the end time of the video segment to be dubbed, that is, the terminal updates the end time according to how the end tag is slid.
It should be added that, in this embodiment, if the terminal performs the second step above, then after the start tag is slid, the terminal can obtain the video frame corresponding to the start tag's position and display it. In actual implementation, the terminal may display the video frame in a window anchored to the start tag, or display it at a preset size at the center of the video playback interface; parts (1) and (2) of FIG. 5 show two possible display modes. Of course, in actual implementation the terminal may preview the video frame in other manners, which is not limited. Similarly, if the terminal performs the third step above, the video frame at the end tag is previewed after the end tag is slid.
By displaying the video frame at the corresponding position after the start tag or the end tag is slid, the user can intuitively see the start and end positions of the intercepted video segment and thus obtain exactly the segment needed.
The foregoing merely takes the above manner of determining the start time and the end time as an example. Optionally, as another possible implementation, the step of determining the start time and the end time may include:
First, determine the start time.
This step includes: using a preset time in the video as the start time of the video segment to be dubbed. The preset time may be the start time of the video, its middle time, the time at which the dubbing request is received, or the like. The time at which the dubbing request is received is the time indicated by the video's playback progress bar when the request arrives; for example, if the video has played to 34'48" when the dubbing request is received, the terminal can determine 34'48" as the start time.
Second, determine the end time.
This step has the following possible implementations.
In a first possible implementation, the time obtained by delaying the start time by a preset duration is determined as the end time. The preset duration may be a duration set by the system in the video player or a duration customized by the user in advance, which is not limited; for example, the preset duration is 30 seconds.
In a second possible implementation, a stop-dubbing request is received, and the time at which the stop-dubbing request is received is taken as the end time. That time is the time indicated by the video's playback progress bar when the stop-dubbing request is received.
Specifically, after receiving the dubbing request, the terminal may replace the dubbing option in the current interface with a stop-dubbing option; for example, referring to FIG. 6, the terminal may display the stop-dubbing option 61. The user can then apply a tap signal on the stop-dubbing option 61, and the tap signal the terminal receives is the stop-dubbing request.
Of course, in actual implementation the terminal may determine the start time and the end time in other manners, which is not limited in this embodiment.
Step 203: Play the video segment between the start time and the end time of the video.
In actual implementation, after determining the start time and the end time, the terminal can play the video segment, which is not limited.
Optionally, after receiving the dubbing request, the terminal may display a start option and a cancel option in the playback interface, where the start option triggers starting the dubbing and the cancel option triggers cancelling it. For example, referring to FIG. 7, the terminal can display the start option 71 and the cancel option 72. When the user wants to start dubbing, the user can apply a selection signal for the start option 71; correspondingly, the terminal receives the selection signal and plays the video segment after receiving it. When the user wants to cancel dubbing, the user can apply a selection signal for the cancel option 72; correspondingly, after receiving that signal, the terminal jumps back to the video playback interface. For the case where the current interface includes the start option, the terminal may also, upon receiving the selection signal for the start option, obtain the time corresponding to the start tag and determine it as the start time of the video segment to be dubbed, and obtain the time corresponding to the end tag and determine it as the end time. After the start time and the end time are determined, the video segment between them can be played.
Optionally, for the case where a preset duration is pre-configured in the terminal, the processing of step 203 may be as follows: if the time difference between the end time and the start time of the video segment to be dubbed does not reach the preset duration, play the video segment between the start time and the end time.
The terminal may store a preset duration in advance, which limits the playback duration of the video segment the user selects. In this case, after determining the start and end times of the segment to be dubbed and before playing it, the terminal may first calculate the difference between the end time and the start time; if the difference is less than or equal to the preset duration, the segment between the two times can be played. Otherwise, the terminal may display a prompt indicating that playback failed, together with the reason, so that the user can re-determine the start time and the end time of the video segment to be dubbed.
Step 204: While playing the video segment, record the dubbing file corresponding to the video segment.
When starting to play the video segment, the terminal can turn on the microphone and, during playback, collect the dubbing through the microphone. Optionally, the terminal may start a voice-recording thread that writes the voice collected by the microphone into a cache directory; when recording ends, the terminal can save it as the dubbing file. The format of the recorded dubbing file may be the default format provided by the terminal's system, which is not limited.
It should be noted that, in actual implementation, the original audio in the video is usually information the user does not want while dubbing; therefore, to avoid interference from the original audio when playing the video segment, the terminal may play only the image information in the video segment and not its audio information, which is not limited.
Step 205: Intercept the video segment from the video.
After recording the dubbing file, the terminal can intercept the video segment from the video. In actual implementation, different interception manners may be used depending on whether the video is saved locally. Specifically, if the video is saved locally on the terminal, the terminal can directly intercept, from the locally saved video, the segment between the start time and the end time.
If the video is played online, the terminal may continuously cache the segment's content while playing it and thereby finally obtain the intercepted segment. Optionally, the terminal may instead, after determining the start time and the end time, send a download request to the background server; after receiving the request, the background server can return the video segment to the terminal, and correspondingly the terminal receives the returned video segment. The download request may include the start time, the end time, and a video identifier; or it may include the start time, a target duration, and a video identifier, the target duration being the time difference between the end time and the start time; or, for the case where the start time is the preset time and the segment's duration is the preset duration, the download request may include a video identifier. Optionally, after receiving the download request, the background server may generate the video segment from the start time and the end time (or the start time and the target duration) and feed a download address back to the terminal. After receiving the address, the terminal can start a download thread and download the video segment from the address through that thread. For example, please refer to FIG. 8, which shows the complete download flow.
The terminal may allocate a block of memory in advance according to the size of the video segment and, after the segment is intercepted, read it into that memory.
Step 206: Synthesize the target video segment from the video segment and the dubbing file.
This step may include:
First, extract the image information from the video segment.
Optionally, the terminal can read the contents of the memory through a streaming-media interface. In addition, because the video segment is content intercepted from the original video, it contains both audio and images, and the audio and the images are two independent media streams; the terminal can therefore separate the audio and the images in the segment and store them respectively in an audio memory area and an image memory area in the memory. In this case, the terminal can read the image information stored in the image memory area, thereby obtaining the image information in the video segment.
Second, synthesize the image information with the voice information in the dubbing file to obtain the target video segment.
The terminal can write the obtained image information and the voice information in the recorded dubbing file into one video file at the same time, obtaining the target video segment. Optionally, the terminal can compress the image information and the voice information in the dubbing file into a memory area through the system's streaming-media interface and then write that area's contents into a video file through the streaming-media interface; the written video file is the target video segment.
After obtaining the target video segment, the terminal can play it automatically. Optionally, while playing the video segment, when playback reaches the segment's end time the terminal may jump to a preset interface and synthesize the image information in the segment with the voice information in the dubbing file; after the target video segment is obtained, the terminal can play it automatically in a preview window of the preset interface. For example, referring to FIG. 9, the terminal can automatically preview the target video segment in the window 91. It should be noted that at the end time the terminal still needs some time to synthesize the target segment, so during that period it may display a "Loading" prompt in the preview window, which is not limited. Alternatively, after obtaining the target video segment, the terminal may jump to an interface containing a preview option; when the user taps it, the terminal receives the selection instruction for the preview option and can begin playing the target video segment. This embodiment does not limit the specific implementation.
In addition, after previewing the target video segment, the user may trigger saving it if satisfied or trigger cancelling this dubbing if not, which is not limited in this embodiment either.
It should be added that, after obtaining the target video segment, the terminal can share it; that is, referring to FIG. 10, the video dubbing method may further include the following steps:
Step 1001: Receive a sharing request for sharing the target video segment, the sharing request including a sharing manner.
The sharing manner may be sharing to a target friend through a target communication method, or sharing to a target platform.
For example, referring to FIG. 11, when the user wants to share the target video segment to Weibo, the user can apply a tap signal on the Weibo icon 111; correspondingly, the terminal can receive the tap signal, and the tap signal is the sharing request.
Step 1002: After receiving the sharing request, share the target video segment according to the sharing manner.
After receiving the sharing request, the terminal can share the target video segment according to the sharing manner in the request. For example, with reference to FIG. 11, after the terminal receives the tap on the Weibo icon 111, it can invoke the Weibo interface and share the target video segment to Weibo through the invoked interface.
In summary, in the video dubbing method provided by this embodiment, after a dubbing request is received, the start time and the end time of the video segment to be dubbed are determined; while the segment between those times plays, the corresponding dubbing file is recorded; the video segment is intercepted; and the target video segment is generated from the dubbing file and the intercepted segment. This solves the problem in the related art that the terminal can only dub an existing video in full, so that when the existing video includes segments the user does not need to dub, the dubbed video contains redundant information; the terminal can dub only the desired segment, reducing redundancy. In addition, because the user can freely pair a video segment of a chosen length with the voice he or she needs, the method adds enjoyment and improves the user experience.
Please refer to FIG. 12, which shows a schematic structural diagram of a video dubbing apparatus provided by an embodiment of the present application. As shown in FIG. 12, the video dubbing apparatus may include: a first receiving module 1210, a determining module 1220, a playing module 1230, a recording module 1240, an intercepting module 1250, and a synthesizing module 1260.
The first receiving module 1210 is configured to receive a dubbing request during video playback;
the determining module 1220 is configured to determine a start time and an end time of the video segment to be dubbed;
the playing module 1230 is configured to play the video segment between the start time and the end time determined by the determining module 1220;
the recording module 1240 is configured to record, while the playing module plays the video segment, the dubbing file corresponding to the video segment;
the intercepting module 1250 is configured to intercept the video segment from the video;
the synthesizing module 1260 is configured to synthesize the target video segment from the video segment and the dubbing file.
In summary, the video dubbing apparatus provided by this embodiment determines, after receiving a dubbing request, the start time and the end time of the video segment to be dubbed; records the corresponding dubbing file while the segment between those times plays; intercepts the video segment; and generates the target video segment from the dubbing file and the intercepted segment. This solves the problem in the related art that the terminal can only dub an existing video in full, so that when the existing video includes segments the user does not need to dub, the dubbed video contains redundant information; the terminal can dub only the desired segment, reducing redundancy. In addition, because the user can freely pair a video segment of a chosen length with the voice he or she needs, the apparatus adds enjoyment and improves the user experience.
Based on the video dubbing apparatus provided by the foregoing embodiment, optionally, the determining module 1220 includes:
a display unit configured to display, after the dubbing request is received, a start tag at a first preset position on the video's playback progress bar and an end tag at a second preset position on the progress bar;
an acquiring unit configured to obtain the time corresponding to the start tag as the start time of the video segment to be dubbed, and to obtain the time corresponding to the end tag as the end time of the video segment to be dubbed.
Optionally, the determining module 1220 further includes:
a processing unit configured to receive a first sliding signal for sliding the start tag and slide the start tag; and/or to receive a second sliding signal for sliding the end tag and slide the end tag.
Optionally, the apparatus further includes:
a preview module configured to display, upon receiving the first sliding signal and after the start tag is slid, the video frame at the position corresponding to the start tag; or to display, upon receiving the second sliding signal and after the end tag is slid, the video frame at the position corresponding to the end tag.
Optionally, the playing module 1230 includes:
a receiving unit configured to receive a start-dubbing request;
a playing unit configured to play the video segment between the start time and the end time after the receiving unit receives the start-dubbing request.
Optionally, the synthesizing module 1260 includes:
an extracting unit configured to extract the image information from the video segment;
a synthesizing unit configured to synthesize the image information with the voice information in the dubbing file to obtain the target video segment.
Optionally, the apparatus further includes:
a second receiving module configured to receive a sharing request for sharing the target video segment, the sharing request including a sharing manner;
a sharing module configured to share the target video segment according to the sharing manner after the second receiving module receives the sharing request.
Optionally, the playing module 1230 is configured to:
play the video segment between the start time and the end time if the time difference between the end time and the start time of the video segment to be dubbed does not reach the preset duration.
An embodiment of the present application further provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the foregoing embodiment, or may exist separately without being assembled into a terminal. The computer-readable storage medium stores one or more programs, which are used by one or more processors to perform the video dubbing method described above.
FIG. 13 shows a block diagram of a terminal 1300 provided by an embodiment of the present invention. The terminal may include a radio frequency (RF) circuit 1301, a memory 1302 including one or more computer-readable storage media, an input unit 1303, a display unit 1304, a sensor 1305, an audio circuit 1306, a wireless fidelity (WiFi) module 1307, a processor 1308 including one or more processing cores, a power supply 1309, and other components. A person skilled in the art can understand that the terminal structure shown in FIG. 13 does not constitute a limitation on the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. Specifically:
The RF circuit 1301 can be used to receive and send signals in the course of sending and receiving information or during a call; in particular, after receiving downlink information from a base station, it delivers the information to one or more processors 1308 for processing, and it also sends uplink data to the base station. Generally, the RF circuit 1301 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1301 can also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1302 can be used to store software programs and modules; the processor 1308 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 1302. The memory 1302 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, applications required by at least one function (such as a sound playback function and an image playback function), and the like, and the data storage area may store data created according to the use of the terminal (such as audio data and an address book). In addition, the memory 1302 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Correspondingly, the memory 1302 may also include a memory controller to provide the processor 1308 and the input unit 1303 with access to the memory 1302.
The input unit 1303 can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 1303 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or touchpad, can collect the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch-sensitive surface) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1308, and receives and executes commands sent by the processor 1308. In addition, the touch-sensitive surface may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 1303 may also include other input devices, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick.
The display unit 1304 can be used to display information entered by the user, information provided to the user, and the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 1304 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch-sensitive surface may cover the display panel; after detecting a touch operation on or near itself, the touch-sensitive surface passes it to the processor 1308 to determine the type of touch event, and the processor 1308 then provides the corresponding visual output on the display panel according to that type. Although in FIG. 13 the touch-sensitive surface and the display panel are two independent components implementing input and output functions, in some embodiments they may be integrated to implement input and output functions.
The terminal may also include at least one sensor 1305, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel according to the ambient light, and the proximity sensor can turn off the display panel and/or the backlight when the terminal moves to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the phone's posture (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured on the terminal and are not described again here.
The audio circuit 1306, a speaker, and a microphone provide an audio interface between the user and the terminal. The audio circuit 1306 can convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which the audio circuit 1306 receives and converts into audio data; after the audio data is output to the processor 1308 for processing, it is sent through the RF circuit 1301 to, for example, another terminal, or the audio data is output to the memory 1302 for further processing. The audio circuit 1306 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1307, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 13 shows the WiFi module 1307, it can be understood that it is not a necessary part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 1308 is the control center of the terminal. It connects all parts of the entire handset using various interfaces and lines, and performs the terminal's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1302 and invoking the data stored in the memory 1302, thereby monitoring the handset as a whole. Optionally, the processor 1308 may include one or more processing cores; preferably, the processor 1308 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1308.
The terminal also includes a power supply 1309 (such as a battery) that powers the components. Preferably, the power supply may be logically connected to the processor 1308 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system. The power supply 1309 may also include any component such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like, which are not described again here. Specifically, in this embodiment, the processor 1308 in the terminal runs one or more program instructions stored in the memory 1302, thereby implementing the video dubbing method provided in the foregoing method embodiments.
It should be understood that, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports an exception. It should also be understood that "and/or" as used herein includes any and all possible combinations of one or more of the associated listed items.
The sequence numbers of the foregoing embodiments of the present application are merely for description and do not represent the superiority or inferiority of the embodiments.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely one embodiment of the present application and is not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (18)

  1. A video dubbing method, wherein the method is used in a terminal and comprises:
    receiving a dubbing request during video playback;
    determining a start time and an end time of a video segment to be dubbed;
    playing the video segment between the start time and the end time;
    recording, during playback of the video segment, a dubbing file corresponding to the video segment;
    intercepting the video segment from the video;
    synthesizing a target video segment from the video segment and the dubbing file.
  2. The method according to claim 1, wherein determining the start time and the end time of the video segment to be dubbed comprises:
    after receiving the dubbing request, displaying a start tag at a first preset position on a playback progress bar of the video and displaying an end tag at a second preset position on the playback progress bar;
    obtaining the time corresponding to the start tag as the start time of the video segment to be dubbed;
    obtaining the time corresponding to the end tag as the end time of the video segment to be dubbed.
  3. The method according to claim 2, wherein before obtaining the time corresponding to the start tag as the start time and obtaining the time corresponding to the end tag as the end time, the method further comprises:
    receiving a first sliding signal for sliding the start tag, and sliding the start tag;
    and/or,
    receiving a second sliding signal for sliding the end tag, and sliding the end tag.
  4. The method according to claim 3, further comprising:
    if the first sliding signal is received, displaying, after the start tag is slid, the video frame at the position corresponding to the start tag;
    if the second sliding signal is received, displaying, after the end tag is slid, the video frame at the position corresponding to the end tag.
  5. The method according to any one of claims 1 to 4, wherein playing the video segment between the start time and the end time comprises:
    receiving a start-dubbing request;
    after receiving the start-dubbing request, playing the video segment between the start time and the end time.
  6. The method according to any one of claims 1 to 4, wherein synthesizing the target video segment from the video segment and the dubbing file comprises:
    extracting image information from the video segment;
    synthesizing the image information with voice information in the dubbing file to obtain the target video segment.
  7. The method according to any one of claims 1 to 4, further comprising:
    receiving a sharing request for sharing the target video segment, the sharing request comprising a sharing manner;
    after receiving the sharing request, sharing the target video segment according to the sharing manner.
  8. The method according to any one of claims 1 to 4, wherein playing the video segment between the start time and the end time comprises:
    playing the video segment between the start time and the end time if the time difference between the end time and the start time of the video segment to be dubbed does not reach a preset duration.
  9. A video dubbing apparatus, comprising:
    a first receiving module configured to receive a dubbing request during video playback;
    a determining module configured to determine a start time and an end time of a video segment to be dubbed;
    a playing module configured to play the video segment between the start time and the end time determined by the determining module;
    a recording module configured to record, while the playing module plays the video segment, a dubbing file corresponding to the video segment;
    an intercepting module configured to intercept the video segment from the video;
    a synthesizing module configured to synthesize a target video segment from the video segment and the dubbing file.
  10. The apparatus according to claim 9, wherein the determining module comprises:
    a display unit configured to display, after the dubbing request is received, a start tag at a first preset position on a playback progress bar of the video and an end tag at a second preset position on the playback progress bar;
    an acquiring unit configured to obtain the time corresponding to the start tag as the start time of the video segment to be dubbed, and to obtain the time corresponding to the end tag as the end time of the video segment to be dubbed.
  11. The apparatus according to claim 10, wherein the determining module further comprises:
    a processing unit configured to receive a first sliding signal for sliding the start tag and slide the start tag; and/or to receive a second sliding signal for sliding the end tag and slide the end tag.
  12. The apparatus according to claim 11, further comprising:
    a preview module configured to display, upon receiving the first sliding signal and after the start tag is slid, the video frame at the position corresponding to the start tag; or to display, upon receiving the second sliding signal and after the end tag is slid, the video frame at the position corresponding to the end tag.
  13. The apparatus according to any one of claims 9 to 12, wherein the playing module comprises:
    a receiving unit configured to receive a start-dubbing request;
    a playing unit configured to play the video segment between the start time and the end time after the receiving unit receives the start-dubbing request.
  14. The apparatus according to any one of claims 9 to 12, wherein the synthesizing module comprises:
    an extracting unit configured to extract image information from the video segment;
    a synthesizing unit configured to synthesize the image information with voice information in the dubbing file to obtain the target video segment.
  15. The apparatus according to any one of claims 9 to 12, further comprising:
    a second receiving module configured to receive a sharing request for sharing the target video segment, the sharing request comprising a sharing manner;
    a sharing module configured to share the target video segment according to the sharing manner after the second receiving module receives the sharing request.
  16. The apparatus according to any one of claims 9 to 12, wherein the playing module is configured to:
    play the video segment between the start time and the end time if the time difference between the end time and the start time of the video segment to be dubbed does not reach a preset duration.
  17. A terminal, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the video dubbing method according to any one of claims 1 to 8.
  18. A computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the video dubbing method according to any one of claims 1 to 8.
PCT/CN2018/080657 2017-04-06 2018-03-27 Video dubbing method and apparatus WO2018184488A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710220247.8 2017-04-06
CN201710220247.8A CN106911900A (zh) Video dubbing method and apparatus

Publications (1)

Publication Number Publication Date
WO2018184488A1 true WO2018184488A1 (zh) 2018-10-11

Family

ID=59193993

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/080657 WO2018184488A1 (zh) 2017-04-06 2018-03-27 Video dubbing method and apparatus

Country Status (2)

Country Link
CN (1) CN106911900A (zh)
WO (1) WO2018184488A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109413342A (zh) * 2018-12-21 2019-03-01 广州酷狗计算机科技有限公司 Audio and video processing method and apparatus, terminal and storage medium

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071512B (zh) * 2017-01-16 2019-06-25 腾讯科技(深圳)有限公司 Dubbing method, apparatus and system
CN106911900A (zh) * 2017-04-06 2017-06-30 腾讯科技(深圳)有限公司 Video dubbing method and apparatus
CN107809666A (zh) * 2017-10-26 2018-03-16 费非 Audio data merging method, apparatus, storage medium and processor
CN107872620B (zh) * 2017-11-22 2020-06-02 北京小米移动软件有限公司 Video recording method and apparatus, and computer-readable storage medium
CN108024073B (zh) 2017-11-30 2020-09-04 广州市百果园信息技术有限公司 Video editing method and apparatus, and intelligent mobile terminal
CN108038185A (zh) * 2017-12-08 2018-05-15 广州市百果园信息技术有限公司 Dynamic video editing method and apparatus, and intelligent mobile terminal
CN108337558A (zh) * 2017-12-26 2018-07-27 努比亚技术有限公司 Audio and video clipping method and terminal
CN108600825B (zh) 2018-07-12 2019-10-25 北京微播视界科技有限公司 Method, apparatus, terminal device and medium for shooting a video with selected background music
CN109361954B (zh) * 2018-11-02 2021-03-26 腾讯科技(深圳)有限公司 Video resource recording method and apparatus, storage medium and electronic apparatus
CN110366032B (zh) * 2019-08-09 2020-12-15 腾讯科技(深圳)有限公司 Video data processing method and apparatus, and video playback method and apparatus
CN110868633A (zh) * 2019-11-27 2020-03-06 维沃移动通信有限公司 Video processing method and electronic device
CN111212321A (zh) * 2020-01-10 2020-05-29 上海摩象网络科技有限公司 Video processing method, apparatus, device and computer storage medium
CN111741231B (zh) 2020-07-23 2022-02-22 北京字节跳动网络技术有限公司 Video dubbing method, apparatus, device and storage medium
CN112565905B (zh) * 2020-10-24 2022-07-22 北京博睿维讯科技有限公司 Image locking operation method and system, intelligent terminal and storage medium
CN112954390B (zh) * 2021-01-26 2023-05-09 北京有竹居网络技术有限公司 Video processing method and apparatus, storage medium and device
CN115037975B (zh) * 2021-02-24 2024-03-01 花瓣云科技有限公司 Video dubbing method, related device and computer-readable storage medium
CN113630630B (zh) * 2021-08-09 2023-08-15 咪咕数字传媒有限公司 Method, apparatus and device for processing video commentary dubbing information
CN114338579B (zh) * 2021-12-29 2024-02-09 南京大众书网图书文化有限公司 Method, device and medium for dubbing
CN114666516A (zh) * 2022-02-17 2022-06-24 海信视像科技股份有限公司 Display device and streaming media file synthesis method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006196091A (ja) * 2005-01-14 2006-07-27 Matsushita Electric Ind Co Ltd Video and audio signal recording and reproducing apparatus
CN101217638A (zh) * 2007-12-28 2008-07-09 深圳市迅雷网络技术有限公司 Method, system and apparatus for segmented downloading of video files
CN104333802A (zh) * 2013-12-13 2015-02-04 乐视网信息技术(北京)股份有限公司 Video playback method and video player
CN105959773A (zh) * 2016-04-29 2016-09-21 魔方天空科技(北京)有限公司 Multimedia file processing method and apparatus
CN106293347A (zh) * 2016-08-16 2017-01-04 广东小天才科技有限公司 Human-computer interaction learning method and apparatus, and user terminal
CN106911900A (zh) * 2017-04-06 2017-06-30 腾讯科技(深圳)有限公司 Video dubbing method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847966A (zh) * 2016-03-29 2016-08-10 乐视控股(北京)有限公司 Terminal and method for intercepting and sharing video

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006196091A (ja) * 2005-01-14 2006-07-27 Matsushita Electric Ind Co Ltd Video and audio signal recording and reproducing apparatus
CN101217638A (zh) * 2007-12-28 2008-07-09 深圳市迅雷网络技术有限公司 Method, system and apparatus for segmented downloading of video files
CN104333802A (zh) * 2013-12-13 2015-02-04 乐视网信息技术(北京)股份有限公司 Video playback method and video player
CN105959773A (zh) * 2016-04-29 2016-09-21 魔方天空科技(北京)有限公司 Multimedia file processing method and apparatus
CN106293347A (zh) * 2016-08-16 2017-01-04 广东小天才科技有限公司 Human-computer interaction learning method and apparatus, and user terminal
CN106911900A (zh) * 2017-04-06 2017-06-30 腾讯科技(深圳)有限公司 Video dubbing method and apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109413342A (zh) * Audio and video processing method and apparatus, terminal and storage medium
US11659227B2 (en) 2018-12-21 2023-05-23 Guangzhou Kugou Computer Technology Co., Ltd. Audio and video processing method and apparatus, terminal and storage medium

Also Published As

Publication number Publication date
CN106911900A (zh) 2017-06-30

Similar Documents

Publication Publication Date Title
WO2018184488A1 (zh) Video dubbing method and apparatus
US10841661B2 (en) Interactive method, apparatus, and system in live room
US10708649B2 (en) Method, apparatus and system for displaying bullet screen information
US10643666B2 (en) Video play method and device, and computer storage medium
KR102040754B1 (ko) 추천 콘텐츠에 기초한 상호작용 방법, 단말기 및 서버
WO2016177296A1 (zh) Method and apparatus for generating a video
US10484641B2 (en) Method and apparatus for presenting information, and computer storage medium
TWI592021B (zh) Method, apparatus and terminal for generating a video
WO2018157812A1 (zh) Method and apparatus for implementing selective playback of video branches
US20160323610A1 (en) Method and apparatus for live broadcast of streaming media
CN108566332B (zh) Instant messaging information processing method, apparatus and storage medium
US11785304B2 (en) Video preview method and electronic device
CN107333162B (zh) Method and apparatus for playing live video
WO2016184295A1 (zh) Instant messaging method, user equipment and system
CN111309218A (zh) Information display method and apparatus, and electronic device
CN108616771B (zh) Video playback method and mobile terminal
WO2015131768A1 (en) Video processing method, apparatus and system
WO2017215661A1 (zh) Scene sound effect control method and electronic device
US20190275428A1 (en) Control method of scene sound effect and related products
US20220311859A1 (en) Do-not-disturb method and terminal
WO2018161788A1 (zh) Multimedia data sharing method and apparatus
CN110618806A (zh) Application control method and apparatus, electronic device and storage medium
KR101876394B1 (ko) 단말기에 미디어 데이터를 재생하는 방법 및 장치
US20150089370A1 (en) Method and device for playing media data on a terminal
WO2015139595A1 (en) Method and apparatus for controlling presentation of multimedia data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18780658

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18780658

Country of ref document: EP

Kind code of ref document: A1