WO2020062685A1 - Video processing method and apparatus, terminal and medium - Google Patents

Video processing method and apparatus, terminal and medium Download PDF

Info

Publication number
WO2020062685A1
WO2020062685A1 PCT/CN2018/124784 CN2018124784W WO2020062685A1 WO 2020062685 A1 WO2020062685 A1 WO 2020062685A1 CN 2018124784 W CN2018124784 W CN 2018124784W WO 2020062685 A1 WO2020062685 A1 WO 2020062685A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
target
playback speed
editing
segment
Prior art date
Application number
PCT/CN2018/124784
Other languages
English (en)
French (fr)
Inventor
韩旭
王海婷
付平非
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司 filed Critical 北京微播视界科技有限公司
Priority to JP2020549860A priority Critical patent/JP7038226B2/ja
Priority to GB2017355.5A priority patent/GB2589731B/en
Publication of WO2020062685A1 publication Critical patent/WO2020062685A1/zh
Priority to US17/021,245 priority patent/US11037600B2/en

Links

Images

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/005Reproducing at a different information rate from the information rate of recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Definitions

  • The present disclosure relates to the field of Internet technologies, and for example, to a video processing method, device, terminal, and medium.
  • Video interactive applications provide users with multiple types of video resources, such as comedy, humor, science, current affairs, and lifestyle content; they also allow users to shoot videos in different styles anytime and anywhere, add a variety of special effects, set different types of background music, and so on.
  • However, video interactive application software in the related art easily causes loss of information in a video when the video is edited multiple times.
  • the present disclosure provides a video processing method, device, terminal, and medium to avoid loss of information during video editing.
  • An embodiment of the present disclosure provides a video processing method.
  • the method includes:
  • a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment are acquired, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments;
  • a target playback speed of each video segment is calculated according to the first editing parameter and the second editing parameter corresponding to each target video segment; and
  • the at least two video segments are synthesized into a target video that conforms to a preset duration based on the target playback speed of each video segment.
  • An embodiment of the present disclosure further provides a video processing apparatus, where the apparatus includes:
  • an editing operation acquisition module configured to acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments;
  • a video clip editing module configured to calculate a target playback speed of each video clip according to the first editing parameter and the second editing parameter corresponding to each target video clip; and
  • a video synthesis module configured to synthesize the at least two video clips into a target video that conforms to a preset duration based on the target playback speed of each video clip.
  • An embodiment of the present disclosure further provides a terminal, including:
  • one or more processors; and
  • a memory configured to store one or more programs,
  • wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the video processing method according to any embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video processing method according to any embodiment of the present disclosure.
  • In the embodiments of the present disclosure, a target playback speed of each video segment is calculated by acquiring a first editing parameter of a playback speed of a continuous video synthesized from at least two video segments and a second editing parameter of a playback speed of at least one of the at least two video segments, and a target video is then synthesized based on the target playback speed of each video segment.
  • This solves the problem that information is easily lost during video editing, avoids the loss of video information, and improves the user's experience of synthesizing a complete video from video segments for sharing; moreover, because the video interactive application itself can support video editing operations, the video editing process is simplified, the difficulty and complexity of video editing are reduced, and the convenience of video editing is improved.
  • FIG. 1 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a video selection interface according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a video editing interface provided with a video preview area according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure. This embodiment is applicable to processing a video, for example, when a playback speed is edited multiple times during a video synthesis process.
  • The method may be performed by a video processing apparatus.
  • The apparatus may be implemented in software and/or hardware, and may be configured on any terminal having a network communication function, such as a smartphone, a computer, or an iPad.
  • the video processing method provided in the embodiment of the present disclosure may include the following steps.
  • In the process of video synthesis using a video interactive application, a user selects, through the video selection interface of the video interactive application, multiple video clips for video synthesis, and the video clips are local video resources of the user terminal.
  • After the video interactive application detects the user's video selection operation, it obtains the video clips selected by the user in real time.
  • After the user finishes selecting video clips, the video interactive application can switch directly from the video selection interface to the video editing interface.
  • In the video editing interface, the user can perform playback speed editing with each video clip as the editing object or with the initially synthesized continuous video as the editing object, and can preview the editing effect after editing.
  • The editing parameters of the playback speed acquired by the terminal describe the editing operations performed by the user, including the correspondence between each editing operation and its editing object (the continuous video or a video clip) and the editing speed value.
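  • For illustration only, a minimal sketch of how such playback speed editing parameters might be represented is shown below; the record name SpeedEdit and its fields are assumptions and are not specified by the disclosure.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class SpeedEdit:
    # "continuous" means the edit applies to the initially synthesized continuous
    # video (a first editing parameter); an integer index means the edit applies
    # to a single target video clip (a second editing parameter).
    target: Union[str, int]   # "continuous" or the index of a video clip
    factor: float             # editing speed value, e.g. 2.0 for 2x, 1/3 for 1/3x

# Example: the user slows the whole continuous video to 1/3x, then speeds clip 0 up 2x.
edits = [SpeedEdit("continuous", 1 / 3), SpeedEdit(0, 2.0)]
```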
  • the user may also perform at least one editing operation in the video editing interface, such as cropping, resolution editing, playback screen rotation, video deletion, and adding display effects.
  • As an example, FIG. 2 is a schematic diagram of the video selection interface provided in this embodiment.
  • As an example, FIG. 3 is a schematic diagram of a video editing interface provided with a video preview area in this embodiment.
  • The video editing interface can be entered by switching directly from the video selection interface.
  • the video editing interface includes a video preview area 31, a video cropping area 32, and a video clip area 33.
  • The video preview area 31 is used to preview video editing effects during the video editing process, for example, to preview the editing effect of the initially synthesized continuous video or the editing effect of a single video clip; the handles on the left and right sides of the video cropping area 32 move as the user drags them, so as to adjust the duration of the previewed video; the pointer in the video cropping area 32 moves as the previewed video plays; and the video clip area 33 displays the multiple video clips selected by the user, all of which can be viewed by sliding left and right in this area.
  • the edge position of the video editing interface or the gap between the video preview area 31 and the video cropping area 32 can be used to set editing controls, such as a cropping control, a playback speed editing control, a resolution editing control, and the like.
  • The video interactive application obtains the first editing parameter and the second editing parameter corresponding to each target video segment. By collecting and classifying these editing parameters, each editing operation is mapped to the corresponding video segment, and the final playback speed of each video segment, that is, the target playback speed, is calculated.
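  • As a hedged illustration of this calculation step (not the patented implementation), the per-clip combination could look like the sketch below; the function name compute_target_speeds and the use of multiplication as the "logical operation" are assumptions inferred from the worked 1/3x-then-2x example given later in this description.

```python
def compute_target_speeds(num_clips, edits):
    """Map every recorded speed edit to its clip and combine the factors.

    An edit on the continuous video applies to all clips; an edit on a single
    clip applies only to that clip. Multiplying the factors reproduces the
    1/3 * 2 = 2/3 behaviour described later in this embodiment.
    """
    speeds = [1.0] * num_clips
    for edit in edits:
        if edit.target == "continuous":
            speeds = [s * edit.factor for s in speeds]
        else:
            speeds[edit.target] *= edit.factor
    return speeds
```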
  • the duration of the continuous video is greater than or equal to the duration of the target video. That is, the continuous video is equivalent to a transitional intermediate video file.
  • the user can perform editing operations on the basis of the continuous video to obtain the target video. For example, if the duration of the continuous video is longer than the preset duration, the target video that meets the duration requirement can be obtained by video cropping.
  • Because the video interactive application itself can support a variety of video editing operations, such as playback speed editing, users do not need to resort to other video synthesis tools, such as third-party video synthesis applications, in the process of synthesizing a video.
  • This simplifies the video editing process, reduces the difficulty and complexity of video editing, and improves the convenience of video editing.
  • In one embodiment, the method further includes: acquiring at least two selected video clips in response to a video multi-select request triggered on the video selection interface, wherein the video selection interface is displayed by switching from a video shooting interface or a detail interface.
  • In one embodiment, the video selection interface is displayed when the user triggers a specific identifier on the video shooting interface or the detail interface.
  • The detail interface includes detail interfaces of other information, such as an audio detail interface and a video detail interface.
  • That is, after the user launches the video interactive application on the terminal, the user can trigger a specific identifier, such as an upload identifier or a share identifier, on the video shooting interface or the music detail interface of the video interactive application to switch from the current interface to the video selection interface.
  • The video selection interface can synchronously display the local video clips of the terminal to facilitate user selection.
  • If the video selection interface is entered by switching from the music detail interface, the user can use the audio of the music detail page as the background music of the synthesized video.
  • After the user triggers the video multi-select request, the video interactive application records the selection order of the video clips selected by the user while obtaining the information of the selected video clips, so that the video is synthesized based on the user's selection order.
  • In this embodiment, the video may also be synthesized according to an arrangement order of the video clips defined by the user.
  • the user can touch the video thumbnail position on the video selection interface to preview the video to determine whether to select the video clip.
  • In one embodiment, the duration of a video clip selected by the user is greater than or equal to a duration threshold; that is, the duration threshold determines the valid video clips that the user can select.
  • For example, if the duration threshold is set to 3 seconds and the duration of a selected video clip a is 2 seconds, a prompt box (toast) can pop up to indicate that the selected video clip a is invalid and needs to be reselected.
  • The number of video clips that the user can select can be set adaptively, for example, to 12, which is not limited in this embodiment. When the number of video clips selected by the user reaches the preset number requirement, the other video thumbnails on the video selection interface can be covered with a white mask layer, and further user selection is no longer supported.
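  • For illustration only, these selection rules could be checked as in the following sketch; the constant names and the helper can_select are assumptions, and the 3-second and 12-clip values are simply the example figures from this embodiment.

```python
DURATION_THRESHOLD_S = 3     # minimum clip duration used as the example threshold
MAX_SELECTED_CLIPS = 12      # example upper bound on selectable clips

def can_select(clip_duration_s, already_selected):
    """Return True if one more clip of this duration may still be selected."""
    if clip_duration_s < DURATION_THRESHOLD_S:
        return False         # too short: the app would show a toast instead
    if len(already_selected) >= MAX_SELECTED_CLIPS:
        return False         # remaining thumbnails would be masked and unselectable
    return True
```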
  • As shown in FIG. 2, a multi-select control 21 is provided on the video selection interface. After the user triggers a video multi-select request, each time a video clip is selected, a number is displayed in the upper left corner of the video thumbnail corresponding to that clip as a record of the user's selection of the video clip.
  • In this embodiment, the multi-select control 21 can be placed at any position along the edge of the interface; FIG. 2 is only an example in which the multi-select control 21 is placed at the lower left.
  • In the video synthesis process, in addition to playback speed editing, at least one of editing operations such as cropping, resolution editing, playback picture rotation, video deletion, and adding display effects can also be performed.
  • the above-mentioned editing operation involves overall editing with a composite continuous video as an editing object and individual editing with each video clip as an editing object.
  • The effects of video editing, including playback speed editing effects, can be displayed to the user through the video preview area of the video editing interface; that is, after the user's video editing operation ends, the video interactive application generates a video preview effect corresponding to the editing operation for the user to preview, and the effect is also saved in the terminal cache.
  • In the embodiment of the present disclosure, the first editing parameter of the playback speed of the initially synthesized continuous video and the second editing parameter of the playback speed of at least one video clip are first obtained, the target playback speed of each video clip is then calculated, and the target video is finally synthesized based on the target playback speed of each video clip. This solves the problem that information is easily lost during video editing: by always performing playback speed editing based on the original video clips, the loss of video information is avoided, which improves the user's experience of synthesizing a complete video from video clips for sharing.
  • Moreover, because the video interactive application itself can support multiple video editing operations, users do not need to rely on other professional video synthesis tools in the process of synthesizing a video, which improves the convenience of video editing and reduces the difficulty and complexity of video editing.
  • FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present disclosure, and is described based on the foregoing embodiment.
  • the video processing method provided in the embodiment of the present disclosure may include the following steps.
  • If the user edits the overall playback speed of the continuous video, it is equivalent to performing the same playback speed editing operation on all video clips included in the continuous video. Therefore, according to the correspondence between the playback speed edits and each video clip, a logical operation is performed on the first editing parameter and the second editing parameter to obtain the combined playback speed editing operation corresponding to each video clip, and the video clips can then be edited one by one based on the original video data of each clip.
  • For example, for video clips A and B, the user first applies a 1/3x speed adjustment to the continuous video C initially synthesized from video clips A and B, and then applies a 2x speed adjustment to video clip A alone; the video interactive application separately records the slow-down editing operation on continuous video C and the speed-up editing operation on video clip A.
  • Through a logical operation on the editing operations, it can be determined that after the two editing operations the target playback speed of video clip A is 2/3 of the original playback speed of video clip A, and the target playback speed of video clip B is 1/3 of the original playback speed of video clip B.
  • Then, based on the original video data of video clips A and B, a 2/3x speed adjustment and a 1/3x speed adjustment are performed, respectively.
  • In the related art, in this case, the final playback speed of video clip A is not obtained from the original video data of video clip A; instead, the clip is edited again on top of the effect of the first editing operation, that is, a 2x speed adjustment is performed on the 1/3x-speed version of video clip A to obtain its final playback speed.
  • The method of this embodiment therefore not only avoids the loss of video information, but also avoids the accumulation of information loss across multiple editing passes, so the fidelity of the video data is higher.
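  • Using the compute_target_speeds sketch above (an illustrative assumption, not the patented implementation), this example works out as follows; the contrast with the related-art cumulative re-edit is noted in the comments.

```python
# Clip index 0 is clip A, index 1 is clip B (hypothetical ordering).
edits = [SpeedEdit("continuous", 1 / 3),  # first editing parameter, applied to video C
         SpeedEdit(0, 2.0)]               # second editing parameter, applied to clip A

print(compute_target_speeds(2, edits))    # [0.666..., 0.333...]
# Each clip is rendered ONCE from its original data: A at 2/3x, B at 1/3x.
# The related-art approach instead applies 2x to the already re-encoded 1/3x
# copy of A, so every pass works on degraded data and losses accumulate.
```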
  • Not all video clips included in the continuous video undergo a further playback speed edit; for a video clip whose playback speed has been edited only once, the target playback speed of that clip is the playback speed of the continuous video after editing.
  • In this embodiment, there is no strict limitation on the execution order between operations S220 and S230, and the execution order example shown in FIG. 2 should not be taken as a limitation on this embodiment.
  • the target playback speed of each video segment includes the playback speed of the image data and the playback speed of the audio data in each video segment.
  • the logical operation of the editing parameters in this embodiment is applicable to both the image data and the audio data in the video, so that the loss of the image data and audio data during the video editing process can be avoided at the same time.
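  • As a minimal sketch of applying one target playback speed to both the image data and the audio data of a clip, the following assumes ffmpeg is available on the terminal; the disclosure does not specify the underlying media framework, and the helper name apply_speed is an assumption.

```python
import subprocess

def apply_speed(src, dst, speed):
    """Re-render one clip at `speed` (e.g. 2/3), scaling video and audio together.

    setpts rescales the video timestamps, and atempo changes the audio tempo
    without shifting pitch (each atempo instance accepts roughly 0.5-2.0, so
    more extreme factors would need to be chained).
    """
    filters = f"[0:v]setpts=PTS/{speed}[v];[0:a]atempo={speed}[a]"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-filter_complex", filters,
         "-map", "[v]", "-map", "[a]", dst],
        check=True)
```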
  • After the target playback speed resulting from the multiple playback speed edits is determined on the basis of the original video of each video clip, a target video with a duration greater than or equal to the preset video duration can be synthesized.
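  • A sketch of this final synthesis step is given below; it assumes the clips have already been re-rendered at their target speeds (for example by apply_speed above) and that ffmpeg's concat demuxer is acceptable, neither of which is mandated by the disclosure.

```python
import os
import subprocess
import tempfile

def synthesize(clip_paths, preset_duration_s, dst):
    """Concatenate the speed-adjusted clips and trim the result to the preset duration."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for path in clip_paths:
            f.write(f"file '{os.path.abspath(path)}'\n")
        list_file = f.name
    try:
        # Re-encode while concatenating so clips with different encoding
        # parameters still join, and cut the output to the preset duration.
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", list_file,
             "-t", str(preset_duration_s), dst],
            check=True)
    finally:
        os.unlink(list_file)
```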
  • Based on the acquired first editing parameter of the continuous video and the second editing parameter corresponding to each target video segment, the embodiment of the present disclosure first obtains the target playback speed of each target video segment by performing a logical operation on the first editing parameter and the second editing parameter, and uses the first editing parameter as the target playback speed of the video segments other than the target video segments; finally, the target video is synthesized based on the target playback speeds, which solves the problem that information is easily lost during video editing.
  • By always performing playback speed editing based on the original video clips, the loss of video information is avoided, thereby improving the user's experience of synthesizing a complete video from video clips for sharing; at the same time, the convenience of video editing is improved.
  • FIG. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure. This embodiment is applicable to processing a video, for example, to a case where the playback speed is edited multiple times during video synthesis.
  • the video processing device may be implemented in software and / or hardware, and may be configured on any terminal having a network communication function.
  • the video processing apparatus may include an editing operation acquisition module 510, a video clip editing module 520, and a video synthesis module 530.
  • The editing operation acquisition module 510 is configured to acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments;
  • the video clip editing module 520 is configured to calculate a target playback speed of each video clip according to the first editing parameter and the second editing parameter corresponding to each target video clip;
  • and the video synthesis module 530 is configured to synthesize the at least two video clips into a target video that conforms to a preset duration based on the target playback speed of each video clip.
  • In one embodiment, the video clip editing module 520 includes: a first video clip editing unit configured to perform a logical operation on the first editing parameter and the second editing parameter corresponding to each target video clip and use the result of the logical operation as the target playback speed corresponding to each target video clip; and a second video clip editing unit configured to use the first editing parameter as the target playback speed of the video clips, among the at least two video clips, other than the at least one target video clip.
  • the target playback speed of each video segment includes the playback speed of the image data and the playback speed of the audio data in each video segment.
  • In one embodiment, the apparatus further includes a video clip acquisition module configured to acquire at least two selected video clips in response to a video multi-select request triggered on the video selection interface, wherein the video selection interface is displayed by switching from a video shooting interface or a detail interface.
  • the duration of the continuous video is greater than or equal to the duration of the target video.
  • the video processing apparatus described above can execute the video processing method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects of executing the method.
  • FIG. 6 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure.
  • The terminal in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (Portable Android Device, PAD), a portable multimedia player (PMP) and an in-vehicle terminal (for example, an in-vehicle navigation terminal), as well as fixed terminals such as a digital television (TV) and a desktop computer.
  • the terminal shown in FIG. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • the terminal 800 may include one or more processors (for example, a central processing unit, a graphics processor, etc.) 801 and a memory 808 configured to store one or more programs.
  • The processor 801 may perform at least one appropriate action and process according to a program stored in a read-only memory (ROM) 802 or a program loaded from the memory 808 into a random access memory (RAM) 803. The RAM 803 also stores at least one program and data required for the operation of the terminal 800.
  • the processor 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
  • An input / output (I / O) interface 805 is also connected to the bus 804.
  • Generally, the following devices can be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device 807 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a memory 808 including, for example, a magnetic tape and a hard disk; and a communication device 809.
  • the communication device 809 may allow the terminal 800 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 6 shows the terminal 800 with various devices, it is not required to implement or include all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product including a computer program borne on a computer-readable medium, the computer program containing program code for performing a method shown in a flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 809, or installed from the memory 808, or installed from the ROM 802.
  • When the computer program is executed by the processor 801, the functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, RAM, ROM, erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM or flash memory), optical fiber, Compact Disc Read-Only Memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, device, or device.
  • a computer-readable signal medium may include a data signal that is transmitted in baseband or transmitted as part of a carrier wave, and the computer-readable signal medium carries computer-readable program code.
  • Such a propagated data signal may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, radio frequency (RF), or any suitable combination of the foregoing.
  • the computer-readable medium may be included in the terminal described above, or may exist alone without being assembled into the terminal.
  • The computer-readable medium carries one or more programs, and when the one or more programs are executed by the terminal, the terminal: acquires a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments; calculates a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and synthesizes the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not constitute a limitation on the unit itself in some cases.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Circuits (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present disclosure disclose a video processing method, apparatus, terminal and medium. The method includes: acquiring a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments; calculating a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and synthesizing the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.

Description

Video processing method and apparatus, terminal and medium
The present disclosure claims priority to the Chinese patent application No. 201811162074.X filed with the Chinese Patent Office on September 30, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of Internet technologies, for example, to a video processing method and apparatus, a terminal and a medium.
Background
With the development of network technologies, video interactive applications have become very popular in people's daily lives.
For Internet companies providing video interactive applications, satisfying user needs and providing users with a satisfactory product experience are key factors that cannot be ignored in maintaining competitiveness. For a broad user base, video interactive applications provide users with multiple types of video resources, such as comedy, humor, science, current affairs and lifestyle content; they also allow users to shoot videos in different styles anytime and anywhere, add a variety of special effects, set different types of background music, and so on.
However, video interactive application software in the related art easily causes loss of information in a video when the video is edited multiple times.
Summary
The present disclosure provides a video processing method and apparatus, a terminal and a medium, so as to avoid the loss of information during video editing.
An embodiment of the present disclosure provides a video processing method, the method including:
acquiring a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments;
calculating a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and
synthesizing the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
An embodiment of the present disclosure further provides a video processing apparatus, the apparatus including:
an editing operation acquisition module configured to acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments;
a video segment editing module configured to calculate a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and
a video synthesis module configured to synthesize the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
An embodiment of the present disclosure further provides a terminal, including:
one or more processors; and
a memory configured to store one or more programs,
wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the video processing method according to any embodiment of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video processing method according to any embodiment of the present disclosure.
In the embodiments of the present disclosure, a target playback speed of each video segment is calculated by acquiring a first editing parameter of a playback speed of a continuous video synthesized from at least two video segments and a second editing parameter of a playback speed of at least one of the at least two video segments, and a target video is then synthesized based on the target playback speed of each video segment. This solves the problem that information is easily lost during video editing and avoids the loss of video information, thereby improving the user's experience of synthesizing a complete video from video segments for video sharing; in addition, because the video interactive application itself can support video editing operations, the video editing process is simplified, the difficulty and complexity of video editing are reduced, and the convenience of video editing is improved.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a video selection interface according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a video editing interface provided with a video preview area according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described below with reference to the drawings and embodiments. The specific embodiments described herein are merely intended to explain the present disclosure, not to limit it. In addition, for ease of description, only the parts related to the present disclosure, rather than the entire structure, are shown in the drawings.
Embodiment
FIG. 1 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure. This embodiment is applicable to processing a video, for example, to a case where the playback speed is edited multiple times during video synthesis. The method may be performed by a video processing apparatus, which may be implemented in software and/or hardware and may be configured on any terminal having a network communication function, such as a smartphone, a computer or an iPad.
As shown in FIG. 1, the video processing method provided in this embodiment of the present disclosure may include the following steps.
S110: Acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments.
In the process of video synthesis using a video interactive application, a user selects, through a video selection interface of the video interactive application, multiple video clips for video synthesis, and the video clips are local video resources of the user terminal. After detecting the user's video selection operation, the video interactive application obtains the video clips selected by the user in real time. After the user finishes selecting video clips, the video interactive application can switch directly from the video selection interface to a video editing interface. In the video editing interface, the user can perform playback speed editing operations with each video clip as the editing object or with the initially synthesized continuous video as the editing object, and can preview the editing effect after editing. The editing parameters of the playback speed acquired by the terminal describe the editing operations performed by the user, including the correspondence between each editing operation and its editing object (the continuous video or a video clip) and the editing speed value. In one embodiment, the user may also perform at least one of editing operations such as cropping, resolution editing, playback picture rotation, video deletion and adding display effects in the video editing interface.
As an example, FIG. 2 shows a schematic diagram of a video selection interface provided in this embodiment; as an example, FIG. 3 shows a schematic diagram of a video editing interface provided with a video preview area in this embodiment. The video editing interface can be entered by switching directly from the video selection interface. As shown in FIG. 3, the video editing interface includes a video preview area 31, a video cropping area 32 and a video clip area 33. The video preview area 31 is used to preview video editing effects during the video editing process, for example, to preview the editing effect of the initially synthesized continuous video or the editing effect of a single video clip; the handles on the left and right sides of the video cropping area 32 move as the user drags them, so as to adjust the duration of the previewed video; the pointer in the video cropping area 32 moves as the previewed video plays; and the video clip area 33 displays the multiple video clips selected by the user, all of which can be viewed by sliding left and right in this area. The edge positions of the video editing interface, or the gap between the video preview area 31 and the video cropping area 32, can be used to place editing controls, such as a cropping control, a playback speed editing control and a resolution editing control.
In this embodiment, there is no strict order between the editing operations corresponding to the first editing parameter of the playback speed and the second editing parameter of the playback speed; that is, the user may perform the overall editing operation on the continuous video and the individual editing operations on single video clips in any order.
S120: Calculate a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment.
The video interactive application acquires the first editing parameter and the second editing parameter corresponding to each target video segment; by collecting and classifying the editing parameters, each editing operation can be mapped to the corresponding video segment, and the final playback speed of each video segment, that is, the target playback speed, can be calculated.
S130: Synthesize the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
After the target playback speed of each video clip is obtained through multiple playback speed edits, a one-time playback speed edit is performed on each video clip based on the original video data of that clip, and the edited multiple video clips are then synthesized into a target video whose duration is greater than or equal to the preset video duration.
In one embodiment, the duration of the continuous video is greater than or equal to the duration of the target video. That is, the continuous video is equivalent to a transitional intermediate video file, and the user can perform editing operations on the basis of the continuous video to obtain the target video. For example, if the duration of the continuous video is greater than the preset duration, the target video meeting the duration requirement can be obtained by video cropping.
In addition, because the video interactive application itself can support a variety of video editing operations, such as playback speed editing, the user does not need to resort to additional video synthesis tools, such as third-party applications for video synthesis, in the process of synthesizing a video, which can simplify the video editing process, reduce the difficulty and complexity of video editing, and improve the convenience of video editing.
In one embodiment, on the basis of the above technical solution, the method further includes: acquiring at least two selected video clips in response to a video multi-select request triggered on the video selection interface, wherein the video selection interface is displayed by switching from a video shooting interface or a detail interface. In one embodiment, the video selection interface is displayed when the user triggers a specific identifier on the video shooting interface or the detail interface, and the detail interface includes detail interfaces of other information, such as an audio detail interface and a video detail interface.
That is, after the user launches the video interactive application on the terminal, the user can trigger a specific identifier, such as an upload identifier or a share identifier, on the video shooting interface or the music detail interface of the video interactive application to switch from the current interface to the video selection interface, and the video selection interface can synchronously display the local video clips of the terminal to facilitate user selection. In one embodiment, if the video selection interface is entered by switching from the music detail interface, the user can use the audio of the music detail page as the background music of the synthesized video.
After the user triggers the video multi-select request, the video interactive application records the selection order of the video clips selected by the user while obtaining the information of the selected video clips, so that when the video is synthesized, the video is synthesized based on the user's selection order. In this embodiment, in the process of synthesizing the video, the video may also be synthesized according to an arrangement order of the video clips defined by the user.
During the selection process, the user can touch the position of a video thumbnail on the video selection interface to preview the video, so as to decide whether to select that video clip. In one embodiment, the duration of a video clip selected by the user is greater than or equal to a duration threshold; that is, the duration threshold determines the valid video clips that the user can select. For example, if the duration threshold is set to 3 seconds and the duration of a video clip a selected by the user is 2 seconds, a prompt box (toast) can pop up to indicate that the selected video clip a is invalid and needs to be reselected. The number of video clips that the user can select can be set adaptively, for example, to 12, which is not limited in this embodiment. When the number of video clips selected by the user reaches the preset number requirement, the other video thumbnails on the video selection interface can be covered with a white mask layer, and further user selection is no longer supported.
As shown in FIG. 2, a multi-select control 21 is provided on the video selection interface. After the user triggers the video multi-select request, each time a video clip is selected, a number is displayed in the upper left corner of the video thumbnail corresponding to that video clip as a record of the user's selection of the video clip. In this embodiment, the multi-select control 21 can be placed at any position along the edge of the interface; FIG. 2 is only an example in which the multi-select control 21 is placed at the lower left.
In the video synthesis process, in addition to editing the playback speed of the video, at least one of editing operations such as cropping, resolution editing, playback picture rotation, video deletion and adding display effects can also be performed. The above editing operations involve overall editing with the synthesized continuous video as the editing object and individual editing with each video clip as the editing object. The effects of video editing, including playback speed editing effects, can be displayed to the user through the video preview area of the video editing interface; that is, after the user's video editing operation ends, the video interactive application generates a video preview effect corresponding to the editing operation for the user to preview, and the effect is also saved in the terminal cache.
In this embodiment of the present disclosure, the first editing parameter of the playback speed of the initially synthesized continuous video and the second editing parameter of the playback speed of at least one video clip are first acquired, the target playback speed of each video clip is then calculated according to the first editing parameter and the second editing parameter, and the target video is finally synthesized based on the target playback speed of each video clip. This solves the problem that information is easily lost during video editing: by always performing playback speed editing based on the original video clips, the loss of video information is avoided, thereby improving the user's experience of synthesizing a complete video from video clips for video sharing. Moreover, because the video interactive application itself can support multiple video editing operations, the user does not need to rely on other professional video synthesis tools in the process of synthesizing a video, which improves the convenience of video editing and reduces the difficulty and complexity of video editing.
FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present disclosure, described on the basis of the foregoing embodiment.
As shown in FIG. 4, the video processing method provided in this embodiment of the present disclosure may include the following steps.
S210: Acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments.
S220: Perform a logical operation on the first editing parameter and the second editing parameter corresponding to each target video segment, and use the result of the logical operation as the target playback speed corresponding to each target video segment.
If the user edits the overall playback speed of the continuous video, it is equivalent to performing the same playback speed editing operation on all the video clips included in the continuous video. Therefore, according to the correspondence between the playback speed edits and each video clip, a logical operation is performed on the first editing parameter and the second editing parameter to obtain the combined playback speed editing operation corresponding to each video clip, and the video clips can then be edited one by one based on the original video data of each clip.
For example, for video clips A and B, the user first applies a 1/3x speed adjustment to the continuous video C initially synthesized from video clips A and B, and then applies a 2x speed adjustment to video clip A alone; the video interactive application separately records the slow-down editing operation performed by the user on the continuous video C and the speed-up editing operation performed by the user on video clip A. Through a logical operation on the editing operations, it can be determined that after the two editing operations the target playback speed of video clip A is 2/3 of the original playback speed of video clip A and the target playback speed of video clip B is 1/3 of the original playback speed of video clip B; then, based on the original video data of video clips A and B, a 2/3x speed adjustment and a 1/3x speed adjustment are performed, respectively. In the related art, in the above case, the final playback speed of video clip A is not obtained based on the original video data of video clip A, but by editing again on top of the effect of the first editing operation, that is, a 2x speed adjustment is performed on the 1/3x-speed video clip A to obtain the final playback speed of video clip A. It can be seen that the method of this embodiment not only avoids the loss of video information, but also avoids the accumulation of information loss over multiple editing passes, so the fidelity of the video data is higher.
S230: Use the first editing parameter as the target playback speed of the video segments, among the at least two video segments, other than the target video segments.
Not all video clips included in the continuous video undergo a further playback speed edit; for a video clip whose playback speed has been edited only once, the target playback speed of that clip is the playback speed of the continuous video after editing. In this embodiment, there is no strict limitation on the execution order between operations S220 and S230, and the execution order example shown in FIG. 2 should not be taken as a limitation on this embodiment.
In one embodiment, the target playback speed of each video segment includes the playback speed of the image data and the playback speed of the audio data in that video segment. The logical operation on the editing parameters in this embodiment applies to both the image data and the audio data in the video, so that the loss of image data and audio data during video editing can be avoided at the same time.
S240: Synthesize the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
After the target playback speed resulting from the multiple playback speed edits is determined on the basis of the original video of each video clip, a target video whose duration is greater than or equal to the preset video duration can be synthesized.
Based on the acquired first editing parameter of the continuous video and the second editing parameter corresponding to each target video segment, this embodiment of the present disclosure first obtains the target playback speed of each target video segment by performing a logical operation on the first editing parameter and the second editing parameter, and uses the first editing parameter as the target playback speed of the video segments other than the target video segments; finally, the target video is synthesized based on the target playback speeds. This solves the problem that information is easily lost during video editing: by always performing playback speed editing based on the original video clips, the loss of video information is avoided, thereby improving the user's experience of synthesizing a complete video from video clips for video sharing; at the same time, the convenience of video editing is improved.
FIG. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure. This embodiment is applicable to processing a video, for example, to a case where the playback speed is edited multiple times during video synthesis. The video processing apparatus may be implemented in software and/or hardware and may be configured on any terminal having a network communication function.
As shown in FIG. 5, the video processing apparatus provided in this embodiment of the present disclosure may include an editing operation acquisition module 510, a video segment editing module 520 and a video synthesis module 530. In this embodiment, the editing operation acquisition module 510 is configured to acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments; the video segment editing module 520 is configured to calculate a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and the video synthesis module 530 is configured to synthesize the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
In one embodiment, the video segment editing module 520 includes: a first video segment editing unit configured to perform a logical operation on the first editing parameter and the second editing parameter corresponding to each target video segment and use the result of the logical operation as the target playback speed corresponding to each target video segment; and a second video segment editing unit configured to use the first editing parameter as the target playback speed of the video segments, among the at least two video segments, other than the at least one target video segment.
In one embodiment, the target playback speed of each video segment includes the playback speed of the image data and the playback speed of the audio data in that video segment.
In one embodiment, the apparatus further includes: a video segment acquisition module configured to acquire at least two selected video segments in response to a video multi-select request triggered on the video selection interface, wherein the video selection interface is displayed by switching from a video shooting interface or a detail interface.
In one embodiment, the duration of the continuous video is greater than or equal to the duration of the target video.
The above video processing apparatus can perform the video processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the performed method.
Embodiment
FIG. 6 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure. Referring to FIG. 6, FIG. 6 shows a schematic structural diagram of a terminal 800 suitable for implementing the embodiments of the present disclosure. The terminal in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (Portable Android Device, PAD), a portable multimedia player (Portable Media Player, PMP) and an in-vehicle terminal (for example, an in-vehicle navigation terminal), and fixed terminals such as a digital television (TV) and a desktop computer. The terminal shown in FIG. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the terminal 800 may include one or more processors (for example, a central processing unit and a graphics processor) 801 and a memory 808 configured to store one or more programs. The processor 801 may perform at least one appropriate action and process according to a program stored in a read-only memory (ROM) 802 or a program loaded from the memory 808 into a random access memory (RAM) 803. The RAM 803 also stores at least one program and data required for the operation of the terminal 800. The processor 801, the ROM 802 and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices can be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; an output device 807 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; a memory 808 including, for example, a magnetic tape and a hard disk; and a communication device 809. The communication device 809 may allow the terminal 800 to perform wireless or wired communication with other devices to exchange data. Although FIG. 6 shows the terminal 800 with various devices, it is not required to implement or include all of the devices shown; more or fewer devices may alternatively be implemented or provided.
According to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 809, or installed from the memory 808, or installed from the ROM 802. When the computer program is executed by the processor 801, the above functions defined in the methods of the embodiments of the present disclosure are performed.
The computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, and the computer-readable signal medium carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable signal medium may send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to an electrical wire, an optical cable, radio frequency (RF) and the like, or any suitable combination of the foregoing.
The computer-readable medium may be included in the terminal described above, or may exist alone without being assembled into the terminal.
The computer-readable medium carries one or more programs which, when executed by the terminal, cause the terminal to: acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment includes at least one video segment of the at least two video segments; calculate a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and synthesize the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical function. Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not constitute a limitation on the unit itself in some cases.

Claims (10)

  1. A video processing method, comprising:
    acquiring a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment comprises at least one video segment of the at least two video segments;
    calculating a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and
    synthesizing the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
  2. The method according to claim 1, wherein calculating the target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment comprises:
    performing a logical operation on the first editing parameter and the second editing parameter corresponding to each target video segment, and using a result of the logical operation as the target playback speed of each target video segment; and
    using the first editing parameter as the target playback speed of video segments, among the at least two video segments, other than the at least one target video segment.
  3. The method according to claim 1 or 2, wherein the target playback speed of each video segment comprises a playback speed of image data and a playback speed of audio data in each video segment.
  4. The method according to claim 1, 2 or 3, further comprising:
    acquiring the selected at least two video segments in response to a video multi-select request triggered on a video selection interface, wherein the video selection interface is displayed by switching from a video shooting interface or a detail interface.
  5. The method according to any one of claims 1 to 4, wherein a duration of the continuous video is greater than or equal to a duration of the target video.
  6. A video processing apparatus, comprising:
    an editing operation acquisition module configured to acquire a first editing parameter of a playback speed of a continuous video and a second editing parameter of a playback speed of each target video segment in at least one target video segment, wherein the continuous video is synthesized from at least two video segments, and the at least one target video segment comprises at least one video segment of the at least two video segments;
    a video segment editing module configured to calculate a target playback speed of each video segment according to the first editing parameter and the second editing parameter corresponding to each target video segment; and
    a video synthesis module configured to synthesize the at least two video segments into a target video that conforms to a preset duration based on the target playback speed of each video segment.
  7. The apparatus according to claim 6, wherein the video segment editing module comprises:
    a first video segment editing unit configured to perform a logical operation on the first editing parameter and the second editing parameter corresponding to each target video segment, and use a result of the logical operation as the target playback speed of each target video segment; and
    a second video segment editing unit configured to use the first editing parameter as the target playback speed of video segments, among the at least two video segments, other than the at least one target video segment.
  8. The apparatus according to claim 6 or 7, wherein the target playback speed of each video segment comprises a playback speed of image data and a playback speed of audio data in each video segment.
  9. A terminal, comprising:
    at least one processor; and
    a memory configured to store at least one program,
    wherein when the at least one program is executed by the at least one processor, the at least one processor implements the video processing method according to any one of claims 1 to 5.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the video processing method according to any one of claims 1 to 5.
PCT/CN2018/124784 2018-09-30 2018-12-28 Video processing method and apparatus, terminal and medium WO2020062685A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020549860A JP7038226B2 (ja) 2018-09-30 2018-12-28 ビデオ処理方法、装置、端末および媒体
GB2017355.5A GB2589731B (en) 2018-09-30 2018-12-28 Video processing method and apparatus, terminal and medium
US17/021,245 US11037600B2 (en) 2018-09-30 2020-09-15 Video processing method and apparatus, terminal and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811162074.X 2018-09-30
CN201811162074.XA CN109151595B (zh) 2018-09-30 2018-09-30 Video processing method and apparatus, terminal and medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/021,245 Continuation US11037600B2 (en) 2018-09-30 2020-09-15 Video processing method and apparatus, terminal and medium

Publications (1)

Publication Number Publication Date
WO2020062685A1 true WO2020062685A1 (zh) 2020-04-02

Family

ID=64810606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124784 WO2020062685A1 (zh) 2018-09-30 2018-12-28 视频处理方法、装置、终端和介质

Country Status (5)

Country Link
US (1) US11037600B2 (zh)
JP (1) JP7038226B2 (zh)
CN (1) CN109151595B (zh)
GB (1) GB2589731B (zh)
WO (1) WO2020062685A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110177298B (zh) * 2019-05-27 2021-03-26 湖南快乐阳光互动娱乐传媒有限公司 一种基于语音的视频倍速播放方法及系统
CN110324718B (zh) * 2019-08-05 2021-09-07 北京字节跳动网络技术有限公司 音视频生成方法、装置、电子设备及可读介质
CN112700797B (zh) * 2019-10-22 2022-08-16 西安诺瓦星云科技股份有限公司 播放清单编辑方法、装置及系统和计算机可读存储介质
CN110798744A (zh) * 2019-11-08 2020-02-14 北京字节跳动网络技术有限公司 多媒体信息处理方法、装置、电子设备及介质
CN111314639A (zh) * 2020-02-28 2020-06-19 维沃移动通信有限公司 一种视频录制方法及电子设备
CN111770386A (zh) * 2020-05-29 2020-10-13 维沃移动通信有限公司 视频处理方法、视频处理装置及电子设备
CN112004136A (zh) * 2020-08-25 2020-11-27 广州市百果园信息技术有限公司 一种视频剪辑的方法、装置、设备和存储介质
CN115037872B (zh) * 2021-11-30 2024-03-19 荣耀终端有限公司 视频处理方法和相关装置
CN118590723A (zh) * 2023-03-01 2024-09-03 北京字跳网络技术有限公司 视频变速播放方法、装置、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105307028A (zh) * 2015-10-26 2016-02-03 新奥特(北京)视频技术有限公司 一种针对多个镜头视频素材的视频编辑方法和装置
CN107256117A (zh) * 2017-04-19 2017-10-17 上海卓易电子科技有限公司 一种视频编辑的方法及其移动终端
US9838731B1 (en) * 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing with audio mixing option
CN108521866A (zh) * 2017-12-29 2018-09-11 深圳市大疆创新科技有限公司 一种视频获取方法、控制终端、飞行器及系统

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710945A (zh) * 2009-11-30 2010-05-19 上海交通大学 基于粒子纹理的流体视频合成方法
KR101328199B1 (ko) * 2012-11-05 2013-11-13 넥스트리밍(주) 동영상 편집 방법 및 그 단말기 그리고 기록매체
US11184580B2 (en) * 2014-05-22 2021-11-23 Microsoft Technology Licensing, Llc Automatically curating video to fit display time
CN105959678B (zh) * 2016-04-20 2018-04-10 杭州当虹科技有限公司 一种基于音视频解码器hash特征值检测的高效回归测试方法
CN106804002A (zh) * 2017-02-14 2017-06-06 北京时间股份有限公司 一种视频处理系统及方法
KR101938667B1 (ko) * 2017-05-29 2019-01-16 엘지전자 주식회사 휴대 전자장치 및 그 제어 방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105307028A (zh) * 2015-10-26 2016-02-03 新奥特(北京)视频技术有限公司 一种针对多个镜头视频素材的视频编辑方法和装置
US9838731B1 (en) * 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing with audio mixing option
CN107256117A (zh) * 2017-04-19 2017-10-17 上海卓易电子科技有限公司 一种视频编辑的方法及其移动终端
CN108521866A (zh) * 2017-12-29 2018-09-11 深圳市大疆创新科技有限公司 一种视频获取方法、控制终端、飞行器及系统

Also Published As

Publication number Publication date
CN109151595B (zh) 2019-10-18
GB2589731B (en) 2023-05-10
JP7038226B2 (ja) 2022-03-17
GB2589731A (en) 2021-06-09
US20200411053A1 (en) 2020-12-31
US11037600B2 (en) 2021-06-15
GB202017355D0 (en) 2020-12-16
CN109151595A (zh) 2019-01-04
JP2021506196A (ja) 2021-02-18

Similar Documents

Publication Publication Date Title
WO2020062685A1 (zh) Video processing method and apparatus, terminal and medium
WO2020062684A1 (zh) Video processing method and apparatus, terminal and medium
US11670339B2 (en) Video acquisition method and device, terminal and medium
JP7139515B2 (ja) 動画撮像方法、動画撮像装置、電子機器、およびコンピューター読取可能な記憶媒体
CN108989691B (zh) 视频拍摄方法、装置、电子设备及计算机可读存储介质
US11178448B2 (en) Method, apparatus for processing video, electronic device and computer-readable storage medium
WO2020077854A1 (zh) 视频生成的方法、装置、电子设备及计算机存储介质
WO2020029526A1 (zh) 视频特效添加方法、装置、终端设备及存储介质
WO2021073315A1 (zh) 视频文件的生成方法、装置、终端及存储介质
WO2020233142A1 (zh) 多媒体文件播放方法、装置、电子设备和存储介质
WO2021073368A1 (zh) 视频文件的生成方法、装置、终端及存储介质
BR112013004857B1 (pt) Método implementado por computador e sistema para controlar, usando um dispositivo móvel, apresentação de conteúdo de mídia realizado por um cliente de mídia, e método implementado por computador para apresentar conteúdo de mídia a partir de um cliente de mídia em um dispositivo de exibição
JP2024502664A (ja) ビデオ生成方法、装置、電子機器および記憶媒体
WO2021083145A1 (zh) 视频处理的方法、装置、终端及存储介质
WO2023024921A1 (zh) 视频交互方法、装置、设备及介质
WO2020220773A1 (zh) 图片预览信息的显示方法、装置、电子设备及计算机可读存储介质
US12019669B2 (en) Method, apparatus, device, readable storage medium and product for media content processing
JP2023540753A (ja) ビデオ処理方法、端末機器及び記憶媒体
WO2023179424A1 (zh) 弹幕添加方法、装置、电子设备和存储介质
US20200404214A1 (en) An apparatus and associated methods for video presentation
WO2022194070A1 (zh) 应用程序的视频处理方法和电子设备
US20240195937A1 (en) Method, device, storage medium and program product for video recording
WO2024104333A1 (zh) 演播画面的处理方法、装置、电子设备及存储介质
WO2020062743A1 (zh) Video bit rate adjustment method and apparatus, terminal and storage medium
CN109710779A (zh) Multimedia file clipping method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18935039

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020549860

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 202017355

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20181228

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.07.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18935039

Country of ref document: EP

Kind code of ref document: A1