WO2019140621A1 - Video processing method and terminal device - Google Patents

Video processing method and terminal device

Info

Publication number
WO2019140621A1
WO2019140621A1 (PCT application PCT/CN2018/073337)
Authority
WO
WIPO (PCT)
Prior art keywords
video
target
target video
segments
terminal device
Prior art date
Application number
PCT/CN2018/073337
Other languages
English (en)
French (fr)
Inventor
宋启恒
李欣宇
刘江辉
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202210869709.XA (CN115103166A)
Priority to PCT/CN2018/073337 (WO2019140621A1)
Priority to CN201880031293.6A (CN110612721B)
Publication of WO2019140621A1
Priority to US16/893,156 (US11587317B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Definitions

  • the embodiments of the present invention relate to the field of drones, and in particular, to a video processing method and a terminal device.
  • the embodiment of the invention provides a video processing method and a terminal device to improve video processing efficiency.
  • a first aspect of the embodiments of the present invention provides a video processing method, including:
  • a second aspect of the embodiments of the present invention provides a terminal device, including: a memory and a processor;
  • the memory is for storing program code
  • the processor calls the program code to perform the following operations when the program code is executed:
  • the video processing method and the terminal device provided by this embodiment obtain video data through the terminal device, obtain a plurality of video segments from the video data, and process the plurality of video segments according to preset parameters to obtain a target video, so that the user does not need to perform post-editing on the video data, which saves the cumbersome post-editing process and improves video processing efficiency.
  • FIG. 1 is a flowchart of a video processing method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a communication system according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a video segment according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a video processing method according to another embodiment of the present invention.
  • FIG. 5 is a flowchart of a video processing method according to another embodiment of the present invention.
  • FIG. 6 is a structural diagram of a terminal device according to an embodiment of the present invention.
  • 60: terminal device; 61: memory; 62: processor; 63: communication interface.
  • when a component is referred to as being "fixed" to another component, it can be directly on the other component or an intervening component may be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or an intervening component may be present at the same time.
  • FIG. 1 is a flowchart of a video processing method according to an embodiment of the present invention. As shown in FIG. 1, the method in this embodiment may include:
  • Step S101: Acquire video data.
  • the execution body of the method of this embodiment may be a terminal device, which may be a smartphone, a tablet computer, a ground control station, a laptop computer, or the like; optionally, the smartphone, tablet computer, ground control station, laptop computer, or the like has a shooting function. In addition, the terminal device may also be a photographing device such as a camera or a video camera.
  • the terminal device acquires video data, which may be taken by a shooting device carried by the drone or by the terminal device itself.
  • the acquiring the video data includes: acquiring video data captured by the photographing device.
  • the drone 21 includes a processor 22, a communication interface 23, a pan/tilt head 24, and a photographing device 25.
  • the processor 22 may specifically be a flight controller of the drone 21 or a general purpose or dedicated processor.
  • the photographing device 25 is mounted on the body of the drone 21 through the gimbal 24 and is used to capture video data; the processor 22 can acquire the video data captured by the photographing device 25 and transmit it to the terminal device 26 on the ground through the communication interface 23, and the terminal device 26 receives, via the antenna 27, the video data captured by the photographing device 25 and transmitted from the communication interface 23.
  • the processor within the terminal device 26 can acquire video data captured by the photographing device 25.
  • alternatively, the terminal device 26 itself has a photographing function; for example, the terminal device 26 is provided with a camera, and the processor within the terminal device 26 can acquire video data captured by the camera on the terminal device 26.
  • Step S102: Acquire a plurality of video segments from the video data.
  • after the processor in the terminal device 26 acquires the video data, it can further obtain a plurality of video segments from the video data. One feasible implementation is that the processor decomposes the video data into multiple video segments.
  • in other embodiments, the method further includes: receiving flight parameter information of the drone, or motion parameter information of the photographing device itself, sent by the drone, during the process of capturing the video data by the photographing device.
  • as shown in FIG. 2, while the photographing device 25 is capturing video data, the processor 22 may acquire the flight parameter information of the drone 21; optionally, the flight parameter information of the drone includes at least one of the following: the flight speed of the drone, the acceleration of the drone, the attitude of the drone, the attitude of the gimbal of the drone, and the position information of the drone.
  • while the drone 21 transmits the video data captured by the photographing device 25 to the terminal device 26, the drone 21 may also transmit the flight parameter information of the drone 21 during the capture of the video data to the terminal device 26; that is, while receiving the video data captured by the photographing device 25 and transmitted by the drone 21, the terminal device 26 can also receive the flight parameter information of the drone 21 during the capture of the video data.
  • alternatively, while the photographing device 25 is capturing video data, the processor 22 may further acquire the motion parameter information of the photographing device 25; the motion parameter information of the photographing device includes at least one of the following: the attitude of the photographing device, the motion speed of the photographing device, the acceleration of the photographing device, and the position information of the photographing device.
  • while the drone 21 transmits the video data captured by the photographing device 25 to the terminal device 26, the drone 21 can also transmit the motion parameter information of the photographing device 25 itself during the capture of the video data to the terminal device 26.
  • alternatively, the terminal device 26 itself has a photographing function, and while acquiring the video data captured by the camera on the terminal device 26, the processor in the terminal device 26 can also acquire the motion parameter information of the terminal device 26 itself, for example, one or more of the attitude of the terminal device 26, the motion speed of the terminal device 26, the acceleration of the terminal device 26, and the position information of the terminal device 26.
  • specifically, the acquiring a plurality of video segments from the video data includes: obtaining a plurality of video segments from the video data according to the flight parameter information of the drone or the motion parameter information of the photographing device during the capture of the video data by the photographing device.
  • in FIG. 3, 30 denotes the video data captured by the photographing device 25, and t1, t2, t3, t4, t5, t6, t7, and t8 respectively denote acquisition times of the flight parameter information of the drone; optionally, t1, t2, t3, t4, t5, t6, t7, and t8 may be equally spaced or unequally spaced. This is only a schematic illustration and does not limit the acquisition times or sampling intervals of the flight parameter information of the drone while the photographing device captures the video data.
  • optionally, t1, t2, t3, t4, t5, t6, t7, and t8 may divide the video data 30 into a plurality of video segments; further, a plurality of high-quality video segments may be selected from the divided video segments according to the flight parameter information of the drone in different sampling intervals. For example, if, within a certain sampling interval, the flight speed of the drone is within a preset speed range, the acceleration of the drone is within a preset acceleration range, the drone flies relatively steadily, and the gimbal of the drone rotates smoothly, the video segment within that sampling interval can be determined to be a high-quality video segment.
  • in other embodiments, in addition to the flight speed, acceleration, and attitude of the drone and the attitude of the gimbal, information such as the position information of the drone (for example, GPS information), the acquisition information of a vision module, and the image frame information in the video data may also be combined to select a plurality of high-quality video segments from the divided video segments.
  • in other embodiments, as shown in FIG. 3, t1, t2, t3, t4, t5, t6, t7, and t8 may also denote acquisition times of the motion parameter information of the photographing device; t1, t2, t3, t4, t5, t6, t7, and t8 divide the video data 30 into a plurality of video segments, and further, a plurality of high-quality video segments may be selected from the divided video segments according to the motion parameter information of the photographing device in different sampling intervals (a minimal code sketch of this selection follows below).
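  • As an illustration only (not part of the patent text), the following Python sketch shows one way such interval-based selection could look. The `FlightSample` type and the threshold constants are hypothetical stand-ins for the "preset speed range", "preset acceleration range", and gimbal-smoothness checks described above.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FlightSample:
    t: float            # acquisition time (seconds into the recording), e.g. t1..t8
    speed: float        # flight speed of the drone (m/s)
    accel: float        # acceleration of the drone (m/s^2)
    gimbal_rate: float  # gimbal rotation rate (deg/s), used as a smoothness proxy

# Hypothetical preset ranges; the patent only refers to "preset" speed/acceleration ranges.
MAX_SPEED = 15.0
MAX_ACCEL = 2.0
MAX_GIMBAL_RATE = 10.0

def select_segments(samples: List[FlightSample]) -> List[Tuple[float, float]]:
    """Split the video at the sampling times and keep intervals with stable flight parameters."""
    segments = []
    for a, b in zip(samples, samples[1:]):
        stable = (
            a.speed <= MAX_SPEED
            and abs(a.accel) <= MAX_ACCEL
            and abs(a.gimbal_rate) <= MAX_GIMBAL_RATE
        )
        if stable:
            segments.append((a.t, b.t))  # (start, end) of a high-quality segment
    return segments

# Example: eight equally spaced sampling times, as in FIG. 3.
samples = [FlightSample(t=i * 5.0, speed=8.0, accel=0.5, gimbal_rate=3.0) for i in range(8)]
print(select_segments(samples))
```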
  • Step S103: Process the plurality of video segments according to preset parameters to obtain a target video.
  • processing the multiple video segments according to the preset parameters to obtain the target video includes the following feasible implementation manners:
  • a feasible implementation manner is: identifying, through machine learning, the scene corresponding to the video data; and the processing the plurality of video segments according to preset parameters to obtain a target video includes: processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain a target video.
  • as shown in FIG. 2, the terminal device 26 may identify the scene corresponding to the video data through machine learning, according to one or more of the flight parameter information of the drone 21, the motion parameter information of the photographing device 25, and the image frame information in the video data.
  • the scene corresponding to the video data may include at least one of the following: landscape, city, coast, sky, and portrait.
  • the terminal device 26 can obtain, from the plurality of preset parameters already stored on the terminal device 26, the preset parameters corresponding to the scene recognized through machine learning. For example, the terminal device 26 may store the preset parameters corresponding to each different scene; if the terminal device 26 recognizes through machine learning that the scene of the video data captured by the photographing device 25 is a landscape, the terminal device 26 acquires the preset parameters corresponding to the landscape and processes the high-quality video segments obtained in the above steps according to those preset parameters to obtain the target video (a sketch of such a scene-recognition step is given below).
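  • Purely as a sketch of what the "machine learning" scene recognition could look like, the example below trains a generic classifier over aggregated flight-parameter and image-frame features. The patent does not specify a model or feature set, so the feature extraction, the random placeholder training data, and the `SCENES` labels are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SCENES = ["landscape", "city", "coast", "sky", "portrait"]

def extract_features(flight_params: np.ndarray, frame_stats: np.ndarray) -> np.ndarray:
    """Concatenate averaged flight parameters with simple per-frame image statistics."""
    return np.concatenate([flight_params.mean(axis=0), frame_stats.mean(axis=0)])

# Placeholder training data; in practice this would come from labelled recordings.
X_train = np.random.rand(100, 8)
y_train = np.random.choice(len(SCENES), size=100)
clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# At run time, classify the current recording from its own parameters and frames.
features = extract_features(np.random.rand(20, 4), np.random.rand(20, 4))
scene = SCENES[int(clf.predict(features.reshape(1, -1))[0])]
print("recognized scene:", scene)
```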
  • another feasible implementation manner is: detecting a scene setting operation of the user; determining the scene corresponding to the video data according to the detected scene setting operation; and the processing the plurality of video segments according to preset parameters to obtain a target video includes: processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain a target video.
  • for example, the user can also set the scene of the video data; for example, the user can set the scene of the video data to "landscape" on the terminal device, and the terminal device 26 then acquires the preset parameters corresponding to the landscape and processes the high-quality video segments obtained in the above steps according to those preset parameters to obtain the target video.
  • a further feasible implementation manner is: detecting a scene switching operation of the user; switching the scene according to the detected scene switching operation; and the processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain a target video includes: processing the plurality of video segments according to the preset parameters corresponding to the switched scene to obtain a target video.
  • for example, the user can also switch the scene of the video data. Suppose the terminal device 26 recognizes through machine learning that the scene corresponding to the video data is a landscape, while the user considers that the scene corresponding to the video data is a coast; the user can then switch the scene corresponding to the video data through the terminal device, and the terminal device 26 obtains the preset parameters corresponding to the coast and processes the high-quality video segments obtained in the above steps according to those preset parameters to obtain the target video.
  • each of the preset parameters may be a set of solutions for processing a plurality of video segments.
  • optionally, the preset parameters include at least one of the following: audio information such as background music, filter information, target attributes of video segments, transition information of video segments, and a target duration of the target video.
  • for example, the terminal device 26 may select, according to the target attributes of the video segments, at least one target segment that matches the target attributes from the plurality of video segments, process the at least one target segment according to the key points of the background music to obtain the target video, and perform image processing on the images in the target video according to the filter information, or/and make adjacent target segments transition at the key points in the transition manner indicated by the transition information, or/and adjust the duration of the target video to the target duration.
  • in other embodiments, the information included in a preset parameter set is changeable. For example, the audio information (such as background music) included in the same preset parameter set may comprise multiple pieces; the user may select one piece of background music from them, or the terminal device may select a default background music from them. A sketch of what such a preset parameter set might look like is given below.
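  • The sketch below shows one possible shape for a stored "preset parameter" set and a per-scene lookup; every field name and value here is an illustrative assumption rather than the patent's definition.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TargetAttribute:
    target_duration: float            # desired segment length (seconds)
    speed_range: Tuple[float, float]  # desired drone flight-speed range (m/s)

@dataclass
class PresetParameters:
    background_music: List[str]     # candidate tracks; the user may pick one, or a default is used
    filter_name: str                # filter applied to the target video
    transition: str                 # transition mode between adjacent segments
    target_video_duration: float    # target duration of the final video (seconds)
    segment_attributes: List[TargetAttribute] = field(default_factory=list)

PRESETS = {
    "landscape": PresetParameters(["calm_piano.mp3", "ambient.mp3"], "warm", "crossfade", 30.0),
    "coast": PresetParameters(["waves_theme.mp3"], "cool", "wipe", 25.0),
}

def preset_for(scene: str) -> Optional[PresetParameters]:
    return PRESETS.get(scene)

print(preset_for("landscape").filter_name)
```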
  • in this embodiment, the terminal device acquires video data, obtains a plurality of video segments from the video data, and processes the plurality of video segments according to preset parameters to obtain a target video; the user does not need to perform post-editing on the video data, which saves the cumbersome post-editing process and improves video processing efficiency.
  • FIG. 4 is a flowchart of a video processing method according to another embodiment of the present invention.
  • the preset parameters include target attributes of at least one video segment.
  • the target attribute of the video clip may include the target duration of the video clip and the target flight parameter information of the drone corresponding to the video clip.
  • Each of the plurality of video segments obtained according to step S102 corresponds to an actual attribute, for example, an actual duration of the video segment, and actual flight parameter information of the drone corresponding to the video segment.
  • the processing the plurality of video segments according to the preset parameters to obtain the target video may include:
  • Step S401: Determine, according to actual attributes of each of the plurality of video segments, at least one target segment whose actual attributes match the target attributes from the plurality of video segments.
  • as shown in FIG. 3, assume that the video segment 12 between t1 and t2, the video segment 32 between t2 and t3, the video segment 45 between t4 and t5, and the video segment 67 between t6 and t7 are high-quality video segments, and that the preset parameters include the target attributes of three video segments, for example, the target attributes of a first video segment, the target attributes of a second video segment, and the target attributes of a third video segment. The terminal device then needs to select, from the video segment 12, the video segment 32, the video segment 45, and the video segment 67, three target segments whose actual attributes match the target attributes. For example, if the actual attributes of the video segment 12 match the target attributes of the first video segment, the actual attributes of the video segment 32 match the target attributes of the second video segment, and the actual attributes of the video segment 45 match the target attributes of the third video segment, then the video segment 12, the video segment 32, and the video segment 45 are the target segments (a minimal matching sketch follows below).
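  • The following is a minimal sketch of the attribute-matching step just described; the matching rule used here (flight-speed range membership plus closest duration) is an assumption, since the patent only requires that actual attributes "match" the target attributes.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Segment:
    name: str
    duration: float     # actual duration (seconds)
    mean_speed: float   # actual mean flight speed during the segment (m/s)

@dataclass
class TargetAttribute:
    target_duration: float
    speed_range: Tuple[float, float]

def match_segments(segments: List[Segment], targets: List[TargetAttribute]) -> List[Optional[Segment]]:
    chosen: List[Optional[Segment]] = []
    remaining = list(segments)
    for tgt in targets:
        candidates = [s for s in remaining
                      if tgt.speed_range[0] <= s.mean_speed <= tgt.speed_range[1]]
        if not candidates:
            chosen.append(None)
            continue
        best = min(candidates, key=lambda s: abs(s.duration - tgt.target_duration))
        chosen.append(best)
        remaining.remove(best)  # each segment is used at most once
    return chosen

segments = [Segment("12", 4.0, 6.0), Segment("32", 6.5, 9.0),
            Segment("45", 3.0, 5.5), Segment("67", 8.0, 12.0)]
targets = [TargetAttribute(4.0, (4.0, 8.0)),
           TargetAttribute(6.0, (8.0, 10.0)),
           TargetAttribute(3.0, (4.0, 8.0))]
print([s.name if s else None for s in match_segments(segments, targets)])  # ['12', '32', '45']
```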
  • Step S402: Process the at least one target segment to obtain a target video.
  • the terminal device processes the video segment 12, the video segment 32, and the video segment 45 to obtain a target video.
  • optionally, the preset parameters further include audio information; the processing the at least one target segment to obtain a target video includes: processing the at least one target segment according to key points in the audio information to obtain a target video, in which adjacent target segments in the target video transition at the key points.
  • for example, the terminal device may identify key music points in the audio information included in the preset parameters according to accented beats, rhythm changes, and the like; a key music point may be referred to simply as a key point.
  • further, the terminal device processes the video segment 12, the video segment 32, and the video segment 45 according to the key points in the audio information to obtain the target video. For example, in the target video, the video segment 12 is adjacent to the video segment 32 and the video segment 32 is adjacent to the video segment 45; optionally, the video segment 12 and the video segment 32, as well as the video segment 32 and the video segment 45, transition at key points of the audio information (a minimal alignment sketch follows below).
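  • The sketch below illustrates aligning segment boundaries with key points of the background music. It assumes the key points are already available as timestamps (their detection from accents and rhythm changes is left abstract) and uses a simple trimming rule of its own.

```python
from typing import List, Tuple

def cut_to_key_points(durations: List[float], key_points: List[float]) -> List[Tuple[float, float]]:
    """Place segments back to back on the music timeline, trimming each so that it ends
    at the latest key point that falls within it (where the transition will occur)."""
    placed = []
    start = 0.0
    for d in durations:
        natural_end = start + d
        candidates = [k for k in key_points if start < k <= natural_end]
        end = max(candidates) if candidates else natural_end
        placed.append((start, end))
        start = end
    return placed

# Segments 12, 32 and 45 with their actual durations, and key points of the music.
print(cut_to_key_points([4.0, 6.5, 3.0], key_points=[3.5, 9.0, 12.5]))
# -> [(0.0, 3.5), (3.5, 9.0), (9.0, 12.0)]
```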
  • processing the at least one target segment according to the key points in the audio information to obtain the target video includes the following feasible implementation manners:
  • a feasible implementation manner is: detecting an audio information selection operation of the user; determining the audio information selected by the user according to the detected audio information selection operation; and the processing the at least one target segment according to the key points in the audio information to obtain the target video includes: processing the at least one target segment according to key points in the audio information selected by the user to obtain a target video.
  • for example, the user may select the audio information in the preset parameters; for example, the preset parameters may correspond to multiple pieces of audio information, and the user may select one piece from them, or the user may choose his or her favorite audio information, such as background music.
  • specifically, the terminal device may detect the user's selection operation on the audio information, determine the audio information selected by the user according to the selection operation, further identify the key points in the audio information selected by the user, and process the video segment 12, the video segment 32, and the video segment 45 according to the key points in the audio information selected by the user to obtain the target video.
  • another feasible implementation manner is: detecting a switching operation by the user on an attribute of the audio information; determining the audio information after the attribute switching according to the detected switching operation; and the processing the at least one target segment according to the key points in the audio information to obtain the target video includes: processing the at least one target segment according to key points in the audio information after the attribute switching to obtain a target video.
  • the user can also switch the attributes of the audio information in the preset parameters.
  • for example, the user can control the terminal device to process the video segment 12, the video segment 32, and the video segment 45 with the entire piece of background music to obtain the target video, or can control the terminal device to extract a portion of the audio information whose rhythm changes quickly, a portion whose rhythm changes slowly, a portion that changes quickly first and then slowly, or a portion that changes slowly first and then quickly, and process the video segment 12, the video segment 32, and the video segment 45 with that portion to obtain the target video.
  • a further feasible implementation manner is: the preset parameters further include transition information of adjacent target segments; and the processing the at least one target segment according to the key points in the audio information to obtain a target video includes: processing the at least one target segment according to the key points in the audio information to obtain a target video, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information.
  • for example, the terminal device processes the video segment 12, the video segment 32, and the video segment 45 according to the key points in the audio information to obtain the target video; the preset parameters further include the transition information of adjacent target segments, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information. Optionally, the transition information of different pairs of adjacent target segments may be the same or different; that is, the way the video segment 12 and the video segment 32 transition at a key point and the way the video segment 32 and the video segment 45 transition at a key point may be the same or different.
  • the preset parameter further includes filter information; the method further comprising: performing image processing on the image in the target video according to the filter information.
  • the terminal device may also perform image processing on the image in the target video by using filter information included in the preset parameter. For example, different scenes can correspond to different filters. After adding a filter to the target video, the content of the scene can be better expressed and the expressiveness of the scene can be improved.
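  • As a small illustration of scene-dependent filtering (the mapping and enhancement factors below are assumptions, not values from the patent), a per-scene colour and contrast adjustment could be applied to each frame with Pillow:

```python
from PIL import Image, ImageEnhance

SCENE_FILTERS = {
    "landscape": {"color": 1.3, "contrast": 1.1},
    "coast":     {"color": 1.1, "contrast": 1.2},
    "portrait":  {"color": 1.0, "contrast": 1.05},
}

def apply_filter(frame: Image.Image, scene: str) -> Image.Image:
    params = SCENE_FILTERS.get(scene, {})
    out = frame
    if "color" in params:
        out = ImageEnhance.Color(out).enhance(params["color"])
    if "contrast" in params:
        out = ImageEnhance.Contrast(out).enhance(params["contrast"])
    return out

# Example on a dummy frame; in practice this would run over every frame of the target video.
frame = Image.new("RGB", (1920, 1080), (90, 140, 200))
filtered = apply_filter(frame, "coast")
```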
  • the preset parameter further includes a target duration of the target video; the method further includes: adjusting a playback speed of the target video according to a target duration of the target video.
  • the terminal device can also adjust the actual duration of the target video according to the target duration of the target video. Specifically, the terminal device can adjust the playback speed of the target video; for example, the terminal device can adjust the playback speed of the target video to be faster first and then slower, or slower first and then faster, so that the actual duration of the target video matches the target duration.
  • optionally, the adjusting the playback speed of the target video according to the target duration of the target video includes: adjusting the playback speed of at least one target segment in the target video according to the target duration of the target video.
  • the target video is composed of a video segment 12, a video segment 32, and a video segment 45.
  • the terminal device may specifically adjust the playback speed of at least one of the video segment 12, the video segment 32, and the video segment 45.
  • for example, the terminal device may adjust the playback speed of at least one of the video segment 12, the video segment 32, and the video segment 45 to be faster first and then slower, or slower first and then faster, so that the actual duration of the target video matches the target duration (a minimal sketch of such an adjustment follows below).
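  • The sketch below shows one way to pick per-segment playback-speed factors so that the assembled video lands exactly on the target duration; the particular "faster first, then slower" split is an illustrative assumption.

```python
from typing import List

def speed_factors(segment_durations: List[float], target_duration: float) -> List[float]:
    """Return a playback-speed factor per segment (>1 speeds up, <1 slows down)."""
    actual = sum(segment_durations)
    overall = actual / target_duration          # uniform factor that would hit the target
    half = len(segment_durations) // 2
    # "Faster first, then slower": boost the first half a bit more than the second half.
    factors = [overall * (1.2 if i < half else 1.0) for i in range(len(segment_durations))]
    # Renormalise so that sum(duration / factor) == target_duration exactly.
    scale = sum(d / f for d, f in zip(segment_durations, factors)) / target_duration
    return [f * scale for f in factors]

durations = [4.0, 6.5, 3.0]
factors = speed_factors(durations, target_duration=10.0)
print(factors, sum(d / f for d, f in zip(durations, factors)))  # second value == 10.0
```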
  • in this embodiment, at least one target segment whose actual attributes match the target attributes is determined from the plurality of video segments according to the actual attributes of each video segment, the at least one target segment is processed according to the key points of the background music to obtain the target video, and image processing is performed on the images in the target video according to the filter information, or/and adjacent target segments transition at the key points in the transition manner indicated by the transition information, or/and the duration of the target video is adjusted to the target duration, which improves the processing effect on the video data and improves the user experience.
  • FIG. 5 is a flowchart of a video processing method according to another embodiment of the present invention. As shown in FIG. 5, based on the foregoing embodiment, the video processing method may further include:
  • Step S501: Detect the user's adjustment operation on the order of the plurality of video segments.
  • as shown in FIG. 3, assume that the video segment 12 between t1 and t2, the video segment 32 between t2 and t3, the video segment 45 between t4 and t5, and the video segment 67 between t6 and t7 are high-quality video segments.
  • the terminal device can process the video segment 12, the video segment 32, the video segment 45, and the video segment 67 according to preset parameters to obtain a target video.
  • in this embodiment, the user can also adjust the arrangement order of the video segment 12, the video segment 32, the video segment 45, and the video segment 67; for example, the display screen of the terminal device can display an interactive interface, and the user can adjust the positions of the video segment 12, the video segment 32, the video segment 45, and the video segment 67 displayed on the interactive interface to adjust their arrangement order.
  • Step S502: Adjust the order of the plurality of video segments according to the detected adjustment operation.
  • the terminal device can adjust the order of the video segment 12, the video segment 32, the video segment 45, and the video segment 67 according to the adjustment operation of the user.
  • for example, the adjusted arrangement order is, in sequence, the video segment 32, the video segment 67, the video segment 12, and the video segment 45.
  • the processing the plurality of video segments according to the preset parameters to obtain the target video includes: processing the sequentially adjusted plurality of video segments according to the preset parameters to obtain the target video.
  • for example, the terminal device can process the reordered video segment 32, video segment 67, video segment 12, and video segment 45 according to the preset parameters to obtain the target video; the specific processing is consistent with the foregoing embodiments and is not repeated here.
  • the method further includes: encoding the target video according to a video parameter corresponding to the target video to obtain an encoded target video; and sending the encoded target video to a server.
  • the terminal device may also encode the target video using the video parameters corresponding to the target video, such as bit rate, frame rate, resolution, speed, format, and quality, to obtain an encoded target video; optionally, the video parameters corresponding to each scene can be fixed or adjustable.
  • after the terminal device encodes the target video, the user can share the encoded target video to social media through a button or key on the terminal device; specifically, after the terminal device detects the operation on the upload button or key, the terminal device sends the encoded target video to the server corresponding to the social media (a minimal sketch of this encode-and-share step follows below).
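  • A minimal sketch of this encode-and-share step is shown below. It re-encodes the assembled file with the ffmpeg command-line tool and uploads the result over HTTP; the upload URL, file names, and parameter values are placeholders, not details from the patent.

```python
import subprocess
import requests

def encode(src: str, dst: str, bitrate: str = "8M", fps: int = 30, size: str = "1920x1080") -> None:
    """Re-encode the target video with the chosen bit rate, frame rate, and resolution."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "libx264", "-b:v", bitrate, "-r", str(fps), "-s", size,
         dst],
        check=True,
    )

def upload(path: str, url: str = "https://example.com/upload") -> None:
    """Send the encoded file to the sharing server."""
    with open(path, "rb") as f:
        requests.post(url, files={"video": f}, timeout=60).raise_for_status()

encode("target_video.mp4", "target_video_encoded.mp4")
upload("target_video_encoded.mp4")
```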
  • the method further comprises storing the encoded target video locally.
  • specifically, after the terminal device encodes the target video, the user may store the encoded target video locally on the terminal device through the terminal device; specifically, after the terminal device detects the operation on the storage button or key, the terminal device stores the encoded target video locally on the terminal device.
  • in this embodiment, the terminal device detects the user's adjustment operation on the order of the plurality of video segments, adjusts the order of the plurality of video segments according to the detected adjustment operation, and processes the reordered plurality of video segments according to the preset parameters to obtain the target video, which improves the flexibility of processing the video segments. In addition, after the terminal device edits the target video, the user can control the terminal device to immediately send it to the server for sharing, so that video data can be uploaded as soon as it is shot, further improving the user experience.
  • FIG. 6 is a structural diagram of a terminal device according to an embodiment of the present invention.
  • the terminal device 60 includes a memory 61 and a processor 62.
  • the memory 61 is configured to store program code; the processor 62 calls the program code and, when the program code is executed, is configured to perform the following operations: acquiring video data; obtaining a plurality of video segments from the video data; and processing the plurality of video segments according to preset parameters to obtain a target video.
  • optionally, the terminal device 60 further includes a communication interface 63, which is configured to receive video data captured by the photographing device and sent by the drone; when acquiring the video data, the processor 62 is specifically configured to: acquire, through the communication interface 63, the video data captured by the photographing device; the communication interface 63 is further configured to: receive the flight parameter information of the drone, sent by the drone, during the capture of the video data by the photographing device; and when obtaining the plurality of video segments from the video data, the processor 62 is specifically configured to: obtain the plurality of video segments from the video data according to the flight parameter information of the drone during the capture of the video data by the photographing device.
  • the flight parameter information of the drone includes at least one of: a flight speed of the drone, an acceleration of the drone, a posture of the drone, and a drone The attitude of the gimbal, the position information of the drone.
  • the terminal device 60 is a camera with a processor, and when the program code is executed, is further configured to: receive motion parameter information of the photographing device itself, and according to motion parameter information of the photographing device And acquiring a plurality of video segments from the video data.
  • the motion parameter information of the photographing device itself includes at least one of: a posture of the photographing device, a motion speed of the photographing device, an acceleration of the photographing device, and position information of the photographing device.
  • the processor 62 is further configured to: identify, by using a machine learning manner, a scenario corresponding to the video data; and when the processor 62 processes the multiple video segments according to preset parameters to obtain a target video, specifically And processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain a target video.
  • optionally, the processor 62 is further configured to: detect a scene setting operation of the user; determine the scene corresponding to the video data according to the detected scene setting operation; and when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor 62 is specifically configured to: process the plurality of video segments according to the preset parameters corresponding to the scene to obtain the target video.
  • optionally, the processor 62 is further configured to: detect a scene switching operation of the user; switch the scene according to the detected scene switching operation; and when processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain the target video, the processor 62 is specifically configured to: process the plurality of video segments according to the preset parameters corresponding to the switched scene to obtain the target video.
  • the preset parameter includes a target attribute of the at least one video segment.
  • optionally, when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor 62 is specifically configured to: determine, from the plurality of video segments according to the actual attributes of each of the plurality of video segments, at least one target segment whose actual attributes match the target attributes; and process the at least one target segment to obtain the target video.
  • optionally, the preset parameters further include audio information; when processing the at least one target segment to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to key points in the audio information to obtain the target video, in which adjacent target segments transition at the key points.
  • optionally, the processor 62 is further configured to: detect an audio information selection operation of the user; determine the audio information selected by the user according to the detected audio information selection operation; and when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to key points in the audio information selected by the user to obtain the target video.
  • optionally, the processor 62 is further configured to: detect a switching operation by the user on an attribute of the audio information; determine the audio information after the attribute switching according to the detected switching operation; and when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to key points in the audio information after the attribute switching to obtain the target video.
  • optionally, the preset parameters further include transition information of adjacent target segments; when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to the key points in the audio information to obtain the target video, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information.
  • the preset parameter further includes filter information; the processor 62 is further configured to: perform image processing on the image in the target video according to the filter information.
  • the preset parameter further includes a target duration of the target video.
  • the processor 62 is further configured to: adjust a play speed of the target video according to a target duration of the target video.
  • optionally, when adjusting the playback speed of the target video according to the target duration of the target video, the processor 62 is specifically configured to: adjust the playback speed of at least one target segment in the target video according to the target duration of the target video.
  • optionally, the processor 62 is further configured to: detect an adjustment operation by the user on the order of the plurality of video segments; adjust the order of the plurality of video segments according to the detected adjustment operation; and when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor 62 is specifically configured to: process the reordered plurality of video segments according to the preset parameters to obtain the target video.
  • optionally, the processor 62 is further configured to: encode the target video according to the video parameters corresponding to the target video to obtain an encoded target video; and the communication interface 63 is further configured to: send the encoded target video to a server.
  • the processor 62 is further configured to: store the encoded target video locally.
  • in this embodiment, the terminal device acquires video data, obtains a plurality of video segments from the video data, and processes the plurality of video segments according to preset parameters to obtain a target video; the user does not need to perform post-editing on the video data, which saves the cumbersome post-editing process and improves video processing efficiency.
  • the disclosed apparatus and method may be implemented in other manners.
  • for example, the device embodiments described above are merely illustrative; the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Astronomy & Astrophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A video processing method and a terminal device, the method comprising: acquiring video data; obtaining a plurality of video segments from the video data; and processing the plurality of video segments according to preset parameters to obtain a target video. The terminal device acquires the video data, obtains the plurality of video segments from the video data, and processes the plurality of video segments according to the preset parameters to obtain the target video; the user does not need to perform post-editing on the video data, which saves the cumbersome post-editing process and improves video processing efficiency.

Description

Video processing method and terminal device
Technical Field
Embodiments of the present invention relate to the field of drones, and in particular, to a video processing method and a terminal device.
Background
In the prior art, after a user captures a video with a photographing device, the video needs to be post-edited with editing software; however, the process of post-editing the video with editing software is cumbersome, resulting in low video processing efficiency.
Summary
Embodiments of the present invention provide a video processing method and a terminal device to improve video processing efficiency.
A first aspect of the embodiments of the present invention provides a video processing method, including:
acquiring video data;
obtaining a plurality of video segments from the video data; and
processing the plurality of video segments according to preset parameters to obtain a target video.
A second aspect of the embodiments of the present invention provides a terminal device, including: a memory and a processor;
the memory is configured to store program code; and
the processor calls the program code and, when the program code is executed, is configured to perform the following operations:
acquiring video data;
obtaining a plurality of video segments from the video data; and
processing the plurality of video segments according to preset parameters to obtain a target video.
According to the video processing method and terminal device provided by the embodiments, the terminal device acquires video data, obtains a plurality of video segments from the video data, and processes the plurality of video segments according to preset parameters to obtain a target video; the user does not need to perform post-editing on the video data, which saves the cumbersome post-editing process and improves video processing efficiency.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a video processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a communication system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of video segments according to an embodiment of the present invention;
FIG. 4 is a flowchart of a video processing method according to another embodiment of the present invention;
FIG. 5 is a flowchart of a video processing method according to another embodiment of the present invention;
FIG. 6 is a structural diagram of a terminal device according to an embodiment of the present invention.
Reference numerals:
21: drone;           22: processor;             23: communication interface;
24: gimbal;          25: photographing device;  26: terminal device;
27: antenna;         30: video data;            12: video segment;
32: video segment;   45: video segment;         67: video segment;
60: terminal device; 61: memory;                62: processor;
63: communication interface.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that when a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intervening component may be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component, or intervening components may be present at the same time.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and the features in the embodiments can be combined with each other provided there is no conflict.
An embodiment of the present invention provides a video processing method. FIG. 1 is a flowchart of a video processing method according to an embodiment of the present invention. As shown in FIG. 1, the method in this embodiment may include:
Step S101: Acquire video data.
The execution body of the method in this embodiment may be a terminal device. The terminal device may be a smartphone, a tablet computer, a ground control station, a laptop computer, or the like; optionally, the smartphone, tablet computer, ground control station, laptop computer, or the like has a shooting function. In addition, the terminal device may also be a photographing device such as a camera or a video camera.
The terminal device acquires video data, which may be captured by a photographing device carried by a drone or by the terminal device itself.
Specifically, acquiring the video data includes: acquiring video data captured by a photographing device.
As shown in FIG. 2, the drone 21 includes a processor 22, a communication interface 23, a gimbal 24, and a photographing device 25. The processor 22 may specifically be a flight controller of the drone 21, or a general-purpose or dedicated processor. The photographing device 25 is mounted on the body of the drone 21 through the gimbal 24 and is used to capture video data. The processor 22 can acquire the video data captured by the photographing device 25 and send it to the terminal device 26 on the ground through the communication interface 23; the terminal device 26 receives, via the antenna 27, the video data captured by the photographing device 25 and sent through the communication interface 23. The processor in the terminal device 26 can acquire the video data captured by the photographing device 25.
Alternatively, the terminal device 26 itself has a shooting function; for example, the terminal device 26 is provided with a camera, and the processor in the terminal device 26 can acquire video data captured by the camera on the terminal device 26.
Step S102: Obtain a plurality of video segments from the video data.
After the processor in the terminal device 26 acquires the video data, it can further obtain a plurality of video segments from the video data. One feasible implementation is that the processor decomposes the video data into a plurality of video segments.
In other embodiments, the method further includes: receiving flight parameter information of the drone, or motion parameter information of the photographing device itself, sent by the drone, during the capture of the video data by the photographing device.
As shown in FIG. 2, while the photographing device 25 is capturing video data, the processor 22 can acquire the flight parameter information of the drone 21. Optionally, the flight parameter information of the drone includes at least one of the following: the flight speed of the drone, the acceleration of the drone, the attitude of the drone, the attitude of the gimbal of the drone, and the position information of the drone. While the drone 21 sends the video data captured by the photographing device 25 to the terminal device 26, the drone 21 can also send the flight parameter information of the drone 21 during the capture of the video data to the terminal device 26; that is, while receiving the video data captured by the photographing device 25 and sent by the drone 21, the terminal device 26 can also receive the flight parameter information of the drone 21 during the capture of the video data.
Alternatively, while the photographing device 25 is capturing video data, the processor 22 can also acquire the motion parameter information of the photographing device 25. The motion parameter information of the photographing device includes at least one of the following: the attitude of the photographing device, the motion speed of the photographing device, the acceleration of the photographing device, and the position information of the photographing device. While the drone 21 sends the video data captured by the photographing device 25 to the terminal device 26, the drone 21 can also send the motion parameter information of the photographing device 25 itself during the capture of the video data to the terminal device 26.
Alternatively, the terminal device 26 itself has a shooting function, and while acquiring the video data captured by the camera on the terminal device 26, the processor in the terminal device 26 can also acquire the motion parameter information of the terminal device 26 itself, for example, one or more of the attitude of the terminal device 26, the motion speed of the terminal device 26, the acceleration of the terminal device 26, and the position information of the terminal device 26.
Specifically, obtaining a plurality of video segments from the video data includes: obtaining a plurality of video segments from the video data according to the flight parameter information of the drone or the motion parameter information of the photographing device during the capture of the video data by the photographing device.
As shown in FIG. 3, 30 denotes the video data captured by the photographing device 25, and t1, t2, t3, t4, t5, t6, t7, and t8 respectively denote acquisition times of the flight parameter information of the drone. Optionally, t1, t2, t3, t4, t5, t6, t7, and t8 may be equally spaced or unequally spaced; this is only a schematic illustration and does not limit the acquisition times or sampling intervals of the flight parameter information of the drone while the photographing device 25 captures the video data.
Optionally, t1, t2, t3, t4, t5, t6, t7, and t8 may divide the video data 30 into a plurality of video segments. Further, a plurality of high-quality video segments may be selected from the divided video segments according to the flight parameter information of the drone in different sampling intervals. For example, if, within a certain sampling interval, the flight speed of the drone is within a preset speed range, the acceleration of the drone is within a preset acceleration range, the drone flies relatively steadily, and the gimbal of the drone rotates relatively smoothly, the video segment within that sampling interval can be determined to be a high-quality video segment. In other embodiments, in addition to the flight speed, acceleration, and attitude of the drone and the attitude of the gimbal, information such as the position information of the drone (for example, GPS information), the acquisition information of a vision module, and the image frame information in the video data may also be combined to select a plurality of high-quality video segments from the divided video segments.
In other embodiments, as shown in FIG. 3, t1, t2, t3, t4, t5, t6, t7, and t8 may also denote acquisition times of the motion parameter information of the photographing device; t1, t2, t3, t4, t5, t6, t7, and t8 divide the video data 30 into a plurality of video segments, and further, a plurality of high-quality video segments may be selected from the divided video segments according to the motion parameter information of the photographing device in different sampling intervals.
Step S103: Process the plurality of video segments according to preset parameters to obtain a target video.
Optionally, processing the plurality of video segments according to preset parameters to obtain the target video includes the following feasible implementations:
One feasible implementation is: identifying, through machine learning, a scene corresponding to the video data; and processing the plurality of video segments according to the preset parameters to obtain the target video includes: processing the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
As shown in FIG. 2, the terminal device 26 may identify the scene corresponding to the video data through machine learning according to one or more of the flight parameter information of the drone 21, the motion parameter information of the photographing device 25, and the image frame information in the video data. For example, the scene corresponding to the video data may include at least one of the following: landscape, city, coast, sky, and portrait.
The terminal device 26 may obtain, from a plurality of preset parameters already stored on the terminal device 26, the preset parameters corresponding to the scene recognized through machine learning. For example, the terminal device 26 may store preset parameters corresponding to each different scene; if the terminal device 26 recognizes through machine learning that the scene of the video data captured by the photographing device 25 is a landscape, the terminal device 26 obtains the preset parameters corresponding to the landscape and processes the high-quality video segments obtained in the above steps according to those preset parameters to obtain the target video.
Another feasible implementation is: detecting a scene setting operation of the user; determining the scene corresponding to the video data according to the detected scene setting operation; and processing the plurality of video segments according to the preset parameters to obtain the target video includes: processing the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
For example, the user may also set the scene of the video data; for example, the user may set the scene of the video data to "landscape" on the terminal device, and the terminal device 26 then obtains the preset parameters corresponding to the landscape and processes the high-quality video segments obtained in the above steps according to those preset parameters to obtain the target video.
Yet another feasible implementation is: detecting a scene switching operation of the user; switching the scene according to the detected scene switching operation; and processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain the target video includes: processing the plurality of video segments according to preset parameters corresponding to the switched scene to obtain the target video.
For example, the user may also switch the scene of the video data. Suppose the terminal device 26 recognizes through machine learning that the scene corresponding to the video data is a landscape, while the user considers that the scene corresponding to the video data is a coast; the user may then switch the scene corresponding to the video data through the terminal device, and the terminal device 26 obtains the preset parameters corresponding to the coast and processes the high-quality video segments obtained in the above steps according to those preset parameters to obtain the target video.
In this embodiment, each set of preset parameters may specifically be a scheme for processing the plurality of video segments. Optionally, the preset parameters include at least one of the following: audio information such as background music, filter information, target attributes of video segments, transition information of video segments, and a target duration of the target video. For example, the terminal device 26 may select, according to the target attributes of the video segments, at least one target segment that matches the target attributes from the plurality of video segments, process the at least one target segment according to the key points of the background music to obtain the target video, and perform image processing on the images in the target video according to the filter information, or/and make adjacent target segments transition at the key points in the transition manner indicated by the transition information, or/and adjust the duration of the target video to the target duration.
In other embodiments, the individual pieces of information included in a preset parameter set are changeable. For example, the audio information, such as background music, included in the same preset parameter set may comprise multiple pieces; the user may select one piece of background music from them, or the terminal device may select a default background music from them.
In this embodiment, the terminal device acquires video data, obtains a plurality of video segments from the video data, and processes the plurality of video segments according to preset parameters to obtain a target video; the user does not need to perform post-editing on the video data, which saves the cumbersome post-editing process and improves video processing efficiency.
An embodiment of the present invention provides a video processing method. FIG. 4 is a flowchart of a video processing method according to another embodiment of the present invention. As shown in FIG. 4, on the basis of the embodiment shown in FIG. 1, the preset parameters include target attributes of at least one video segment. The target attributes of a video segment may include a target duration of the video segment and target flight parameter information of the drone corresponding to the video segment.
Among the plurality of video segments obtained in step S102, each video segment corresponds to actual attributes, for example, the actual duration of the video segment and the actual flight parameter information of the drone corresponding to the video segment.
Processing the plurality of video segments according to the preset parameters to obtain the target video may include:
Step S401: Determine, from the plurality of video segments according to the actual attributes of each of the plurality of video segments, at least one target segment whose actual attributes match the target attributes.
As shown in FIG. 3, assume that the video segment 12 between t1 and t2, the video segment 32 between t2 and t3, the video segment 45 between t4 and t5, and the video segment 67 between t6 and t7 are high-quality video segments, and that the preset parameters include target attributes of three video segments, for example, the target attributes of a first video segment, the target attributes of a second video segment, and the target attributes of a third video segment. The terminal device then needs to select, from the video segment 12, the video segment 32, the video segment 45, and the video segment 67, three target segments whose actual attributes match the target attributes. For example, if the actual attributes of the video segment 12 match the target attributes of the first video segment, the actual attributes of the video segment 32 match the target attributes of the second video segment, and the actual attributes of the video segment 45 match the target attributes of the third video segment, then the video segment 12, the video segment 32, and the video segment 45 are the target segments.
Step S402: Process the at least one target segment to obtain the target video.
Further, the terminal device processes the video segment 12, the video segment 32, and the video segment 45 to obtain the target video.
Optionally, the preset parameters further include audio information; processing the at least one target segment to obtain the target video includes: processing the at least one target segment according to key points in the audio information to obtain the target video, wherein adjacent target segments in the target video transition at the key points.
For example, the terminal device may identify key music points in the audio information included in the preset parameters according to accented beats, rhythm changes, and the like; a key music point may be referred to simply as a key point. Further, the terminal device processes the video segment 12, the video segment 32, and the video segment 45 according to the key points in the audio information to obtain the target video. For example, in the target video, the video segment 12 is adjacent to the video segment 32 and the video segment 32 is adjacent to the video segment 45; optionally, the video segment 12 and the video segment 32, as well as the video segment 32 and the video segment 45, transition at key points of the audio information.
Optionally, processing the at least one target segment according to the key points in the audio information to obtain the target video includes the following feasible implementations:
One feasible implementation is: detecting an audio information selection operation of the user; determining the audio information selected by the user according to the detected audio information selection operation; and processing the at least one target segment according to the key points in the audio information to obtain the target video includes: processing the at least one target segment according to key points in the audio information selected by the user to obtain the target video.
For example, the user may select the audio information in the preset parameters; for example, the preset parameters may correspond to multiple pieces of audio information, and the user may select one from them, or the user may choose his or her favorite audio information, such as background music. Specifically, the terminal device may detect the user's selection operation on the audio information, determine the audio information selected by the user according to the selection operation, further identify the key points in the audio information selected by the user, and process the video segment 12, the video segment 32, and the video segment 45 according to the key points in the audio information selected by the user to obtain the target video.
Another feasible implementation is: detecting a switching operation by the user on an attribute of the audio information; determining the audio information after the attribute switching according to the detected switching operation; and processing the at least one target segment according to the key points in the audio information to obtain the target video includes: processing the at least one target segment according to key points in the audio information after the attribute switching to obtain the target video.
Specifically, the user may also switch an attribute of the audio information in the preset parameters. For example, the user may control the terminal device to process the video segment 12, the video segment 32, and the video segment 45 with the entire piece of background music to obtain the target video, or may control the terminal device to extract a portion of the audio information whose rhythm changes quickly, a portion whose rhythm changes slowly, a portion that changes quickly first and then slowly, or a portion that changes slowly first and then quickly, and process the video segment 12, the video segment 32, and the video segment 45 with that portion to obtain the target video.
Yet another feasible implementation is: the preset parameters further include transition information of adjacent target segments; and processing the at least one target segment according to the key points in the audio information to obtain the target video includes: processing the at least one target segment according to the key points in the audio information to obtain the target video, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information.
For example, the terminal device processes the video segment 12, the video segment 32, and the video segment 45 according to the key points in the audio information to obtain the target video; the preset parameters further include the transition information of adjacent target segments, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information. Optionally, the transition information of different pairs of adjacent target segments may be the same or different; that is, the way the video segment 12 and the video segment 32 transition at a key point and the way the video segment 32 and the video segment 45 transition at a key point may be the same or different.
In other embodiments, the preset parameters further include filter information, and the method further includes: performing image processing on the images in the target video according to the filter information.
The terminal device may also perform image processing on the images in the target video using the filter information included in the preset parameters. For example, different scenes may correspond to different filters; after a filter is added to the target video, the content of the scene can be better expressed and the expressiveness of the scene can be improved.
In some other embodiments, the preset parameters further include a target duration of the target video, and the method further includes: adjusting a playback speed of the target video according to the target duration of the target video.
The terminal device may also adjust the actual duration of the target video according to the target duration of the target video. Specifically, the terminal device may adjust the playback speed of the target video; for example, the terminal device may adjust the playback speed of the target video to be faster first and then slower, or slower first and then faster, so that the actual duration of the target video matches the target duration.
Optionally, adjusting the playback speed of the target video according to the target duration of the target video includes: adjusting the playback speed of at least one target segment in the target video according to the target duration of the target video.
For example, the target video is composed of the video segment 12, the video segment 32, and the video segment 45, and the terminal device may specifically adjust the playback speed of at least one of the video segment 12, the video segment 32, and the video segment 45; for example, the terminal device may adjust the playback speed of at least one of the video segment 12, the video segment 32, and the video segment 45 to be faster first and then slower, or slower first and then faster, so that the actual duration of the target video matches the target duration.
In this embodiment, at least one target segment whose actual attributes match the target attributes is determined from the plurality of video segments according to the actual attributes of each video segment, the at least one target segment is processed according to the key points of the background music to obtain the target video, and image processing is performed on the images in the target video according to the filter information, or/and adjacent target segments transition at the key points in the transition manner indicated by the transition information, or/and the duration of the target video is adjusted to the target duration, which improves the processing effect on the video data and improves the user experience.
An embodiment of the present invention provides a video processing method. FIG. 5 is a flowchart of a video processing method according to another embodiment of the present invention. As shown in FIG. 5, on the basis of the foregoing embodiments, the video processing method may further include:
Step S501: Detect an adjustment operation by the user on the order of the plurality of video segments.
As shown in FIG. 3, assume that the video segment 12 between t1 and t2, the video segment 32 between t2 and t3, the video segment 45 between t4 and t5, and the video segment 67 between t6 and t7 are high-quality video segments. The terminal device may process the video segment 12, the video segment 32, the video segment 45, and the video segment 67 according to preset parameters to obtain the target video.
In this embodiment, the user may also adjust the arrangement order of the video segment 12, the video segment 32, the video segment 45, and the video segment 67. For example, the display screen of the terminal device may display an interactive interface, and the user may adjust the positions of the video segment 12, the video segment 32, the video segment 45, and the video segment 67 displayed on the interactive interface to adjust their arrangement order.
Step S502: Adjust the order of the plurality of video segments according to the detected adjustment operation.
The terminal device may adjust the arrangement order of the video segment 12, the video segment 32, the video segment 45, and the video segment 67 according to the user's adjustment operation. For example, the adjusted arrangement order is, in sequence, the video segment 32, the video segment 67, the video segment 12, and the video segment 45.
Processing the plurality of video segments according to the preset parameters to obtain the target video includes: processing the reordered plurality of video segments according to the preset parameters to obtain the target video.
For example, the terminal device may process the reordered video segment 32, video segment 67, video segment 12, and video segment 45 according to the preset parameters to obtain the target video; the specific processing is consistent with the foregoing embodiments and is not repeated here.
In addition, the method further includes: encoding the target video according to video parameters corresponding to the target video to obtain an encoded target video; and sending the encoded target video to a server.
The terminal device may also encode the target video using video parameters corresponding to the target video, such as bit rate, frame rate, resolution, speed, format, and quality, to obtain the encoded target video. Optionally, the video parameters corresponding to each scene may be fixed or adjustable. After the terminal device encodes the target video, the user may share the encoded target video to social media through a button or key on the terminal device; specifically, after the terminal device detects the user's operation on an upload button or key, it sends the encoded target video to the server corresponding to the social media.
In addition, in other embodiments, the method further includes: storing the encoded target video locally. Specifically, after the terminal device encodes the target video, the user may store the encoded target video locally on the terminal device through the terminal device; specifically, after the terminal device detects the user's operation on a storage button or key, it stores the encoded target video locally on the terminal device.
In this embodiment, the terminal device detects the user's adjustment operation on the order of the plurality of video segments, adjusts the order of the plurality of video segments according to the detected adjustment operation, and processes the reordered plurality of video segments according to the preset parameters to obtain the target video, which improves the flexibility of processing the video segments. In addition, after the terminal device edits the target video, the user can control the terminal device to immediately send it to the server for sharing, so that video data can be uploaded as soon as it is shot, further improving the user experience.
An embodiment of the present invention provides a terminal device. FIG. 6 is a structural diagram of a terminal device according to an embodiment of the present invention. As shown in FIG. 6, the terminal device 60 includes a memory 61 and a processor 62. The memory 61 is configured to store program code; the processor 62 calls the program code and, when the program code is executed, is configured to perform the following operations: acquiring video data; obtaining a plurality of video segments from the video data; and processing the plurality of video segments according to preset parameters to obtain a target video.
Optionally, the terminal device 60 further includes a communication interface 63, which is configured to receive video data captured by a photographing device and sent by a drone. When acquiring the video data, the processor 62 is specifically configured to: acquire, through the communication interface 63, the video data captured by the photographing device. The communication interface 63 is further configured to: receive flight parameter information of the drone, sent by the drone, during the capture of the video data by the photographing device. When obtaining the plurality of video segments from the video data, the processor 62 is specifically configured to: obtain the plurality of video segments from the video data according to the flight parameter information of the drone during the capture of the video data by the photographing device.
Optionally, the flight parameter information of the drone includes at least one of the following: the flight speed of the drone, the acceleration of the drone, the attitude of the drone, the attitude of the gimbal of the drone, and the position information of the drone.
Optionally, the terminal device 60 is a camera with a processor, and when the program code is executed, the processor is further configured to perform the following operations: receiving motion parameter information of the photographing device itself, and obtaining the plurality of video segments from the video data according to the motion parameter information of the photographing device.
Optionally, the motion parameter information of the photographing device itself includes at least one of the following: the attitude of the photographing device, the motion speed of the photographing device, the acceleration of the photographing device, and the position information of the photographing device.
Optionally, the processor 62 is further configured to: identify, through machine learning, a scene corresponding to the video data; and when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor 62 is specifically configured to: process the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
Optionally, the processor 62 is further configured to: detect a scene setting operation of the user; determine the scene corresponding to the video data according to the detected scene setting operation; and when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor 62 is specifically configured to: process the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
Optionally, the processor 62 is further configured to: detect a scene switching operation of the user; switch the scene according to the detected scene switching operation; and when processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain the target video, the processor 62 is specifically configured to: process the plurality of video segments according to preset parameters corresponding to the switched scene to obtain the target video.
Optionally, the preset parameters include target attributes of at least one video segment.
Optionally, when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor 62 is specifically configured to: determine, from the plurality of video segments according to the actual attributes of each of the plurality of video segments, at least one target segment whose actual attributes match the target attributes; and process the at least one target segment to obtain the target video.
Optionally, the preset parameters further include audio information; when processing the at least one target segment to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to key points in the audio information to obtain the target video, wherein adjacent target segments in the target video transition at the key points.
Optionally, the processor 62 is further configured to: detect an audio information selection operation of the user; determine the audio information selected by the user according to the detected audio information selection operation; and when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to key points in the audio information selected by the user to obtain the target video.
Optionally, the processor 62 is further configured to: detect a switching operation by the user on an attribute of the audio information; determine the audio information after the attribute switching according to the detected switching operation; and when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to key points in the audio information after the attribute switching to obtain the target video.
Optionally, the preset parameters further include transition information of adjacent target segments; when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor 62 is specifically configured to: process the at least one target segment according to the key points in the audio information to obtain the target video, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information.
Optionally, the preset parameters further include filter information; the processor 62 is further configured to: perform image processing on the images in the target video according to the filter information.
Optionally, the preset parameters further include a target duration of the target video; the processor 62 is further configured to: adjust a playback speed of the target video according to the target duration of the target video.
Optionally, when adjusting the playback speed of the target video according to the target duration of the target video, the processor 62 is specifically configured to: adjust the playback speed of at least one target segment in the target video according to the target duration of the target video.
Optionally, the processor 62 is further configured to: detect an adjustment operation by the user on the order of the plurality of video segments; adjust the order of the plurality of video segments according to the detected adjustment operation; and when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor 62 is specifically configured to: process the reordered plurality of video segments according to the preset parameters to obtain the target video.
Optionally, the processor 62 is further configured to: encode the target video according to video parameters corresponding to the target video to obtain an encoded target video; and the communication interface 63 is further configured to: send the encoded target video to a server.
Optionally, the processor 62 is further configured to: store the encoded target video locally.
The specific principles and implementations of the terminal device provided in the embodiments of the present invention are similar to those of the embodiments shown in FIG. 1, FIG. 4, and FIG. 5, and are not repeated here.
In this embodiment, the terminal device acquires video data, obtains a plurality of video segments from the video data, and processes the plurality of video segments according to preset parameters to obtain a target video; the user does not need to perform post-editing on the video data, which saves the cumbersome post-editing process and improves video processing efficiency.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the functional modules described above is only used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all of the technical features therein; and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (38)

  1. A video processing method, comprising:
    obtaining video data;
    obtaining a plurality of video segments from the video data; and
    processing the plurality of video segments according to preset parameters to obtain a target video.
  2. The method according to claim 1, wherein the obtaining video data comprises:
    obtaining video data captured by a photographing device;
    the method further comprises:
    receiving, from an unmanned aerial vehicle (UAV), flight parameter information of the UAV during capture of the video data by the photographing device, or motion parameter information of the photographing device itself;
    the obtaining a plurality of video segments from the video data comprises:
    obtaining the plurality of video segments from the video data according to the flight parameter information of the UAV during capture of the video data by the photographing device, or the motion parameter information of the photographing device.
  3. The method according to claim 2, wherein the flight parameter information of the UAV comprises at least one of the following: a flight speed of the UAV, an acceleration of the UAV, an attitude of the UAV, an attitude of a gimbal of the UAV, and position information of the UAV;
    the motion parameter information of the photographing device comprises at least one of the following:
    an attitude of the photographing device, a motion speed of the photographing device, an acceleration of the photographing device, and position information of the photographing device.
  4. The method according to claim 1, further comprising:
    recognizing a scene corresponding to the video data by means of machine learning;
    wherein the processing the plurality of video segments according to preset parameters to obtain a target video comprises:
    processing the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
  5. The method according to claim 1, further comprising:
    detecting a scene setting operation of a user;
    determining a scene corresponding to the video data according to the detected scene setting operation;
    wherein the processing the plurality of video segments according to preset parameters to obtain a target video comprises:
    processing the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
  6. The method according to claim 4 or 5, further comprising:
    detecting a scene switching operation of the user;
    switching the scene according to the detected scene switching operation;
    wherein the processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain the target video comprises:
    processing the plurality of video segments according to preset parameters corresponding to the switched scene to obtain the target video.
  7. The method according to claim 1, wherein the preset parameters comprise a target attribute of at least one video segment.
  8. The method according to claim 7, wherein the processing the plurality of video segments according to preset parameters to obtain a target video comprises:
    determining, from the plurality of video segments according to an actual attribute of each of the plurality of video segments, at least one target segment whose actual attribute matches the target attribute; and
    processing the at least one target segment to obtain the target video.
  9. The method according to claim 8, wherein the preset parameters further comprise audio information;
    the processing the at least one target segment to obtain the target video comprises:
    processing the at least one target segment according to key points in the audio information to obtain the target video, wherein adjacent target segments in the target video transition at the key points.
  10. The method according to claim 9, further comprising:
    detecting an audio information selection operation of a user;
    determining audio information selected by the user according to the detected audio information selection operation;
    wherein the processing the at least one target segment according to the key points in the audio information to obtain the target video comprises:
    processing the at least one target segment according to key points in the audio information selected by the user to obtain the target video.
  11. The method according to claim 9, further comprising:
    detecting a switching operation of a user on an attribute of the audio information;
    determining the audio information after attribute switching according to the detected switching operation;
    wherein the processing the at least one target segment according to the key points in the audio information to obtain the target video comprises:
    processing the at least one target segment according to key points in the audio information after the attribute switching to obtain the target video.
  12. The method according to any one of claims 9 to 11, wherein the preset parameters further comprise transition information of adjacent target segments;
    the processing the at least one target segment according to the key points in the audio information to obtain the target video comprises:
    processing the at least one target segment according to the key points in the audio information to obtain the target video, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information.
  13. The method according to any one of claims 8 to 12, wherein the preset parameters further comprise filter information;
    the method further comprises:
    performing image processing on images in the target video according to the filter information.
  14. The method according to claim 13, wherein the preset parameters further comprise a target duration of the target video;
    the method further comprises:
    adjusting a playback speed of the target video according to the target duration of the target video.
  15. The method according to claim 14, wherein the adjusting a playback speed of the target video according to the target duration of the target video comprises:
    adjusting a playback speed of at least one target segment in the target video according to the target duration of the target video.
  16. The method according to any one of claims 1 to 15, further comprising:
    detecting an adjustment operation of a user on an order of the plurality of video segments;
    adjusting the order of the plurality of video segments according to the detected adjustment operation;
    wherein the processing the plurality of video segments according to preset parameters to obtain a target video comprises:
    processing the reordered plurality of video segments according to the preset parameters to obtain the target video.
  17. The method according to any one of claims 1 to 16, further comprising:
    encoding the target video according to video parameters corresponding to the target video to obtain an encoded target video; and
    sending the encoded target video to a server.
  18. The method according to claim 17, further comprising:
    storing the encoded target video locally.
  19. A terminal device, comprising: a memory and a processor;
    wherein the memory is configured to store program code;
    the processor calls the program code and, when the program code is executed, is configured to perform the following operations:
    obtaining video data;
    obtaining a plurality of video segments from the video data; and
    processing the plurality of video segments according to preset parameters to obtain a target video.
  20. The terminal device according to claim 19, further comprising: a communication interface configured to receive video data captured by a photographing device and sent by an unmanned aerial vehicle (UAV);
    when obtaining the video data, the processor is specifically configured to:
    obtain, through the communication interface, the video data captured by the photographing device;
    the communication interface is further configured to:
    receive flight parameter information of the UAV, sent by the UAV, during capture of the video data by the photographing device;
    when obtaining the plurality of video segments from the video data, the processor is specifically configured to:
    obtain the plurality of video segments from the video data according to the flight parameter information of the UAV during capture of the video data by the photographing device.
  21. The terminal device according to claim 20, wherein the flight parameter information of the UAV comprises at least one of the following:
    a flight speed of the UAV, an acceleration of the UAV, an attitude of the UAV, an attitude of a gimbal of the UAV, and position information of the UAV.
  22. The terminal device according to claim 19, wherein the terminal device is a camera with a processor, and when the program code is executed, the processor is further configured to perform the following operations:
    receiving motion parameter information of the photographing device itself, and obtaining the plurality of video segments from the video data according to the motion parameter information of the photographing device.
  23. The terminal device according to claim 22, wherein the motion parameter information of the photographing device itself comprises at least one of the following: an attitude of the photographing device, a motion speed of the photographing device, an acceleration of the photographing device, and position information of the photographing device.
  24. The terminal device according to claim 19, wherein the processor is further configured to:
    recognize a scene corresponding to the video data by means of machine learning;
    when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor is specifically configured to:
    process the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
  25. The terminal device according to claim 19, wherein the processor is further configured to:
    detect a scene setting operation of a user;
    determine a scene corresponding to the video data according to the detected scene setting operation;
    when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor is specifically configured to:
    process the plurality of video segments according to preset parameters corresponding to the scene to obtain the target video.
  26. The terminal device according to claim 24 or 25, wherein the processor is further configured to:
    detect a scene switching operation of the user;
    switch the scene according to the detected scene switching operation;
    when processing the plurality of video segments according to the preset parameters corresponding to the scene to obtain the target video, the processor is specifically configured to:
    process the plurality of video segments according to preset parameters corresponding to the switched scene to obtain the target video.
  27. The terminal device according to claim 19, wherein the preset parameters comprise a target attribute of at least one video segment.
  28. The terminal device according to claim 27, wherein when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor is specifically configured to:
    determine, from the plurality of video segments according to an actual attribute of each of the plurality of video segments, at least one target segment whose actual attribute matches the target attribute; and
    process the at least one target segment to obtain the target video.
  29. The terminal device according to claim 28, wherein the preset parameters further comprise audio information;
    when processing the at least one target segment to obtain the target video, the processor is specifically configured to:
    process the at least one target segment according to key points in the audio information to obtain the target video, wherein adjacent target segments in the target video transition at the key points.
  30. The terminal device according to claim 29, wherein the processor is further configured to:
    detect an audio information selection operation of a user;
    determine audio information selected by the user according to the detected audio information selection operation;
    when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor is specifically configured to:
    process the at least one target segment according to key points in the audio information selected by the user to obtain the target video.
  31. The terminal device according to claim 30, wherein the processor is further configured to:
    detect a switching operation of the user on an attribute of the audio information;
    determine the audio information after attribute switching according to the detected switching operation;
    when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor is specifically configured to:
    process the at least one target segment according to key points in the audio information after the attribute switching to obtain the target video.
  32. The terminal device according to any one of claims 29 to 31, wherein the preset parameters further comprise transition information of adjacent target segments;
    when processing the at least one target segment according to the key points in the audio information to obtain the target video, the processor is specifically configured to:
    process the at least one target segment according to the key points in the audio information to obtain the target video, so that adjacent target segments in the target video transition at the key points in the transition manner indicated by the transition information.
  33. The terminal device according to any one of claims 28 to 32, wherein the preset parameters further comprise filter information;
    the processor is further configured to:
    perform image processing on images in the target video according to the filter information.
  34. The terminal device according to claim 33, wherein the preset parameters further comprise a target duration of the target video;
    the processor is further configured to:
    adjust a playback speed of the target video according to the target duration of the target video.
  35. The terminal device according to claim 34, wherein when adjusting the playback speed of the target video according to the target duration of the target video, the processor is specifically configured to:
    adjust a playback speed of at least one target segment in the target video according to the target duration of the target video.
  36. The terminal device according to any one of claims 19 to 35, wherein the processor is further configured to:
    detect an adjustment operation of a user on an order of the plurality of video segments;
    adjust the order of the plurality of video segments according to the detected adjustment operation;
    when processing the plurality of video segments according to the preset parameters to obtain the target video, the processor is specifically configured to:
    process the reordered plurality of video segments according to the preset parameters to obtain the target video.
  37. The terminal device according to any one of claims 19 to 36, wherein the processor is further configured to:
    encode the target video according to video parameters corresponding to the target video to obtain an encoded target video;
    the communication interface is further configured to:
    send the encoded target video to a server.
  38. The terminal device according to claim 37, wherein the processor is further configured to:
    store the encoded target video locally.
PCT/CN2018/073337 2018-01-19 2018-01-19 视频处理方法及终端设备 WO2019140621A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202210869709.XA CN115103166A (zh) 2018-01-19 2018-01-19 视频处理方法及终端设备
PCT/CN2018/073337 WO2019140621A1 (zh) 2018-01-19 2018-01-19 视频处理方法及终端设备
CN201880031293.6A CN110612721B (zh) 2018-01-19 2018-01-19 视频处理方法及终端设备
US16/893,156 US11587317B2 (en) 2018-01-19 2020-06-04 Video processing method and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/073337 WO2019140621A1 (zh) 2018-01-19 2018-01-19 视频处理方法及终端设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/893,156 Continuation US11587317B2 (en) 2018-01-19 2020-06-04 Video processing method and terminal device

Publications (1)

Publication Number Publication Date
WO2019140621A1 true WO2019140621A1 (zh) 2019-07-25

Family

ID=67300909

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/073337 WO2019140621A1 (zh) 2018-01-19 2018-01-19 视频处理方法及终端设备

Country Status (3)

Country Link
US (1) US11587317B2 (zh)
CN (2) CN115103166A (zh)
WO (1) WO2019140621A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112689200A (zh) * 2020-12-15 2021-04-20 万兴科技集团股份有限公司 视频编辑方法、电子设备及存储介质
CN113099266A (zh) * 2021-04-02 2021-07-09 云从科技集团股份有限公司 基于无人机pos数据的视频融合方法、系统、介质及装置

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111327968A (zh) * 2020-02-27 2020-06-23 北京百度网讯科技有限公司 短视频的生成方法、平台、电子设备及存储介质
CN111614912B (zh) * 2020-05-26 2023-10-03 北京达佳互联信息技术有限公司 视频生成方法、装置、设备及存储介质
CN112702656A (zh) * 2020-12-21 2021-04-23 北京达佳互联信息技术有限公司 视频编辑方法和视频编辑装置
CN112738626B (zh) * 2020-12-24 2023-02-03 北京百度网讯科技有限公司 视频文件的目标检测方法、装置、电子设备及存储介质
CN112954481B (zh) * 2021-02-07 2023-12-12 脸萌有限公司 特效处理方法和装置
CN115484425A (zh) * 2021-06-16 2022-12-16 荣耀终端有限公司 一种转场特效的确定方法及电子设备
CN113821188A (zh) * 2021-08-25 2021-12-21 深圳市声扬科技有限公司 调整音频播放速度的方法、装置、电子设备及存储介质
CN115567660B (zh) * 2022-02-28 2023-05-26 荣耀终端有限公司 一种视频处理方法和电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050257240A1 (en) * 2004-04-29 2005-11-17 Harris Corporation, Corporation Of The State Of Delaware Media asset management system for managing video news segments and associated methods
CN104702919A * 2015-03-31 2015-06-10 小米科技有限责任公司 Playback control method and apparatus, and electronic device
CN105100665A * 2015-08-21 2015-11-25 广州飞米电子科技有限公司 Method and apparatus for storing multimedia information collected by an aircraft
CN105519095A * 2014-12-14 2016-04-20 深圳市大疆创新科技有限公司 Video processing method, apparatus, and playback apparatus

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7483624B2 (en) * 2002-08-30 2009-01-27 Hewlett-Packard Development Company, L.P. System and method for indexing a video sequence
US9398350B1 (en) * 2006-08-08 2016-07-19 CastTV Inc. Video matching service to offline counterpart
US8565538B2 (en) * 2010-03-16 2013-10-22 Honda Motor Co., Ltd. Detecting and labeling places using runtime change-point detection
US20220046315A1 (en) * 2010-12-12 2022-02-10 Verint Americas Inc. Thinning video based on content
CN102740106B (zh) * 2011-03-31 2014-12-03 富士通株式会社 在视频中检测摄像机运动类型的方法及装置
US9916538B2 (en) * 2012-09-15 2018-03-13 Z Advanced Computing, Inc. Method and system for feature detection
US8874429B1 (en) * 2012-05-18 2014-10-28 Amazon Technologies, Inc. Delay in video for language translation
US8942542B1 (en) * 2012-09-12 2015-01-27 Google Inc. Video segment identification and organization based on dynamic characterizations
WO2015122163A1 (ja) * 2014-02-14 2015-08-20 日本電気株式会社 映像処理システム
EP4087247A1 (en) * 2014-02-26 2022-11-09 Dolby Laboratories Licensing Corp. Luminance based coding tools for video compression
CN105493496B (zh) * 2014-12-14 2019-01-18 深圳市大疆创新科技有限公司 一种视频处理方法、装置及图像系统
US9860553B2 (en) * 2015-03-18 2018-01-02 Intel Corporation Local change detection in video
CN105430761B (zh) * 2015-10-30 2018-12-11 小米科技有限责任公司 建立无线网络连接的方法、装置及系统
US10021339B2 (en) * 2015-12-01 2018-07-10 Qualcomm Incorporated Electronic device for generating video data
US10204417B2 (en) * 2016-05-10 2019-02-12 International Business Machines Corporation Interactive video generation
CN108509465B (zh) * 2017-02-28 2022-03-15 阿里巴巴集团控股有限公司 一种视频数据的推荐方法、装置和服务器
US10587919B2 (en) * 2017-09-29 2020-03-10 International Business Machines Corporation Cognitive digital video filtering based on user preferences



Also Published As

Publication number Publication date
US20200356780A1 (en) 2020-11-12
CN115103166A (zh) 2022-09-23
CN110612721A (zh) 2019-12-24
US11587317B2 (en) 2023-02-21
CN110612721B (zh) 2022-08-09

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18901755; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18901755; Country of ref document: EP; Kind code of ref document: A1)