WO2019075617A1 - Video processing method, control terminal and movable device - Google Patents

Video processing method, control terminal and movable device

Info

Publication number
WO2019075617A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
motion
information
attribute
shooting
Prior art date
Application number
PCT/CN2017/106382
Other languages
English (en)
French (fr)
Inventor
Su Guanhua (苏冠华)
Huang Zhicong (黄志聪)
Zhang Ruoying (张若颖)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. filed Critical SZ DJI Technology Co., Ltd.
Priority to PCT/CN2017/106382 priority Critical patent/WO2019075617A1/zh
Priority to CN201780009987.5A priority patent/CN108702464B/zh
Publication of WO2019075617A1 publication Critical patent/WO2019075617A1/zh

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a video processing method, a control terminal, and a mobile device.
  • shooting video is no longer limited to computer synthesis of single images; video can be captured directly by a shooting device (such as a camera or a video camera), enriching video acquisition.
  • video content is increasingly rich, can be combined with audio information, and is becoming an important way for people to record daily life and study.
  • the camera may experience jitter, sudden drops, sudden rises, and so on due to environmental factors or its own factors, resulting in an unclear picture in the captured video. How to post-process such video has therefore become a popular research topic.
  • the embodiment of the invention discloses a video processing method, a control terminal and a mobile device, which can enrich the post-processing mode of the video to a certain extent.
  • a first aspect of the embodiments of the present invention discloses a video processing method, including:
  • a second aspect of the embodiments of the present invention discloses a control terminal, including a communication component, configured to communicate with a controlled device, and further includes: a memory and a processor;
  • the memory is configured to store program instructions
  • the processor is configured to execute the program instructions stored by the memory, when the program instructions are executed, for:
  • a third aspect of the embodiments of the present invention discloses a mobile device, including a mobile device body, and a camera device mounted on the mobile device body.
  • the mobile device further includes: a memory and a processor;
  • the memory is configured to store program instructions
  • the processor is configured to execute the program instructions stored by the memory, when the program instructions are executed, for:
  • the orientation attribute information of the photographing apparatus in each shooting period during video capture may be acquired; target video clips satisfying the smooth motion condition are then determined from the video according to that information; finally, a video is synthesized from at least one determined target video clip. Target video clips can thus be extracted automatically from the orientation attribute information without manual editing by the user, enriching the post-processing of video to a certain extent.
  • FIG. 1 is a schematic diagram of a scenario for video processing according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of another scenario for video processing according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of still another video processing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a control terminal according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a mobile device according to an embodiment of the present invention.
  • during shooting, the shooting device may have problems such as jitter, sudden drops, or sudden rises due to surrounding environmental factors or its own factors, resulting in blurred images and low image quality in the captured video. Post-processing of the captured video is therefore necessary.
  • Traditional video processing usually involves manually cropping a segment of a video into a specific video clip in a video editor, and then editing, synthesizing, and playing the cropped clip.
  • in 101, the video editor can filter out videos longer than a duration threshold (for example, longer than 30 seconds); the filtered videos are long-shot videos. In 102, the video editor can display the video frames of the long-shot video, and the user may select one frame A of the long-shot video as the start point and another frame B as the end point. In 103, the video editor can crop a video clip from the long-shot video according to the start and end points. In 104, the video editor combines the cropped video clips.
  • the manual cropping method is too cumbersome and consumes a lot of the user's time; when the video is long (for example, more than 50 or 100 seconds), the error rate of manual cropping also increases, reducing the flexibility of video processing.
  • FIG. 2 is a schematic diagram of another scenario for video processing according to an embodiment of the present invention.
  • the video is processed by a mobile phone or tablet computer running a control APP; specifically, the video can be processed by a processor of a mobile device such as a drone, or by a video editor of the control terminal, etc., and no restriction is imposed here.
  • the video editor can filter out videos longer than the duration threshold (e.g., longer than 30 seconds); the filtered videos are long-shot videos.
  • the video editor can determine the target video segment from the long shot video in accordance with the orientation attribute information. For example, the user may first click on the long shot video and then enter the [fragment clip] page, and the video editor may determine, from the long shot video, the video clip that satisfies the smooth motion condition as the target video clip according to the orientation attribute information.
  • the photographing device can record its posture information in real time while capturing the video (including the long-shot video), and the orientation attribute information is represented by that posture information; the orientation attribute information is associated with the video's own time axis, so that the time axis of the video itself carries orientation attribute information.
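As an illustration only (the patent does not specify a storage format), the association between posture records and the video's own time axis might be sketched as follows; `OrientationRecord` and its field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class OrientationRecord:
    # Hypothetical record layout; the patent only says orientation attribute
    # information is tied to time information on the video's own time axis.
    t: float        # seconds from the start of the video's time axis
    yaw: float      # orientation of the shooting device, degrees
    pitch: float    # degrees
    x: float        # position coordinates of the shooting device
    y: float
    z: float

def records_in_period(records, t_start, t_end):
    """Return the orientation records that fall inside one shooting period."""
    return [r for r in records if t_start <= r.t < t_end]

# A toy track: constant heading, slowly rising altitude.
track = [OrientationRecord(t=i, yaw=45.0, pitch=-10.0, x=0.0, y=0.0, z=20.0 + i)
         for i in range(10)]
first_half = records_in_period(track, 0, 5)
```

Indexing records by time like this is what lets a later step map a "smooth" time period back to the corresponding video segment.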
  • the video editor can save the target video clips; the user can drag a single target video clip to the play area for playback.
  • the video editor can synthesize the video segments.
  • the video editor can display a highlight button on the display interface for the target video segment that satisfies the smooth motion condition.
  • the video editor can extract the target video segments, and the extracted target video segments are synthesized.
  • the video editor can automatically synthesize at least one target video segment when the target video segment is extracted.
  • the user may also determine at least one target video segment through the human-computer interaction interface, and the video editor may perform a synthesis process on the at least one target video segment selected by the user.
  • the above method can automatically extract target video segments using the orientation attribute information, without requiring manual clipping, and can ensure that the clipped segments are smooth video segments. This reduces the error rate, simplifies operation, improves the flexibility of video processing, and enriches the post-processing of video to a certain extent.
  • the executive body of the method embodiment of the present application may be a mobile device or a control terminal.
  • the mobile device may be, for example, an aircraft, a drone, a flying device, a handheld gimbal, an unmanned vehicle, an unmanned ship, etc.
  • the control terminal may be, for example, a virtual reality device, a smart terminal, a remote controller, a ground station, or a mobile phone or tablet computer with a control APP; the embodiment of the present invention imposes no limitation on this.
  • the control terminal is described as an example, but it should be noted that the executive body of the method embodiment of the present application may also be a mobile device.
  • FIG. 3 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • the video processing method described in this embodiment includes:
  • orientation attribute information may include information such as the position and orientation of the photographing device (including any device connected to the photographing device) during shooting.
  • the orientation attribute information of the photographing device in each shooting period is separately stored from the video captured by the photographing device, and corresponds to the time information.
  • the orientation attribute information may be stored in the orientation attribute information file, and the orientation attribute file may include the orientation attribute information and time information corresponding to the orientation attribute information.
  • control terminal acquires the orientation attribute information of the photographing apparatus in each shooting period, and specifically may obtain the orientation attribute information from the orientation attribute information file.
  • the orientation attribute information of the photographing device at each shooting period may be stored as an attribute of the video captured by the photographing device, corresponding to each frame of the video.
  • the photographing apparatus may record the orientation attribute information corresponding to the video image of each frame as the attribute of the video in the video in real time in the process of capturing the video.
  • control terminal acquires the orientation attribute information of the photographing apparatus in each shooting period during the process of capturing the video, and specifically may obtain the orientation attribute information of each shooting time period from the video.
  • satisfying the smooth motion condition may mean that the orientation attribute information of the photographing device is a continuous change, and the continuous change is specifically that the change amount of the orientation attribute information between adjacent time segments is less than a preset value.
  • taking the orientation of the photographing device in the orientation attribute information as an example: if the orientations between adjacent time segments are all the same (for example, 45 degrees to the left), or the orientation changes slowly (for example, the change between each pair of adjacent time segments is less than 5 degrees), it can be determined that the orientation of the camera changes continuously, that is, the smooth motion condition is satisfied.
  • for example, if the orientation attribute information in time period A changes continuously, the time segment of the video corresponding to time period A may be determined, and the video segment in that time segment is taken as the target video segment satisfying the smooth motion condition.
  • if the orientation attribute information is recorded in the video in real time, and the orientation attribute information in time period B changes continuously, it may be determined that the video segment in time period B is a target video segment satisfying the smooth motion condition.
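The smooth-motion test described above (adjacent-segment change below a preset value, e.g. 5 degrees) could be sketched as follows; the function name, the one-sample-per-second yaw series, and the minimum segment length are illustrative assumptions:

```python
def smooth_segments(yaw_by_second, max_delta=5.0, min_len=3):
    """Find maximal runs of samples where the yaw change between adjacent
    samples stays below max_delta (the 'smooth motion' condition).
    Returns (start_index, end_index) pairs, inclusive."""
    segments, start = [], 0
    for i in range(1, len(yaw_by_second)):
        if abs(yaw_by_second[i] - yaw_by_second[i - 1]) >= max_delta:
            # Orientation jumped: close the current candidate segment.
            if i - start >= min_len:
                segments.append((start, i - 1))
            start = i
    if len(yaw_by_second) - start >= min_len:
        segments.append((start, len(yaw_by_second) - 1))
    return segments

# Slow drift for 4 s, a sudden 47-degree jump, then slow drift again:
segs = smooth_segments([0, 1, 2, 3, 50, 51, 52, 53, 54])
```

Each returned index pair maps directly back to a time period on the video's time axis, i.e. to a candidate target video segment.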
  • the user may select at least one target video segment from the plurality of target video segments.
  • control terminal may display a plurality of target video segments for the user to select, and the user may determine at least one target video segment from the displayed plurality of target video segments by means of human-computer interaction, and the control terminal may At least one target video segment is synthesized.
  • control terminal may automatically determine at least one target video segment from the target video segment according to the orientation attribute information of each target video segment, and perform a synthesis process according to the at least one video segment.
  • the embodiment of the present invention obtains the orientation attribute information of the photographing apparatus in each shooting period during video capture, then determines the target video clips satisfying the smooth motion condition from the video according to that information, and finally synthesizes a video from at least one determined target video clip. Target video segments can be extracted automatically from the orientation attribute information without manual editing by the user, and the clipped segments are guaranteed to be smooth, which is convenient to operate and enriches the post-processing of video to a certain extent.
  • FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present invention.
  • the video processing method described in the embodiment of the present invention includes:
  • the video collection includes a plurality of captured videos.
  • the control terminal can save multiple captured videos and take the multiple captured videos as the video collection.
  • the duration threshold may be, for example, a duration of 30 seconds, a duration of 40 seconds, a duration of 50 seconds, and the like, which is not limited in this embodiment of the present invention.
  • the duration threshold may be a default threshold of the control terminal, or may be set by the user, and the control terminal uses the value set by the user as the duration threshold.
  • the duration of a video segment may be short (e.g., 2, 4, or 6 seconds), so the segment contains little content; the control terminal may therefore skip processing for videos whose duration is less than or equal to the duration threshold.
  • control terminal may first obtain a duration corresponding to each video in the video set, and then filter the video whose duration is greater than the duration threshold according to the duration.
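The filtering step might look like the following sketch; the dictionary layout of a video entry is an assumption for illustration, not the patent's data model:

```python
def filter_long_shots(video_collection, duration_threshold=30.0):
    """Keep only videos whose duration exceeds the threshold; shorter
    videos are left unprocessed, as the embodiment describes."""
    return [v for v in video_collection if v["duration"] > duration_threshold]

videos = [{"name": "a.mp4", "duration": 12.0},
          {"name": "b.mp4", "duration": 45.0},
          {"name": "c.mp4", "duration": 31.5}]
long_shots = filter_long_shots(videos)
```

The threshold could equally be the terminal's default or a user-set value, per the embodiment.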
  • the posture information includes location information and direction information.
  • the location information may be, for example, the location coordinates of the camera, the relative position of the device connected to the camera, and the like, which is not limited in this embodiment of the present invention.
  • the direction information may refer to the orientation of the photographing device, and may be, for example, the attitude angle of the photographing device (including a yaw angle, a pitch angle, etc.) or the field of view (FOV); the embodiment of the present invention imposes no limitation on this.
  • the photographing apparatus may record the posture information of the photographing apparatus in real time according to a time point or a time period during the shooting of the video.
  • the image capturing device may record the posture information once every second, or may record the posture information every 2 seconds, or may record the posture information every 10 seconds, which is not limited in this embodiment of the present invention.
  • the photographing device is mounted on a movable device; acquiring the posture information of the photographing device during video capture comprises: acquiring the posture information of the movable device during video capture, and determining the posture information of the photographing device according to the posture information of the movable device.
  • the movable device may be any device that can be moved, such as a handheld gimbal, an unmanned aerial vehicle, an unmanned ground vehicle, an unmanned ship, and the like, which is not limited in this embodiment of the present invention.
  • the photographing device can be fixed on the movable device; when the movable device moves, the posture information of the movable device can indicate the posture information of the photographing device. For example, when the movable device rotates 40 degrees to the left, the photographing device also rotates 40 degrees to the left, so the posture information of the photographing device can be determined from the posture information of the movable device.
  • the control terminal can be interconnected with the movable device and the photographing device through a wireless link, and the movable device can record its posture information in real time and send that posture information to the control terminal.
  • the control terminal can determine the posture information of the photographing device according to the posture information of the movable device.
  • For example, the control terminal is a smartphone, the movable device is a drone, and a shooting device (for example, a camera) is mounted on the drone; the smartphone can be connected with the drone and the shooting device. The drone is provided with positioning devices (such as GPS and an inertial measurement unit) for acquiring its real-time position and direction information (i.e., posture information). The smartphone can acquire the posture information of the drone through the communication component, and when the camera is fixed relative to the drone, the posture information of the camera can be obtained from the posture information of the drone.
  • the photographing device may also be movably disposed on the movable device. For example, the photographing device is mounted on the drone through a gimbal (pan/tilt), and the movement of the photographing device is realized by the rotation of the gimbal; or the photographing device is mounted on the drone by another, non-gimbal device and moves with the rotation of that device.
  • the control terminal acquires the posture information of the photographing apparatus during video capture through the communication component, including: acquiring the posture information of the drone and the posture information of the gimbal during video capture.
  • the processor determines posture information of the photographing device according to the posture information of the drone and the posture information of the pan/tilt.
  • the posture information of the drone and the posture information of the gimbal can jointly determine the posture information of the imaging device.
  • for example, the position information of the drone and the direction information of the gimbal can jointly determine the posture information of the imaging device.
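A much-simplified sketch of combining the drone's posture with the gimbal's angles to obtain the camera's posture; a real system composes full 3-D rotations (e.g. quaternions), and all names here are illustrative assumptions:

```python
def camera_pose(drone_position, drone_yaw, gimbal_pitch, gimbal_yaw_offset=0.0):
    """Combine the drone's position/heading with the gimbal's angles to get
    the shooting device's posture (position + orientation). This additive
    model assumes the gimbal yaw is expressed relative to the airframe."""
    return {
        "position": drone_position,             # camera rides on the airframe
        "yaw": (drone_yaw + gimbal_yaw_offset) % 360.0,
        "pitch": gimbal_pitch,                  # pitch comes from the gimbal
    }

pose = camera_pose(drone_position=(10.0, 5.0, 30.0), drone_yaw=350.0,
                   gimbal_pitch=-20.0, gimbal_yaw_offset=15.0)
```

The wrap-around at 360 degrees matters: a drone heading of 350 degrees plus a 15-degree gimbal offset should yield 5 degrees, not 365.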
  • For example, the control terminal is a smartphone, and a shooting device (for example, a camera) is mounted on the drone through a gimbal; the smartphone can communicate with the drone and the shooting device. When the smartphone detects that the shooting device is capturing video, it can acquire the posture information of the drone and the posture information of the gimbal acquired by the drone, and determine the posture information of the shooting device according to the two.
  • the mobile device such as a drone can also implement the method implemented by the above control terminal, and details are not described herein.
  • the photographing device is configured on the smart terminal; acquiring the posture information of the photographing device during video capture comprises: acquiring the posture information of the smart terminal during video capture, and determining the posture information of the photographing device according to the posture information of the smart terminal.
  • the smart terminal may be, for example, a device that can be configured with a camera, such as a smart phone, a tablet computer, or a wearable device, and is not limited in this embodiment of the present invention.
  • the camera may be fixedly disposed on the smart terminal.
  • in that case, the posture information of the photographing device can be represented by the posture information of the smart terminal.
  • alternatively, the camera may be rotatably disposed on the smart terminal.
  • for example, the camera of the smartphone can be rotated back and forth by means of a screw mechanism or the like, enabling both self-portrait and normal shooting.
  • in that case, the posture information of the smart terminal may include the posture information of the smart terminal itself and the rotation information of the mechanism, and the posture information of the camera may be represented by both together.
  • the posture information of the photographing device may indicate the orientation attribute information of the photographing device during each shooting period. For example, for a 10-second shooting period, the control terminal can take the first second of shooting as the start point and the 10th second as the end point, and use the posture information recorded between the start point and the end point as the orientation attribute information of the photographing device during this shooting period.
  • the orientation attribute may include posture information of a plurality of time points (or time periods depending on a manner in which the photographing means records posture information).
  • for step S403, reference may be made to the description of step S302 in the foregoing method embodiment; details are not repeated here.
  • the motion attribute is specifically a motion trajectory characteristic of the camera.
  • the motion trajectory characteristic can be used to indicate that the motion of the camera is a smooth movement.
  • the control terminal may determine the orientation attribute information corresponding to the target video segment as the motion attribute that can represent the smooth motion after determining the target video segment.
  • the motion attribute may include: at least one of in-situ rotation, straight-line forward, straight-line backward, curved-movement shooting, backward flight away while rising, and backward flight away horizontally, corresponding to all scenes; and/or at least one of left surround shooting, right surround shooting, following shooting, and parallel shooting, corresponding to a scene in which the subject is locked.
  • the in-situ rotation may include rotating in situ around the y-axis (also called in-situ yaw), rotating around the x-axis (also called in-situ pitch).
  • the motion attribute may also include, corresponding to all scenes: straight-line forward while pitching down around the x-axis (also called straight forward, pitch looking down), straight-line forward while pitching up around the x-axis (also called straight forward, pitch looking up), straight-line backward while pitching down around the x-axis (also called straight backward, pitch looking down), straight-line backward while pitching up around the x-axis (also called straight backward, pitch looking up), straight-line pan to the left, straight-line pan to the right, rising with a top-down view, rising with a level view, rising with a top-down view while rotating around the y-axis (also called rising top view with yaw), descending with a top-down view, descending with a level view, and descending while rotating around the y-axis (also called descending top view with yaw).
  • when determining the motion attribute according to the orientation attribute information corresponding to the target video segment, the method specifically includes: matching the change trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determining the corresponding shooting scene according to the motion attribute.
  • the shooting scene includes normal flight and tracking shooting, pointing flight, and the like.
  • the pre-stored motion attribute may be a motion attribute pre-stored in the control terminal.
  • the control terminal can determine, through repeated video processing, the motion attributes that can represent smooth motion, and store those motion attributes in the form of files in the control terminal.
  • the control terminal may match the change trend of the orientation attribute information corresponding to the target video segment with the pre-stored motion attributes; if the change trend is the same as that of motion attribute A (or the similarity reaches a similarity threshold, for example 90% or 95%), motion attribute A may be determined as the motion attribute of the target video segment, and the shooting scene corresponding to that motion attribute may be further determined.
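The trend-matching step could be sketched as below; the coarse +1/0/-1 trend encoding and the template layout are assumptions used only to illustrate the similarity-threshold comparison:

```python
def trend(signal, eps=1.0):
    """Reduce a sequence of yaw samples to a coarse change trend:
    +1 rising, -1 falling, 0 roughly constant per step."""
    out = []
    for a, b in zip(signal, signal[1:]):
        d = b - a
        out.append(0 if abs(d) < eps else (1 if d > 0 else -1))
    return out

def match_motion_attribute(segment_yaw, templates, similarity_threshold=0.9):
    """Match the segment's change trend against pre-stored motion-attribute
    trends; return the first attribute whose similarity reaches the threshold."""
    seg = trend(segment_yaw)
    for name, tmpl in templates.items():
        if len(tmpl) != len(seg):
            continue
        similarity = sum(a == b for a, b in zip(seg, tmpl)) / len(seg)
        if similarity >= similarity_threshold:
            return name
    return None

# Hypothetical pre-stored trends for two motion attributes:
templates = {"in-situ yaw": [1, 1, 1, 1], "straight forward": [0, 0, 0, 0]}
attr = match_motion_attribute([0.0, 3.0, 6.0, 9.0, 12.0], templates)
```

A steadily increasing yaw matches the "in-situ yaw" template with similarity 1.0, above the 90% threshold the embodiment mentions.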
  • the control terminal may play the target video segment according to the motion attribute, and may add attribute tags to the target video clip on the playback interface. For example, for a video segment whose motion attribute has been determined, the control terminal may display a highlight button on its display interface.
  • for the specific implementation of step S405, reference may be made to step S303 in the foregoing method embodiment; details are not repeated here.
  • in the embodiment of the present invention, videos whose duration exceeds the preset duration threshold are filtered out; the posture information of the shooting device during video capture is acquired, and the orientation attribute information of each shooting period is represented by that posture information; target video segments satisfying the smooth motion condition are determined from the video according to the orientation attribute information; the motion attribute is determined according to the orientation attribute information corresponding to each target video segment; and a video is synthesized from at least one target video segment. Editing of the video can thus be implemented without manual operation, ensuring that the clipped segments are smooth video segments, which is convenient to operate and enriches the post-processing of video to a certain extent.
  • the synthesis may be performed according to the motion attribute of the target video segment, so that the synthesized video conforms to the motion combination rule or the user's viewing habit.
  • FIG. 5 is a schematic flowchart of still another video processing method according to an embodiment of the present invention.
  • the video processing method shown in the embodiment of the present invention may include:
  • control terminal may determine each frame of the video image in the target video segment, and then perform image analysis on the video image of the adjacent frame to determine the motion attribute.
  • performing image analysis on a video frame in the target video segment to determine a motion attribute specifically: determining a motion attribute according to a change in a feature point between adjacent frames in the target video segment, and according to The motion attribute determines a corresponding shooting scene.
  • the control terminal can perform feature comparison on video images of adjacent frames in the target video segment, extract feature points, and then determine the positions of the feature points in the two frames. If the position change is within a preset range, the motion trajectory characteristic of the shooting device during capture of the target video segment, that is, the motion attribute, can be determined.
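A minimal sketch of the adjacent-frame feature-point check, assuming the feature points have already been matched between the two frames (a production system would obtain them with a feature detector or optical-flow tracker); all names are illustrative:

```python
def point_displacements(points_a, points_b):
    """Displacement of each matched feature point between two adjacent frames.
    points_a / points_b: lists of (x, y) pixel positions for the same
    features in frame k and frame k+1."""
    return [((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5
            for (xa, ya), (xb, yb) in zip(points_a, points_b)]

def motion_is_smooth(points_a, points_b, max_shift=10.0):
    """If every matched point moved less than max_shift pixels, the camera's
    motion between these frames is treated as within the preset range."""
    return all(d < max_shift for d in point_displacements(points_a, points_b))

frame_k  = [(100, 100), (200, 150), (300, 120)]
frame_k1 = [(103, 101), (203, 151), (303, 121)]
smooth = motion_is_smooth(frame_k, frame_k1)
```

Applying this check to every adjacent frame pair in a segment yields an image-based estimate of the camera's motion trajectory characteristic, complementing the posture-based analysis.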
  • the control terminal may determine the orientation attribute information corresponding to the target video segment as a motion attribute that can represent smooth motion, and may save the motion attribute and the shooting scene corresponding to it as an attribute file; the information in the attribute file can represent the attribute information of the target video segment.
  • the control terminal may automatically add the attribute information to the target video segment according to the attribute file and the corresponding time information.
  • control terminal may also add the attribute information to the target video segment according to the time information when receiving the add instruction (for example, receiving the added attribute operation of the user).
  • after determining the target video segments, the control terminal may determine at least one target video segment from among them.
  • the user may select at least one target video segment through a human-machine interaction interface, and the control terminal may use the target video segment selected by the user as the determined at least one target video segment.
  • the control terminal may determine the at least one target video segment according to the motion attribute of the target video segment. For example, the control terminal may select the target video segment with the same motion attribute as the determined at least one target video segment, or the control terminal may also select the target video segment with the similar motion attribute as the determined at least one target video segment, and the like.
  • the embodiment of the present invention does not impose any limitation on this.
  • synthesizing the video according to the motion attribute specifically includes: sorting the determined at least one target video segment according to the motion attribute, in combination with the motion law of the object and/or the viewing habits of the user, and synthesizing the video.
  • for example, suppose the motion attribute of each determined video segment is straight-line forward (an all-scene attribute); the object in the captured picture then appears to the user to move straight backward. The control terminal can therefore sort the at least one target video segment according to this backward motion law and the user's viewing habits, and synthesize the video.
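  • the sorting step can be illustrated as follows; the preferred attribute order is an assumed stand-in for whatever motion law or viewing habit the implementation encodes, and the field names are hypothetical:

```python
def order_for_synthesis(segments,
                        preferred_order=("straight_forward",
                                         "orbit_left",
                                         "ascend_away")):
    """Sort segments so the cut follows a chosen motion progression;
    within one attribute, keep chronological (start-time) order."""
    rank = {attr: i for i, attr in enumerate(preferred_order)}
    return sorted(segments,
                  key=lambda s: (rank.get(s["motion_attribute"], len(rank)),
                                 s["start"]))
```

segments whose attribute is not in the preferred order simply fall to the end, still in chronological order.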
  • in the embodiment of the present invention, target video segments satisfying the smooth motion condition are determined from the video, image analysis is performed on the target video segments to determine the motion attribute, and the motion attribute and the shooting scene are added to the attribute information of the target video segments.
  • the at least one target video segment is then synthesized into a video according to its motion attribute, so the video can be composed according to the motion trajectory characteristics of the photographing device, better satisfying the user's demand for automated, intelligent video synthesis.
  • the ways of video processing are thereby enriched to some extent.
  • FIG. 6 is a schematic structural diagram of a control terminal according to an embodiment of the present invention.
  • the control terminal described in this embodiment includes:
  • the memory 601 is configured to store program instructions
  • the processor 602 is configured to execute program instructions stored in the memory.
  • the communication component 603 is configured to acquire orientation attribute information of the capturing apparatus during each shooting period in the process of capturing a video;
  • the processor 602 determines, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition, and synthesizes a video according to the determined at least one target video segment.
  • before determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 602 is further configured to: filter, from a video set, videos whose duration is greater than a duration threshold, wherein the video set includes a plurality of captured videos.
  • the video is captured by a photographing device, and the orientation attribute information is represented by posture information of the photographing device;
  • when acquiring the orientation attribute information of the photographing device in each shooting period during video capture, the communication component 603 is specifically configured to: acquire posture information of the photographing device in the process of capturing the video, the posture information including position information and direction information.
  • the photographing device is mounted on a movable device; when acquiring the posture information of the photographing device in the process of capturing the video, the communication component 603 is specifically configured to: acquire posture information of the movable device in the process of capturing the video; and determine the posture information of the photographing device according to the posture information of the movable device.
  • the photographing device is mounted on a drone through a gimbal; when acquiring the posture information of the photographing device in the process of capturing the video, the communication component 603 is specifically configured to: acquire posture information of the drone and posture information of the gimbal in the process of capturing the video; the processor 602 is configured to determine the posture information of the photographing device according to the posture information of the drone and the posture information of the gimbal.
  • the photographing device is configured on a smart terminal; when acquiring the posture information of the photographing device in the process of capturing the video, the communication component 603 is specifically configured to acquire posture information of the smart terminal in the process of capturing the video; the processor 602 is configured to determine the posture information of the photographing device according to the posture information of the smart terminal.
  • a target video segment satisfying the smooth motion condition means that the orientation attribute information of the photographing device changes continuously, where continuous change specifically means that the variation of the orientation attribute information between adjacent time periods is less than a preset value.
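  • the continuous-change test can be sketched as below, assuming one orientation sample per shooting period (here a single yaw angle in degrees) and a preset value `max_delta`; the names and the single-angle simplification are assumptions for illustration:

```python
def smooth_segments(samples, max_delta=5.0, min_len=3):
    """Find runs of consecutive shooting periods whose orientation
    sample changes by less than max_delta between adjacent periods.
    Returns (start_index, end_index_inclusive) pairs; runs shorter
    than min_len periods are discarded."""
    runs, start = [], 0
    for i in range(1, len(samples)):
        if abs(samples[i] - samples[i - 1]) >= max_delta:
            # continuity broken: close the current run if long enough
            if i - start >= min_len:
                runs.append((start, i - 1))
            start = i
    if len(samples) - start >= min_len:
        runs.append((start, len(samples) - 1))
    return runs
```

each returned index range would then be mapped, through time information, onto the corresponding span of the video to yield a target video segment.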
  • the processor 602 is further configured to: determine a motion attribute according to the orientation attribute information corresponding to the target video segment.
  • when determining a motion attribute according to the orientation attribute information corresponding to the target video segment, the processor 602 is specifically configured to: match the change trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determine the corresponding shooting scene according to the motion attribute.
  • after determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 602 is further configured to: perform image analysis on the video frames in the target video segment to determine the motion attribute.
  • when performing image analysis on the video frames in the target video segment to determine the motion attribute, the processor 602 is specifically configured to: determine the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determine the corresponding shooting scene according to the motion attribute.
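  • a toy version of matching an orientation trend against pre-stored motion attributes might look like the following; the attribute labels, inputs, and thresholds are illustrative assumptions, not the patent's stored templates:

```python
def classify_trend(position_deltas, yaw_deltas, tol=0.5):
    """Match a segment's orientation trend against a few pre-stored
    motion-attribute templates. position_deltas: per-period forward
    displacement; yaw_deltas: per-period heading change in degrees."""
    moving = sum(position_deltas)                      # net travel
    turning = sum(abs(d) for d in yaw_deltas) > tol    # net rotation
    if abs(moving) <= tol and turning:
        return "rotate_in_place"
    if moving > tol:
        return "straight_forward"
    if moving < -tol:
        return "straight_backward"
    return "hover"
```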
  • the processor 602 is further configured to: add the motion attribute and the shooting scene to the attribute information of the target video segment.
  • the motion attribute is specifically a motion trajectory characteristic of the photographing device, including: at least one of in-place rotation, straight-line forward, straight-line backward, curved-path shooting, flying backward and away while ascending, and flying backward and away horizontally, which correspond to all scenes; and/or at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, which correspond to a locked-subject scene.
  • when synthesizing a video according to the determined at least one target video segment, the processor 602 is specifically configured to: acquire the determined at least one target video segment and the motion attribute corresponding to the target video segment; and synthesize a video according to the motion attribute.
  • when synthesizing a video according to the motion attribute, the processor 602 is specifically configured to: sort the determined at least one target video segment according to the motion attribute, in combination with the motion law of the object and/or the viewing habits of the user, and synthesize the video.
  • the orientation attribute information of the photographing device in each shooting period is stored separately from the video captured by the photographing device, and is associated with it through time information.
  • the orientation attribute information of the photographing device in each shooting period is stored as an attribute of the video captured by the photographing device, corresponding to each frame of the video.
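  • per-frame storage of the orientation attribute information can be modeled as parallel records, as a simplified sketch; a real implementation would embed such metadata in the video container rather than in Python lists, and the field names here are assumptions:

```python
def attach_orientation(frames, orientation_samples):
    """Pair each video frame with the orientation attribute information
    recorded for its shooting period (one record per frame)."""
    if len(frames) != len(orientation_samples):
        raise ValueError("one orientation record per frame expected")
    return [{"frame": f, "orientation": o}
            for f, o in zip(frames, orientation_samples)]
```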
  • FIG. 7 is a schematic structural diagram of a mobile device according to an embodiment of the present invention.
  • the movable device described in this embodiment includes a movable device body and a photographing device mounted on the movable device body.
  • the movable device further includes:
  • a memory 701 and a processor 702;
  • the memory 701 is configured to store program instructions.
  • the processor 702 is configured to execute the program instructions stored in the memory, and when the program instructions are executed, to: acquire orientation attribute information of the photographing device in each shooting period during video capture; determine, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition; and synthesize a video according to the determined at least one target video segment.
  • the movable device is provided with positioning devices, such as a GPS and an inertial measurement unit, for acquiring the real-time position and heading information (i.e., posture information) of the drone.
  • before determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 702 is further configured to: filter, from a video set, videos whose duration is greater than a duration threshold, wherein the video set includes a plurality of captured videos.
  • the video is captured by a photographing device, and the orientation attribute information is represented by posture information of the photographing device;
  • when acquiring the orientation attribute information of the photographing device in each shooting period during video capture, the processor 702 is specifically configured to: acquire posture information of the photographing device in the process of capturing the video, the posture information including position information and direction information.
  • the photographing device is mounted on the movable device; when acquiring the posture information of the photographing device in the process of capturing the video, the processor 702 is specifically configured to: acquire posture information of the movable device in the process of capturing the video; and determine the posture information of the photographing device according to the posture information of the movable device.
  • the photographing device is mounted on a drone through a gimbal; when acquiring the posture information of the photographing device in the process of capturing the video, the processor 702 is specifically configured to: acquire posture information of the drone and posture information of the gimbal in the process of capturing the video; and determine the posture information of the photographing device according to the posture information of the drone and the posture information of the gimbal.
  • the photographing device is configured on a smart terminal; when acquiring the posture information of the photographing device in the process of capturing the video, the processor 702 is specifically configured to: acquire posture information of the smart terminal in the process of capturing the video; and determine the posture information of the photographing device according to the posture information of the smart terminal.
  • a target video segment satisfying the smooth motion condition means that the orientation attribute information of the photographing device changes continuously, where continuous change specifically means that the variation of the orientation attribute information between adjacent time periods is less than a preset value.
  • the processor 702 is further configured to: determine a motion attribute according to the orientation attribute information corresponding to the target video segment.
  • when determining a motion attribute according to the orientation attribute information corresponding to the target video segment, the processor 702 is specifically configured to: match the change trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determine the corresponding shooting scene according to the motion attribute.
  • after determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 702 is further configured to: perform image analysis on the video frames in the target video segment to determine the motion attribute.
  • when performing image analysis on the video frames in the target video segment to determine the motion attribute, the processor 702 is specifically configured to: determine the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determine the corresponding shooting scene according to the motion attribute.
  • the processor 702 is further configured to: add the motion attribute and the shooting scene to the attribute information of the target video segment.
  • the movable device is an aircraft;
  • the motion attribute is specifically a motion trajectory characteristic of the photographing device, including: at least one of in-place rotation, straight-line forward, straight-line backward, curved-path shooting, flying backward and away while ascending, and flying backward and away horizontally, which correspond to all scenes; and/or at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, which correspond to a locked-subject scene.
  • when synthesizing a video according to the determined at least one target video segment, the processor 702 is specifically configured to: acquire the determined at least one target video segment and the motion attribute corresponding to the target video segment; and synthesize a video according to the motion attribute.
  • when synthesizing a video according to the motion attribute, the processor 702 is specifically configured to: sort the determined at least one target video segment according to the motion attribute, in combination with the motion law of the object and/or the viewing habits of the user, and synthesize the video.
  • the orientation attribute information of the photographing device in each shooting period is stored separately from the video captured by the photographing device, and is associated with it through time information.
  • after the video is synthesized, the movable device can store it in a storage apparatus on the movable device, or transmit it back to the control terminal for the control terminal to play back or store.
  • the orientation attribute information of the photographing device in each shooting period is stored as an attribute of the video captured by the photographing device, corresponding to each frame of the video.
  • those of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium can include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A video processing method, a control terminal, and a movable device. The method includes: acquiring orientation attribute information of a shooting device in each shooting time period during video capture; determining, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition; and synthesizing a video according to the determined at least one target video segment. This can enrich, to some extent, the post-processing options for video.

Description

Video processing method, control terminal, and movable device

Technical Field

The present invention relates to the technical field of image processing, and in particular to a video processing method, a control terminal, and a movable device.

Background

With the continuous development of image processing technology, video no longer has to be obtained by compositing single images on a computer; it can be captured directly by a shooting device (for example, a camera), which enriches the ways of acquiring video. Compared with a single image, a video contains richer content and can be combined with audio and other sound information, so it has increasingly become an important way for people to record daily life and study.

While capturing video, a shooting device may jitter, suddenly drop, or suddenly rise due to the surrounding environment or its own behavior, so the captured footage may be unclear. How to post-process video has therefore become a popular research topic.
Summary

Embodiments of the present invention disclose a video processing method, a control terminal, and a movable device, which can enrich the post-processing options for video to some extent.

A first aspect of the embodiments of the present invention discloses a video processing method, including:

acquiring orientation attribute information of a shooting device in each shooting time period during video capture;

determining, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition;

synthesizing a video according to the determined at least one target video segment.

A second aspect of the embodiments of the present invention discloses a control terminal, including a communication element for communicating with a controlled device, and further including a memory and a processor;

the memory is configured to store program instructions;

the processor is configured to execute the program instructions stored in the memory, and when the program instructions are executed, to:

acquire orientation attribute information of a shooting device in each shooting time period during video capture;

determine, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition;

synthesize a video according to the determined at least one target video segment.

A third aspect of the embodiments of the present invention discloses a movable device, including a movable device body and a shooting device mounted on the movable device body, the movable device further including a memory and a processor;

the memory is configured to store program instructions;

the processor is configured to execute the program instructions stored in the memory, and when the program instructions are executed, to:

acquire orientation attribute information of the shooting device in each shooting time period during video capture;

determine, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition;

synthesize a video according to the determined at least one target video segment.

In the embodiments of the present invention, orientation attribute information of the shooting device in each shooting time period during video capture can be acquired; a target video segment satisfying a smooth motion condition is then determined from the video according to that orientation attribute information; finally, a video is synthesized according to the determined at least one target video segment. Target video segments can thus be extracted automatically based on the orientation attribute information, without manual editing by the user, which enriches the post-processing options for video to some extent.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a scenario for video processing according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of another scenario for video processing according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a video processing method according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present invention;

FIG. 5 is a schematic flowchart of yet another video processing method according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a control terminal according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of a movable device according to an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention.

While capturing video, a shooting device may jitter, suddenly drop, or suddenly rise due to the surrounding environment or its own behavior, causing problems such as picture shake and low pixel quality in the captured video. Post-processing the captured video is therefore very necessary.

In traditional video processing, segments of a video are usually cropped manually into specific video clips, which are then edited, synthesized, and played in a video editor.

For example, referring to FIG. 1: in 101, the video editor can filter out videos longer than a duration threshold (for example, longer than 30 seconds); the filtered videos are long-take videos. In 102, the video editor can display the video frames of the long-take video, and the user can take one frame, video image A, as the start, and another frame, video image B, as the end. In 103, the video editor can crop a video segment out of the long-take video according to that start and end. In 104, the video editor synthesizes the cropped video segments.

However, manual cropping is overly tedious and consumes a great deal of the user's time; moreover, for long videos (for example, longer than 50 seconds or 100 seconds), the error rate of manual cropping increases, reducing the flexibility of video processing.

To solve the above technical problem, an embodiment of the present invention provides a video processing method. Refer to FIG. 2, a schematic diagram of another scenario for video processing according to an embodiment of the present invention. The video may be processed by a movable device (for example, an aircraft, a drone, a flying device, a handheld gimbal, an unmanned vehicle, or an unmanned boat) or by a control terminal (for example, a virtual reality device, a smart terminal, a remote controller, a ground station, or a mobile phone or tablet with a control app). Specifically, the video may be processed by the processor of a movable device such as a drone, or by the video editor of a control terminal, and so on; the embodiments of the present invention impose no limitation on this.
In 201, the video editor can filter out videos longer than a duration threshold (for example, longer than 30 seconds); the filtered videos are long-take videos.

In 202, the video editor can determine a target video segment from the long-take video according to orientation attribute information. For example, the user may first tap the long-take video and enter a "segment editing" page, and the video editor can determine, from the long-take video according to the orientation attribute information, the video segments satisfying a smooth motion condition as target video segments.

In an embodiment, while capturing video (including long-take video), the shooting device can record its own posture information in real time, represent the orientation attribute information by the shooting device's posture information, and associate that orientation attribute information with the video's own timeline, so that the timeline carries the orientation attribute information.

In an embodiment, the video editor can save the target video segments. The user can drag a single target video segment into the playback area to play it.

In 203, the video editor can synthesize the video segments. The video editor can display a highlighted button on the display interface for target video segments satisfying the smooth motion condition; when the user taps the highlighted button, the video editor can extract those target video segments and synthesize the extracted segments.

In an embodiment, upon extracting the target video segments, the video editor can automatically synthesize at least one target video segment. Alternatively, the user can determine at least one target video segment through a human-machine interaction interface, and the video editor can synthesize the at least one target video segment selected by the user.

It can be seen that the above approach automatically extracts target video segments based on orientation attribute information; no manual cropping is needed, the cropped segments are guaranteed to be smooth, the error rate is reduced, operation is convenient, the flexibility of video processing is improved, and the post-processing options for video are enriched to some extent.

For better explanation, the method embodiments of the present application are described below. It should be noted that the execution subject of the method embodiments of the present application may be a movable device or a control terminal. The movable device may be, for example, an aircraft, a drone, a flying device, a handheld gimbal, an unmanned vehicle, or an unmanned boat; the control terminal may be, for example, a virtual reality device, a smart terminal, a remote controller, a ground station, or a mobile phone or tablet with a control app; the embodiments of the present invention impose no limitation on this. For ease of description, the control terminal is used as an example below, but it should be understood that the execution subject of the method embodiments of the present application may also be a movable device.
Refer to FIG. 3, a schematic flowchart of a video processing method according to an embodiment of the present invention. The video processing method described in this embodiment includes:

S301: Acquire orientation attribute information of the shooting device in each shooting time period during video capture.

It should be noted that the orientation attribute information may include the position, heading, and other information of the shooting device (including devices connected to the shooting device) during capture.

In an embodiment, the orientation attribute information of the shooting device in each shooting time period is stored separately from the video captured by the shooting device, and is associated with it through time information.

For example, the orientation attribute information may be stored in an orientation attribute information file, which may include the orientation attribute information and the time information corresponding to it.

In some feasible implementations, the control terminal acquires the orientation attribute information of the shooting device in each shooting time period; specifically, it may acquire the orientation attribute information from the orientation attribute information file.

In an embodiment, the orientation attribute information of the shooting device in each shooting time period may be stored, as an attribute of the video captured by the shooting device, in correspondence with each frame of the video.

For example, while capturing video, the shooting device may record, frame by frame and in real time, the orientation attribute information corresponding to each frame of video image into the video as an attribute of the video.

In some feasible implementations, the control terminal acquires the orientation attribute information of the shooting device in each shooting time period during video capture; specifically, it may acquire the orientation attribute information of each shooting time period from the video.

S302: Determine, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition.

In an embodiment, satisfying the smooth motion condition may mean that the orientation attribute information of the shooting device changes continuously, where continuous change specifically means that the variation of the orientation attribute information between adjacent time periods is less than a preset value.

For example, taking the heading of the shooting device in the orientation attribute information: if the heading is the same across adjacent time periods (for example, 45 degrees to the left), or changes slowly, for example by less than 5 degrees per adjacent time period, it can be determined that the heading of the shooting device changes continuously, that is, that the smooth motion condition is satisfied.

In an embodiment, if the orientation attribute information is acquired from an orientation attribute information file and the orientation attribute information in time period A changes continuously, the time period of the video corresponding to time period A can be determined, and the video segment corresponding to that time period can be taken as the target video segment satisfying the smooth motion condition.

In an embodiment, if the orientation attribute information is recorded in the video in real time and the orientation attribute information in time period B changes continuously, the video segment in time period B can be determined to be the target video segment satisfying the smooth motion condition.

S303: Synthesize a video according to the determined at least one target video segment.

It should be noted that, before synthesizing a video according to the determined at least one target video segment, the user may select at least one target video segment from multiple target video segments.

In an embodiment, the control terminal may display multiple target video segments for the user to choose from; the user may determine at least one target video segment from the displayed segments through human-machine interaction, and the control terminal may synthesize according to the at least one target video segment.

In an embodiment, the control terminal may automatically determine at least one target video segment from the target video segments according to the orientation attribute information of each target video segment, and synthesize according to the at least one video segment.

It can be seen that the embodiment of the present invention acquires orientation attribute information of the shooting device in each shooting time period during video capture, determines from the video, according to that information, a target video segment satisfying a smooth motion condition, and finally synthesizes a video according to the determined at least one target video segment. Target video segments can be extracted automatically based on the orientation attribute information without manual editing by the user, and the cropped segments are guaranteed to be smooth; operation is convenient, and the post-processing options for video are enriched to some extent.
Refer to FIG. 4, a schematic flowchart of another video processing method according to an embodiment of the present invention. The video processing method described in this embodiment of the present invention includes:

S401: Filter, from a video set, videos whose duration is greater than a duration threshold.

The video set includes multiple captured videos.

It should be noted that the control terminal can save multiple captured videos and treat them as the video set.

It should also be noted that the duration threshold may be, for example, 30 seconds, 40 seconds, or 50 seconds; the embodiments of the present invention impose no limitation on this.

The duration threshold may be a default threshold of the control terminal, or it may be set by the user, in which case the control terminal uses the user-set value as the duration threshold.

In some feasible implementations, when a video no longer than the duration threshold is cropped into multiple segments, those segments may be short (for example, 2 seconds, 4 seconds, or 6 seconds) and therefore contain little content; for videos no longer than the duration threshold, the control terminal may thus perform no processing.

In some feasible implementations, the control terminal may first acquire the duration of each video in the video set, and then filter out the videos whose duration is greater than the duration threshold.

S402: Acquire posture information of the shooting device in the process of capturing the video.

The posture information includes position information and direction information.

It should be noted that the position information may be, for example, the position coordinates of the shooting device or the relative position of a device connected to the shooting device; the embodiments of the present invention impose no limitation on this.

It should also be noted that the direction information may refer to the heading of the shooting device, for example its attitude angles (including yaw, pitch, and so on) or its field of view (FOV); the embodiments of the present invention impose no limitation on this.

In some feasible implementations, while capturing the video, the shooting device may record its posture information in real time at time points or over time periods. For example, the shooting device may record the posture information once per second, once every 2 seconds, or once every 10 seconds; the embodiments of the present invention impose no limitation on this.
In an embodiment, the shooting device is mounted on a movable device; acquiring the posture information of the shooting device in the process of capturing the video includes: acquiring posture information of the movable device in the process of capturing the video; and determining the posture information of the shooting device according to the posture information of the movable device.

It should be noted that the movable device may be, for example, a handheld gimbal, an unmanned vehicle, an unmanned boat, a drone, or another device capable of moving; the embodiments of the present invention impose no limitation on this.

It should also be noted that the shooting device may be fixed on the movable device, in which case the posture information of the movable device can represent the posture information of the shooting device. For example, when the movable device moves 40 degrees to the left, the shooting device also moves 40 degrees to the left, so the posture information of the shooting device can be determined from that of the movable device.

In some feasible implementations, the control terminal, the movable device, and the shooting device may be pairwise interconnected through wireless links; the movable device may record its own posture information in real time and send it to the control terminal, and the control terminal can then determine the posture information of the shooting device according to the posture information of the movable device.

For example, the control terminal is a smartphone and the movable device is a drone with a shooting device (for example, a camera) fixed on it; the smartphone can be communicatively connected with the drone and the shooting device. The drone is provided with positioning devices, such as a GPS and an inertial measurement unit, for acquiring the drone's real-time position and heading information (i.e., posture information). The smartphone can acquire the drone's posture information through a communication element, and when the shooting device is stationary relative to the drone, the posture information of the shooting device can be obtained from that of the drone.

It should also be noted that the shooting device may also be movably arranged relative to the movable device. For example, the shooting device may be mounted on the drone through a gimbal, with the rotation of the gimbal moving the shooting device; or the shooting device may be mounted on the drone through carrying equipment other than a gimbal, with the rotation of that equipment moving the shooting device.

In this case, the control terminal acquiring, through the communication element, the posture information of the shooting device in the process of capturing the video includes: acquiring the posture information of the drone and the posture information of the gimbal in the process of capturing the video. The processor then determines the posture information of the shooting device according to the posture information of the drone and the posture information of the gimbal.

It should be noted that, when the shooting device is mounted on the drone through a gimbal, the posture information of the drone and that of the gimbal can jointly determine the posture information of the shooting device. For example, the position information of the drone and the direction information of the gimbal can jointly determine the posture information of the shooting device.

For example, the control terminal is a smartphone, and the shooting device (for example, a camera) is mounted on a drone through a gimbal, the drone being connected to the gimbal; the smartphone can be communicatively connected with the drone and the shooting device. When the smartphone detects that the shooting device is capturing video, it can acquire the drone's posture information and, through the drone, the gimbal's posture information, and determine the shooting device's posture information from the two.

It should also be noted that a movable device such as a drone can also implement the method implemented by the above control terminal, which is not repeated here.
In an embodiment, the shooting device is configured on a smart terminal; acquiring the posture information of the shooting device in the process of capturing the video includes: acquiring posture information of the smart terminal in the process of capturing the video; and determining the posture information of the shooting device according to the posture information of the smart terminal.

It should be noted that the smart terminal may be, for example, a smartphone, a tablet, a wearable device, or another device on which a shooting device can be configured; the embodiments of the present invention impose no limitation on this.

It should also be noted that the shooting device may be configured on the smart terminal by being fixed to it, in which case the posture information of the shooting device can be represented by that of the smart terminal.

It should also be noted that the shooting device may instead be rotatable on the smart terminal; for example, a smartphone camera can rotate back and forth via screws or similar components, enabling both selfie and normal shooting. In this case, the smart terminal's posture information may include the posture information of the smart terminal itself as well as the rotation information of the screws or similar components, which together can represent the posture information of the shooting device.

It should also be noted that the posture information of the shooting device can represent its orientation attribute information in each shooting time period. For example, if the control terminal takes a 10-second period as a shooting time period, it can take the 1st second of shooting as the start point and the 10th second as the end point, and treat the posture information recorded between the start and end points as the orientation attribute information of the shooting device in that shooting time period.

In other words, the orientation attribute may include posture information at multiple time points (or time periods, depending on how the shooting device records posture information).
S403: Determine, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition.

It should be noted that the specific implementation of step S403 can refer to the description of step S302 in the foregoing method embodiment, which is not repeated here.

S404: Determine a motion attribute according to the orientation attribute information corresponding to the target video segment.

In an embodiment, the motion attribute is specifically a motion trajectory characteristic of the shooting device.

It should be noted that the motion trajectory characteristic can be used to indicate that the motion of the shooting device is smooth. After determining the target video segment, the control terminal can determine the orientation attribute information corresponding to the target video segment as a motion attribute that represents smooth motion.

In an embodiment, the motion attribute may include: at least one of in-place rotation, straight-line forward, straight-line backward, curved-path shooting, flying backward and away while ascending, and flying backward and away horizontally, which correspond to all scenes; and/or at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, which correspond to a locked-subject scene.

In-place rotation may include in-place rotation about the y-axis (also called in-place yaw) and in-place rotation about the x-axis (also called in-place pitch).

The motion attribute may also include, for all scenes: straight-line forward while pitching down, straight-line forward while pitching up, straight-line backward while pitching down, straight-line backward while pitching up, straight-line leftward pan shooting, straight-line rightward pan shooting, ascending top-down shooting, ascending level shooting, ascending top-down shooting while rotating about the y-axis (yaw), descending top-down shooting, descending level shooting, descending top-down shooting while rotating about the y-axis (yaw), and so on; the embodiments of the present invention impose no limitation on this.

In an embodiment, determining a motion attribute according to the orientation attribute information corresponding to the target video segment specifically includes: matching the change trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determining the corresponding shooting scene according to the motion attribute. The shooting scene includes normal flight, tracking shooting, tap-to-fly, and so on.

The pre-stored motion attributes may be motion attributes stored in advance in the control terminal. Through many rounds of video processing, the control terminal can determine video attributes that can represent smooth motion, and store those attributes in the control terminal in the form of a file.

For example, the control terminal can match the change trend of the orientation attribute information corresponding to the target video segment against the pre-stored motion attributes; if the trend is the same as that of motion attribute A (or their similarity reaches a similarity threshold, for example 90% or 95%), motion attribute A can be determined to be the motion attribute of the target video segment, and the shooting scene corresponding to that motion attribute can further be determined.

In some feasible embodiments, the control terminal can tag the target video segment with the motion attribute, that is, add an attribute to the target video segment. For example, for the video segments whose motion attribute has been determined, the control terminal can display a highlighted button on its display interface.

S405: Synthesize a video according to the determined at least one target video segment.

It should be noted that the specific implementation of step S405 can refer to step S303 in the foregoing method embodiment, which is not repeated here.

It can be seen that the embodiment of the present invention filters, from the video set, videos longer than the duration threshold, acquires posture information of the shooting device during capture, uses it to represent the orientation attribute information of each shooting time period, determines from the video, according to that information, target video segments satisfying a smooth motion condition, determines a motion attribute from the orientation attribute information corresponding to each target video segment, and synthesizes a video from at least one target video segment. Video editing can thus be achieved without manual operation, and the edited segments are guaranteed to be smooth; operation is convenient, and the post-processing options for video are enriched to some extent. In addition, subsequent synthesis can be performed according to the motion attributes of the target video segments, so that the synthesized video conforms to motion combination laws or the user's viewing habits.
Refer to FIG. 5, a schematic flowchart of yet another video processing method according to an embodiment of the present invention. The video processing method shown in this embodiment may include:

S501: Acquire orientation attribute information of the shooting device in each shooting time period during video capture.

S502: Determine, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition.

It should be noted that the specific implementations of steps S501 and S502 can refer to the descriptions of steps S301 and S302 in the foregoing method embodiment, which are not repeated here.

S503: Perform image analysis on the video frames in the target video segment to determine a motion attribute.

It should be noted that the control terminal can determine each frame of video image in the target video segment and then perform image analysis on the video images of adjacent frames to determine the motion attribute.

In an embodiment, performing image analysis on the video frames in the target video segment to determine the motion attribute specifically includes: determining the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determining the corresponding shooting scene according to the motion attribute.

For example, the control terminal can perform feature comparison on the video images of adjacent frames in the target video segment to extract feature points, and can then determine the change in position of the feature points between the two frames; if the position change falls within a preset position-change range, the motion trajectory characteristic of the shooting device while capturing the target video segment, that is, the motion attribute, can be determined.

S504: Add the motion attribute and the shooting scene to the attribute information of the target video segment.

It should be noted that, after determining the target video segment, the control terminal can determine the orientation attribute information corresponding to the target video segment as a motion attribute representing smooth motion, and can save the motion attribute, together with the shooting scene corresponding to it, as an attribute file; the information in the attribute file can represent the attribute information of the target video segment.

In some feasible implementations, the control terminal can, according to the attribute file, automatically add the attribute information to the target video segment according to the time information. Alternatively, the control terminal can add the attribute information to the target video segment according to the time information upon receiving an add instruction (for example, upon receiving an add-attribute operation from the user).
S505: Acquire the determined at least one target video segment and the motion attribute corresponding to the target video segment.

It should be noted that, after determining the target video segments, the control terminal can determine at least one target video segment from among them.

In some feasible implementations, the user can select at least one target video segment through a human-machine interaction interface, and the control terminal can treat the user's selection as the determined at least one target video segment.

In some feasible implementations, the control terminal can determine the at least one target video segment according to the motion attributes of the target video segments. For example, the control terminal can select segments with the same motion attribute, or segments with similar motion attributes, as the determined at least one target video segment, and so on; the embodiments of the present invention impose no limitation on this.

S506: Synthesize a video according to the motion attribute.

In an embodiment, synthesizing a video according to the motion attribute specifically includes: sorting the determined at least one target video segment according to the motion attribute, in combination with the motion law of the object and/or the viewing habits of the user, and synthesizing the video.

For example, suppose the motion attribute of each determined video segment is straight-line forward under all scenes; the object in the captured picture then appears to the user to move straight backward, so the control terminal can sort the at least one target video segment according to this backward motion law and the user's viewing habits to synthesize the video.

It can be seen that, in the embodiment of the present invention, target video segments satisfying the smooth motion condition are determined from the video, image analysis is performed on them to determine the motion attribute, the motion attribute and the shooting scene are added to the attribute information of the target video segments, and finally the at least one target video segment is synthesized into a video according to its corresponding motion attribute. The video can thus be synthesized according to the motion trajectory characteristics of the shooting device, better satisfying the user's demand for automated and intelligent video synthesis and enriching, to some extent, the ways of video processing.
An embodiment of the present invention provides a control terminal. Refer to FIG. 6, a schematic structural diagram of a control terminal according to an embodiment of the present invention. The control terminal described in this embodiment includes:

a memory 601, a processor 602, and a communication element 603;

the memory 601 is configured to store program instructions;

the processor 602 is configured to execute the program instructions stored in the memory; when the program instructions are executed, the communication element 603 is configured to acquire orientation attribute information of the shooting device in each shooting time period during video capture;

the processor 602 determines, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition, and synthesizes a video according to the determined at least one target video segment.

In an embodiment, before determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 602 is further configured to: filter, from a video set, videos whose duration is greater than a duration threshold, the video set including multiple captured videos.

In an embodiment, the video is captured by a shooting device, and the orientation attribute information is represented by posture information of the shooting device; when acquiring the orientation attribute information of the shooting device in each shooting time period during video capture, the communication element 603 is specifically configured to: acquire posture information of the shooting device in the process of capturing the video, the posture information including position information and direction information.

In an embodiment, the shooting device is mounted on a movable device; when acquiring the posture information of the shooting device in the process of capturing the video, the communication element 603 is specifically configured to: acquire posture information of the movable device in the process of capturing the video; and determine the posture information of the shooting device according to the posture information of the movable device.

In an embodiment, the shooting device is mounted on a drone through a gimbal; when acquiring the posture information of the shooting device in the process of capturing the video, the communication element 603 is specifically configured to: acquire posture information of the drone and posture information of the gimbal in the process of capturing the video; the processor 602 is configured to determine the posture information of the shooting device according to the posture information of the drone and the posture information of the gimbal.

In an embodiment, the shooting device is configured on a smart terminal; when acquiring the posture information of the shooting device in the process of capturing the video, the communication element 603 is specifically configured to: acquire posture information of the smart terminal in the process of capturing the video; the processor 602 is configured to determine the posture information of the shooting device according to the posture information of the smart terminal.

In an embodiment, a target video segment satisfying the smooth motion condition means that the orientation attribute information of the shooting device changes continuously, where continuous change specifically means that the variation of the orientation attribute information between adjacent time periods is less than a preset value.

In an embodiment, after determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 602 is further configured to: determine a motion attribute according to the orientation attribute information corresponding to the target video segment.

In an embodiment, when determining a motion attribute according to the orientation attribute information corresponding to the target video segment, the processor 602 is specifically configured to: match the change trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determine the corresponding shooting scene according to the motion attribute.

In an embodiment, after determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 602 is further configured to: perform image analysis on the video frames in the target video segment to determine a motion attribute.

In an embodiment, when performing image analysis on the video frames in the target video segment to determine the motion attribute, the processor 602 is specifically configured to: determine the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determine the corresponding shooting scene according to the motion attribute.

In an embodiment, the processor 602 is further configured to: add the motion attribute and the shooting scene to the attribute information of the target video segment.

In an embodiment, the motion attribute is specifically a motion trajectory characteristic of the shooting device, including: at least one of in-place rotation, straight-line forward, straight-line backward, curved-path shooting, flying backward and away while ascending, and flying backward and away horizontally, which correspond to all scenes; and/or at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, which correspond to a locked-subject scene.

In an embodiment, when synthesizing a video according to the determined at least one target video segment, the processor 602 is specifically configured to: acquire the determined at least one target video segment and the motion attribute corresponding to the target video segment; and synthesize a video according to the motion attribute.

In an embodiment, when synthesizing a video according to the motion attribute, the processor 602 is specifically configured to: sort the determined at least one target video segment according to the motion attribute, in combination with the motion law of the object and/or the viewing habits of the user, and synthesize the video.

In an embodiment, the orientation attribute information of the shooting device in each shooting time period is stored separately from the video captured by the shooting device, and is associated with it through time information.

In an embodiment, the orientation attribute information of the shooting device in each shooting time period is stored, as an attribute of the video captured by the shooting device, in correspondence with each frame of the video.
An embodiment of the present invention provides a movable device. Refer to FIG. 7, a schematic structural diagram of a movable device according to an embodiment of the present invention. The movable device described in this embodiment includes a movable device body and a shooting device mounted on the movable device body, and further includes:

a memory 701 and a processor 702;

the memory 701 is configured to store program instructions;

the processor 702 is configured to execute the program instructions stored in the memory, and when the program instructions are executed, to:

acquire orientation attribute information of the shooting device in each shooting time period during video capture;

determine, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition;

synthesize a video according to the determined at least one target video segment.

It can be understood that the movable device is provided with positioning devices, such as a GPS and an inertial measurement unit, for acquiring the real-time position and heading information (i.e., posture information) of the drone.

In an embodiment, before determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 702 is further configured to: filter, from a video set, videos whose duration is greater than a duration threshold, the video set including multiple captured videos.
In an embodiment, the video is captured by a shooting device, and the orientation attribute information is represented by posture information of the shooting device; when acquiring the orientation attribute information of the shooting device in each shooting time period during video capture, the processor 702 is specifically configured to: acquire posture information of the shooting device in the process of capturing the video, the posture information including position information and direction information.

In an embodiment, the shooting device is mounted on the movable device; when acquiring the posture information of the shooting device in the process of capturing the video, the processor 702 is specifically configured to: acquire posture information of the movable device in the process of capturing the video; and determine the posture information of the shooting device according to the posture information of the movable device.

In an embodiment, the shooting device is mounted on a drone through a gimbal; when acquiring the posture information of the shooting device in the process of capturing the video, the processor 702 is specifically configured to: acquire posture information of the drone and posture information of the gimbal in the process of capturing the video; and determine the posture information of the shooting device according to the posture information of the drone and the posture information of the gimbal.

In an embodiment, the shooting device is configured on a smart terminal; when acquiring the posture information of the shooting device in the process of capturing the video, the processor 702 is specifically configured to: acquire posture information of the smart terminal in the process of capturing the video; and determine the posture information of the shooting device according to the posture information of the smart terminal.

In an embodiment, a target video segment satisfying the smooth motion condition means that the orientation attribute information of the shooting device changes continuously, where continuous change specifically means that the variation of the orientation attribute information between adjacent time periods is less than a preset value.

In an embodiment, after determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 702 is further configured to: determine a motion attribute according to the orientation attribute information corresponding to the target video segment.

In an embodiment, when determining a motion attribute according to the orientation attribute information corresponding to the target video segment, the processor 702 is specifically configured to: match the change trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determine the corresponding shooting scene according to the motion attribute.

In an embodiment, after determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor 702 is further configured to: perform image analysis on the video frames in the target video segment to determine a motion attribute.

In an embodiment, when performing image analysis on the video frames in the target video segment to determine the motion attribute, the processor 702 is specifically configured to: determine the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determine the corresponding shooting scene according to the motion attribute.

In an embodiment, the processor 702 is further configured to: add the motion attribute and the shooting scene to the attribute information of the target video segment.

In an embodiment, the movable device is an aircraft, and the motion attribute is specifically a motion trajectory characteristic of the shooting device, including: at least one of in-place rotation, straight-line forward, straight-line backward, curved-path shooting, flying backward and away while ascending, and flying backward and away horizontally, which correspond to all scenes; and/or at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, which correspond to a locked-subject scene.

In an embodiment, when synthesizing a video according to the determined at least one target video segment, the processor 702 is specifically configured to: acquire the determined at least one target video segment and the motion attribute corresponding to the target video segment; and synthesize a video according to the motion attribute.

In an embodiment, when synthesizing a video according to the motion attribute, the processor 702 is specifically configured to: sort the determined at least one target video segment according to the motion attribute, in combination with the motion law of the object and/or the viewing habits of the user, and synthesize the video.

In an embodiment, the orientation attribute information of the shooting device in each shooting time period is stored separately from the video captured by the shooting device, and is associated with it through time information.

After the video is synthesized, the movable device can store it in a storage apparatus on the movable device, or transmit it back to the control terminal for the control terminal to play back or store.

In an embodiment, the orientation attribute information of the shooting device in each shooting time period is stored, as an attribute of the video captured by the shooting device, in correspondence with each frame of the video.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as series of action combinations, but those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

Those of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium can include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.

The video processing method, control terminal, and movable device provided by the embodiments of the present invention have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (51)

  1. A video processing method, comprising:
    acquiring orientation attribute information of a shooting device in each shooting time period during video capture;
    determining, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition;
    synthesizing a video according to the determined at least one target video segment.
  2. The method of claim 1, wherein before the determining, from the video according to the orientation attribute information, of the target video segment that satisfies the smooth motion condition, the method further comprises:
    filtering, from a video set, videos whose duration is greater than a preset duration threshold, wherein the video set comprises a plurality of captured videos.
  3. The method of claim 1 or 2, wherein the video is captured by a shooting device, and the orientation attribute information is represented by posture information of the shooting device;
    the acquiring of the orientation attribute information of the shooting device in each shooting time period during video capture specifically comprises:
    acquiring posture information of the shooting device in the process of capturing the video, the posture information comprising position information and direction information.
  4. The method of claim 3, wherein the shooting device is mounted on a movable device, and the acquiring of the posture information of the shooting device in the process of capturing the video comprises:
    acquiring posture information of the movable device in the process of capturing the video;
    determining the posture information of the shooting device according to the posture information of the movable device.
  5. The method of claim 3 or 4, wherein the shooting device is mounted on a movable device through a gimbal, and the acquiring of the posture information of the shooting device in the process of capturing the video comprises:
    acquiring posture information of the movable device and posture information of the gimbal in the process of capturing the video;
    determining the posture information of the shooting device according to the posture information of the movable device and the posture information of the gimbal.
  6. The method of claim 3, wherein the shooting device is configured on a smart terminal, and the acquiring of the posture information of the shooting device in the process of capturing the video comprises:
    acquiring posture information of the smart terminal in the process of capturing the video;
    determining the posture information of the shooting device according to the posture information of the smart terminal.
  7. The method of any one of claims 1-6, wherein the target video segment satisfying the smooth motion condition means that the orientation attribute information of the shooting device changes continuously, the continuous change specifically being that the variation of the orientation attribute information between adjacent time periods is less than a preset value.
  8. The method of any one of claims 1-7, wherein after the determining, from the video according to the orientation attribute information, of the target video segment that satisfies the smooth motion condition, the method further comprises:
    determining a motion attribute according to the orientation attribute information corresponding to the target video segment.
  9. The method of claim 8, wherein the determining of a motion attribute according to the orientation attribute information corresponding to the target video segment specifically comprises:
    matching the change trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determining the corresponding shooting scene according to the motion attribute.
  10. The method of any one of claims 1-7, wherein after the determining, from the video according to the orientation attribute information, of the target video segment that satisfies the smooth motion condition, the method further comprises:
    performing image analysis on the video frames in the target video segment to determine a motion attribute.
  11. The method of claim 10, wherein the performing of image analysis on the video frames in the target video segment to determine the motion attribute specifically comprises:
    determining the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determining the corresponding shooting scene according to the motion attribute.
  12. The method of any one of claims 9-11, further comprising:
    adding the motion attribute and the shooting scene to attribute information of the target video segment.
  13. The method of any one of claims 8-12, wherein the shooting device is arranged on an aircraft, and the motion attribute is specifically a motion trajectory characteristic of the shooting device, comprising:
    at least one of in-place rotation, straight-line forward, straight-line backward, curved-path shooting, flying backward and away while ascending, and flying backward and away horizontally, corresponding to all scenes; and/or
    at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, corresponding to a locked-subject scene.
  14. The method of any one of claims 8-13, wherein the synthesizing of a video according to the determined at least one target video segment comprises:
    acquiring the determined at least one target video segment and the motion attribute corresponding to the target video segment;
    synthesizing a video according to the motion attribute.
  15. The method of claim 14, wherein the synthesizing of a video according to the motion attribute specifically comprises:
    sorting at least one target video segment according to the motion attribute, in combination with a motion law of an object and/or viewing habits of a user, and synthesizing the video.
  16. The method of any one of claims 1-15, wherein the orientation attribute information of the shooting device in each shooting time period is stored separately from the video captured by the shooting device, and is associated with it through time information.
  17. The method of any one of claims 1-16, wherein the orientation attribute information of the shooting device in each shooting time period is stored, as an attribute of the video captured by the shooting device, in correspondence with each frame of the video.
  18. A control terminal, comprising a communication element for communicating with a controlled device, and further comprising a memory and a processor;
    the memory is configured to store program instructions;
    the processor is configured to execute the program instructions stored in the memory, and when the program instructions are executed, to:
    acquire, through the communication element, orientation attribute information of a shooting device in each shooting time period during video capture;
    determine, by the processor, from the video according to the orientation attribute information, a target video segment that satisfies a smooth motion condition, and synthesize a video according to the determined at least one target video segment.
  19. The control terminal of claim 18, wherein before determining, from the video according to the orientation attribute information, the target video segment that satisfies the smooth motion condition, the processor is further configured to:
    filter, from a video set, videos whose duration is greater than a duration threshold, wherein the video set comprises a plurality of captured videos.
  20. The control terminal of claim 18 or 19, wherein the video is captured by a shooting device, and the orientation attribute information is represented by posture information of the shooting device; when acquiring the orientation attribute information of the shooting device in each shooting time period during video capture, the communication element is specifically configured to:
    acquire posture information of the shooting device in the process of capturing the video, the posture information comprising position information and direction information.
  21. The control terminal of claim 20, wherein the shooting device is mounted on a movable device;
    when acquiring the posture information of the shooting device in the process of capturing the video, the communication element is specifically configured to:
    acquire posture information of the movable device in the process of capturing the video;
    determine the posture information of the shooting device according to the posture information of the movable device.
  22. The control terminal of claim 20 or 21, wherein the shooting device is mounted on a movable device through a gimbal;
    when acquiring the posture information of the shooting device in the process of capturing the video, the communication element is specifically configured to:
    acquire posture information of the movable device and posture information of the gimbal in the process of capturing the video;
    determine the posture information of the shooting device according to the posture information of the movable device and the posture information of the gimbal.
  23. The control terminal of claim 20, wherein the shooting device is configured on a smart terminal; when acquiring the posture information of the shooting device in the process of capturing the video, the communication element is specifically configured to:
    acquire posture information of the smart terminal in the process of capturing the video;
    determine the posture information of the shooting device according to the posture information of the smart terminal.
  24. The control terminal of any one of claims 18-23, wherein the target video segment satisfying the smooth motion condition means that the orientation attribute information of the shooting device changes continuously, the continuous change specifically being that the variation of the orientation attribute information between adjacent time periods is less than a preset value.
  25. 如权利要求18-24任一项所述的控制终端,其特征在于,所述处理器用于根据所述方位属性信息从所述视频中确定满足平滑运动条件的目标视频片段之后,还用于:
    根据所述目标视频片段对应的方位属性信息确定运动属性。
  26. 如权利要求25所述的控制终端,其特征在于,所述处理器用于根据目 标视频片段对应的方位属性信息确定运动属性时,具体用于:
    根据所述目标视频片段对应的方位属性信息的变化趋势与预存的运动属性相匹配,以确定所述目标视频片段的运动属性,并根据所述运动属性确定对应的拍摄场景。
  27. 如权利要求18-24任一项所述的控制终端,其特征在于,所述处理器用于根据所述方位属性信息从所述视频中确定满足平滑运动条件的目标视频片段之后,还用于:
    对所述目标视频片段中视频帧进行图像分析,确定运动属性。
  28. 如权利要求27所述的控制终端,其特征在于,所述处理器用于对所述目标视频片段中视频帧进行图像分析,确定运动属性时,具体用于:
    根据目标视频片段中相邻帧之间的特征点的变化,确定运动属性,并根据所述运动属性确定对应的拍摄场景。
  29. 如权利要求26-28任一项所述的控制终端,其特征在于,所述处理器还用于:
    将所述运动属性和拍摄场景加入所述目标视频片段的属性信息中。
  30. 如权利要求25-29任一项所述的控制终端,其特征在于,所述拍摄装置设置于飞行器上,所述运动属性具体为拍摄装置的运动轨迹特性,包括:
    所有场景所对应的原地旋转、直线向前、直线向后、曲线移动拍摄、倒飞远离上升拍摄、倒飞水平远离拍摄中的至少一种;和/或
    锁定拍摄对象场景所对应的往左环绕拍摄、往右环绕拍摄、跟随拍摄、平行拍摄中的至少一种。
  31. 如权利要求25-30任一项所述的控制终端,其特征在于,所述处理器用于根据所述确定的至少一个目标视频片段合成视频时,具体用于:
    获取所述确定的至少一个目标视频片段以及所述目标视频片段对应的运 动属性;
    依据所述运动属性合成视频。
  32. 如权利要求31所述的控制终端,其特征在于,所述处理器依据所述运动属性合成视频时,具体用于:
    依据所述运动属性,并结合物体运动规律和/或用户的观看习惯对所述确定的至少一个目标视频片段进行排序并合成视频。
  33. 如权利要求18-32任一项所述的控制终端,其特征在于,所述拍摄装置在各个拍摄时间段的方位属性信息与所述拍摄装置拍摄的视频分开存放,并通过时间信息对应。
  34. 如权利要求18-33任一项所述的控制终端,其特征在于,所述拍摄装置在各个拍摄时间段的方位属性信息作为所述拍摄装置拍摄的视频的属性,与所述视频的每一帧对应存储。
  35. A movable device, comprising a movable device body and a shooting device mounted on the movable device body, wherein the movable device further comprises a memory and a processor;
    the memory being configured to store program instructions;
    the processor being configured to execute the program instructions stored in the memory and, when the program instructions are executed, to:
    acquire orientation attribute information of the shooting device in each shooting time segment during shooting of a video;
    determine, according to the orientation attribute information, a target video segment satisfying a smooth motion condition from the video; and
    synthesize a video according to the determined at least one target video segment.
  36. The movable device according to claim 35, wherein before determining, according to the orientation attribute information, a target video segment satisfying the smooth motion condition from the video, the processor is further configured to:
    filter, from a video collection, videos whose duration is greater than a duration threshold, the video collection comprising a plurality of previously shot videos.
  37. The movable device according to claim 35 or 36, wherein the video is shot by the shooting device, and the orientation attribute information is represented by attitude information of the shooting device; when acquiring the orientation attribute information of the shooting device in each shooting time segment during shooting of the video, the processor is specifically configured to:
    acquire attitude information of the shooting device during shooting of the video, the attitude information comprising position information and direction information.
  38. The movable device according to claim 37, wherein the shooting device is mounted on the movable device;
    when acquiring the attitude information of the shooting device during shooting of the video, the processor is specifically configured to:
    acquire attitude information of the movable device during shooting of the video; and
    determine the attitude information of the shooting device according to the attitude information of the movable device.
  39. The movable device according to claim 37 or 38, wherein the movable device further comprises a gimbal, and the shooting device is mounted on the movable device via the gimbal;
    when acquiring the attitude information of the shooting device during shooting of the video, the processor is specifically configured to:
    acquire, during shooting of the video, attitude information of the movable device and attitude information of the gimbal; and
    determine the attitude information of the shooting device according to the attitude information of the movable device and the attitude information of the gimbal.
  40. The movable device according to claim 37, wherein the movable device is a smart terminal, and the shooting device is configured on the smart terminal body; when acquiring the attitude information of the shooting device during shooting of the video, the processor is specifically configured to:
    acquire attitude information of the smart terminal during shooting of the video; and
    determine the attitude information of the shooting device according to the attitude information of the smart terminal.
  41. The movable device according to any one of claims 35-40, wherein a target video segment satisfying the smooth motion condition means that the orientation attribute information of the shooting device changes continuously, the continuous change being specifically that the amount of change in the orientation attribute information between adjacent time segments is less than a preset value.
  42. The movable device according to any one of claims 35-41, wherein after determining, according to the orientation attribute information, a target video segment satisfying the smooth motion condition from the video, the processor is further configured to:
    determine a motion attribute according to the orientation attribute information corresponding to the target video segment.
  43. The movable device according to claim 42, wherein when determining a motion attribute according to the orientation attribute information corresponding to the target video segment, the processor is specifically configured to:
    match the variation trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determine a corresponding shooting scene according to the motion attribute.
  44. The movable device according to any one of claims 35-41, wherein after determining, according to the orientation attribute information, a target video segment satisfying the smooth motion condition from the video, the processor is further configured to:
    perform image analysis on video frames in the target video segment to determine a motion attribute.
  45. The movable device according to claim 44, wherein when performing image analysis on video frames in the target video segment to determine a motion attribute, the processor is specifically configured to:
    determine the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determine a corresponding shooting scene according to the motion attribute.
  46. The movable device according to any one of claims 43-45, wherein the processor is further configured to:
    add the motion attribute and the shooting scene to attribute information of the target video segment.
  47. The movable device according to any one of claims 42-46, wherein the movable device is an unmanned aerial vehicle, and the motion attribute is specifically a motion trajectory characteristic of the shooting device, comprising:
    at least one of in-place rotation, straight-line forward, straight-line backward, curved-path moving shooting, backward-flight receding and ascending shooting, and backward-flight horizontal receding shooting, which correspond to all scenes; and/or
    at least one of leftward orbiting shooting, rightward orbiting shooting, follow shooting, and parallel shooting, which correspond to a locked-subject scene.
  48. The movable device according to any one of claims 42-47, wherein when synthesizing a video according to the determined at least one target video segment, the processor is specifically configured to:
    acquire the determined at least one target video segment and the motion attribute corresponding to the target video segment; and
    synthesize a video according to the motion attribute.
  49. The movable device according to claim 48, wherein when synthesizing a video according to the motion attribute, the processor is specifically configured to:
    sort the determined at least one target video segment according to the motion attribute, in combination with object motion laws and/or the user's viewing habits, and synthesize a video.
  50. The movable device according to any one of claims 42-49, wherein the orientation attribute information of the shooting device in each shooting time segment is stored separately from the video shot by the shooting device, and is associated with the video through time information.
  51. The movable device according to any one of claims 35-50, wherein the orientation attribute information of the shooting device in each shooting time segment is stored, as an attribute of the video shot by the shooting device, in correspondence with each frame of the video.
PCT/CN2017/106382 2017-10-16 2017-10-16 Video processing method, control terminal and movable device WO2019075617A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/106382 WO2019075617A1 (zh) 2017-10-16 2017-10-16 Video processing method, control terminal and movable device
CN201780009987.5A CN108702464B (zh) 2017-10-16 2017-10-16 Video processing method, control terminal and movable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/106382 WO2019075617A1 (zh) 2017-10-16 2017-10-16 Video processing method, control terminal and movable device

Publications (1)

Publication Number Publication Date
WO2019075617A1 (zh) 2019-04-25

Family

ID=63844133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/106382 WO2019075617A1 (zh) 2017-10-16 2017-10-16 Video processing method, control terminal and movable device

Country Status (2)

Country Link
CN (1) CN108702464B (zh)
WO (1) WO2019075617A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115701093A (zh) * 2021-07-15 2023-02-07 上海幻电信息科技有限公司 Method for acquiring video shooting information, and method for indicating video shooting and processing

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN109743511B (zh) * 2019-01-03 2021-04-20 苏州佳世达光电有限公司 Method and system for automatically adjusting the display orientation of a playback picture
CN110611840B (zh) * 2019-09-03 2021-11-09 北京奇艺世纪科技有限公司 Video generation method and apparatus, electronic device, and storage medium
WO2021056353A1 (zh) * 2019-09-26 2021-04-01 深圳市大疆创新科技有限公司 Video editing method and terminal device
WO2022061660A1 (zh) * 2020-09-24 2022-03-31 深圳市大疆创新科技有限公司 Video editing method, electronic device, unmanned aerial vehicle, and storage medium
CN113099266B (zh) * 2021-04-02 2023-05-26 云从科技集团股份有限公司 Video fusion method, system, medium, and apparatus based on UAV POS data
CN113438409B (zh) * 2021-05-18 2022-12-20 影石创新科技股份有限公司 Delay calibration method and apparatus, computer device, and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102473172A (zh) * 2009-07-24 2012-05-23 数字标记公司 Improved audio/video methods and systems
CN103262169A (zh) * 2010-12-14 2013-08-21 高通股份有限公司 Video editing device for deleting failed frames
CN105493496A (zh) * 2014-12-14 2016-04-13 深圳市大疆创新科技有限公司 Video processing method, device and image system
CN105830427A (zh) * 2013-10-11 2016-08-03 脸谱公司 Applying video stabilization to multimedia clips
KR101670187B1 (ko) * 2015-10-14 2016-10-27 연세대학교 산학협력단 Method and apparatus for automatic video editing

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
JP3862688B2 (ja) * 2003-02-21 2006-12-27 キヤノン株式会社 Image processing apparatus and image processing method
JP4195991B2 (ja) * 2003-06-18 2008-12-17 パナソニック株式会社 Surveillance video monitoring system, surveillance video generation method, and surveillance video monitoring server
JP5649429B2 (ja) * 2010-12-14 2015-01-07 パナソニックIpマネジメント株式会社 Video processing device, camera device, and video processing method
KR101106576B1 (ko) * 2011-07-01 2012-01-19 (주)올포랜드 Drawing system for aerial photography images minimizing optical error
CN103188431A (zh) * 2011-12-27 2013-07-03 鸿富锦精密工业(深圳)有限公司 System and method for controlling an unmanned aerial vehicle to capture images
CN102967311A (zh) * 2012-11-30 2013-03-13 中国科学院合肥物质科学研究院 Navigation and positioning method based on sky polarization distribution model matching
CN103096043B (zh) * 2013-02-21 2015-08-05 安徽大学 Mine safety monitoring method based on parallel video stitching technology
CN104184961A (zh) * 2013-05-22 2014-12-03 辉达公司 Mobile device and system for generating panoramic video
CN104363385B (zh) * 2014-10-29 2017-05-10 复旦大学 Row-based hardware implementation method for image fusion
CN205017419U (zh) * 2015-09-22 2016-02-03 杨珊珊 Aerial photography device
CN105872367B (zh) * 2016-03-30 2019-01-04 东斓视觉科技发展(北京)有限公司 Video generation method and video shooting device
CN105721788B (zh) * 2016-04-07 2019-06-07 福州瑞芯微电子股份有限公司 Multi-camera electronic device and shooting method thereof
CN106210450B (zh) * 2016-07-20 2019-01-11 罗轶 Multi-channel multi-view big-data video editing method
KR20180066370A (ko) * 2016-12-08 2018-06-19 성현철 Shake-corrected head/body cam for VR



Also Published As

Publication number Publication date
CN108702464B (zh) 2021-03-26
CN108702464A (zh) 2018-10-23

Similar Documents

Publication Publication Date Title
WO2019075617A1 (zh) Video processing method, control terminal and movable device
US11688034B2 (en) Virtual lens simulation for video and photo cropping
US11490054B2 (en) System and method for adjusting an image for a vehicle mounted camera
US10084961B2 (en) Automatic generation of video from spherical content using audio/visual analysis
US11587317B2 (en) Video processing method and terminal device
US10582149B1 (en) Preview streaming of video data
US9578279B1 (en) Preview streaming of video data
WO2019127332A1 (zh) Video data processing method, device, system, and storage medium
US20170264822A1 (en) Mounting Device for Portable Multi-Stream Video Recording Device
CN114520877A (zh) Video recording method, apparatus, and electronic device
AU2019271924B2 (en) System and method for adjusting an image for a vehicle mounted camera
WO2022082454A1 (zh) Video download method, device, system, and computer-readable storage medium
WO2022061660A1 (zh) Video editing method, electronic device, unmanned aerial vehicle, and storage medium
CN113491102A (zh) Zoom video shooting method, shooting system, shooting device, and storage medium
JP2018128804A (ja) Image processing apparatus, image processing program, and image processing method

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17929395

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 17929395

Country of ref document: EP

Kind code of ref document: A1