CN108702464B - Video processing method, control terminal and mobile device - Google Patents


Info

Publication number
CN108702464B
Authority
CN
China
Prior art keywords
shooting
video
attribute
information
motion
Prior art date
Legal status
Expired - Fee Related
Application number
CN201780009987.5A
Other languages
Chinese (zh)
Other versions
CN108702464A (en)
Inventor
苏冠华
黄志聪
张若颖
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN108702464A publication Critical patent/CN108702464A/en
Application granted granted Critical
Publication of CN108702464B publication Critical patent/CN108702464B/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/60 — Control of cameras or camera modules (under H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof)
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects (under H04N5/222 — Studio circuitry; studio devices; studio equipment)
    • H04N5/265 — Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A video processing method, a control terminal, and a mobile device are provided. The method includes: acquiring orientation attribute information of a shooting device for each shooting time period while a video is being shot; determining, from the video and according to the orientation attribute information, target video segments that satisfy a smooth motion condition; and synthesizing a video from at least one determined target video segment. This enriches, to a certain extent, the ways in which video can be post-processed.

Description

Video processing method, control terminal and mobile device
Technical Field
The invention relates to the technical field of image processing, in particular to a video processing method, a control terminal and a mobile device.
Background
With the continuous development of image processing technology, video is no longer obtained only by computer synthesis of individual images; it can also be shot directly with a shooting device (such as a still camera or a video camera), which enriches the ways in which video is acquired.
While a shooting device is shooting a video, environmental factors or the device itself may cause jitter, sudden drops, sudden rises, and similar problems, so that the captured footage is unclear. How to post-process such video has therefore become a popular research topic.
Disclosure of Invention
The embodiment of the invention discloses a video processing method, a control terminal and a mobile device, which can enrich the post-processing mode of a video to a certain extent.
The first aspect of the embodiments of the present invention discloses a video processing method, including:
acquiring orientation attribute information of a shooting device for each shooting time period while a video is being shot;
determining, from the video and according to the orientation attribute information, a target video segment that satisfies a smooth motion condition; and
synthesizing a video from the determined at least one target video segment.
A second aspect of the embodiments of the present invention discloses a control terminal, including a communication element configured to communicate with a controlled device, and further including a memory and a processor;
the memory is configured to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when executed, is configured to:
acquiring orientation attribute information of a shooting device for each shooting time period while a video is being shot;
determining, from the video and according to the orientation attribute information, a target video segment that satisfies a smooth motion condition; and
synthesizing a video from the determined at least one target video segment.
A third aspect of the embodiments of the present invention discloses a mobile device, including a mobile device body and a shooting device mounted on the mobile device body, the mobile device further including a memory and a processor;
the memory is configured to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when executed, is configured to:
acquiring orientation attribute information of a shooting device for each shooting time period while a video is being shot;
determining, from the video and according to the orientation attribute information, a target video segment that satisfies a smooth motion condition; and
synthesizing a video from the determined at least one target video segment.
In the embodiments of the present invention, the orientation attribute information of the shooting device for each shooting time period while the video is shot can be acquired; a target video segment that satisfies a smooth motion condition is then determined from the video according to the orientation attribute information; and finally a video is synthesized from at least one determined target video segment.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. The drawings described here are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of a scenario for video processing according to an embodiment of the present invention;
fig. 2 is a schematic view of another scenario for video processing according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of another video processing method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of another video processing method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a control terminal according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a mobile device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
While shooting a video, the shooting device may jitter, slide down suddenly, rise suddenly, and so on due to environmental factors or the device itself, causing problems such as picture shake and low pixel values that leave the footage unclear; post-processing of the shot video is therefore necessary.
In conventional video processing, a user manually crops specific segments out of a video, and the manually cropped segments are then trimmed, combined, played back, and so on in a video editor.
For example, referring to Fig. 1: in 101, the video editor may screen out videos longer than a duration threshold (e.g., longer than 30 seconds); the screened videos are long videos. In 102, the video editor may display the frames of the long video, and the user may mark one video image A in the long video as the start point and one video image B as the end point. In 103, the video editor can crop a video segment out of the long video according to the start and end points. In 104, the video editor performs synthesis processing on the cropped video segments.
However, manual cropping is cumbersome and consumes a great deal of the user's time, and for long videos (e.g., more than 50 or 100 seconds) the user's error rate during manual cropping rises, which reduces the flexibility of video processing.
In order to solve the above technical problem, an embodiment of the present invention provides a video processing method. Please refer to Fig. 2, a schematic view of another scenario for video processing according to an embodiment of the present invention. The video can be processed by a mobile device (e.g., an aircraft, an unmanned aerial vehicle, a flying device, a handheld gimbal, an unmanned vehicle, an unmanned ship, etc.) or by a control terminal (e.g., a virtual reality device, a smart terminal, a remote controller, a ground station, a mobile phone running a control APP, a tablet computer, etc.). Specifically, the video can be processed by a processor of a mobile device such as an unmanned aerial vehicle, or by a video editor of the control terminal; this is not limited here.
In 201, the video editor may screen out videos longer than a duration threshold (e.g., longer than 30 seconds); the screened videos are long videos.
At 202, the video editor may determine a target video segment from the long video according to the orientation attribute information. For example, the user may first tap the long video and then enter a [clip] page, and the video editor may determine, from the long video and according to the orientation attribute information, a video segment that satisfies a smooth motion condition as the target video segment.
In one embodiment, the shooting device can record its own attitude information in real time while shooting videos (including long videos), express the orientation attribute information in terms of that attitude information, and associate the orientation attribute information with the video's time axis, so that the time axis carries the orientation attribute information.
In one embodiment, the video editor may save the target video segment. The user can drag a single target video clip to the playing area to play.
In 203, the video editor may perform synthesis processing on the video segments. For each target video segment that satisfies the smooth motion condition, the video editor can display a highlight button on the display interface; when the user taps the highlight button, the video editor extracts the target video segment and performs synthesis processing on the extracted segments.
In one embodiment, the video editor may automatically perform a compositing process on at least one target video segment when the target video segment is extracted. Or, the user may also determine at least one target video segment through the human-computer interaction interface, and the video editor may perform synthesis processing on the at least one target video segment selected by the user.
In this way, target video segments can be extracted automatically from the orientation attribute information without manually cropping the video, the cropped segments are guaranteed to be smooth, the error rate is reduced, the operation is convenient, the flexibility of video processing is improved, and the post-processing of video is enriched to a certain extent.
For better illustration, method embodiments of the present application are described below. It should be noted that the execution subject of the method embodiments of the present application may be a mobile device or a control terminal. The mobile device may be, for example, an aircraft, an unmanned aerial vehicle, a flying device, a handheld gimbal, an unmanned vehicle, or an unmanned ship; the control terminal may be, for example, a virtual reality device, a smart terminal, a remote controller, a ground station, a mobile phone running a control APP, or a tablet computer. For ease of explanation, the control terminal is taken as the example below, but it should be understood that the execution subject of the method embodiments may equally be a mobile device.
Please refer to fig. 3, which is a flowchart illustrating a video processing method according to an embodiment of the present invention. The video processing method described in this embodiment includes:
s301, acquiring azimuth attribute information of the shooting device in each shooting time period in the process of shooting the video.
It should be noted that the orientation attribute information may include information such as the position and orientation of the camera (including the device connected to the camera) during shooting.
In one embodiment, the orientation attribute information of the shooting device for each shooting time period is stored separately from the video shot by the shooting device and is associated with it through time information.
For example, the orientation attribute information may be stored in an orientation attribute information file, and that file may include the orientation attribute information together with the time information corresponding to it.
In some possible embodiments, the control terminal obtains the orientation attribute information of the shooting device for each shooting time period; specifically, it can be obtained from the orientation attribute information file.
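As a concrete illustration, a separately stored orientation attribute information file might be parsed as sketched below; the CSV layout, field names, and units are illustrative assumptions, not a format specified by this disclosure.

```python
import csv
import io

def parse_orientation_file(text):
    """Parse a hypothetical orientation-attribute file: one record per line,
    'timestamp_s,yaw_deg,pitch_deg', describing the shooting device's
    direction at each recorded time."""
    records = []
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].startswith("#"):
            continue  # skip blank lines and header comments
        t, yaw, pitch = float(row[0]), float(row[1]), float(row[2])
        records.append({"t": t, "yaw": yaw, "pitch": pitch})
    return records

sample = "# t,yaw,pitch\n0.0,45.0,-10.0\n1.0,45.5,-10.2\n"
records = parse_orientation_file(sample)
```

Because the records carry their own timestamps, they can later be aligned with the video's time axis, as the embodiment above describes.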
In one embodiment, the orientation attribute information of the photographing apparatus at each photographing time period may be stored in correspondence with each frame of the video as an attribute of the video photographed by the photographing apparatus.
For example, while shooting the video, the shooting device may record in real time, frame by frame, the orientation attribute information corresponding to each frame's video image as an attribute of the video.
In some possible embodiments, the control terminal acquires the orientation attribute information of the shooting device for each shooting time period while the video is shot; specifically, the control terminal acquires the orientation attribute information for each shooting time period from the video itself.
S302, determining a target video segment meeting a smooth motion condition from the video according to the orientation attribute information.
In one embodiment, satisfying the smooth motion condition may mean that the orientation attribute information of the shooting device changes continuously, where "continuously" means that the change in orientation attribute information between adjacent time periods is smaller than a preset value.
For example, taking the direction of the shooting device in the orientation attribute information: if the directions in adjacent time periods are the same (for example, 45 degrees to the left), or the direction changes slowly (for example, by less than 5 degrees per adjacent time period), the direction of the shooting device can be judged to change continuously, that is, the smooth motion condition is satisfied.
In one embodiment, if the orientation attribute information is obtained from the orientation attribute information file and the information within time period A changes continuously, the span of the video corresponding to time period A can be determined from time period A, and the video segment in that span can be taken as a target video segment satisfying the smooth motion condition.
In one embodiment, if the orientation attribute information is recorded in the video in real time and the orientation attribute information in the time period B is continuously changed, the video segment in the time period B can be determined as the target video segment satisfying the smooth motion condition.
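The smooth motion check described above can be sketched as follows; the 5-degree threshold, the per-period sampling, and the yaw-only orientation model are assumptions for illustration.

```python
def find_smooth_segments(yaw_by_period, max_delta=5.0, min_len=2):
    """Scan per-period yaw angles (degrees) and return (start, end) index
    ranges in which every adjacent change stays below max_delta -- i.e.
    ranges satisfying the smooth motion condition."""
    segments, start = [], 0
    for i in range(1, len(yaw_by_period) + 1):
        boundary = (i == len(yaw_by_period)) or \
            abs(yaw_by_period[i] - yaw_by_period[i - 1]) >= max_delta
        if boundary:
            if i - start >= min_len:
                segments.append((start, i - 1))
            start = i
    return segments

# Periods 0-3 change slowly (smooth); the jump to period 4 breaks the run.
yaws = [45.0, 46.0, 47.5, 48.0, 90.0, 91.0]
smooth = find_smooth_segments(yaws)  # [(0, 3), (4, 5)]
```

Each returned index range maps back to a span of the video's time axis, yielding one candidate target video segment per range.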
S303, synthesizing a video according to the determined at least one target video clip.
It should be noted that, before synthesizing a video according to the determined at least one target video clip, the user may select at least one target video clip from the plurality of target video clips.
In one embodiment, the control terminal may display a plurality of target video segments for a user to select, the user may determine at least one target video segment from the displayed plurality of target video segments in a human-computer interaction manner, and the control terminal may perform synthesis processing according to the at least one target video segment.
In one embodiment, the control terminal may automatically determine at least one target video segment from the target video segments according to the orientation attribute information of each target video segment, and perform composition processing according to the at least one target video segment.
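One way to realize the synthesis step is to emit a concat list for an external tool such as ffmpeg (the `inpoint`/`outpoint` directives below are part of ffmpeg's concat demuxer; the file names and time ranges are illustrative assumptions):

```python
def build_concat_list(clips):
    """Build the text of an ffmpeg concat-demuxer list from
    (filename, start_s, end_s) target video segments.  Running
    `ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4` on the saved
    text would stitch the segments into one video without re-encoding."""
    lines = []
    for name, start, end in clips:
        lines.append(f"file '{name}'")
        lines.append(f"inpoint {start:.2f}")
        lines.append(f"outpoint {end:.2f}")
    return "\n".join(lines) + "\n"

# Two target segments selected (by the user or automatically) from one video.
clips = [("flight.mp4", 12.0, 24.5), ("flight.mp4", 40.0, 55.0)]
text = build_concat_list(clips)
```

This is only one possible back end for the synthesis processing; the control terminal could equally merge decoded frames directly.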
In summary, by acquiring the orientation attribute information of the shooting device for each shooting time period while the video is shot, determining from the video, according to that information, the target video segments that satisfy the smooth motion condition, and finally synthesizing a video from at least one determined target video segment, the target video segments can be extracted automatically from the orientation attribute information. The user does not need to crop the video manually, the cropped segments are guaranteed to be smooth, the operation is convenient, and the post-processing of video is enriched to a certain extent.
Please refer to fig. 4, which is a schematic view illustrating another video processing method according to an embodiment of the present invention. The video processing method described in the embodiment of the present invention includes:
s401, screening out videos with the duration being larger than a duration threshold value from the video set.
And the video set comprises a plurality of shot videos.
It should be noted that the control terminal may store a plurality of captured videos and use the plurality of captured videos as the video set.
It should be further noted that the duration threshold may be, for example, 30, 40, or 50 seconds, which is not limited in the embodiments of the present invention.
The duration threshold may be a default threshold of the control terminal, or may be set by the user, and the control terminal uses a value set by the user as the duration threshold.
In some possible embodiments, because a video whose duration is at or below the duration threshold may yield very short segments when cropped (e.g., 2, 4, or 6 seconds), and thus segments with little content, the control terminal may leave such videos unprocessed.
In some possible embodiments, the control terminal may first acquire a duration corresponding to each video in the video set, and then screen out the videos with the duration greater than the duration threshold according to the duration.
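A minimal sketch of this screening step, assuming each video in the set is described by a record carrying its duration (the field names are illustrative):

```python
def screen_long_videos(video_set, threshold_s=30.0):
    """Return only the videos whose duration exceeds the threshold;
    shorter videos are left unprocessed, since cropping them could yield
    segments too short to contain useful content."""
    return [v for v in video_set if v["duration_s"] > threshold_s]

video_set = [
    {"name": "a.mp4", "duration_s": 12.0},
    {"name": "b.mp4", "duration_s": 45.0},
    {"name": "c.mp4", "duration_s": 30.0},  # equal to the threshold: excluded
]
long_videos = screen_long_videos(video_set)
```

Note that a video exactly at the threshold is excluded, matching the "greater than" wording above.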
S402, acquiring attitude information of the shooting device in the process of shooting the video.
Wherein the attitude information includes position information and direction information.
It should be noted that the position information may be, for example, position coordinates of the camera, a relative position of a device connected to the camera, and the like, and the embodiment of the present invention does not limit this.
It should be further noted that the direction information may refer to the direction in which the shooting device points, for example its attitude angles (including a yaw angle and a pitch angle) or its field of view (FOV), which is not limited by the embodiments of the present invention.
In some possible embodiments, during the process of shooting the video, the shooting device may record the posture information of the shooting device in real time according to the time point or the time period. For example, the camera may record the posture information once every second, or record the posture information once every 2 seconds, or record the posture information once every 10 seconds, which is not limited in this embodiment of the present invention.
In one embodiment, the shooting device is mounted on a movable device; acquiring the attitude information of the shooting device while the video is shot then includes: acquiring the attitude information of the movable device while the video is shot, and determining the attitude information of the shooting device from the attitude information of the movable device.
It should be noted that the movable device may be any device capable of moving, for example a handheld gimbal, an unmanned vehicle, an unmanned ship, or an unmanned aerial vehicle.
It should be noted that the camera may be fixed to the movable device, and in this case, the attitude information of the movable device may indicate the attitude information of the camera. For example, when the movable device moves 40 degrees to the left, the camera also moves 40 degrees to the left, and the attitude information of the camera can be determined according to the attitude information of the movable device.
In some possible embodiments, the control terminal may be connected to both the movable device and the shooting device by wireless links. The movable device may record its own attitude information in real time and send it to the control terminal, and the control terminal may determine the shooting device's attitude information from it.
For example, the control terminal is a smartphone and the movable device is an unmanned aerial vehicle (UAV) with a shooting device (for example, a camera) fixed to it; the smartphone can be communicatively connected to the UAV and the shooting device. The UAV is provided with positioning devices, such as a GPS receiver and an inertial measurement unit, for obtaining its real-time position and direction information (i.e., its attitude information). When the smartphone obtains the UAV's attitude information through the communication element, and the shooting device is static relative to the UAV, the shooting device's attitude information can be derived from the UAV's attitude information.
It should be further noted that the shooting device may also be movably disposed on the movable device. For example, the shooting device may be mounted on the UAV through a gimbal and moved by rotating the gimbal; or it may be carried on the UAV by other carrying equipment that is not a gimbal and moved by rotating that equipment.
In this case, the control terminal acquires, through a communication element, the attitude information of the shooting device during video shooting, which includes the attitude information of the UAV and the attitude information of the gimbal while the video is shot; the processor then determines the attitude information of the shooting device from the attitude information of the UAV and the attitude information of the gimbal.
It should be noted that when the shooting device is mounted on the UAV through a gimbal, the attitude information of the UAV and the attitude information of the gimbal together determine the attitude information of the shooting device. For example, the position information of the UAV and the direction information of the gimbal may jointly determine the shooting device's attitude.
For example, the control terminal is a smartphone, and the shooting device (for example, a camera) is carried on the UAV through a gimbal, with the UAV connected to the gimbal; the smartphone can be communicatively connected to the UAV and the shooting device. When the smartphone detects that the shooting device is shooting a video, it can obtain the attitude information of the UAV, obtain the attitude information of the gimbal through the UAV, and determine the shooting device's attitude information from the two.
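As a simplified sketch of composing the UAV's attitude with the gimbal's relative rotation: summing yaw and pitch angles, as below, holds only while the axes remain independent (no roll); a real system would compose rotation matrices or quaternions. The field names are illustrative assumptions.

```python
def camera_attitude(drone_attitude, gimbal_attitude):
    """Combine the UAV body attitude with the gimbal's rotation relative to
    the body to get the shooting device's attitude.  A simplification: yaw
    and pitch are summed independently, with yaw wrapped to [0, 360)."""
    yaw = (drone_attitude["yaw"] + gimbal_attitude["yaw"]) % 360.0
    pitch = drone_attitude["pitch"] + gimbal_attitude["pitch"]
    return {
        "position": drone_attitude["position"],  # camera rides on the UAV
        "yaw": yaw,
        "pitch": pitch,
    }

drone = {"position": (100.0, 50.0, 30.0), "yaw": 350.0, "pitch": 0.0}
gimbal = {"yaw": 20.0, "pitch": -30.0}  # rotation relative to the UAV body
cam = camera_attitude(drone, gimbal)
```

This mirrors the embodiment above: the UAV supplies position, and UAV plus gimbal together supply direction.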
It should be further noted that the method implemented by the control terminal can also be implemented by a mobile device such as an unmanned aerial vehicle, which is not described herein again.
In one embodiment, the shooting device is configured on a smart terminal; acquiring the attitude information of the shooting device while the video is shot then includes: acquiring the attitude information of the smart terminal while the video is shot, and determining the attitude information of the shooting device from the attitude information of the smart terminal.
It should be noted that, the smart terminal may be, for example, a device such as a smart phone, a tablet computer, and a wearable device, which is capable of configuring a camera, and the embodiment of the present invention does not limit this.
The shooting device may be disposed in the smart terminal and fixed to it; in this case, the attitude information of the shooting device may be represented by the attitude information of the smart terminal.
It should be noted that the shooting device may also be configured so that it can rotate on the smart terminal; for example, a smartphone camera can be flipped back and forth by a screw or similar mechanism, enabling both selfie shooting and normal shooting. In this case, the attitude information of the smart terminal may include both the attitude information of the terminal itself and the rotation information of the mechanism, and the shooting device's attitude information may be represented by the two together.
It should be noted that the attitude information of the shooting device can represent its orientation attribute information for each shooting time period. For example, if the control terminal treats every 10 seconds as one shooting time period, it may take the 1st second (the start of shooting) and the 10th second as the period's endpoints and use the attitude information recorded between them as the orientation attribute information of the shooting device for that period.
That is, the orientation attribute information may comprise the attitude information of multiple time points (or time periods, depending on how the shooting device records its attitude).
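The bucketing of recorded attitude samples into shooting time periods can be sketched as follows, assuming each sample carries a timestamp in seconds and a fixed 10-second period length:

```python
def group_by_period(samples, period_s=10.0):
    """Group time-stamped attitude samples into consecutive shooting time
    periods of fixed length; each bucket becomes the orientation attribute
    information for one period."""
    periods = {}
    for sample in samples:
        idx = int(sample["t"] // period_s)  # 0 for 0-10 s, 1 for 10-20 s, ...
        periods.setdefault(idx, []).append(sample)
    return periods

samples = [{"t": t, "yaw": 45.0 + t} for t in (0.0, 3.0, 9.5, 10.0, 15.0)]
periods = group_by_period(samples)
```

Each period's bucket then holds however many attitude records the shooting device happened to log during that interval, matching the "time points or time periods" remark above.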
And S403, determining a target video segment meeting the smooth motion condition from the video according to the orientation attribute information.
It should be noted that, for a specific implementation of step S403, reference may be made to the description of step S302 in the foregoing method embodiment, which is not repeated here.
S404, determining the motion attribute according to the azimuth attribute information corresponding to the target video clip.
In one embodiment, the motion attribute is a motion trajectory characteristic of the camera.
It should be noted that the motion trajectory characteristic can indicate that the motion of the shooting device is smooth. After determining a target video segment, the control terminal may map the orientation attribute information corresponding to that segment to a motion attribute that indicates smooth motion.
In one embodiment, the motion attributes may include: for general scenes, at least one of rotating in place, moving straight forward, moving straight backward, shooting while moving along a curve, flying backward and upward away from the subject, and flying backward horizontally away from the subject; and/or, for scenes with a locked shooting subject, at least one of left surround shooting, right surround shooting, follow shooting, and parallel shooting.
Rotating in place may include rotating in place about the y-axis (also called in-place yaw) and rotating in place about the x-axis (also called in-place pitch).
The motion attributes may also include: shooting while moving straight forward and pitching down about the x-axis; shooting while moving straight forward and pitching up about the x-axis; shooting while moving straight backward and pitching down about the x-axis; shooting while moving straight backward and pitching up about the x-axis; straight leftward translation shooting; straight rightward translation shooting; upward-looking shooting while ascending normally; upward-looking shooting while ascending and rotating about the y-axis (turning while looking up); downward-looking shooting while descending normally; downward-looking shooting while descending and rotating about the y-axis (turning while looking down); and the like. The present invention is not limited to these examples.
In one embodiment, determining the motion attribute according to the orientation attribute information corresponding to the target video segment specifically includes: matching the variation trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determining a corresponding shooting scene according to the motion attribute. The shooting scenes may include ordinary flight, tracking shot, pointing flight, and the like.
The pre-stored motion attribute may be a motion attribute stored in the control terminal in advance. The control terminal may determine, through multiple rounds of video processing, which motion attributes can represent smooth motion, and store those attributes in the control terminal in the form of a file.
For example, the control terminal may match a pre-stored motion attribute against the variation trend of the orientation attribute information corresponding to the target video segment. If the variation trend is the same as that of a pre-stored motion attribute A (or their similarity reaches a similarity threshold, such as 90% or 95%), the control terminal may determine that motion attribute A is the motion attribute of the target video segment, and may further determine the shooting scene corresponding to that motion attribute.
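The trend matching described above can be sketched in Python as follows. This is only an illustration under assumptions not stated in the patent: each shooting time period's orientation attribute is reduced to a single scalar, a "variation trend" is the per-step sign of change, and similarity is the fraction of matching steps compared against a 90%-style threshold; all function and template names are hypothetical.

```python
def trend(values):
    """Reduce a sequence of orientation values to a per-step trend:
    +1 rising, -1 falling, 0 steady."""
    return [(b > a) - (b < a) for a, b in zip(values, values[1:])]

def match_motion_attribute(segment_values, templates, threshold=0.9):
    """Return the name of the pre-stored motion attribute whose trend best
    matches the segment's trend, or None if no similarity reaches the
    threshold (the text suggests thresholds such as 90% or 95%)."""
    seg_trend = trend(segment_values)
    best_name, best_score = None, 0.0
    for name, template_values in templates.items():
        tpl_trend = trend(template_values)
        n = min(len(seg_trend), len(tpl_trend))
        if n == 0:
            continue
        # Similarity: fraction of per-step trends that agree.
        score = sum(a == b for a, b in zip(seg_trend, tpl_trend)) / n
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Illustrative pre-stored attributes (scalar forward-position profiles).
templates = {
    "straight_forward": [0, 1, 2, 3, 4],  # position increases steadily
    "in_place_yaw":     [0, 0, 0, 0, 0],  # position stays constant
}
print(match_motion_attribute([10, 11, 12, 13, 14], templates))  # straight_forward
```

In practice the orientation attribute would be a vector (position plus direction), so the trend comparison would run per component, but the matching idea is the same.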
In some possible embodiments, the control terminal may tag the target video segment according to the motion attribute, i.e., add an attribute to the target video segment. For example, for a video segment of the video for which the motion attribute is determined, the control terminal may display a highlight button on a display interface of the control terminal.
S405, synthesizing a video according to the determined at least one target video clip.
It should be noted that, for a specific implementation manner of the step S405, reference may be made to the step S303 in the foregoing method embodiment, which is not described herein again.
Therefore, the embodiment of the invention screens out, from the video set, videos whose duration exceeds the preset duration, acquires the attitude information of the shooting device during shooting, and uses the attitude information to represent the orientation attribute information of each shooting time period. It then determines, according to the orientation attribute information, the target video segments that satisfy the smooth motion condition, determines the motion attribute from the orientation attribute information corresponding to each target video segment, and synthesizes a video from at least one target video segment. Video editing can thus be achieved without manual operation, the edited video segments are guaranteed to be smooth, the operation is convenient, and the post-processing modes of video are enriched to a certain extent. In addition, subsequent synthesis can be performed according to the motion attributes of the target video segments, so that the synthesized video conforms to motion combination rules or the viewing habits of users.
Referring to fig. 5, a flow chart of another video processing method according to an embodiment of the present invention is shown, where the video processing method according to the embodiment of the present invention includes:
S501, acquiring orientation attribute information of the shooting device in each shooting time period in the process of shooting the video.
S502, determining a target video segment meeting a smooth motion condition from the video according to the orientation attribute information.
It should be noted that, for a specific implementation manner of the steps S501 and S502, reference may be made to the description of the steps S301 and S302 in the foregoing method embodiment, which is not described herein again.
S503, carrying out image analysis on the video frames in the target video clip, and determining the motion attribute.
It should be noted that the control terminal may determine each frame of video image in the target video segment, and then perform image analysis on the video images of the adjacent frames to determine the motion attribute.
In one embodiment, the analyzing the image of the video frame in the target video segment to determine the motion attribute specifically includes: and determining a motion attribute according to the change of the characteristic points between adjacent frames in the target video clip, and determining a corresponding shooting scene according to the motion attribute.
For example, the control terminal may compare the features of the video images of adjacent frames in the target video segment to extract feature points, and may then determine the change in position of those feature points between the two frames. If the position change falls within a preset range, the motion trajectory characteristic, that is, the motion attribute, of the shooting device during the shooting of the target video segment can be determined.
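The feature-point analysis described above can be sketched as follows, assuming the feature points have already been matched between the two adjacent frames (a real system would obtain them with an optical-flow or descriptor matcher, e.g. from OpenCV); the classification rule, thresholds and names below are illustrative assumptions, not the patent's method.

```python
def mean_displacement(pts_prev, pts_next):
    """Average (dx, dy) of matched feature points between adjacent frames.
    pts_prev / pts_next: lists of (x, y) for the same physical points."""
    n = len(pts_prev)
    dx = sum(b[0] - a[0] for a, b in zip(pts_prev, pts_next)) / n
    dy = sum(b[1] - a[1] for a, b in zip(pts_prev, pts_next)) / n
    return dx, dy

def classify_motion(pts_prev, pts_next, threshold=1.0):
    """Crude per-frame-pair classification: dominant horizontal point motion
    suggests a lateral pan of the camera, dominant vertical motion a
    tilt/climb, and small motion a near-static shot."""
    dx, dy = mean_displacement(pts_prev, pts_next)
    if abs(dx) < threshold and abs(dy) < threshold:
        return "static"
    return "pan" if abs(dx) >= abs(dy) else "tilt"

prev_pts = [(10, 10), (50, 40), (90, 70)]
next_pts = [(14, 10), (54, 41), (94, 70)]  # points shifted ~4 px to the right
print(classify_motion(prev_pts, next_pts))  # pan
```

Aggregating such per-pair labels over the whole segment, and checking that each displacement stays within a preset range, yields the segment-level motion attribute.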
S504, adding the motion attributes and the shooting scenes into the attribute information of the target video clip.
It should be noted that, after the target video segment is determined, the control terminal may determine the orientation attribute information corresponding to the target video segment as a motion attribute that may indicate smooth motion, and may store the motion attribute and a shooting scene corresponding to the motion attribute as an attribute file, where information in the attribute file may indicate attribute information of the target video segment.
In some possible embodiments, the control terminal may automatically and correspondingly add the attribute information to the target video clip according to the time information according to the attribute file. Alternatively, the control terminal may correspondingly add the attribute information to the target video segment according to the time information when receiving an add instruction (for example, receiving an attribute adding operation by a user).
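Adding the attribute information to clips "according to the time information" can be sketched as an interval-overlap lookup between the attribute file's records and each clip's time range; the record layout and field names here are assumptions for illustration.

```python
def attach_attributes(segments, attribute_records):
    """Attach motion attribute / shooting scene records to the video
    segments whose time range overlaps the record's time range.

    segments: list of dicts with 'start' and 'end' times (seconds).
    attribute_records: list of dicts with 'start', 'end',
    'motion_attribute' and 'shooting_scene' (the attribute file).
    """
    for seg in segments:
        for rec in attribute_records:
            # Half-open interval overlap test: the record applies here.
            if rec["start"] < seg["end"] and seg["start"] < rec["end"]:
                seg["motion_attribute"] = rec["motion_attribute"]
                seg["shooting_scene"] = rec["shooting_scene"]
    return segments

clips = [{"start": 0, "end": 8}, {"start": 12, "end": 20}]
records = [{"start": 11, "end": 21,
            "motion_attribute": "straight_forward",
            "shooting_scene": "ordinary_flight"}]
print(attach_attributes(clips, records)[1]["motion_attribute"])  # straight_forward
```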
And S505, acquiring the determined at least one target video segment and the motion attribute corresponding to the target video segment.
It should be noted that, after the target video segments are determined, the control terminal may select at least one of them for synthesis.
In some possible embodiments, the user may select at least one target video clip through the human-computer interaction interface, and the control terminal may use the target video clip selected by the user as the determined at least one target video clip.
In some possible embodiments, the control terminal may determine at least one target video segment according to the motion attribute of the target video segment. For example, the control terminal may select a target video segment with the same motion attribute as the determined at least one target video segment, or the control terminal may also select a target video segment with a similar motion attribute as the determined at least one target video segment, and so on, which is not limited in this embodiment of the present invention.
And S506, synthesizing the video according to the motion attributes.
In an embodiment, the synthesizing a video according to the motion attribute specifically includes: and sequencing the determined at least one target video segment according to the motion attribute and by combining with the motion rule of the object and/or the watching habit of the user and synthesizing the video.
For example, suppose the determined motion attributes of the at least one video segment are all straight-line forward flight in a general scene; to the viewer, the objects in the captured picture then appear to move straight backward, so the control terminal may sort the at least one target video segment according to this motion rule and the viewing habits of users before synthesizing the video.
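The ordering step can be sketched as sorting clips by a predefined motion-attribute sequence. The particular order below is an illustrative guess at one "motion rule / viewing habit" (establish the subject, follow it, then pull away); the patent does not prescribe it.

```python
# Hypothetical ordering reflecting a natural viewing progression.
MOTION_ORDER = ["in_situ_rotation", "left_orbit", "follow",
                "straight_forward", "backward_ascending"]

def order_segments(segments):
    """Sort target video segments by the rank of their motion attribute in
    MOTION_ORDER; unknown attributes go last. Python's sort is stable, so
    clips sharing an attribute keep their original (chronological) order."""
    rank = {name: i for i, name in enumerate(MOTION_ORDER)}
    return sorted(segments,
                  key=lambda s: rank.get(s["motion_attribute"],
                                         len(MOTION_ORDER)))

clips = [{"id": 1, "motion_attribute": "straight_forward"},
         {"id": 2, "motion_attribute": "left_orbit"}]
print([c["id"] for c in order_segments(clips)])  # [2, 1]
```

The ordered list would then be concatenated (and optionally transitioned) to produce the synthesized video.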
Therefore, in the embodiment of the invention, the target video segments satisfying the smooth motion condition are determined from the video, image analysis is performed on the target video segments to determine the motion attribute, and the motion attribute and the shooting scene are added to the attribute information of the target video segments. Finally, at least one target video segment is synthesized into a video according to its corresponding motion attribute. A video can thus be synthesized according to the motion trajectory characteristic of the shooting device, which better meets users' demands for automated and intelligent video synthesis and enriches video processing modes to a certain extent.
The embodiment of the invention provides a control terminal. Referring to fig. 6, a schematic structural diagram of a control terminal according to an embodiment of the present invention is shown, where the control terminal described in this embodiment includes:
memory 601, processor 602, and communication component 603;
the memory 601 is used for storing program instructions;
the processor 602 is configured to execute the program instructions stored in the memory, and when the program instructions are executed, the communication component 603 is configured to obtain orientation attribute information of the shooting device in each shooting time period in the process of shooting the video;
the processor 602 determines a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information; and synthesizing a video according to the determined at least one target video segment.
In one embodiment, before the processor 602 is configured to determine a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information, it is further configured to: and screening videos with the duration longer than a duration threshold value from a video set, wherein the video set comprises a plurality of videos obtained through shooting.
In one embodiment, the video is captured by a camera, and the orientation attribute information is represented by pose information of the camera; the communication component 603 is configured to, when acquiring the orientation attribute information of the shooting device in each shooting time period in the process of shooting the video, specifically: and acquiring attitude information of the shooting device in the process of shooting the video, wherein the attitude information comprises position information and direction information.
In one embodiment, the camera is mounted on a mobile device; the communication component 603 is configured to, when acquiring the pose information of the shooting device in the process of shooting the video, specifically: acquiring attitude information of the movable equipment in the process of shooting the video; and determining the attitude information of the shooting device according to the attitude information of the movable equipment.
In one embodiment, the shooting device is mounted on the unmanned aerial vehicle through a gimbal; the communication component 603 is configured to, when acquiring the pose information of the shooting device in the process of shooting the video, specifically: acquire the attitude information of the unmanned aerial vehicle and the attitude information of the gimbal in the process of shooting the video; the processor 602 is configured to determine the attitude information of the shooting device according to the attitude information of the unmanned aerial vehicle and the attitude information of the gimbal.
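Determining the camera attitude from the UAV attitude plus the gimbal attitude amounts to composing two rotations. The sketch below uses simple yaw/pitch/roll angle addition, which is only a first-order approximation valid when the gimbal angles are expressed relative to the aircraft body and axis coupling is ignored; a real implementation would compose rotation matrices or quaternions. Names and conventions are assumptions.

```python
def camera_attitude(uav_attitude, gimbal_attitude):
    """Combine UAV body attitude with gimbal attitude, both given as
    (yaw, pitch, roll) in degrees with gimbal angles relative to the body.
    Plain angle addition per axis; yaw is wrapped to [0, 360)."""
    yaw = (uav_attitude[0] + gimbal_attitude[0]) % 360
    pitch = uav_attitude[1] + gimbal_attitude[1]
    roll = uav_attitude[2] + gimbal_attitude[2]
    return yaw, pitch, roll

# UAV heading 90 degrees with the gimbal pitched 30 degrees down:
print(camera_attitude((90, 0, 0), (0, -30, 0)))  # (90, -30, 0)
```

Combined with the camera's position (from GPS/IMU), this attitude gives the orientation attribute information recorded for each shooting time period.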
In one embodiment, the shooting device is configured on a smart terminal; the communication component 603 is configured to, when acquiring the pose information of the shooting device in the process of shooting the video, specifically: acquiring attitude information of the intelligent terminal in the process of shooting the video; the processor 602 is configured to determine pose information of the camera according to the pose information of the intelligent terminal.
In one embodiment, the target video segment satisfying the smooth motion condition refers to: the azimuth attribute information of the shooting device is continuously changed, and the continuous change is that the variation of the azimuth attribute information between adjacent time periods is smaller than a preset value.
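The smooth motion condition stated here, namely that the variation of the orientation attribute information between adjacent shooting time periods stays below a preset value, can be sketched as a run-grouping pass over per-period orientation values. The scalar representation, minimum length and names are illustrative assumptions.

```python
def find_smooth_segments(orientation, max_delta, min_len=2):
    """Group consecutive shooting time periods whose orientation change
    stays below max_delta into candidate target video segments.

    orientation: one orientation attribute value per shooting time period
    (a scalar here for illustration; in practice a position/direction vector).
    Returns (start_index, end_index) pairs, end inclusive.
    """
    segments = []
    start = 0
    for i in range(1, len(orientation)):
        if abs(orientation[i] - orientation[i - 1]) >= max_delta:
            # Change too large: close the current run if long enough.
            if i - start >= min_len:
                segments.append((start, i - 1))
            start = i
    if len(orientation) - start >= min_len:
        segments.append((start, len(orientation) - 1))
    return segments

# Example: yaw angle sampled once per shooting time period.
yaw = [0, 2, 4, 6, 30, 32, 34]
print(find_smooth_segments(yaw, max_delta=10))  # [(0, 3), (4, 6)]
```

Each returned index range maps back, via the time information, to a target video segment satisfying the smooth motion condition.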
In one embodiment, after the processor 602 is configured to determine a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information, it is further configured to: and determining the motion attribute according to the azimuth attribute information corresponding to the target video segment.
In an embodiment, when the processor 602 is configured to determine the motion attribute according to the orientation attribute information corresponding to the target video segment, specifically, to: and matching the variation trend of the azimuth attribute information corresponding to the target video clip with a pre-stored motion attribute to determine the motion attribute of the target video clip, and determining a corresponding shooting scene according to the motion attribute.
In one embodiment, after the processor 602 is configured to determine a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information, it is further configured to: and carrying out image analysis on the video frames in the target video clip to determine the motion attribute.
In an embodiment, the processor 602 is configured to perform image analysis on a video frame in the target video segment, and when determining the motion attribute, specifically configured to: and determining a motion attribute according to the change of the characteristic points between adjacent frames in the target video clip, and determining a corresponding shooting scene according to the motion attribute.
In one embodiment, the processor 602 is further configured to: and adding the motion attribute and the shooting scene into the attribute information of the target video clip.
In one embodiment, the motion attribute is a motion trajectory characteristic of the shooting device and includes: at least one of in-situ rotation, straight-line forward flight, straight-line backward flight, curved tracking shot, backward-flying ascending receding shot and backward-flying horizontal receding shot, corresponding to general scenes; and/or at least one of left orbit shot, right orbit shot, follow shot and parallel tracking shot, corresponding to a locked-subject scene.
In an embodiment, the processor 602, when being configured to synthesize a video according to the determined at least one target video segment, is specifically configured to: acquiring the determined at least one target video segment and the motion attribute corresponding to the target video segment; and synthesizing the video according to the motion attribute.
In one embodiment, when the processor 602 synthesizes a video according to the motion attribute, the processor is specifically configured to: and sequencing the determined at least one target video segment according to the motion attribute and by combining with the motion rule of the object and/or the watching habit of the user and synthesizing the video.
In one embodiment, the orientation attribute information of the photographing apparatus at each photographing time period is stored separately from the video photographed by the photographing apparatus and is associated with the video through time information.
In one embodiment, the orientation attribute information of the photographing apparatus at each photographing time period is stored in correspondence with each frame of the video as an attribute of the video photographed by the photographing apparatus.
The embodiment of the invention provides a movable device. Referring to fig. 7, which is a schematic structural diagram of a mobile device according to an embodiment of the present invention, the mobile device described in this embodiment includes a mobile device body and a camera mounted on the mobile device body, and the mobile device further includes:
a memory 701 and a processor 702;
the memory 701 is used for storing program instructions;
the processor 702 is configured to execute the program instructions stored in the memory, and when executed, is configured to:
acquiring azimuth attribute information of a shooting device in each shooting time period in the process of shooting a video;
determining a target video segment meeting a smooth motion condition from the video according to the orientation attribute information;
and synthesizing a video according to the determined at least one target video segment.
It can be understood that the movable device is provided with a positioning device, such as a GPS receiver or an inertial measurement unit, for acquiring its real-time position and orientation information (i.e., attitude information).
In one embodiment, before the processor 702 is configured to determine the target video segment satisfying the smooth motion condition from the video according to the orientation attribute information, it is further configured to: and screening videos with the duration longer than a duration threshold value from a video set, wherein the video set comprises a plurality of videos obtained through shooting.
In one embodiment, the video is captured by a camera, and the orientation attribute information is represented by pose information of the camera; the processor 702 is configured to, when acquiring the orientation attribute information of the shooting device in each shooting time period in the process of shooting the video, specifically: and acquiring attitude information of the shooting device in the process of shooting the video, wherein the attitude information comprises position information and direction information.
In one embodiment, the camera is mounted on a mobile device; the processor 702 is configured to, when acquiring pose information of the shooting device in the process of shooting the video, specifically: acquiring attitude information of the movable equipment in the process of shooting the video; and determining the attitude information of the shooting device according to the attitude information of the movable equipment.
In one embodiment, the shooting device is mounted on the unmanned aerial vehicle through a gimbal; the processor 702 is configured to, when acquiring the pose information of the shooting device in the process of shooting the video, specifically: acquire the attitude information of the unmanned aerial vehicle and the attitude information of the gimbal in the process of shooting the video; and determine the attitude information of the shooting device according to the attitude information of the unmanned aerial vehicle and the attitude information of the gimbal.
In one embodiment, the shooting device is configured on a smart terminal; the processor 702 is configured to, when acquiring pose information of the shooting device in the process of shooting the video, specifically: acquiring attitude information of the intelligent terminal in the process of shooting the video; and determining the attitude information of the shooting device according to the attitude information of the intelligent terminal.
In one embodiment, the target video segment satisfying the smooth motion condition refers to: the azimuth attribute information of the shooting device is continuously changed, and the continuous change is that the variation of the azimuth attribute information between adjacent time periods is smaller than a preset value.
In one embodiment, after the processor 702 is configured to determine a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information, it is further configured to: and determining the motion attribute according to the azimuth attribute information corresponding to the target video segment.
In an embodiment, when the processor 702 is configured to determine the motion attribute according to the orientation attribute information corresponding to the target video segment, it is specifically configured to: and matching the variation trend of the azimuth attribute information corresponding to the target video clip with a pre-stored motion attribute to determine the motion attribute of the target video clip, and determining a corresponding shooting scene according to the motion attribute.
In one embodiment, after the processor 702 is configured to determine a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information, it is further configured to: and carrying out image analysis on the video frames in the target video clip to determine the motion attribute.
In an embodiment, the processor 702 is configured to perform image analysis on a video frame in the target video segment, and when determining the motion attribute, specifically configured to: and determining a motion attribute according to the change of the characteristic points between adjacent frames in the target video clip, and determining a corresponding shooting scene according to the motion attribute.
In one embodiment, the processor 702 is further configured to: and adding the motion attribute and the shooting scene into the attribute information of the target video clip.
In one embodiment, the movable device is an aircraft, and the motion attribute is a motion trajectory characteristic of the shooting device, including: at least one of in-situ rotation, straight-line forward flight, straight-line backward flight, curved tracking shot, backward-flying ascending receding shot and backward-flying horizontal receding shot, corresponding to general scenes; and/or at least one of left orbit shot, right orbit shot, follow shot and parallel tracking shot, corresponding to a locked-subject scene.
In an embodiment, when the processor 702 is configured to synthesize a video according to the determined at least one target video segment, it is specifically configured to: acquiring the determined at least one target video segment and the motion attribute corresponding to the target video segment; and synthesizing the video according to the motion attribute.
In one embodiment, when the processor 702 synthesizes a video according to the motion attribute, it is specifically configured to: and sequencing the determined at least one target video segment according to the motion attribute and by combining with the motion rule of the object and/or the watching habit of the user and synthesizing the video.
In one embodiment, the orientation attribute information of the photographing apparatus at each photographing time period is stored separately from the video photographed by the photographing apparatus and is associated with the video through time information.
After the video is composed, the removable device may store the video in a storage device on the removable device or transmit back to the control terminal for playback or storage by the control terminal.
In one embodiment, the orientation attribute information of the photographing apparatus at each photographing time period is stored in correspondence with each frame of the video as an attribute of the video photographed by the photographing apparatus.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations, but those skilled in the art will recognize that the present invention is not limited by the described order of actions, as some steps may be performed in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the actions and modules involved are not necessarily required by the invention.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium; the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
The video processing method, the control terminal and the mobile device provided by the embodiments of the invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method and core idea of the invention. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (48)

1. A video processing method, comprising:
acquiring azimuth attribute information of a shooting device in each shooting time period in the process of shooting a video, wherein the azimuth attribute information is represented by attitude information of the shooting device, and the attitude information comprises position information and direction information;
determining a target video segment meeting a smooth motion condition from the video according to the orientation attribute information;
and synthesizing a video according to the determined at least one target video segment meeting the smooth motion condition.
2. The method of claim 1, wherein prior to determining from the video a target video segment that satisfies a smooth motion condition based on the orientation attribute information, further comprising:
and screening videos with the duration being larger than a preset duration threshold value from a video set, wherein the video set comprises a plurality of videos obtained through shooting.
3. The method of claim 1, wherein the camera is mounted on a mobile device, and the obtaining of the orientation attribute information of the camera at each shooting time period in the process of shooting the video comprises:
acquiring attitude information of the movable equipment in the process of shooting the video;
and determining the attitude information of the shooting device according to the attitude information of the movable equipment.
4. The method of claim 3, wherein the shooting device is mounted on the movable device through a gimbal, and the acquiring of the attitude information of the movable device in the process of shooting the video comprises:
acquiring attitude information of the movable device and attitude information of the gimbal in the process of shooting the video;
and determining the attitude information of the shooting device according to the attitude information of the movable device and the attitude information of the gimbal.
5. The method of claim 3, wherein the camera is configured on a smart terminal, and the obtaining of the pose information of the mobile device during the process of shooting the video comprises:
acquiring attitude information of the intelligent terminal in the process of shooting the video;
and determining the attitude information of the shooting device according to the attitude information of the intelligent terminal.
6. The method according to any one of claims 1-5, wherein the target video segment satisfying the smooth motion condition is: the azimuth attribute information of the shooting device is continuously changed, and the continuous change is that the variation of the azimuth attribute information between adjacent time periods is smaller than a preset value.
7. The method of claim 1, wherein after determining a target video segment from the video that satisfies a smooth motion condition based on the orientation attribute information, further comprising:
and determining the motion attribute according to the azimuth attribute information corresponding to the target video segment.
8. The method of claim 7, wherein the determining the motion attribute according to the orientation attribute information corresponding to the target video segment specifically comprises:
and matching the variation trend of the azimuth attribute information corresponding to the target video clip with a pre-stored motion attribute to determine the motion attribute of the target video clip, and determining a corresponding shooting scene according to the motion attribute.
9. The method of claim 1, wherein after determining a target video segment from the video that satisfies a smooth motion condition based on the orientation attribute information, further comprising:
and carrying out image analysis on the video frames in the target video clip to determine the motion attribute.
10. The method of claim 9, wherein the analyzing the image of the video frame in the target video segment to determine the motion attribute comprises:
and determining a motion attribute according to the change of the characteristic points between adjacent frames in the target video clip, and determining a corresponding shooting scene according to the motion attribute.
11. The method of any one of claims 8-10, further comprising:
and adding the motion attribute and the shooting scene into the attribute information of the target video clip.
12. The method of claim 11, wherein the camera is disposed on an aircraft, and the motion attribute is a motion trajectory characteristic of the camera, and comprises:
at least one of in-situ rotation, straight-line forward flight, straight-line backward flight, curved tracking shot, backward-flying ascending receding shot and backward-flying horizontal receding shot, corresponding to general scenes; and/or
at least one of left orbit shot, right orbit shot, follow shot and parallel tracking shot, corresponding to a locked-subject scene.
13. The method of claim 12, wherein synthesizing a video from the at least one target video segment determined to satisfy the smooth motion condition comprises:
acquiring the at least one determined target video segment meeting the smooth motion condition and the motion attribute corresponding to the target video segment;
and synthesizing the video according to the motion attribute.
14. The method according to claim 13, wherein said synthesizing a video according to said motion attributes comprises:
and sequencing at least one target video segment according to the motion attribute and combining with the motion rule of the object and/or the watching habit of the user and synthesizing the video.
15. The method of claim 1, wherein the azimuth attribute information of the photographing apparatus at each photographing time period is stored separately from the video photographed by the photographing apparatus and is associated with the video through time information.
16. The method according to claim 1, wherein orientation attribute information of the photographing apparatus at each photographing time period is stored in correspondence with each frame of the video as an attribute of the video photographed by the photographing apparatus.
17. A control terminal, comprising a communication element configured to communicate with a controlled device, and further comprising: a memory and a processor;
the memory is configured to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when the program instructions are executed, to:
acquire, through the communication element, orientation attribute information of a photographing device in each photographing time period during shooting of a video, wherein the orientation attribute information is represented by attitude information of the photographing device, and the attitude information comprises position information and direction information; and
determine a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information, and synthesize a video from the determined at least one target video segment satisfying the smooth motion condition.
18. The control terminal of claim 17, wherein before determining the target video segment satisfying the smooth motion condition from the video according to the orientation attribute information, the processor is further configured to:
screen, from a video set, videos whose duration is greater than a duration threshold, wherein the video set comprises a plurality of videos obtained by shooting.
19. The control terminal of claim 17, wherein the photographing device is mounted on a movable device;
when acquiring the orientation attribute information of the photographing device in each photographing time period during shooting of the video, the communication element is specifically configured to:
acquire attitude information of the movable device during shooting of the video; and
determine the attitude information of the photographing device according to the attitude information of the movable device.
20. The control terminal of claim 19, wherein the photographing device is mounted on the movable device via a gimbal;
when acquiring the attitude information of the movable device during shooting of the video, the communication element is specifically configured to:
acquire the attitude information of the movable device and the attitude information of the gimbal during shooting of the video; and
determine the attitude information of the photographing device according to the attitude information of the movable device and the attitude information of the gimbal.
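The pose composition of claim 20 — deriving the photographing device's attitude from the movable device's attitude plus the gimbal's attitude — might look like the following simplified sketch. The flat yaw/pitch addition and the tuple layout are assumptions; a real implementation would compose full rotations (e.g. quaternions) and account for the gimbal's mounting offset:

```python
def camera_pose(device_pose, gimbal_angles):
    """Combine the movable device's pose with the gimbal's angles to get
    the photographing device's pose.

    device_pose: (x, y, z, yaw_deg, pitch_deg) of the movable device.
    gimbal_angles: (yaw_deg, pitch_deg) of the gimbal relative to the body.
    Illustrative simplification: position is taken from the device, and
    yaw/pitch are added directly (yaw wrapped into [0, 360))."""
    x, y, z, dev_yaw, dev_pitch = device_pose
    g_yaw, g_pitch = gimbal_angles
    return (x, y, z, (dev_yaw + g_yaw) % 360.0, dev_pitch + g_pitch)
```

For a handheld smart terminal (claim 21), the gimbal term simply drops out and the terminal's own attitude sensors stand in for `device_pose`.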
21. The control terminal of claim 19, wherein the photographing device is disposed on a smart terminal; when acquiring the attitude information of the movable device during shooting of the video, the communication element is specifically configured to:
acquire attitude information of the smart terminal during shooting of the video; and
determine the attitude information of the photographing device according to the attitude information of the smart terminal.
22. The control terminal of any one of claims 17-21, wherein the target video segment satisfying the smooth motion condition is a segment in which the orientation attribute information of the photographing device changes continuously, where changing continuously means that the variation of the orientation attribute information between adjacent time periods is smaller than a preset value.
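The smooth motion condition defined here — the variation of the orientation attribute information between adjacent time periods stays below a preset value — can be sketched as a run-splitting pass over per-period orientation samples. The tuple layout and the per-component distance metric below are illustrative assumptions:

```python
def find_smooth_segments(orientations, threshold, min_len=2):
    """Split a per-period orientation sequence into maximal runs in which
    the change between adjacent periods stays below `threshold`.

    orientations: list of per-period samples, e.g. (x, y, z, yaw) tuples;
    the component names and metric are illustrative. Returns half-open
    index ranges [start, end) of runs of at least `min_len` periods."""
    def delta(a, b):
        # Largest per-component change between two adjacent periods.
        return max(abs(u - v) for u, v in zip(a, b))

    segments, start = [], 0
    for i in range(1, len(orientations)):
        if delta(orientations[i - 1], orientations[i]) >= threshold:
            if i - start >= min_len:
                segments.append((start, i))
            start = i
    if len(orientations) - start >= min_len:
        segments.append((start, len(orientations)))
    return segments
```

Each returned index range maps back to a candidate target video segment via the per-period time information.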
23. The control terminal of claim 17, wherein after determining the target video segment satisfying the smooth motion condition from the video according to the orientation attribute information, the processor is further configured to:
determine a motion attribute according to the orientation attribute information corresponding to the target video segment.
24. The control terminal of claim 23, wherein, when determining the motion attribute according to the orientation attribute information corresponding to the target video segment, the processor is specifically configured to:
match the variation trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determine a corresponding shooting scene according to the motion attribute.
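The trend-matching step of claim 24 can be illustrated as reducing the orientation change over a segment to a coarse signature and looking it up against pre-stored templates. The axes, signature encoding, and template table below are hypothetical, not the patent's actual encoding:

```python
def sign(v, eps=1e-6):
    return 0 if abs(v) < eps else (1 if v > 0 else -1)

def trend_signature(positions):
    """Reduce a sequence of (x, z) camera positions (x: forward axis,
    z: altitude; an illustrative choice) to the dominant sign of net
    change on each axis."""
    dx = positions[-1][0] - positions[0][0]
    dz = positions[-1][1] - positions[0][1]
    return (sign(dx), sign(dz))

# Hypothetical pre-stored templates mapping a trend to a motion attribute.
TEMPLATES = {
    (1, 0): "straight-line forward",
    (-1, 0): "straight-line backward",
    (-1, 1): "backward-and-upward receding shot",
    (0, 0): "in-place rotation",
}

def match_motion_attribute(positions):
    return TEMPLATES.get(trend_signature(positions), "unknown")
```

A production matcher would compare whole trajectories (e.g. by distance to template curves) rather than endpoint signs, but the lookup structure is the same.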
25. The control terminal of claim 17, wherein after determining the target video segment satisfying the smooth motion condition from the video according to the orientation attribute information, the processor is further configured to:
perform image analysis on the video frames in the target video segment to determine a motion attribute.
26. The control terminal of claim 25, wherein, when performing image analysis on the video frames in the target video segment to determine the motion attribute, the processor is specifically configured to:
determine the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determine a corresponding shooting scene according to the motion attribute.
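The image-analysis route of claim 26 can be sketched as follows: given matched feature points from two adjacent frames, a roughly uniform shift of all points suggests a pan/translation, while points diverging from the image centre suggest the camera moving forward. The decomposition, thresholds, and labels are illustrative assumptions:

```python
def classify_frame_motion(pts_prev, pts_next, eps=0.5):
    """Classify apparent motion from matched feature points of two
    adjacent frames (equal-length lists of (x, y) image coordinates)."""
    n = len(pts_prev)
    cx = sum(p[0] for p in pts_prev) / n
    cy = sum(p[1] for p in pts_prev) / n
    # Mean displacement (the pan component).
    mdx = sum(b[0] - a[0] for a, b in zip(pts_prev, pts_next)) / n
    mdy = sum(b[1] - a[1] for a, b in zip(pts_prev, pts_next)) / n
    # Mean radial expansion after removing the pan component: positive
    # when points move away from the centre (camera moving forward).
    radial = 0.0
    for (ax, ay), (bx, by) in zip(pts_prev, pts_next):
        rx, ry = ax - cx, ay - cy
        norm = (rx * rx + ry * ry) ** 0.5 or 1.0
        radial += ((bx - mdx - ax) * rx + (by - mdy - ay) * ry) / norm
    radial /= n
    if radial > eps:
        return "forward"
    if (mdx * mdx + mdy * mdy) ** 0.5 > eps:
        return "pan"
    return "static"
```

In practice the matched points would come from a feature tracker such as optical flow; this sketch only shows the per-frame-pair classification.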
27. The control terminal of any one of claims 24-26, wherein the processor is further configured to:
add the motion attribute and the shooting scene to the attribute information of the target video segment.
28. The control terminal of claim 27, wherein the photographing device is disposed on an aircraft, and the motion attribute is a motion trajectory characteristic of the photographing device, comprising:
at least one of in-place rotation, straight-line forward flight, straight-line backward flight, curved tracking shooting, backward-and-upward receding shooting, and backward horizontal receding shooting, corresponding to all scenes; and/or
at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, corresponding to a locked-subject scene.
29. The control terminal of claim 28, wherein, when synthesizing the video from the determined at least one target video segment satisfying the smooth motion condition, the processor is specifically configured to:
acquire the determined at least one target video segment satisfying the smooth motion condition and the motion attribute corresponding to each target video segment; and
synthesize the video according to the motion attribute.
30. The control terminal of claim 29, wherein, when synthesizing the video according to the motion attribute, the processor is specifically configured to:
sort the determined at least one target video segment according to the motion attribute, in combination with a motion law of the photographed object and/or a viewing habit of the user, and synthesize the video from the sorted segments.
31. The control terminal of claim 17, wherein the orientation attribute information of the photographing device in each photographing time period is stored separately from the video photographed by the photographing device and is associated with the video through time information.
32. The control terminal of claim 17, wherein the orientation attribute information of the photographing device in each photographing time period is stored in correspondence with each frame of the video, as an attribute of the video photographed by the photographing device.
33. A movable device, comprising a movable device body and a photographing device mounted on the movable device body, and further comprising: a memory and a processor;
the memory is configured to store program instructions;
the processor is configured to execute the program instructions stored in the memory, and when the program instructions are executed, to:
acquire orientation attribute information of the photographing device in each photographing time period during shooting of a video, wherein the orientation attribute information is represented by attitude information of the photographing device, and the attitude information comprises position information and direction information;
determine a target video segment satisfying a smooth motion condition from the video according to the orientation attribute information; and
synthesize a video from the determined at least one target video segment satisfying the smooth motion condition.
34. The movable device of claim 33, wherein before determining the target video segment satisfying the smooth motion condition from the video according to the orientation attribute information, the processor is further configured to:
screen, from a video set, videos whose duration is greater than a duration threshold, wherein the video set comprises a plurality of videos obtained by shooting.
35. The movable device of claim 33, wherein the photographing device is carried on the movable device;
when acquiring the orientation attribute information of the photographing device in each photographing time period during shooting of the video, the processor is specifically configured to:
acquire attitude information of the movable device during shooting of the video; and
determine the attitude information of the photographing device according to the attitude information of the movable device.
36. The movable device of claim 35, further comprising a gimbal, wherein the photographing device is carried on the movable device via the gimbal;
when acquiring the attitude information of the movable device during shooting of the video, the processor is specifically configured to:
acquire the attitude information of the movable device and the attitude information of the gimbal during shooting of the video; and
determine the attitude information of the photographing device according to the attitude information of the movable device and the attitude information of the gimbal.
37. The movable device of claim 35, wherein the movable device is a smart terminal, and the photographing device is disposed on the smart terminal body; when acquiring the attitude information of the movable device during shooting of the video, the processor is specifically configured to:
acquire attitude information of the smart terminal during shooting of the video; and
determine the attitude information of the photographing device according to the attitude information of the smart terminal.
38. The movable device of any one of claims 33-37, wherein the target video segment satisfying the smooth motion condition is a segment in which the orientation attribute information of the photographing device changes continuously, where changing continuously means that the variation of the orientation attribute information between adjacent time periods is smaller than a preset value.
39. The movable device of claim 33, wherein after determining the target video segment satisfying the smooth motion condition from the video according to the orientation attribute information, the processor is further configured to:
determine a motion attribute according to the orientation attribute information corresponding to the target video segment.
40. The movable device of claim 39, wherein, when determining the motion attribute according to the orientation attribute information corresponding to the target video segment, the processor is specifically configured to:
match the variation trend of the orientation attribute information corresponding to the target video segment against pre-stored motion attributes to determine the motion attribute of the target video segment, and determine a corresponding shooting scene according to the motion attribute.
41. The movable device of claim 33, wherein after determining the target video segment satisfying the smooth motion condition from the video according to the orientation attribute information, the processor is further configured to:
perform image analysis on the video frames in the target video segment to determine a motion attribute.
42. The movable device of claim 41, wherein, when performing image analysis on the video frames in the target video segment to determine the motion attribute, the processor is specifically configured to:
determine the motion attribute according to changes of feature points between adjacent frames in the target video segment, and determine a corresponding shooting scene according to the motion attribute.
43. The movable device of any one of claims 40-42, wherein the processor is further configured to:
add the motion attribute and the shooting scene to the attribute information of the target video segment.
44. The movable device of claim 43, wherein the movable device is a drone, and the motion attribute is a motion trajectory characteristic of the photographing device, comprising:
at least one of in-place rotation, straight-line forward flight, straight-line backward flight, curved tracking shooting, backward-and-upward receding shooting, and backward horizontal receding shooting, corresponding to all scenes; and/or
at least one of left-orbit shooting, right-orbit shooting, follow shooting, and parallel shooting, corresponding to a locked-subject scene.
45. The movable device of claim 44, wherein, when synthesizing the video from the determined at least one target video segment satisfying the smooth motion condition, the processor is specifically configured to:
acquire the determined at least one target video segment satisfying the smooth motion condition and the motion attribute corresponding to each target video segment; and
synthesize the video according to the motion attribute.
46. The movable device of claim 45, wherein, when synthesizing the video according to the motion attribute, the processor is specifically configured to:
sort the determined at least one target video segment according to the motion attribute, in combination with a motion law of the photographed object and/or a viewing habit of the user, and synthesize the video from the sorted segments.
47. The movable device of claim 33, wherein the orientation attribute information of the photographing device in each photographing time period is stored separately from the video photographed by the photographing device and is associated with the video through time information.
48. The movable device of claim 33, wherein the orientation attribute information of the photographing device in each photographing time period is stored in correspondence with each frame of the video, as an attribute of the video photographed by the photographing device.
CN201780009987.5A 2017-10-16 2017-10-16 Video processing method, control terminal and mobile device Expired - Fee Related CN108702464B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/106382 WO2019075617A1 (en) 2017-10-16 2017-10-16 Video processing method, control terminal and mobile device

Publications (2)

Publication Number Publication Date
CN108702464A CN108702464A (en) 2018-10-23
CN108702464B true CN108702464B (en) 2021-03-26

Family

ID=63844133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780009987.5A Expired - Fee Related CN108702464B (en) 2017-10-16 2017-10-16 Video processing method, control terminal and mobile device

Country Status (2)

Country Link
CN (1) CN108702464B (en)
WO (1) WO2019075617A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109743511B (en) * 2019-01-03 2021-04-20 苏州佳世达光电有限公司 Method and system for automatically adjusting display direction of playing picture
CN110611840B (en) * 2019-09-03 2021-11-09 北京奇艺世纪科技有限公司 Video generation method and device, electronic equipment and storage medium
WO2021056353A1 (en) * 2019-09-26 2021-04-01 深圳市大疆创新科技有限公司 Video editing method, and terminal apparatus
CN111026107A (en) * 2019-11-08 2020-04-17 北京外号信息技术有限公司 Method and system for determining the position of a movable object
WO2022061660A1 (en) * 2020-09-24 2022-03-31 深圳市大疆创新科技有限公司 Video trimming method, electronic device, unmanned aerial vehicle, and storage medium
CN113099266B (en) * 2021-04-02 2023-05-26 云从科技集团股份有限公司 Video fusion method, system, medium and device based on unmanned aerial vehicle POS data
CN113438409B (en) * 2021-05-18 2022-12-20 影石创新科技股份有限公司 Delay calibration method, delay calibration device, computer equipment and storage medium
CN115701093A (en) * 2021-07-15 2023-02-07 上海幻电信息科技有限公司 Video shooting information acquisition method and video shooting and processing indication method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105493496A (en) * 2014-12-14 2016-04-13 深圳市大疆创新科技有限公司 Video processing method, and device and image system

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3862688B2 (en) * 2003-02-21 2006-12-27 キヤノン株式会社 Image processing apparatus and image processing method
JP4195991B2 (en) * 2003-06-18 2008-12-17 パナソニック株式会社 Surveillance video monitoring system, surveillance video generation method, and surveillance video monitoring server
WO2011011737A1 (en) * 2009-07-24 2011-01-27 Digimarc Corporation Improved audio/video methods and systems
US20120148216A1 (en) * 2010-12-14 2012-06-14 Qualcomm Incorporated Self-editing video recording
JP5649429B2 (en) * 2010-12-14 2015-01-07 パナソニックIpマネジメント株式会社 Video processing device, camera device, and video processing method
KR101106576B1 (en) * 2011-07-01 2012-01-19 (주)올포랜드 Drawing system for the aerial work image
CN103188431A (en) * 2011-12-27 2013-07-03 鸿富锦精密工业(深圳)有限公司 System and method for controlling unmanned aerial vehicle to conduct image acquisition
CN102967311A (en) * 2012-11-30 2013-03-13 中国科学院合肥物质科学研究院 Navigational positioning method based on sky polarization distribution model matching
CN103096043B (en) * 2013-02-21 2015-08-05 安徽大学 Based on the mine safety monitoring method of parallel video-splicing technology
CN104184961A (en) * 2013-05-22 2014-12-03 辉达公司 Mobile device and system used for generating panoramic video
US9066014B2 (en) * 2013-10-11 2015-06-23 Facebook, Inc. Applying video stabilization to a multimedia clip
CN104363385B (en) * 2014-10-29 2017-05-10 复旦大学 Line-oriented hardware implementing method for image fusion
CN205017419U (en) * 2015-09-22 2016-02-03 杨珊珊 Device of taking photo by plane
KR101670187B1 (en) * 2015-10-14 2016-10-27 연세대학교 산학협력단 Method and Device for Automatically Editing Image
CN105872367B (en) * 2016-03-30 2019-01-04 东斓视觉科技发展(北京)有限公司 Video generation method and video capture device
CN105721788B (en) * 2016-04-07 2019-06-07 福州瑞芯微电子股份有限公司 A kind of multi-cam electronic equipment and its image pickup method
CN106210450B (en) * 2016-07-20 2019-01-11 罗轶 A kind of multichannel multi-angle of view big data video clipping method
KR20180066370A (en) * 2016-12-08 2018-06-19 성현철 Head Birdy Cam Reducing Vibration for Virtual Reality


Also Published As

Publication number Publication date
WO2019075617A1 (en) 2019-04-25
CN108702464A (en) 2018-10-23

Similar Documents

Publication Publication Date Title
CN108702464B (en) Video processing method, control terminal and mobile device
US11490054B2 (en) System and method for adjusting an image for a vehicle mounted camera
US10395338B2 (en) Virtual lens simulation for video and photo cropping
US11290692B2 (en) Unmanned aerial vehicle imaging control method, unmanned aerial vehicle imaging method, control terminal, unmanned aerial vehicle control device, and unmanned aerial vehicle
US11587317B2 (en) Video processing method and terminal device
CN109076263B (en) Video data processing method, device, system and storage medium
CN110720209B (en) Image processing method and device
AU2019271924B2 (en) System and method for adjusting an image for a vehicle mounted camera
CN113709377A (en) Method, device, equipment and medium for controlling aircraft to shoot rotation delay video
CN112995507A (en) Method and device for prompting object position
CN108419052A (en) A kind of more unmanned plane method for panoramic imaging
CN110291776B (en) Flight control method and aircraft
CN108475410B (en) Three-dimensional watermark adding method, device and terminal
CN117693946A (en) Unmanned aerial vehicle control method, image display method, unmanned aerial vehicle and control terminal
CN113287297A (en) Control method, handheld cloud deck, system and computer readable storage medium
CN113491102A (en) Zoom video shooting method, shooting system, shooting device and storage medium
CN113498611A (en) Video downloading method, device, system and computer readable storage medium
CN117294932A (en) Shooting method, shooting device and electronic equipment
WO2018214075A1 (en) Video image generation method and device
JP2018128804A (en) Image processor, image processing program and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210326