WO2018214075A1 - Method and device for generating a video picture - Google Patents

Method and device for generating a video picture Download PDF

Info

Publication number
WO2018214075A1
Authority
WO
WIPO (PCT)
Prior art keywords
video stream
video
drone
backhaul
stream
Prior art date
Application number
PCT/CN2017/085774
Other languages
English (en)
Chinese (zh)
Inventor
苏冠华
郭灼
黄志聪
张若颖
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2017/085774 priority Critical patent/WO2018214075A1/fr
Priority to CN201780004592.6A priority patent/CN108521868A/zh
Publication of WO2018214075A1 publication Critical patent/WO2018214075A1/fr

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present invention relates to the field of video data processing, and in particular, to a video picture generation method and apparatus.
  • The return video stream transmitted during the flight of a drone is generally intended for the user to watch directly.
  • Because the video stream transmitted to ground equipment (such as smartphones and tablet computers) during flight is generally large, it is difficult for a user to share the returned video directly on a social network such as WeChat Moments (a "circle of friends").
  • Most users therefore have to cut the video stream returned by the drone manually to obtain a small video that is easy to share; manual cutting may not be professional enough, and the resulting small video is often of poor quality.
  • The invention provides a video picture generating method and device.
  • A video picture generating method, comprising:
  • A video picture generating apparatus comprising a processor, the processor being configured to:
  • By processing the backhaul video stream obtained by the drone in the specified mode, the present invention can convert a large backhaul video stream into a small video that is easy to share.
  • The video picture of the first preset duration enables the user to share quickly on social media such as WeChat Moments, increasing the fun of drone aerial photography and eliminating the trouble of manual cutting.
  • FIG. 1 is a flowchart of a video frame generating method according to an embodiment of the present invention
  • FIG. 2 is a schematic structural view of an aerial camera system of a drone according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a video picture generating method in another embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a video picture generating apparatus according to an embodiment of the present invention.
  • FIG. 5 is a functional block diagram of a video picture generating apparatus according to an embodiment of the present invention.
  • FIG. 6 is a functional block diagram of a video picture generating apparatus in another embodiment of the present invention.
  • Figure 7 is a functional block diagram of a video picture generating apparatus in still another embodiment of the present invention.
  • Figure 8 is a functional block diagram of a video picture generating apparatus in still another embodiment of the present invention.
  • The embodiments of the invention provide a method and a device for generating a video picture, which further process the footage captured by a drone's aerial camera to obtain a small video that is easy to share on a social network such as WeChat Moments.
  • The drone is equipped with a gimbal (pan/tilt), and the gimbal carries an imaging device.
  • The gimbal can be a three-axis gimbal able to rotate about the yaw, pitch, and roll axes. In some implementations, the gimbal can be a two-axis gimbal able to rotate only about the pitch and roll axes.
  • In that case, the yaw direction of the unmanned aerial vehicle itself can be controlled to achieve the attitude change of the two-axis gimbal in the yaw direction.
  • the imaging device may be a device having an image capturing function such as a camera or an image sensor.
  • the target object can be an object specified by the user, such as an environmental object.
  • the image captured by the camera device can be displayed in a user interface, and the user selects an object as a target object by clicking operation on the image displayed in the user interface.
  • a user can select a tree, an animal, or an object of a certain area as a target object.
  • The user may also input only the image features of a certain object, such as a facial feature or the shape feature of an object; a corresponding processing module then performs image processing to find the person or object matching the input feature, and the person or object found is taken as the target object.
  • The target object may be a stationary object, an object that does not move for a period of time during shooting, or an object whose moving speed during continuous shooting is much smaller than that of a moving body such as the drone, for example such that the speed difference between the two is less than a preset threshold.
  • The video picture generating method and apparatus provided by the embodiments of the present invention can be applied to a smart terminal 1 on which an APP (application software) is installed; the smart terminal 1 may be a smartphone, a tablet computer, or the like.
  • the smart terminal 1 is communicably connected to at least one of a drone, a pan-tilt mounted on the drone, and an imaging device mounted on the pan-tilt.
  • the drone is communicatively coupled to the smart terminal 1.
  • the image including the target object captured by the imaging device can be transmitted back to the smart terminal 1 through the wireless link.
  • the video picture generating method may include the following steps:
  • Step S101: Receiving a return video stream transmitted by the drone while flying according to the specified flight mode.
  • The drone stores the video data captured by the camera device (that is, the original data stream) in real time, compresses the original data stream in real time, and generates a return video stream that is sent to the smart terminal 1, so that the smart terminal 1 can display the image currently captured by the drone in real time.
  • The smart terminal 1 buffers the backhaul video stream after receiving it, thereby obtaining the complete backhaul video stream of the drone in the specified flight mode.
  • The specified flight mode includes at least one of an oblique-line (slash) mode, a surround mode, a spiral mode, a skyrocket mode, and a comet-surround mode; each flight mode includes a corresponding flight strategy, and the flight strategy is used to instruct the drone how to fly.
  • The drone of this embodiment flies automatically according to the specified flight mode, so it can capture a rich and smooth video stream.
  • The user does not need to operate the joystick manually to match the parameters of the camera device, the gimbal, and the drone in order to obtain a well-composed picture, which makes these modes especially suitable for beginners in drone aerial photography.
  • The flight strategy corresponding to the oblique-line mode may include: the control device 2 on the UAV side controls the drone, according to the position information of the target object, to first fly along a horizontal plane (i.e., in a direction parallel to the ground) and then fly along a plane at an angle to the horizontal plane.
  • The position information of the target object refers to the absolute position information of the target object (the term "position information" below refers to the position information of the target object and can be interpreted accordingly), for example the coordinates of the target object in a north-east coordinate system.
  • The size of the included angle can be set as needed, for example 45°, so that the target object is captured from different angles and a richer shot is obtained.
  • Controlling the UAV to fly along the horizontal plane means that the UAV has a flying speed only in the horizontal direction and no flying speed in the vertical direction (i.e., the direction perpendicular to the ground).
  • Controlling the UAV to fly along a horizontal plane and then along a plane at a certain angle to the horizontal plane may include: controlling the UAV to fly along the horizontal plane; and, when it is determined that the angle between the line connecting the lowest point of the target object to the UAV center and the line connecting the highest point of the target object to the UAV center is smaller than a preset multiple of the field-of-view angle of the camera device, controlling the drone to fly along a plane at a certain angle to the horizontal plane according to the first position information, wherein the preset multiple is ≤ 1, thereby obtaining a more beautifully composed picture.
  • Controlling the UAV to fly along a plane at a certain angle to the horizontal plane includes: controlling the UAV to fly away from the target object along the direction of the line connecting the target object and the UAV.
  • The line connecting the target object and the drone can refer to a line from any position on the target object to any position on the drone.
  • In some examples, the line connecting the target object and the drone refers to the line between the center position of the target object and the center position of the drone.
  • The rules for determining the center position of the target object and the center position of the drone can be set as needed. Taking the center position of the target object as an example, a regular shape (for example, a rectangle, a square, a pentagon, a circle, or the like) can be drawn to enclose the target object, and the center of that regular shape is taken as the center position of the target object.
  • In other examples, the flight strategy corresponding to the oblique-line mode includes: controlling, according to the position information, the drone to fly away from the target object along an S-shaped curve, thereby capturing a picture with a more beautiful composition.
  • The degree of curvature of the S-shaped curve can be set as needed to meet the needs of the shot.
  • The lowest point and the highest point of the target object are, respectively, the position on the target object closest to the ground and the position on the target object farthest from the ground.
  • The angle between the line connecting the lowest point of the target object to the center of the drone and the line connecting the highest point of the target object to the center of the drone can also be referred to as the angle of the target object relative to the drone. For example, if the target object is a person, the angle of the person relative to the drone is the angle between the line from the lowest point of the person to the center of the drone and the line from the highest point of the person to the center of the drone.
  • Suppose the preset multiple is 1/3 and the target object is located on the ground. When the angle of the target object relative to the drone is less than 1/3 of the field-of-view angle of the imaging device, the drone flies away from the target object along the direction of the line connecting the target object and the drone. This makes the horizon in the captured picture appear in the upper 1/3 of the frame (i.e., the pixel distance from the horizon to the top edge is 1/3 of the total pixel distance in the Y direction of the picture's physical coordinate system), while the target object still appears in the captured picture, yielding a more beautifully composed picture.
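  • To make the preset-multiple-of-FOV condition above concrete, the following Python sketch checks when the angle subtended by the target drops below a preset multiple of the camera's field-of-view angle. It is an illustration only, not the patent's implementation; the function names, the (x, y, altitude) coordinate convention, and the example numbers are assumptions.

```python
import math

def subtended_angle_deg(drone_xyz, lowest_xyz, highest_xyz):
    """Angle between the drone->lowest-point and drone->highest-point lines."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    a = unit(tuple(p - d for p, d in zip(lowest_xyz, drone_xyz)))
    b = unit(tuple(p - d for p, d in zip(highest_xyz, drone_xyz)))
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

def should_start_angled_leg(drone_xyz, lowest_xyz, highest_xyz,
                            fov_deg, preset_multiple=1.0 / 3.0):
    """True once the target subtends less than preset_multiple * camera FOV."""
    return subtended_angle_deg(drone_xyz, lowest_xyz, highest_xyz) < preset_multiple * fov_deg

# Example: a 1.8 m tall person 30 m away, drone at 10 m altitude, 80-degree FOV camera.
# should_start_angled_leg((0, 0, 10), (30, 0, 0), (30, 0, 1.8), 80)  -> True
```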
  • The flight strategy corresponding to the surround mode includes: the control device 2 on the drone side controls the drone, according to the position information, to fly around the target object at the specified distance.
  • The drone of this embodiment is centered on the target object and performs a circular motion around it, thereby photographing the target object through a full 360°.
  • the shape of the flight trajectory around the target object can be selected as needed.
  • the flight path of the surrounding target object may be circular.
  • the flight path of the surrounding target object may be elliptical.
  • the flight around the target object may also be other flight trajectories similar to a circle or an ellipse.
  • The specified distance indicates the distance between the drone and the target object at each position along the surrounding trajectory.
  • the specified distance is a default distance, and optionally, the flight strategy corresponding to the surround mode includes a default distance.
  • the specified distance is distance information input by the user, that is, the distance information of the drone around the target object is set by the user according to actual needs, thereby satisfying different user requirements.
  • the user may input a specified distance corresponding to the surround mode on the smart terminal 1 to indicate distance information of the drone flying around the target object.
  • the specified distance is the distance between the drone and the target object at the current time.
  • the distance between the UAV and the target object at the current moment is calculated according to the location information of the target object and the positioning information of the current time of the UAV, thereby further improving the intelligence of the UAV.
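  • As an illustration of the surround mode described above, the following Python sketch generates evenly spaced waypoints on a circular orbit around the target at the specified distance. It is a minimal sketch under assumed conventions (planar x, y coordinates plus altitude); the function and parameter names are hypothetical and not taken from the patent.

```python
import math

def surround_waypoints(target_xy, radius_m, altitude_m, n_points=36):
    """Evenly spaced waypoints on a circle centred on the target (circular orbit).
    radius_m plays the role of the 'specified distance' (default or user input)."""
    cx, cy = target_xy
    return [(cx + radius_m * math.cos(2 * math.pi * k / n_points),
             cy + radius_m * math.sin(2 * math.pi * k / n_points),
             altitude_m)
            for k in range(n_points)]
```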
  • The flight strategy corresponding to the spiral mode includes: the control device 2 on the UAV side controls the UAV, according to the position information, to travel around the target object along a trajectory shaped as a Fibonacci spiral, a logarithmic (equal-ratio) spiral, an equiangular spiral, an Archimedes spiral, or a spiral of another shape.
  • The drone of this embodiment is centered on the target object and, by flying such a spiral, captures a richer picture.
  • In some examples, the flight strategy corresponding to the spiral mode further includes: the control device 2 on the drone side controls the drone, according to the position information, to traverse around the target object along such a spiral while also controlling the drone to rise or descend vertically at a preset rate.
  • The target object is thus photographed from more angles by controlling the drone's flight in the direction perpendicular to the ground (rising or descending), improving the richness of the captured picture.
  • The speed at which the drone rises or descends can be set according to actual needs.
  • In other examples, the drone, according to the position information, flies around the target object along a Fibonacci spiral, logarithmic spiral, equiangular spiral, Archimedes spiral, or other spiral while remaining in the horizontal plane, that is, the drone has only a horizontal flying speed and its vertical flying speed is zero; the apparent size of the target object in the picture therefore changes, increasing the richness of the captured picture.
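  • The spiral trajectories above can be sketched, for example, with an Archimedean spiral (r = r0 + b·θ); other spiral families, such as a logarithmic spiral, would be generated analogously. The Python snippet below is an assumed illustration only; the names and conventions are hypothetical, and the optional climb term corresponds to the vertical rise or descent mentioned above.

```python
import math

def archimedean_spiral_waypoints(target_xy, start_radius_m, growth_m_per_rad,
                                 turns=2.0, climb_m_per_rad=0.0, start_alt_m=10.0,
                                 n_points=120):
    """Waypoints on an Archimedean spiral (r = r0 + b*theta) around the target.
    A non-zero climb_m_per_rad adds the vertical rise described for spiral mode;
    climb_m_per_rad = 0 keeps the drone in the horizontal plane."""
    cx, cy = target_xy
    pts = []
    for k in range(n_points):
        theta = 2 * math.pi * turns * k / (n_points - 1)
        r = start_radius_m + growth_m_per_rad * theta
        pts.append((cx + r * math.cos(theta),
                    cy + r * math.sin(theta),
                    start_alt_m + climb_m_per_rad * theta))
    return pts
```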
  • The flight strategy corresponding to the skyrocket mode includes: the control device 2 on the drone side controls the drone, according to the position information, to fly at a preset angle to a first designated position relative to the target object, and then controls the drone to rise vertically.
  • The preset angle, the first designated position, and the flying speed of the drone can all be set according to actual needs, so that a variety of pictures can be captured.
  • The first designated position is at a specific distance from a specified position of the target object and in a specific orientation relative to that specified position; in this embodiment, the first designated position may be set by the user as needed.
  • In some examples, controlling the drone to fly at a preset angle to the first designated position relative to the target object comprises controlling the drone to fly toward the target object until it reaches the first designated position; in other examples, it comprises controlling the drone to fly away from the target object until it reaches the first designated position.
  • In the skyrocket mode, the drone can be controlled to fly from an arbitrary starting point (i.e., the drone's current position) to the first designated position, or the drone can first be controlled to fly to a specific starting point and then fly from that starting point to the first designated position.
  • It should be noted that, when the drone is first controlled to fly to a specific starting point and then to fly from that starting point to the first designated position, the imaging device on the drone does not start recording until the drone reaches the specific starting point.
  • The flight strategy corresponding to the comet-surround mode includes: the control device 2 on the drone side controls the drone, according to the position information, to fly toward the target object until it reaches a second designated position, to fly around the target object starting from the second designated position, and then to fly away from the target object.
  • The second designated position may be set as needed; for example, the second designated position is at a specific distance from a specified position of the target object and in a specific orientation relative to it, so that a variety of pictures can be captured.
  • The number of laps the drone flies around the target object after reaching the second designated position may also be set as needed, for example one lap, several laps, or less than one lap.
  • In some examples of the comet-surround mode, the drone is controlled to fly from an arbitrary starting point (i.e., the drone's current position) toward the target object to the second designated position, fly around the target object, and then fly away from it.
  • In other examples of the comet-surround mode, the drone is first controlled to fly to a specific starting point and then from that starting point toward the target object to the second designated position, from which it flies around the target object and then away from it.
  • In that case, the imaging device on the drone starts recording only after the drone reaches the specific starting point.
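  • One possible way to picture the comet-surround strategy is as three waypoint segments: approach to the second designated position, orbit for the configured number of laps, and radial departure. The Python sketch below is only an assumed illustration of that idea; the choice of the orbit entry point and all names are hypothetical and not taken from the patent.

```python
import math

def comet_surround_waypoints(start_xyz, target_xy, orbit_radius_m, orbit_alt_m,
                             laps=1.0, n=40):
    """Three-segment comet-surround path: approach the target to an orbit entry
    point (standing in for the 'second designated position'), circle the target
    for `laps` laps, then depart radially away from the target."""
    sx, sy, sz = start_xyz
    cx, cy = target_xy
    ang0 = math.atan2(sy - cy, sx - cx)            # entry point lies toward the start
    ex = cx + orbit_radius_m * math.cos(ang0)
    ey = cy + orbit_radius_m * math.sin(ang0)

    approach = [(sx + (ex - sx) * t, sy + (ey - sy) * t, sz + (orbit_alt_m - sz) * t)
                for t in (k / (n - 1) for k in range(n))]
    orbit = [(cx + orbit_radius_m * math.cos(ang0 + 2 * math.pi * laps * k / (n - 1)),
              cy + orbit_radius_m * math.sin(ang0 + 2 * math.pi * laps * k / (n - 1)),
              orbit_alt_m) for k in range(n)]
    lx, ly, lz = orbit[-1]
    ux, uy = (lx - cx) / orbit_radius_m, (ly - cy) / orbit_radius_m
    depart = [(lx + ux * orbit_radius_m * t, ly + uy * orbit_radius_m * t, lz)
              for t in (k / (n - 1) for k in range(1, n))]
    return approach + orbit + depart
```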
  • Each flight mode may further include at least one of a corresponding trajectory distance and a flight speed used to guide the drone's flight, so that the drone is instructed to capture better footage.
  • Step S102: Processing the backhaul video stream to generate a video picture of a first preset duration, where the first preset duration is less than the duration of the backhaul video stream.
  • By processing the backhaul video stream obtained by the drone in the specified mode, a large backhaul video stream can be converted, without manual cutting by the user, into a small video (the video picture of the first preset duration) that is easy to share, allowing users to share it quickly on social media such as WeChat Moments and increasing the fun of drone aerial photography.
  • The small video obtained by the embodiments of the invention is also more professional and has richer effects than one cut manually.
  • The first preset duration may be set as needed, for example 10 seconds, to obtain a small video suitable for sharing.
  • A small video in the embodiments of the present invention refers to a video whose duration is less than a specific duration (which can be set as needed).
  • A small video may also be a video whose file size is smaller than a specific size (which can be set as needed).
  • In some embodiments, step S102 is performed after it is determined that the drone meets a specified condition.
  • The specified condition may include the drone completing the flight of the specified flight mode.
  • After the drone completes the flight of the specified flight mode, the smart terminal 1 has received the complete return video stream of the drone in the specified flight mode, which makes it convenient for the user to choose how to process it based on all the information in the returned video stream.
  • In some examples, the smart terminal 1 determines whether the drone has completed the flight of the specified flight mode based on the returned video stream itself.
  • The drone adds flight state information corresponding to the specified flight mode to the images captured while flying in that mode, and transmits the original data stream carrying the flight state information to the smart terminal 1; the return video stream obtained by the smart terminal 1 therefore also carries flight state information.
  • The smart terminal 1 can thus determine whether the drone has completed the flight of the specified flight mode according to the flight state information in the backhaul video stream.
  • When the smart terminal 1 detects that the flight state information in the backhaul video stream changes from the flight state information corresponding to the specified flight mode to that of another flight mode, or changes from the flight state information of the specified flight mode to no flight state information, this indicates that the drone has completed the flight of the specified flight mode.
  • In other examples, the smart terminal 1 receives an end-of-flight-mode notification sent by the drone and thereby determines that the drone has completed the flight of the specified flight mode.
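  • As an assumed illustration of detecting mode completion from per-frame flight-state information, the sketch below scans a sequence of flight-state tags and reports when the tag for the specified mode changes to another mode or disappears. Representing the flight-state information as simple string tags is a hypothetical simplification, not the patent's data format.

```python
def flight_mode_completed(frame_states, specified_mode):
    """Scan per-frame flight-state tags from the backhaul stream and report whether
    the specified flight mode has finished (tag changes to another mode or to None)."""
    seen = False
    for state in frame_states:          # e.g. ["SPIRAL", "SPIRAL", None, ...]
        if state == specified_mode:
            seen = True
        elif seen:                      # was in the mode, now another mode / no tag
            return True
    return False
```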
  • In other embodiments, the specified condition includes simply receiving a backhaul video stream transmitted while the drone is flying in the specified flight mode.
  • The smart terminal 1 then performs step S102 immediately after receiving the return video stream transmitted while the drone flies in the specified flight mode, without waiting for the drone to finish executing the specified flight mode, thereby saving time in generating the small video.
  • In this way the smart terminal 1 can have the small video ready by the time the drone ends the flight of the specified flight mode.
  • In some embodiments, step S102 includes: performing frame extraction (decimation) on the backhaul video stream to generate the video picture of the first preset duration.
  • In some examples, performing frame extraction on the backhaul video stream to generate the video picture of the first preset duration includes: performing frame extraction on the video stream according to at least one of the flight mode, flight speed, and flight direction of the drone.
  • By associating the small video to be generated with at least one of the flight mode, flight speed, and flight direction of the drone, the small video corresponds more closely to the footage obtained by the drone, its pictures are richer, and its composition better matches the drone's flight parameters.
  • In other examples, performing frame extraction on the backhaul video stream to generate the video picture of the first preset duration includes: performing frame extraction on the backhaul video stream according to the duration and the number of frames of the backhaul video stream.
  • By extracting frames according to the frame count of the returned video stream, a small video that fits the backhaul video stream more closely can be obtained, presenting a more complete view of the drone's footage.
  • In some examples, performing frame extraction on the returned video stream according to its duration and frame count to generate the video picture of the first preset duration includes: splitting the backhaul video stream into multiple segments to obtain a multi-segment return video stream; performing frame extraction on part of the segments to obtain the extracted frames of the corresponding segments; and generating the video picture of the first preset duration from the remaining segments together with the extracted frames of the decimated segments.
  • Splitting the returned video stream into multiple segments may include splitting it into at least three segments in order of shooting time; performing frame extraction on part of the segments then includes performing frame extraction on the segment(s) whose shooting time lies in the intermediate time period, obtaining the extracted frames of the corresponding segment(s).
  • In some examples, performing frame extraction on a segment to obtain its extracted frames includes extracting frames from that segment at a preset extraction rate.
  • The corresponding segment is thus decimated uniformly, avoiding uneven frame extraction that would make the resulting video picture discontinuous.
  • In some examples, the extraction rate is the same for all decimated segments of the multi-segment return video stream, further ensuring the continuity, and therefore the smoothness, of the generated video picture. A minimal sketch of this segment-and-decimate scheme is given below.
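  • The following Python/OpenCV sketch illustrates the segment-splitting and uniform decimation described above: the stream is split into three segments by shooting time, the middle segment is decimated at a preset rate, and the remaining frames are written out. It is a minimal illustration under assumed parameters, not the patent's actual implementation; in practice the decimation step would be derived from the first preset duration and the stream's frame count.

```python
import cv2  # OpenCV

def make_short_video(in_path, out_path, decimation_step=5):
    """Keep the first and last thirds of the backhaul stream, uniformly decimate
    the middle third, and write the shortened result."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    seg = total // 3                                # segment boundaries by shooting time
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        in_middle = seg <= idx < 2 * seg
        # Uniform decimation of the middle segment at a preset rate.
        if not in_middle or (idx - seg) % decimation_step == 0:
            writer.write(frame)
        idx += 1
    cap.release()
    writer.release()
```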
  • step S102 may further include: further compressing the backhaul video stream, thereby reducing the size of the backhaul video stream, and obtaining a video picture that is easy to share.
  • the method further comprises: transmitting the video picture to a remote terminal server to enable sharing of the small video.
  • The remote terminal server may be a third-party site, such as a video website like Youku or Tudou, or a social media network such as WeChat Moments.
  • In some examples, the video picture is sent to the remote terminal server immediately after step S102 is completed, enabling fast sharing of the small video.
  • In other examples, before the video picture is sent to the remote terminal server, the method further includes receiving a sharing instruction input by the user, the sharing instruction specifying the corresponding remote terminal server; the video picture is then sent to that remote terminal server according to the sharing instruction, so that the small video can be shared flexibly according to the user's actual needs.
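  • A minimal, assumed sketch of the sharing step: upload the generated small video to the server named in the user's sharing instruction over HTTP. The endpoint, field names, and the structure of the sharing instruction are hypothetical; the patent does not specify a transport protocol.

```python
import requests

def share_small_video(video_path, share_instruction):
    """Upload the generated small video to the server specified in the sharing
    instruction, e.g. {"server_url": "https://example.com/upload"} (hypothetical)."""
    with open(video_path, "rb") as f:
        resp = requests.post(share_instruction["server_url"],
                             files={"video": f},
                             timeout=30)
    resp.raise_for_status()
    return resp
```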
  • Because the communication quality of the wireless image-transmission link may be poor, the quality of the returned video stream sent by the drone to the smart terminal 1 by image transmission may also be poor, and accordingly the quality of the generated small video may be poor.
  • the video picture generating method may further include the following steps:
  • Step S201: Acquiring the original data stream captured by the drone.
  • The drone stores the video data captured by the camera device (that is, the original data stream) in real time, compresses the original data stream in real time, and generates a return video stream that is sent by image transmission to the smart terminal 1, so that the smart terminal 1 can display the image currently captured by the drone in real time.
  • The smart terminal 1 can also acquire the original data stream stored by the drone and use that original data stream to generate the small video.
  • the original data stream captured by the drone is stored in a storage unit of the drone or the imaging device.
  • the smart terminal 1 can directly read the original data stream captured by the drone stored in the storage unit.
  • The manner of data transmission differs between the two steps: in step S101, the drone transmits the captured video stream to the smart terminal 1 by wireless communication during flight, and because the communication distance between the drone and the smart device is long, the communication quality between the drone and the smart terminal 1 may be poor; in step S201, by contrast, the smart terminal 1 can read the original data stream in the storage unit over a wired connection, or under conditions where good wireless communication quality is ensured, so that the smart terminal 1 is guaranteed to obtain an original data stream with good picture quality.
  • The storage unit is a device capable of storing data, such as an SD card, a hard disk, or a magnetic disk.
  • step S201 is performed after the drone meets the specified condition.
  • The specified condition may include the drone completing the flight of the specified flight mode. Specifically, after the drone ends the flight of the specified flight mode, the smart terminal 1 directly reads the original data stream captured by the drone from the storage unit, thereby obtaining a raw data stream with good picture quality and using it to generate a small video with good picture quality.
  • Step S202: Determining, according to the original data stream, the original video stream captured by the drone in the specified flight mode.
  • The original data stream within the video data stored in the storage unit also carries a corresponding video tag.
  • the smart terminal 1 finds a corresponding original data stream from the storage unit according to the video tag.
  • In some examples, step S202 includes: determining, according to the video stream tag corresponding to the specified flight mode, the original video stream captured by the drone in the specified flight mode within the original data stream. Using the video tag, the original video stream captured by the drone in the specified flight mode can be found accurately and quickly among a large number of video streams, so that the small video for the specified flight mode can be generated more quickly.
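  • As an assumed illustration of selecting the original video stream by its tag, the sketch below filters stored clips whose tag matches the specified flight mode. The clip/tag data structure is hypothetical, not the format used by the drone or the storage unit.

```python
def find_original_stream(clips, specified_mode_tag):
    """Pick the stored clip(s) whose video tag matches the specified flight mode.
    `clips` is a hypothetical list of dicts, e.g. {"path": "...", "tag": "SPIRAL"}."""
    return [c for c in clips if c.get("tag") == specified_mode_tag]
```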
  • Step S203: Processing the original video stream to generate a new video picture of a second preset duration, where the second preset duration is less than the duration of the original video stream.
  • In some embodiments, the method further comprises: sending the new video picture to a remote terminal server to enable sharing of the small video.
  • The remote terminal server may be a third-party site, such as a video website like Youku or Tudou, or a social media network such as WeChat Moments.
  • In some examples, the new video picture is sent to the remote terminal server immediately after step S203 is completed, enabling fast sharing of the small video.
  • In other examples, before the new video picture is sent to the remote terminal server, the method further includes receiving a sharing instruction input by the user, the sharing instruction specifying the corresponding remote terminal server; the new video picture is then sent to that remote terminal server according to the sharing instruction, so that the small video can be shared flexibly according to the user's actual needs.
  • step S201, step S202, and step S203 are performed after determining that the resolution of the video picture obtained according to the returned video stream is less than the preset resolution, thereby obtaining a new video picture of higher quality.
  • the smart terminal 1 simultaneously performs step S201, step S202 and step S203, and steps S101 and S102, thereby obtaining two video pictures for the user to select, increasing the richness of the selection.
  • In some embodiments, the method further comprises sending at least one of the video picture generated in step S102 and the new video picture generated in step S203 to the remote terminal server.
  • For example, whichever of the video picture generated in step S102 and the new video picture generated in step S203 has the higher resolution may be sent to the remote terminal server.
  • In other examples, the method further includes: receiving a sharing instruction input by the user, the sharing instruction including the corresponding remote terminal server and an identifier of the video to be shared, the identifier corresponding to at least one of the video picture generated in step S102 and the new video picture generated in step S203; and, according to the sharing instruction, sending the identified video picture(s) to the remote terminal server, so that the small video can be shared flexibly according to the user's actual needs.
  • the second preset duration may be set according to requirements, and optionally the second preset duration is equal to the first preset duration.
  • The strategy used to process the original video stream in step S203 is similar to the strategy used to process the returned video stream in step S102; for details, refer to the description of processing the backhaul video stream in step S102, which is not repeated here.
  • an embodiment of the present invention provides a video picture generating apparatus, which is applied to an intelligent terminal 1 with an APP installed.
  • The video picture generating apparatus may include a processor 11 configured to perform the steps of the video picture generating method of the first embodiment.
  • The processor 11 is configured to communicate with the control device 2 on the UAV side. The control device 2 on the UAV side can be implemented by a dedicated control device, or by the drone's flight controller or gimbal controller, so that the processor 11 can send a start command to the control device 2 on the drone side to initiate the drone's aerial photography, and the returned video stream can be received by the processor 11.
  • an embodiment of the present invention provides a video picture generating apparatus, which is applied to the smart terminal 1.
  • the video picture generating apparatus may include a receiving module 10 and a processing module 20.
  • the receiving module 10 is configured to receive a backhaul video stream transmitted by the drone when flying according to a specified flight mode.
  • the processing module 20 is configured to process the backhaul video stream to generate a video image of a first preset duration, where the first preset duration is less than the duration of the backhaul video stream.
  • The step in which the processing module 20 processes the backhaul video stream to generate the video picture of the first preset duration is performed after it is determined that the drone meets the specified condition.
  • the apparatus further includes a first determining module 30, where the specified condition comprises: the first determining module 30 determines that the drone completes the flight of the specified flight mode.
  • the processing module 20 is configured to perform frame drawing processing on the backhaul video stream to generate a video picture of a first preset duration.
  • the processing module 20 is configured to perform frame drawing processing on the video stream according to at least one of a flight mode, a flight speed, and a flight direction of the drone to generate a video image of a first preset duration.
  • the processing module 20 is configured to perform frame extraction processing on the backhaul video stream according to the duration and the number of frames of the backhaul video stream, to generate a video picture of a first preset duration.
  • The processing module 20 is configured to split the backhaul video stream into multiple segments to obtain a multi-segment return video stream; to perform frame extraction on part of the segments to obtain the extracted frames of the corresponding segments; and to generate the video picture of the first preset duration from the remaining segments of the multi-segment return video stream together with the extracted frames of the decimated segments.
  • The processing module 20 is configured to split the backhaul video stream into at least three segments in order of shooting time, and to perform frame extraction on the segment(s) of the returned video stream whose shooting time lies in the intermediate time period, obtaining the extracted frames of the corresponding segment(s).
  • the processing module 20 is configured to perform frame drawing processing on the corresponding segment return video stream according to a preset frame drawing rate, to obtain a framed image of the corresponding segment back video stream.
  • the apparatus further includes a reading module 40 and a determining module 50.
  • the reading module 40 is configured to acquire an original data stream captured by the drone.
  • the determining module 50 is configured to determine, according to the original data stream, an original video stream captured by the drone in the specified flight mode.
  • the processing module 20 is further configured to process the original video stream to generate a new video picture of a second preset duration, where the second preset duration is less than a duration of the backhaul video stream.
  • the determining module 50 determines, according to the video stream label corresponding to the specified flight mode, the original video stream captured by the drone in the specified flight mode in the original data stream.
  • The apparatus further includes a second determining module 60. The step in which the reading module 40 acquires the original data stream captured by the drone, the step in which the determining module 50 determines, according to the original data stream, the original video stream captured by the drone in the specified flight mode, and the step in which the processing module 20 processes the original video stream to generate the new video picture of the second preset duration are all performed after the second determining module 60 determines that the resolution of the video picture obtained from the returned video stream is less than the preset resolution.
  • the apparatus further includes a sharing module 70, configured to send at least one of the video picture and the new video picture to a remote terminal server.
  • Before the sharing module 70 sends at least one of the video picture and the new video picture to the remote terminal server, the receiving module 10 receives a sharing instruction input by the user, the sharing instruction including the corresponding remote terminal server and an identifier of the video to be shared, the identifier corresponding to at least one of the video picture and the new video picture; the sharing module 70 then, according to the sharing instruction, sends at least one of the video picture and the new video picture to the remote terminal server.
  • An embodiment of the present invention provides a computer storage medium having program instructions stored therein, the program instructions, when executed, performing the video picture generation method of the first embodiment.
  • a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the paper or other medium can be optically scanned and then edited, interpreted, or otherwise processed in a suitable manner if necessary to obtain the program electronically, which is then stored in computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, the steps can be implemented with any one, or a combination, of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
  • each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to a video picture generation method and device. The method comprises: receiving a return video stream transmitted by an unmanned aerial vehicle flying according to a specified flight mode; and processing the returned video stream to generate a video picture of a first preset duration, the first preset duration being less than the duration of the returned video stream. By processing the returned video stream obtained by the unmanned aerial vehicle in the specified mode, a large returned video stream can be converted into small videos that are easy to share, so that a user can quickly share them on social networks such as Moments, and the fun of aerial photography with the unmanned aerial vehicle is increased.
PCT/CN2017/085774 2017-05-24 2017-05-24 Procédé et dispositif de production d'image vidéo WO2018214075A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/085774 WO2018214075A1 (fr) 2017-05-24 2017-05-24 Procédé et dispositif de production d'image vidéo
CN201780004592.6A CN108521868A (zh) 2017-05-24 2017-05-24 视频画面生成方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/085774 WO2018214075A1 (fr) 2017-05-24 2017-05-24 Procédé et dispositif de production d'image vidéo

Publications (1)

Publication Number Publication Date
WO2018214075A1 true WO2018214075A1 (fr) 2018-11-29

Family

ID=63434487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085774 WO2018214075A1 (fr) 2017-05-24 2017-05-24 Procédé et dispositif de production d'image vidéo

Country Status (2)

Country Link
CN (1) CN108521868A (fr)
WO (1) WO2018214075A1 (fr)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060197835A1 (en) * 2005-03-04 2006-09-07 Anderson Robert C Wrist-attached display system for unmanned vehicle imagery and communication
CN105187715A (zh) * 2015-08-03 2015-12-23 杨珊珊 一种航拍内容分享方法、装置及无人飞行器
CN105516604A (zh) * 2016-01-20 2016-04-20 陈昊 一种航拍视频分享方法和系统
CN105763791A (zh) * 2016-01-29 2016-07-13 珠海汇迪科技有限公司 一种通过手持设备将运动相机上的视频进行分享的方法
CN105824967A (zh) * 2016-04-01 2016-08-03 北京飞蝠科技有限公司 一种无人机航拍服务的内容处理方法及系统
US20160306351A1 (en) * 2015-04-14 2016-10-20 Vantage Robotics, Llc System for authoring, executing, and distributing unmanned aerial vehicle flight-behavior profiles
CN106155092A (zh) * 2015-04-21 2016-11-23 高域(北京)智能科技研究院有限公司 一种智能多模式飞行拍摄设备及其飞行控制方法
CN206031808U (zh) * 2016-08-30 2017-03-22 潍坊歌尔电子有限公司 一种无人机拍摄系统
CN106603970A (zh) * 2016-11-11 2017-04-26 重庆零度智控智能科技有限公司 视频拍摄方法、系统及无人机
US20170125058A1 (en) * 2015-08-07 2017-05-04 Fusar Technologies, Inc. Method for automatically publishing action videos to online social networks

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160055883A1 (en) * 2014-08-22 2016-02-25 Cape Productions Inc. Methods and Apparatus for Automatic Editing of Video Recorded by an Unmanned Aerial Vehicle
CN106027896A (zh) * 2016-06-20 2016-10-12 零度智控(北京)智能科技有限公司 视频拍摄控制装置、方法及无人机
CN106101844A (zh) * 2016-06-30 2016-11-09 北京奇艺世纪科技有限公司 一种视频分享方法及装置
CN106067948A (zh) * 2016-07-27 2016-11-02 杨珊珊 无人机及其航拍素材处理设备、自动整合系统及整合方法

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060197835A1 (en) * 2005-03-04 2006-09-07 Anderson Robert C Wrist-attached display system for unmanned vehicle imagery and communication
US20160306351A1 (en) * 2015-04-14 2016-10-20 Vantage Robotics, Llc System for authoring, executing, and distributing unmanned aerial vehicle flight-behavior profiles
CN106155092A (zh) * 2015-04-21 2016-11-23 高域(北京)智能科技研究院有限公司 一种智能多模式飞行拍摄设备及其飞行控制方法
CN105187715A (zh) * 2015-08-03 2015-12-23 杨珊珊 一种航拍内容分享方法、装置及无人飞行器
US20170125058A1 (en) * 2015-08-07 2017-05-04 Fusar Technologies, Inc. Method for automatically publishing action videos to online social networks
CN105516604A (zh) * 2016-01-20 2016-04-20 陈昊 一种航拍视频分享方法和系统
CN105763791A (zh) * 2016-01-29 2016-07-13 珠海汇迪科技有限公司 一种通过手持设备将运动相机上的视频进行分享的方法
CN105824967A (zh) * 2016-04-01 2016-08-03 北京飞蝠科技有限公司 一种无人机航拍服务的内容处理方法及系统
CN206031808U (zh) * 2016-08-30 2017-03-22 潍坊歌尔电子有限公司 一种无人机拍摄系统
CN106603970A (zh) * 2016-11-11 2017-04-26 重庆零度智控智能科技有限公司 视频拍摄方法、系统及无人机

Also Published As

Publication number Publication date
CN108521868A (zh) 2018-09-11

Similar Documents

Publication Publication Date Title
WO2018214078A1 (fr) Procédé et dispositif de commande de photographie
US10863073B2 (en) Control method for photographing using unmanned aerial vehicle, photographing method using unmanned aerial vehicle, mobile terminal, and unmanned aerial vehicle
EP3167604B1 (fr) Procédés et systèmes de traitement vidéo
US11722647B2 (en) Unmanned aerial vehicle imaging control method, unmanned aerial vehicle imaging method, control terminal, unmanned aerial vehicle control device, and unmanned aerial vehicle
EP3182202B1 (fr) Système selfie par drone et son procédé de réalisation
WO2017181511A1 (fr) Borne et système de commande pour véhicule aérien sans pilote
KR20170136750A (ko) 전자 장치 및 그의 동작 방법
CN108702464B (zh) 一种视频处理方法、控制终端及可移动设备
WO2019041276A1 (fr) Procédé de traitement d'image, véhicule aérien sans pilote et système
WO2018036040A1 (fr) Procédé et ssystème de photographie de dispositif intelligent monté sur la tête de berceau d'un véhicule aérien sans pilote
WO2019140621A1 (fr) Procédé de traitement vidéo et dispositif terminal
WO2019061295A1 (fr) Procédé et dispositif de traitement de vidéo, véhicule aérien sans pilote et système
CN107450573B (zh) 飞行拍摄控制系统和方法、智能移动通信终端、飞行器
WO2019127376A1 (fr) Procédé d'acquisition de vidéo, terminal de commande, aéronef et système
WO2020019106A1 (fr) Procédé de commande de cardan et de véhicule aérien sans pilote, cardan et véhicule aérien sans pilote
WO2021237619A1 (fr) Procédé d'édition de fichier vidéo, et dispositif, système et support d'enregistrement lisible par ordinateur
WO2022141956A1 (fr) Procédé de commande de vol, procédé d'édition de vidéos, dispositif, véhicule aérien sans pilote et support de stockage
WO2020014953A1 (fr) Procédé et dispositif de traitement d'image
WO2019205070A1 (fr) Procédé et appareil de commande de véhicule aérien sans pilote et véhicule aérien sans pilote
CN108965689A (zh) 无人机拍摄方法及装置、无人机和地面控制装置
WO2018049642A1 (fr) Procédé et dispositif de fourniture d'une image dans un dispositif vestimentaire et objet déplaçable
WO2017181930A1 (fr) Procédé et dispositif d'affichage de direction de vol et véhicule aérien sans pilote
US20210258494A1 (en) Flight control method and aircraft
WO2019127402A1 (fr) Procédé de synthèse d'image panoramique sphérique, système d'uav, uav, terminal et procédé de commande associé
CN108419052A (zh) 一种多台无人机全景成像方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910878

Country of ref document: EP

Kind code of ref document: A1