WO2019061063A1 - UAV image acquisition method and UAV (无人机图像采集方法及无人机) - Google Patents


Info

Publication number
WO2019061063A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, target, current image, photographic subject, subject
Application number
PCT/CN2017/103624
Other languages
English (en)
French (fr)
Inventor
Zhang Wei (张伟)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN201780010140.9A (granted as CN108702448B)
Priority to PCT/CN2017/103624 (published as WO2019061063A1)
Priority to CN202110304772.4A (granted as CN113038016B)
Publication of WO2019061063A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft

Definitions

  • the present application relates to the field of drones, and more particularly to a drone image acquisition method and a drone.
  • the shooting process of a drone is relatively cumbersome. The user must first control the take-off of the aircraft, then adjust the position of the drone through the remote controller or an application program to compose the picture before the photo can be taken; if the user is not satisfied with the photo after shooting, the aircraft must be flown to another location and the photo recomposed. Such a photographing process is not automated enough: the user must input many operations, and the aircraft cannot provide a variety of options.
  • the present application provides a UAV image acquisition method and a drone, which can automate the photographing process of the drone without requiring manual operation by the user, and provide users with various choices.
  • a first aspect of the embodiments of the present application provides a method for collecting an image of a drone, including:
  • analyzing a position of the target photographic subject in the current image, and if the position of the target photographic subject in the current image satisfies an image acquisition condition, acquiring an image.
  • a second aspect of the embodiments of the present application provides a drone, including:
  • a memory for storing a drone image acquisition program
  • a position of the target photographic subject in the current image is analyzed, and if the position of the target photographic subject in the current image satisfies an image acquisition condition, an image is acquired.
  • a third aspect of the present application provides a computer readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the drone image acquisition method provided by the first aspect of the embodiments of the present application is executed.
  • the UAV image acquisition method receives a takeoff command, acquires the target photographic subject, saves the features of the target photographic subject, and tracks the target photographic subject according to those features.
  • FIG. 1 is a structural diagram of a drone according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for collecting an image of a drone according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for taking off a drone according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another method for taking off a drone according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a method for collecting an image of a drone according to another embodiment of the present application.
  • FIG. 6 is a schematic diagram of a distance between a drone and a target photographic object according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an adjustment distance effect provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an effect of adjusting a heading angle according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an effect of adjusting a pitch angle according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an overlay image provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of an image splicing method according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of relative positions between a drone and a target photographic object according to an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a drone according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an instruction receiving module according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of another instruction receiving module according to an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of an analysis and collection module according to an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a drone according to another embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a drone according to another embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a drone according to another embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a drone according to another embodiment of the present application.
  • FIG. 1 is a structural diagram of a drone according to an embodiment of the present application. As shown in FIG. 1, the drone in this embodiment may include:
  • a power system 120 mounted on the fuselage for providing flight power
  • the pan/tilt 130 and the imaging device 140, where the imaging device 140 is mounted on the body 110 of the drone through the pan/tilt 130.
  • the imaging device 140 is used for image or video shooting during flight of the drone, including but not limited to multi-spectral imager, hyperspectral imager, visible light camera and infrared camera, etc.
  • the pan/tilt 130 is a multi-axis transmission and stabilization mechanism.
  • the pan/tilt motor compensates for the shooting angle of the imaging device 140 by adjusting the rotation angle of its rotating shafts, and prevents or reduces shake of the imaging device 140 by providing an appropriate damping mechanism.
  • FIG. 2 is a schematic flowchart of a method for collecting an image of a drone according to an embodiment of the present application.
  • the drone image acquisition method may include at least the following steps:
  • the takeoff command may be a takeoff command input by the user through a control terminal that matches the drone.
  • the takeoff command may be input in several ways: the user may input it using a joystick on the control terminal, using a takeoff button on the control panel of the control terminal, by a voice command, by scanning the face, or by tossing the drone.
  • the method for inputting the takeoff command by scanning the face includes at least the following steps, as shown in FIG. 3:
  • S2011 Receive an instruction to trigger the takeoff.
  • the instruction for triggering the take-off may be that the user double-clicks or long-presses the power button on the drone, and the instruction to trigger the take-off may also be that the user double-clicks or long-presses the power button on the control terminal that matches the drone.
  • the pan/tilt 130 controls the imaging device 140 to search for a target image in the picture.
  • the imaging device 140 can be controlled to search for a target image in the picture by changing the heading angle or the pitch angle of the pan/tilt 130.
  • the preset image may be an image that the user previously saves in the memory of the drone, and the image may be a facial image of the user or other image.
  • when the target image that the imaging device 140 finds in the picture matches the preset image previously saved by the user, the power system 120 of the drone is controlled to generate lift.
  • the matching threshold may be, for example, 80%, 90%, 95%, or 100%.
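As an illustrative sketch (not part of the original disclosure), the face-match takeoff gate described above reduces to a threshold check; the similarity score and the default threshold value here are assumptions, since the patent does not specify the matching algorithm:

```python
def should_take_off(similarity: float, threshold: float = 0.9) -> bool:
    """Return True when the target image found by the imaging device
    matches the preset image closely enough to trigger takeoff.

    `similarity` is assumed to be a match score in [0, 1] produced by
    whatever face-matching algorithm the drone uses; `threshold` is one
    of the example values from the text (90%).
    """
    return similarity >= threshold
```

For example, with the 90% threshold, an 80% match would not cause the power system to generate lift.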
  • the method of inputting the takeoff command by tossing the drone includes at least the following steps, as shown in FIG. 4:
  • the Inertial Measurement Unit (IMU) consists of three single-axis accelerometers and three single-axis gyroscopes. The accelerometers detect the acceleration signals of the object independently along the three axes of the carrier coordinate system, while the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system; by measuring the angular velocity and acceleration of the object in three-dimensional space, the attitude of the object is solved.
  • the IMU is used to measure the horizontal tilt and acceleration produced by the drone when it is in the current flight position.
  • the first preset condition may be that the horizontal tilt angle of the drone does not exceed a first range and the acceleration falls within a second range.
  • whether the horizontal inclination angle exceeds the first range is used to determine whether the drone was tossed up roughly level; the first range may be, for example, the interval from -30 degrees to 30 degrees.
  • the acceleration is used to determine whether the drone has been thrown; the second range may be, for example, the interval from -1.2 g to -0.6 g (where g is gravitational acceleration).
  • the IMU can also be used to detect other flight parameters that can reflect the drone being thrown.
  • the first preset condition may also be a range of other flight parameters, and the first range and the second range may be other reasonable ranges; no limitation is imposed here.
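The first preset condition can be sketched as a simple predicate. This is an illustrative reading of the translated text, not the patent's implementation; in particular, the interpretation that the acceleration must fall within the second range is our assumption:

```python
def is_valid_throw(tilt_deg: float, accel_g: float,
                   tilt_range=(-30.0, 30.0),
                   accel_range=(-1.2, -0.6)) -> bool:
    """First preset condition for throw-to-launch: the horizontal tilt
    angle must stay within the first range (the drone was tossed
    roughly level) and the measured acceleration must fall within the
    second range (consistent with the drone having been thrown).
    The ranges default to the example values given in the text.
    """
    tilt_ok = tilt_range[0] <= tilt_deg <= tilt_range[1]
    accel_ok = accel_range[0] <= accel_g <= accel_range[1]
    return tilt_ok and accel_ok
```

When both sub-conditions hold, the power system would be commanded to generate lift; otherwise the drone stays inert (e.g. it is merely being carried).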
  • S203 Acquire a target subject and save the feature of the target subject.
  • the target photographic subject may be a photographic subject set manually by the user, or a photographic subject found by the drone during take-off; the features of the target photographic subject are extracted after the subject is determined, where the feature-extraction algorithm may be, for example, a Convolutional Neural Network (CNN).
  • the order of S201 and S203 is not limited here: the subject in the picture may be taken as the target photographic subject after the drone takes off, or the user may manually set the target photographic subject before take-off and then control the drone to take off.
  • S205 Track the target photographic object according to the feature of the target photographic subject, and acquire the current image.
  • the drone tracks the target photographic subject using the saved features of the target photographic subject.
  • the current image, that is, the picture currently previewed by the imaging device 140, includes the target photographic subject.
  • S207 Analyze the position of the target subject in the current image, and if the position of the target subject in the current image satisfies the image capturing condition, the image is acquired.
  • the drone can automatically analyze the position of the target subject in the current image, and collect the image when the position of the target subject in the current image satisfies the image capturing condition.
  • alternatively, the drone can acquire the current image and upload the current image data to a server; the server analyzes the position of the target photographic subject in the current image according to the uploaded data, and when that position satisfies the image capturing condition, the server sends an image acquisition instruction to the drone, which acquires an image upon receiving it.
  • the image acquisition conditions can be determined according to the composition mode.
  • the image acquisition conditions of different composition modes are different, and the finally acquired images may include multiple images in multiple composition modes.
  • the present application provides various composition modes, as described in the following embodiments.
  • the image acquisition instruction may be a specific gesture instruction or a voice instruction issued by the user, indicating that the drone can start taking the picture, so that the user can pose before the photo is taken to obtain a more satisfactory photo.
  • the gesture command or voice command is pre-stored in the storage device of the drone. It can be known that the manner in which the image acquisition instruction is issued is not limited to the gesture instruction or the voice instruction issued by the user, and other implementation manners may be used in the actual use process, and no limitation is imposed herein.
  • the drone may also send a signal to the user to inform the user that the image capturing condition is currently met, so that the user can issue an image capture command in a timely and accurate manner.
  • the signal sent by the drone to the user that satisfies the image capturing condition may be sent by a certain signal light on the drone, for example, by causing the signal light to illuminate at a specific frequency to emit a signal.
  • the source of the signal is not limited to a signal light, and the signalling manner is not limited to blinking at a particular frequency; other implementations may be used in actual use, and no limitation is imposed here.
  • the embodiment of the present application controls the drone to take off, acquires the target photographic subject, tracks it, analyzes whether its position in the current image satisfies the image capturing condition, and acquires the image if the condition is met, thereby automating the drone photographing process without manual operation by the user and providing a variety of choices to enhance the user experience.
  • the current image further includes a background image.
  • the drone image acquisition method may further include:
  • the position of the target photographic subject in the current image can be changed by changing the distance between the drone and the target photographic subject.
  • the distance between the drone and the target subject includes a horizontal distance and a vertical distance.
  • different composition schemes change the position of the target photographic subject in the current image in different ways, and each composition scheme can do so in a number of different ways.
  • the position of the target photographic subject in the current image can be changed by adjusting the distance from the target photographic subject.
  • the distance between the drone and the target subject includes a horizontal distance and a vertical distance.
  • the user can preset the desired shooting effect, and the drone automatically composes the image during photographing to meet the user's settings; alternatively, the drone can automatically take pictures under various effects for the user to choose from, meeting the diverse needs of users.
  • adjusting the horizontal distance from the target photographic subject changes the proportion of the target photographic subject in the current picture: for example, when the horizontal distance is relatively close, a half-body photo of the user can be collected, and when the horizontal distance is far, a full-body photo of the user can be collected, as shown in FIG. 7.
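The relation between horizontal distance and the subject's proportion of the frame can be illustrated with a simple pinhole-camera model. This sketch is not from the patent; the field-of-view value is an arbitrary assumption:

```python
import math

def subject_height_fraction(subject_height_m: float,
                            distance_m: float,
                            vertical_fov_deg: float = 60.0) -> float:
    """Fraction of the image height occupied by the subject under a
    pinhole-camera model: the frame spans 2*d*tan(FOV/2) meters at
    distance d, and the subject occupies its height divided by that
    span. The 60-degree FOV is an illustrative assumption.
    """
    frame_height_at_subject = 2.0 * distance_m * math.tan(
        math.radians(vertical_fov_deg) / 2.0)
    return min(1.0, subject_height_m / frame_height_at_subject)
```

For a 1.7 m person, the fraction at 2 m is much larger than at 5 m, matching the half-body vs. full-body behavior described above.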
  • when composing according to a classical composition mode, for example the rule-of-thirds (nine-square-grid) method, the target photographic subject may be placed at an interest center by adjusting its position in the current image; here the current picture is divided into three equal parts horizontally and vertically, and the intersections of the dividing lines are the interest centers. It is also possible to compose images according to other classical composition patterns such as the diagonal composition method or the golden-spiral composition method.
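As a sketch (not part of the original disclosure), the rule-of-thirds interest centers for a given frame size can be computed as follows:

```python
def thirds_points(width: int, height: int):
    """Return the four interest centers of the rule-of-thirds grid:
    the frame is divided into three equal parts horizontally and
    vertically, and the intersections of the dividing lines are the
    interest centers where the subject may be placed."""
    xs = (width // 3, 2 * width // 3)
    ys = (height // 3, 2 * height // 3)
    return [(x, y) for x in xs for y in ys]

# For a 1920x1080 frame the interest centers are
# (640, 360), (640, 720), (1280, 360), (1280, 720).
```

The drone would then adjust distance and angles until the tracked subject's image position lands near one of these points.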
  • the heading angle and/or the pitch angle of the drone or of the pan/tilt 130 is adjusted to change the position of the target photographic subject in the current image.
  • the second preset condition may be a distance from the target photographic subject determined according to a shooting effect the user wants to obtain, set in advance; the second preset condition may also be a distance from the target photographic subject determined according to the various shooting effects automatically obtained by the drone. For example, adjusting the heading angle changes the left-right position of the target photographic subject in the current picture, as shown in FIG.
  • adjusting the pitch angle changes the up-down position of the target photographic subject in the current picture, as shown in FIG. Similarly, the user can preset the final desired image and the drone automatically composes during photographing to meet the user's needs; alternatively, the drone automatically takes pictures under various effects for the user to choose from, meeting the diverse needs of users.
  • at least two images are acquired, where the coincidence ratio between two adjacent images is within a preset range. If it is determined that the position of the target photographic subject in the current image satisfies the image capturing condition, the at least two acquired images are stitched together, as shown in FIG. 10: the portion where image 1 overlaps image 2 is the overlapping region, and the ratio of the overlapping region to the entire image is the coincidence ratio.
  • the preset range of the coincidence ratio between two adjacent images may be, for example, 20% to 30%.
  • when multiple images are obtained by adjusting the heading angle, the coincidence ratio between two adjacent images can be determined by the heading angle.
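The link between heading-angle step and coincidence ratio can be sketched with a first-order approximation (an illustration, not the patent's formula): if each image spans the camera's horizontal FOV and consecutive shots are separated by a yaw step, the overlap fraction is roughly one minus the ratio of the two:

```python
def overlap_ratio(hfov_deg: float, yaw_step_deg: float) -> float:
    """Approximate coincidence ratio between two images whose centers
    are separated by `yaw_step_deg` of heading, each covering
    `hfov_deg` of horizontal field of view (planar approximation)."""
    return max(0.0, 1.0 - yaw_step_deg / hfov_deg)

def yaw_step_for_overlap(hfov_deg: float, overlap: float) -> float:
    """Inverse: the heading-angle step that yields a desired overlap,
    e.g. to land inside the 20%-30% preset range above."""
    return hfov_deg * (1.0 - overlap)
```

For instance, with an assumed 80-degree horizontal FOV, a 25% coincidence ratio corresponds to a 60-degree yaw step between shots.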
  • feature points are matched in the overlapping regions of adjacent images, and then Bundle Adjustment (BA) optimization is performed to make the relative positions between the images more precise; the images are then subjected to exposure compensation, a splicing line (seam) is searched for, and finally, by warping, the images are projected onto a common surface so that the multiple images are stitched together.
  • the specific splicing method is shown in Figure 11.
  • the splicing algorithm can expand the shooting angle of the drone, provide the user with a wider field of view, and overcome the slow panoramic shooting of the prior art, providing a faster panoramic photographing method.
  • the drone image acquisition method may further include:
  • the drone can have the functions of intelligent background recognition and segmentation algorithms, and can fully utilize the background characteristics for composition.
  • by recognizing the background image, the position of the target photographic subject relative to the background can be changed by changing the relative position between the drone and the target photographic subject; that is, the drone moves to different orientations around the target photographic subject to obtain photos with different backgrounds. Specifically, as shown in FIG.
  • the distance between the drone and the target photographic object includes the horizontal distance and the vertical distance. Specifically, as shown in Figure 6.
  • the horizontal distance and the vertical distance between the drone and the target photographic subject can be adjusted so that the background is wide enough: the vertical distance should be about one meter higher than the overall height of the target photographic subject, and the horizontal distance should be about four or five meters.
  • the distance between the drone and the target photographic subject may also be adjusted according to the background by identifying the background of the current image, where the distance includes a horizontal distance and a vertical distance.
  • the position of the target photographic subject in the current image is changed by changing the heading angle and/or the pitch angle of the drone, or by changing the heading angle and/or the pitch angle of the pan/tilt 130 mounted on the drone.
  • the second preset condition may be the distance from the target photographic subject determined according to the shooting effect preset by the user mentioned in the previous embodiment, or the distance determined according to the various shooting effects automatically obtained by the drone mentioned in the previous embodiment.
  • the distance from the target photographic subject can be determined first, and then the heading angle and/or the pitch angle of the drone changed to adjust the position of the target photographic subject in the current image.
  • when the background image of the current image contains a prominent subject, for example a landmark in a scenic spot, the drone may shoot from the side so as to highlight that subject.
  • the distance between the drone and the target photographic subject may also be adjusted according to the background by identifying the background of the current image, where the distance includes a horizontal distance and a vertical distance.
  • the relative position with respect to the target photographic subject is adjusted according to the background; that is, the drone moves to different orientations around the target photographic subject.
  • the position of the target photographic subject in the current image is changed by changing the heading angle and/or the pitch angle of the drone, or by changing the heading angle and/or the pitch angle of the pan/tilt 130 mounted on the drone.
  • the second preset condition may be the distance from the target photographic subject determined according to the shooting effect preset by the user mentioned in the previous embodiment, or the distance determined according to the various shooting effects automatically obtained by the drone mentioned in the previous embodiment.
  • the third preset condition can satisfy different preset requirements of the user through the different background images acquired by the drone at different relative positions with respect to the target photographic subject; for example, the user can input the orientations corresponding to multiple shooting positions in advance, or the third preset condition may be a relative position with respect to the target photographic subject determined according to the various shooting backgrounds automatically acquired by the drone.
  • the relative position with respect to the target photographic subject can be adjusted after the various distances from the target photographic subject are determined, and finally the heading angle and/or the pitch angle adjusted to change the background image and the position of the target photographic subject in the current image, so as to obtain pictures under various effects for the user to select, meeting the diverse needs of the user.
  • the drone always follows the target photographic subject, so the target photographic subject is necessarily in the picture. Then, according to the background, the distance from the target photographic subject is first changed to compose the image: for a big scene, move farther away; for a close shot, move slightly closer. Next, the relative position with respect to the target photographic subject is changed to bring the highlight features of the background into the picture. Finally, it is judged whether there is a focused subject near the target photographic subject: if so, the focused subject is placed at the center of the picture via the heading angle and/or the pitch angle; if not, the target photographic subject is placed at the center to compose the image.
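The decision flow above can be sketched as a small planner. The step names and scene labels are ours, purely for illustration; the patent does not define such an API:

```python
def plan_composition(scene: str, has_focus_object: bool) -> list:
    """Sketch of the composition flow: choose a distance from the
    scene type, orbit the subject to bring background highlights into
    frame, then center either the focused subject or the target
    subject via heading/pitch adjustment."""
    steps = []
    # Step 1: distance from the subject depends on the scene scale.
    steps.append("move far from subject" if scene == "wide"
                 else "move closer to subject")
    # Step 2: adjust relative position around the subject.
    steps.append("orbit subject to frame background highlights")
    # Step 3: centering decision.
    if has_focus_object:
        steps.append("center focus object via heading/pitch")
    else:
        steps.append("center target subject via heading/pitch")
    return steps
```

Each returned step would map onto the distance and heading/pitch adjustments described in the preceding embodiments.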
  • a comparison image whose similarity to the current image exceeds a first threshold is searched for, and the shooting parameters of the comparison image are acquired, where the shooting parameters include the distance between the camera and the photographic subject in the comparison image, the heading angle, and the pitch angle; the distance includes a horizontal distance and a vertical distance. The position of the target photographic subject in the current image is then adjusted according to the shooting parameters of the comparison image.
  • the first threshold may be, for example, 80%, 85%, 90%, or the like. When there is more than one image having a similarity with the current image exceeding the first threshold, the image with the highest similarity may be selected as the comparison image.
  • the best photos of each scene can be collected from the network in advance; the image scenes are then learned using a CNN algorithm, and the trained model is saved in the drone.
  • the CNN algorithm can be used to find the comparison image closest to the current image, and the composition is then imitated according to the composition pattern in the comparison image; in this way, the strengths of professional photographers can be integrated to produce a beautiful image.
  • the shooting parameters of the comparison image, including the distance from the photographic subject, the heading angle, and the pitch angle, may be acquired according to the position of the photographic subject in the comparison image. Then, according to these shooting parameters, the distance, heading angle, and pitch angle between the drone and the target photographic subject are adjusted, thereby obtaining a composition similar to the comparison image and a better photographing effect.
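The selection of the comparison image can be sketched as follows. The feature vectors, the toy similarity score, and the library structure are all illustrative assumptions standing in for the CNN-based scene comparison described above:

```python
def pick_contrast_image(current_features, library, threshold=0.8):
    """Select the library image most similar to the current image,
    provided the similarity exceeds the first threshold, and return
    its shooting parameters so the drone can imitate the composition.
    Each library entry is assumed to look like
    {"features": [...], "params": {...}}."""
    def similarity(a, b):
        # Toy score in [0, 1]: 1 minus normalized absolute difference.
        num = sum(abs(x - y) for x, y in zip(a, b))
        den = sum(abs(x) + abs(y) for x, y in zip(a, b)) or 1.0
        return 1.0 - num / den

    best, best_sim = None, threshold
    for entry in library:
        s = similarity(current_features, entry["features"])
        if s > best_sim:            # strictly exceed the threshold
            best, best_sim = entry, s
    return None if best is None else best["params"]
```

If no library image exceeds the threshold, the drone would fall back to its default composition modes.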
  • the embodiments of the present application provide a drone image acquisition method that can control the drone to take off, acquire a target photographic subject, track it, analyze whether its position in the current image satisfies the image acquisition condition, and acquire images when the condition is met, automating the drone photographing process without manual operation by the user. Further, the embodiments of the present application provide a plurality of automatic composition modes for the drone, so that pictures under various shooting effects can be obtained, providing users with various choices and enhancing the user experience.
  • the embodiment of the present application further provides a drone.
  • the drone 30 can include at least an instruction receiving module 310, an acquisition saving module 320, a tracking acquisition module 330, and an analysis collection module 340;
  • the instruction receiving module 310 is configured to receive a takeoff instruction.
  • the acquisition save module 320 is configured to acquire a target photographic subject and save the feature of the target photographic subject.
  • the tracking acquisition module 330 is configured to track the target photographic object according to the feature of the target photographic subject, and acquire the current image; wherein the current image includes the target photographic subject.
  • the analysis acquisition module 340 is configured to analyze the position of the target photographic subject in the current image, and if the position of the target photographic subject in the current image satisfies the image collection condition, the image is acquired.
  • the instruction receiving module 310 includes: a first detecting unit 3110, a first take-off unit 3120; wherein:
  • the first detecting unit 3110 is configured to search for a target image after detecting an instruction to trigger the takeoff.
  • the first take-off unit 3120 is configured to cause the drone 30 to generate lift when the target image matches the preset image.
  • the instruction receiving module 310 includes: a second detecting unit 3130 and a second take-off unit 3140; wherein:
  • the second detecting unit 3130 is configured to detect a change in the inertial measurement unit data of the drone 30;
  • the second take-off unit 3140 is configured to cause the drone 30 to generate lift if the change of the inertial measurement unit data satisfies the first preset condition.
  • the analysis acquisition module 340 includes: an analysis determination unit 3410 and an acquisition unit 3420; wherein:
  • the analysis determining unit 3410 is configured to analyze the position of the target photographic subject in the current image, and if the position of the target photographic subject in the current image satisfies the image capturing condition, determine whether an image capturing instruction is received;
  • the acquisition unit 3420 is configured to acquire an image if the analysis determination unit 3410 determines that an image acquisition instruction is received.
  • the drone 30 further includes a position changing module 350, as shown in FIG. 17, configured to change the position of the target photographic subject in the current image after the tracking acquisition module 330 tracks the target photographic subject according to its features and acquires the current image, and before the analysis acquisition module 340 analyzes the position of the target photographic subject in the current image and acquires the image when that position satisfies the image acquisition condition.
  • the position changing module 350 is specifically configured to adjust the distance to the target photographic subject, wherein the distance includes a horizontal distance and a vertical distance, and to change the position of the target photographic subject in the current image through that distance.
  • the position changing module 350 is specifically configured to adjust the distance to the target photographic subject, wherein the distance includes a horizontal distance and a vertical distance; to adjust the heading angle and/or the pitch angle after the distance to the target photographic subject satisfies the second preset condition; and to change the position of the target photographic subject in the current image through the heading angle and/or the pitch angle.
  • the analysis acquisition module 340 is specifically configured to analyze the position of the target photographic subject in the current image and, if that position satisfies the image acquisition condition, acquire at least two images, wherein the coincidence rate between two adjacent images lies within a preset range.
  • the drone 30 further includes an image stitching module 360, as shown in FIG. 18, configured to stitch the at least two images after the analysis acquisition module 340 acquires them.
  • the current image further includes a background image.
  • the drone 30 further includes a position changing module 350, configured to change the position of the target photographic subject in the current image after the tracking acquisition module 330 acquires the current image, and before the analysis acquisition module 340 acquires an image upon determining that the position of the target photographic subject in the current image satisfies the image acquisition condition.
  • the analysis acquisition module 340 is specifically configured to analyze the position of the target photographic subject in the current image and acquire an image if, according to the background image of the current image, that position is determined to satisfy the image acquisition condition.
  • the position change module 350 is specifically configured to adjust a relative position with the target photographic subject.
  • the position changing module 350 is specifically configured to identify a background image of the current image, and adjust a distance from the target photographic subject according to the background image; wherein the distance includes a horizontal distance and a vertical distance.
  • the position changing module 350 is specifically configured to identify the background of the current image and adjust the distance to the target photographic subject according to the background, wherein the distance includes a horizontal distance and a vertical distance, and to adjust the heading angle and/or the pitch angle after the distance to the target photographic subject satisfies the second preset condition.
  • the position changing module 350 is specifically configured to identify the background of the current image and adjust the distance to the target photographic subject according to the background, wherein the distance includes a horizontal distance and a vertical distance; to adjust the relative position to the target photographic subject according to the background after that distance satisfies the second preset condition; and to adjust the heading angle and/or the pitch angle after the relative position to the target photographic subject satisfies the third preset condition.
  • in addition to the instruction receiving module 310, the acquisition and storage module 320, the tracking acquisition module 330, and the analysis acquisition module 340, the drone 30 includes a lookup module 370, a parameter acquisition module 380, and an adjustment module 390, as shown in FIG. 19, wherein:
  • the lookup module 370 is configured to find a comparison image whose similarity with the current image exceeds a first threshold.
  • the parameter acquisition module 380 is configured to acquire the shooting parameters of the comparison image, wherein the shooting parameters include the distance, heading angle, and pitch angle relative to the target subject in the comparison image, and the distance includes a horizontal distance and a vertical distance.
  • the adjustment module 390 is configured to adjust the position of the target photographic subject in the current image according to the shooting parameters of the comparison image.
  • the embodiments of the present application can control the drone to take off and acquire the target photographic subject, track the subject, and analyze whether its position in the current image satisfies the image acquisition condition, acquiring an image if it does. This automates the drone photography process without manual operation by the user. Further, the embodiments also provide several automatic composition modes for the drone, obtaining pictures under various shooting effects and providing users with diverse choices to improve the user experience.
  • FIG. 20 is a schematic structural diagram of another unmanned aerial vehicle according to an embodiment of the present application.
  • the drone 40 may include at least a memory 410 and a processor 420.
  • the memory 410 and the processor 420 are connected by a bus 430.
  • the memory 410 is configured to store a drone image acquisition program
  • the processor 420 is configured to invoke the drone image acquisition program in the memory 410 and execute:
  • receiving a take-off instruction; acquiring a target photographic subject and saving the features of the target photographic subject; tracking the target photographic subject according to its features and acquiring a current image, wherein the current image includes the target photographic subject; analyzing the position of the target photographic subject in the current image, and acquiring an image if that position satisfies the image acquisition condition.
  • the processor 420 receiving the takeoff command includes: after detecting the instruction to trigger the takeoff, searching for the target image; when the target image matches the preset image, causing the drone to generate lift.
  • the processor 420 receiving the takeoff command includes: detecting a change in the inertial measurement unit data of the drone; and causing the drone to generate lift if the change in the inertial measurement unit data satisfies the first preset condition.
  • if the position of the target photographic subject in the current image satisfies the image acquisition condition, the processor 420 acquiring the image includes: determining whether an image acquisition instruction is received; and acquiring the image if an image acquisition instruction is received.
  • after the target photographic subject is tracked according to its features and the current image is acquired, the position of the target photographic subject in the current image is analyzed; if that position satisfies the image acquisition condition, then before the image is acquired, the processor 420 is further configured to: change the position of the target photographic subject in the current image.
  • the processor 420 changing the position of the target photographic subject in the current image includes: adjusting the distance to the target photographic subject, wherein the distance includes a horizontal distance and a vertical distance; and changing the position of the target photographic subject in the current image through that distance.
  • the processor 420 changing the position of the target photographic subject in the current image includes: adjusting the distance to the target photographic subject, wherein the distance includes a horizontal distance and a vertical distance; adjusting the heading angle and/or the pitch angle after that distance satisfies the second preset condition; and changing the position of the target photographic subject in the current image through the heading angle and/or the pitch angle.
  • the processor 420 acquiring the image includes: acquiring at least two images if the position of the target photographic subject in the current image satisfies the image acquisition condition, wherein the coincidence rate between two adjacent images lies within a preset range; after the at least two images are acquired, the processor 420 is further configured to: stitch the at least two images.
  • the current image further includes a background image. After the target photographic subject is tracked according to its features and the current image is acquired, and before the image is acquired when the position of the target photographic subject in the current image satisfies the image acquisition condition, the processor 420 is further configured to: change the position of the target photographic subject in the current image; and the processor 420 acquiring the image includes: acquiring the image if, according to the background image of the current image, the position of the target photographic subject in the current image is determined to satisfy the image acquisition condition.
  • the processor 420 changing the position of the target photographic subject in the current image comprises: adjusting a relative position with the target photographic subject.
  • the processor 420 changing the position of the target photographic subject in the current image includes: identifying the background image of the current image and adjusting the distance to the target photographic subject according to the background image, wherein the distance includes a horizontal distance and a vertical distance.
  • the processor 420 changing the position of the target photographic subject in the current image includes: identifying the background of the current image and adjusting the distance to the target photographic subject according to the background, wherein the distance includes a horizontal distance and a vertical distance; and adjusting the heading angle and/or the pitch angle after the distance to the target photographic subject satisfies the second preset condition.
  • the processor 420 changing the position of the target photographic subject in the current image includes: identifying the background of the current image and adjusting the distance to the target photographic subject according to the background, wherein the distance includes a horizontal distance and a vertical distance; adjusting the relative position to the target photographic subject according to the background after that distance satisfies the second preset condition; and adjusting the heading angle and/or the pitch angle after the relative position to the target photographic subject satisfies the third preset condition.
  • after the target photographic subject is tracked according to its features and the current image is acquired, the position of the target photographic subject in the current image is analyzed; if that position satisfies the image acquisition condition, then before the image is acquired, the processor 420 is further configured to: search for a comparison image whose similarity with the current image exceeds a first threshold; acquire the shooting parameters of the comparison image, wherein the shooting parameters include the distance, heading angle, and pitch angle relative to the target subject in the comparison image, and the distance includes a horizontal distance and a vertical distance; and adjust the position of the target photographic subject in the current image according to the shooting parameters of the comparison image.
  • the embodiments of the present application can control the drone to take off and acquire the target photographic subject, track the subject, and analyze whether its position in the current image satisfies the image acquisition condition, acquiring an image if it does. This automates the drone photography process without manual operation by the user. Further, the embodiments also provide several automatic composition modes for the drone, obtaining pictures under various shooting effects and providing users with diverse choices to improve the user experience.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a drone image acquisition method and a drone. The method includes: receiving a take-off instruction; acquiring a target photographic subject and saving the features of the target photographic subject; tracking the target photographic subject according to its features and acquiring a current image, wherein the current image includes the target photographic subject; and analyzing the position of the target photographic subject in the current image, and acquiring an image if that position satisfies an image acquisition condition. This automates the drone photography process without manual operation by the user, provides the user with diverse options, and improves the user experience.

Description

Drone image acquisition method and drone
The disclosure of this patent document contains material subject to copyright protection. The copyright is owned by the copyright holder, who has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical Field
The present application relates to the field of drones, and in particular to a drone image acquisition method and a drone.
Background
Among drone applications, a range of selfie drones has appeared on the market, built around quick photos and short video clips for sharing on social platforms such as WeChat Moments and Weibo. However, the shooting process of existing drones is cumbersome: the user must first control the aircraft to take off, then adjust its position for composition via a remote controller or an application, and only then take the photo. If the user is not satisfied with the result, the aircraft must be flown to another position and the shot recomposed. This process offers little automation, requires many user inputs, and the aircraft provides no diverse options.
Summary of the Invention
In view of this, the present application provides a drone image acquisition method and a drone that automate the drone photography process without manual operation by the user and provide the user with diverse options.
A first aspect of the embodiments of the present application provides a drone image acquisition method, including:
receiving a take-off instruction;
acquiring a target photographic subject and saving the features of the target photographic subject;
tracking the target photographic subject according to the features of the target photographic subject and acquiring a current image, wherein the current image includes the target photographic subject;
analyzing the position of the target photographic subject in the current image, and acquiring an image if the position of the target photographic subject in the current image satisfies an image acquisition condition.
A second aspect of the embodiments of the present application provides a drone, including:
a memory for storing a drone image acquisition program;
a processor for invoking the drone image acquisition program in the memory and executing:
receiving a take-off instruction;
acquiring a target photographic subject and saving the features of the target photographic subject;
tracking the target photographic subject according to the features of the target photographic subject and acquiring a current image, wherein the current image includes the target photographic subject;
analyzing the position of the target photographic subject in the current image, and acquiring an image if the position of the target photographic subject in the current image satisfies an image acquisition condition.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, performs the drone image acquisition method provided in the first aspect of the embodiments of the present application.
With the drone image acquisition method and drone provided by the embodiments of the present application, a take-off instruction is received; a target photographic subject is acquired and its features are saved; the subject is tracked according to those features and a current image, which includes the subject, is acquired; and the subject's position in the current image is analyzed, with an image acquired if that position satisfies the image acquisition condition. The drone thus automatically tracks the target photographic subject and judges from its position in the image whether the conditions for taking a photo are met, automating the drone photography process and improving the user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings required in the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a structural diagram of a drone according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a drone image acquisition method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a drone take-off method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another drone take-off method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a drone image acquisition method according to another embodiment of the present application;
FIG. 6 is a schematic diagram of the distance between the drone and the target photographic subject according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the effect of adjusting the distance according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the effect of adjusting the heading angle according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the effect of adjusting the pitch angle according to an embodiment of the present application;
FIG. 10 is a schematic diagram of overlapping images according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of an image stitching method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the relative position between the drone and the target photographic subject according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of a drone according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an instruction receiving module according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of another instruction receiving module according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of an analysis acquisition module according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a drone according to another embodiment of the present application;
FIG. 18 is a schematic structural diagram of a drone according to another embodiment of the present application;
FIG. 19 is a schematic structural diagram of a drone according to another embodiment of the present application;
FIG. 20 is a schematic structural diagram of a drone according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings of the embodiments of the present invention. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
An embodiment of the present application provides a drone. FIG. 1 is a structural diagram of a drone according to an embodiment of the present invention. As shown in FIG. 1, the drone in this embodiment may include:
a fuselage 110;
a power system 120 mounted on the fuselage for providing flight power;
a gimbal 130 and an imaging device 140, the imaging device 140 being carried on the fuselage 110 of the drone via the gimbal 130. The imaging device 140 is used for taking images or videos during the drone's flight and includes, but is not limited to, a multispectral imager, a hyperspectral imager, a visible-light camera, and an infrared camera. The gimbal 130 is a multi-axis transmission and stabilization system: its motors compensate the shooting angle of the imaging device 140 by adjusting the rotation angles of the rotation axes, and suitable damping mechanisms prevent or reduce shake of the imaging device 140.
The drone image acquisition method provided by the embodiments of the present application is described next with reference to FIGS. 2-12.
First refer to FIG. 2, a schematic flowchart of the drone image acquisition method according to an embodiment of the present application. As shown in FIG. 2, the drone image acquisition method may include at least the following steps:
S201: Receive a take-off instruction.
Specifically, the take-off instruction may be input by the user through a control terminal matched with the drone. The instruction may be input via a control stick on the terminal, via a take-off button on the terminal's control panel, via a voice command, via face scanning, or by throwing the drone into the air.
Specifically, the method of inputting the take-off instruction by face scanning includes at least the following steps, as shown in FIG. 3:
S2011: Receive an instruction that triggers take-off.
Specifically, the instruction that triggers take-off may be the user double-clicking or long-pressing the power button on the drone, or double-clicking or long-pressing the power button on the control terminal matched with the drone.
S2013: Search for a target image.
Specifically, after the instruction that triggers take-off is received, the gimbal 130 controls the imaging device 140 to search the frame for the target image, for example by changing the heading angle or the pitch angle of the gimbal 130.
S2015: When the target image matches a preset image, cause the drone to generate lift.
Specifically, the preset image may be an image saved in advance in the drone's memory by the user, such as an image of the user's face or another image. When the target image found in the frame by the imaging device 140 matches the preset image saved in advance by the user, the power system 120 of the drone is controlled to generate lift.
Specifically, the target image may be considered to match the preset image when their similarity exceeds a certain threshold, for example 80%, 90%, 95%, or 100%.
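The similarity-threshold match just described can be sketched as follows. This is a minimal illustration, assuming both images are equally sized grayscale pixel lists and using cosine similarity as a stand-in for whatever matching algorithm the drone actually employs; the function names `image_similarity` and `matches_preset` are hypothetical.

```python
def image_similarity(a, b):
    """Cosine similarity between two equally sized grayscale images,
    each given as a flat list of pixel intensities (0-255)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

def matches_preset(target, preset, threshold=0.90):
    """Treat the target image as matching the preset image when the
    similarity exceeds the threshold (e.g. the 80%-100% range in the text)."""
    return image_similarity(target, preset) >= threshold
```

In practice the comparison would run on face embeddings or learned features rather than raw pixels, but the thresholding step is the same.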
In addition, the method of inputting the take-off instruction by throwing includes at least the following steps, as shown in FIG. 4:
S2017: Detect changes in the drone's inertial measurement unit data.
Specifically, the user throws the hand-held drone outward, and the drone detects changes in its inertial measurement unit data while in motion. The inertial measurement unit (IMU) contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect acceleration signals along the three independent axes of the body coordinate system, while the gyroscopes detect angular velocity signals relative to the navigation coordinate system. By measuring the angular velocity and acceleration of the object in three-dimensional space, the attitude of the object can be solved. In one specific implementation, the IMU is used to measure the horizontal tilt angle and acceleration produced by the drone at its current flight position.
S2019: If the change in the inertial measurement unit data satisfies a first preset condition, cause the drone to generate lift.
Specifically, the first preset condition may be that the horizontal tilt angle of the drone does not exceed a first range and the acceleration exceeds a second range. The horizontal tilt angle not exceeding the first range is used to determine whether the drone has been thrown up level; the first range may, for example, be -30 to 30 degrees. The acceleration is used to determine whether the drone has been thrown; the second range may, for example, be -0.6g to -1.2g (where g is the gravitational acceleration).
In certain specific implementations, the IMU may also detect other flight parameters that reflect the drone being thrown, the first preset condition may concern ranges of other flight parameters, and the first and second ranges may be other reasonable ranges; no limitation is imposed here.
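The first preset condition described above (level tilt plus an acceleration band indicating a throw) can be sketched as a simple predicate. The specific thresholds below come from the text's examples; the function name `throw_detected` and the choice of passing tilt in degrees and acceleration in multiples of g are assumptions for illustration.

```python
def throw_detected(tilt_deg, accel_g):
    """First preset condition: the drone is roughly level (tilt within
    -30..30 degrees) and the measured acceleration falls in the
    -0.6g..-1.2g band that indicates it has been thrown."""
    level = -30.0 <= tilt_deg <= 30.0
    thrown = -1.2 <= accel_g <= -0.6
    return level and thrown
```

A real controller would evaluate this over a short window of filtered IMU samples rather than a single reading, and spin up the motors only once the condition holds stably.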
S203: Acquire the target photographic subject and save its features.
Specifically, the target photographic subject may be set manually by the user, or may be a subject found within the frame while the drone is taking off. After the subject is determined, its features are extracted; the feature-extraction algorithm may, for example, be a convolutional neural network (CNN).
It should be understood that the order of S201 and S203 is not limited here. The subject within the frame may be taken as the target photographic subject after the drone takes off, or the user may first set the target photographic subject manually before take-off and then control the drone to take off.
S205: Track the target photographic subject according to its features and acquire the current image.
Specifically, the drone tracks the target photographic subject using its saved features. The current image includes the target photographic subject, i.e., the current frame previewed through the imaging device 140.
S207: Analyze the position of the target photographic subject in the current image, and if that position satisfies the image acquisition condition, acquire an image.
Specifically, the drone may automatically analyze the position of the target photographic subject in the current image and acquire an image when that position satisfies the image acquisition condition.
Specifically, the drone may also acquire the current image and upload its data to a server. The server analyzes the position of the target photographic subject in the current image from the uploaded data and, when that position satisfies the image acquisition condition, sends an image acquisition instruction to the drone, which acquires the image upon receiving it.
The image acquisition condition may be determined by the composition scheme. Different composition schemes have different image acquisition conditions, and the images finally acquired may include multiple images under multiple composition schemes. The present application provides several composition schemes, described in detail in the subsequent embodiments.
In another implementation, after it is determined that the position of the target photographic subject in the current image satisfies the image acquisition condition, it is further determined whether an image acquisition instruction issued by the user has been received; if so, the image is acquired.
The image acquisition instruction may be a specific gesture or voice command issued by the user, telling the drone that it may start shooting, so that the user can pose before the photo and obtain a better picture. The gesture or voice command is saved in advance in the drone's storage device. It should be understood that the way the image acquisition instruction is issued is not limited to a user gesture or voice command; other implementations are possible in practice and no limitation is imposed here.
In addition, after it is determined that the position of the target photographic subject in the current image satisfies the image acquisition condition, and before the user issues the image acquisition instruction, the drone may signal the user that the condition is currently met, so that the user can issue the instruction promptly and accurately. The signal may, for example, be emitted by an indicator light on the drone blinking at a specific frequency. Of course, the signalling component is not limited to an indicator light, nor is the signal limited to a blink frequency; other implementations are possible in practice and no limitation is imposed here.
In the embodiments of the present application, the drone is controlled to take off and acquire the target photographic subject, the subject is tracked, and whether its position in the current image satisfies the image acquisition condition is analyzed; if so, an image is acquired. This automates the drone photography process without manual operation by the user, provides the user with diverse options, and improves the user experience.
In another embodiment, the current image further includes a background image. After S205 and before S207, as shown in FIG. 5, the drone image acquisition method may further include:
S206: Change the position of the target photographic subject in the current image.
Specifically, the position of the target photographic subject in the current image can be changed by changing the distance between the drone and the subject, which includes a horizontal distance and a vertical distance, as shown in FIG. 6. Specifically, the vertical distance h between the drone and the target photographic subject can be obtained from the drone's inertial navigation system (INS), and the horizontal distance s can be obtained from h and θ as s = h·tanθ, where the angle θ can also be obtained from the drone's INS.
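The geometry just described, s = h·tanθ, can be computed directly. A minimal sketch, with the function name `horizontal_distance` chosen for illustration:

```python
import math

def horizontal_distance(h, theta_deg):
    """Horizontal distance s = h * tan(theta), where h is the vertical
    distance reported by the INS and theta is the angle (in degrees)
    between the vertical and the line of sight to the subject."""
    return h * math.tan(math.radians(theta_deg))
```

For example, at a vertical distance of 10 m and θ = 45°, the horizontal distance works out to about 10 m.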
Specifically, the position of the target photographic subject in the current image can also be changed by changing the heading angle and/or pitch angle of the drone, or by changing the heading angle and/or pitch angle of the gimbal 130 carried on the drone.
It should be noted that different composition schemes may change the position of the target photographic subject in the current image in correspondingly different ways, and each composition scheme may do so in several different ways.
In one possible implementation, the position of the target photographic subject in the current image can be changed by adjusting the distance to it, which includes a horizontal distance and a vertical distance. Adjusting the vertical distance adjusts the effect of the final picture: when the vertical distance to the subject is small, a low-angle (upward) shot can be created, and as the vertical distance gradually increases, eye-level and then high-angle (downward) shots can be created. The user may preset the desired shooting effect, and the drone composes automatically during shooting to meet that setting; alternatively, the drone may automatically shoot pictures under various effects for the user to choose from, meeting diverse needs. After the vertical distance to the subject is adjusted, the horizontal distance is further adjusted. Adjusting the horizontal distance changes the proportion of the target photographic subject in the picture: at a short horizontal distance a half-length portrait of the user can be captured, and at a long horizontal distance a full-length portrait, as shown in FIG. 7.
In one specific implementation, when composing according to a classic composition scheme, such as the rule of thirds or the nine-square grid, the position of the target photographic subject in the current image can be adjusted to place it at a point of interest, where the frame is divided into three equal parts horizontally and vertically and the intersections of the lines are the points of interest. In specific implementations, other classic composition schemes, such as diagonal composition or golden-spiral composition, may also be used.
Further, in another possible implementation, after the distance to the target photographic subject satisfies the second preset condition, i.e., after that distance has been adjusted, the heading angle and/or pitch angle of the drone or of the gimbal 130 may additionally be changed to adjust the position of the target photographic subject in the current image. The second preset condition may be the distance to the subject determined from the shooting effect the user preset, or from the various shooting effects the drone obtains automatically. For example, adjusting the heading angle changes the subject's left-right position in the picture, as shown in FIG. 8, and adjusting the pitch angle changes its up-down position, as shown in FIG. 9. Likewise, the user may preset the desired final shooting effect and the drone composes automatically to meet it, or the drone may automatically shoot pictures under various effects for the user to choose from, meeting diverse needs.
In addition, in another possible implementation, when the position of the target photographic subject in the current image satisfies the image acquisition condition, at least two images are acquired, with the coincidence rate between two adjacent images lying within a preset range; after the at least two images are acquired upon that determination, they are stitched. As shown in FIG. 10, the part where image 1 and image 2 overlap is the overlap region, and the proportion of the overlap region in the whole image is the coincidence rate.
Specifically, the preset range of the coincidence rate between two adjacent images may, for example, be 20% to 30%.
Specifically, after the distance to the target photographic subject (including the horizontal and vertical distances) has been adjusted, multiple images are obtained by adjusting the heading angle, and the coincidence rate between adjacent images can be controlled by the size of the heading-angle step. After multiple images satisfying the coincidence rate are obtained, feature points are matched in the overlap regions of the adjacent images, bundle adjustment (BA) is performed to make the relative positions of the images more precise, exposure compensation is then applied to the images, stitching seams are found, and finally, through warping, the multiple images are projected into a single stitched image. The specific stitching method is shown in FIG. 11. This stitching algorithm enlarges the drone's shooting angle, offering the user a wider shooting view, and overcomes the slow panorama shooting of the prior art, providing a faster panorama method.
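The relationship between the heading-angle step and the coincidence rate of adjacent frames can be sketched with simple field-of-view arithmetic. This is a minimal illustration assuming an idealized pinhole camera with a known horizontal field of view; the function names are hypothetical, and the real pipeline (feature matching, bundle adjustment, seam finding, warping) is far more involved.

```python
def yaw_step_for_overlap(hfov_deg, overlap):
    """Yaw increment between consecutive shots so that adjacent frames
    share the given overlap fraction of the horizontal field of view."""
    if not 0.0 <= overlap < 1.0:
        raise ValueError("overlap must be in [0, 1)")
    return hfov_deg * (1.0 - overlap)

def overlap_rate(hfov_deg, yaw_step_deg):
    """Coincidence rate: fraction of the frame shared by two shots
    separated by the given yaw step."""
    return max(0.0, 1.0 - yaw_step_deg / hfov_deg)
```

With an 80° horizontal field of view, a yaw step of 60° between shots yields the 25% overlap that sits inside the 20%-30% range quoted in the text.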
In another embodiment, after S205 and before S207, the drone image acquisition method may further include:
S206: Change the position of the target photographic subject in the current image.
Specifically, the drone may have intelligent background recognition and segmentation algorithms and can make full use of background characteristics for composition.
In one possible implementation, the background image can be recognized and the position of the target photographic subject in the current image changed by changing the relative position between the drone and the subject, i.e., the drone moves to different bearings around the subject, so that photos with different backgrounds can be obtained, as shown in FIG. 12.
In another possible implementation, the background image of the current image can be recognized and the distance between the drone and the target photographic subject adjusted according to that background image, thereby changing the subject's position in the current image; the distance includes a horizontal distance and a vertical distance, as shown in FIG. 6.
For example, when the background image of the current image is recognized as a seaside, the horizontal and vertical distances between the drone and the subject can be adjusted for composition so that the background is sufficiently open. Specifically, the vertical distance between the drone and the subject should be about one meter higher than the overall height of the subject, and the horizontal distance should be roughly four to five meters.
In another possible implementation, the background of the current image can be recognized and the distance to the target photographic subject adjusted according to it, the distance including a horizontal distance and a vertical distance. After the distance to the subject satisfies the second preset condition, the subject's position in the current image is changed by changing the heading angle and/or pitch angle of the drone, or of the gimbal 130 carried on the drone. The second preset condition may be the distance to the subject determined from the shooting effect preset by the user as mentioned in the previous embodiment, or from the various shooting effects the drone obtains automatically. When several shooting effects are needed, the distance to the subject can be determined under each effect, and the heading angle and/or pitch angle then changed to reposition the subject in the current image, yielding pictures under various effects for the user to choose from, meeting diverse needs.
For example, when the background image of the current image is recognized as the main feature of a scenic spot, i.e., when there is a prominent subject in the background, composition requires changing the heading angle and pitch angle so that the main scenery is at the center of the frame and the person at the side, highlighting the scenic object.
In another possible implementation, the background of the current image can be recognized and the distance to the target photographic subject adjusted according to it, the distance including a horizontal distance and a vertical distance. After the distance to the subject satisfies the second preset condition, the relative position to the subject, i.e., the drone's bearing around the subject, is adjusted according to the background. After the relative position to the subject satisfies a third preset condition, the subject's position in the current image is changed by changing the heading angle and/or pitch angle of the drone, or of the gimbal 130 carried on the drone. The second preset condition may be the distance to the subject determined from the user-preset shooting effect mentioned in the previous embodiment, or from the various shooting effects the drone obtains automatically. The third preset condition may be that the different background images obtained by changing the drone's relative position to the subject satisfy requirements preset by the user, for example bearings corresponding to several shooting positions the user entered in advance; the third preset condition may also be a relative position to the subject determined from the various shooting backgrounds the drone obtains automatically. When several shooting effects are needed, the distances to the subject can be determined first, the relative position to the subject then adjusted, and finally the heading angle and/or pitch angle adjusted to change the background image of the current image and the subject's position in it, yielding pictures under various effects for the user to choose from, meeting diverse needs.
For example, during shooting the drone always follows the target photographic subject, so the subject is necessarily in the frame. According to the background, the distance to the subject is first changed for composition: farther away for a grand scene, slightly closer for a close-up. The relative position to the subject is then changed to bring the highlights of the background into the frame. Finally, it is judged whether there is a key subject near the target photographic subject: if so, the heading angle and/or pitch angle place that key subject at the center of the frame; if not, the target photographic subject is placed at the center for composition.
In another possible implementation, a comparison image whose similarity with the current image exceeds a first threshold can also be found; the shooting parameters of the comparison image are obtained, the shooting parameters including the distance, heading angle, and pitch angle relative to the target subject in the comparison image, with the distance including a horizontal distance and a vertical distance; and the position of the target photographic subject in the current image is adjusted according to the shooting parameters of the comparison image. The first threshold may, for example, be 80%, 85%, or 90%. When more than one image has a similarity with the current image exceeding the first threshold, the image with the highest similarity can be selected as the comparison image.
Specifically, the best photos of each scene can be aggregated directly from the internet, and a CNN algorithm used to learn the image scenes and train a model, which is stored in the drone. When automatic composition shooting is triggered, the CNN algorithm directly finds the comparison image closest to the current image, and composition then imitates the composition of that comparison image. In this way the strengths of professional photographers can be combined to take fine images.
Specifically, the shooting parameters of the comparison image, including the distance, heading angle, and pitch angle relative to its subject, can be obtained from the position of the subject in the comparison image. The drone's distance, heading angle, and pitch angle relative to the target photographic subject are then adjusted according to the comparison image's shooting parameters, obtaining a composition similar to that of the comparison image and a better shooting effect.
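The final adjustment step, moving the drone's shooting parameters toward those recovered from the comparison image, amounts to computing parameter deltas. A minimal sketch under the assumption that the parameters are held in plain dictionaries with hypothetical keys (`horizontal`, `vertical`, `yaw`, `pitch`):

```python
def adjust_toward(current, reference):
    """Deltas to apply so the drone's shooting parameters match those
    recovered from the comparison image: positive values mean increase."""
    keys = ("horizontal", "vertical", "yaw", "pitch")
    return {k: reference[k] - current[k] for k in keys}
```

The flight controller would then consume these deltas as setpoint changes for position and gimbal angles.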
The embodiments of the present application provide a drone image acquisition method in which the drone is controlled to take off and acquire the target photographic subject, the subject is tracked, and whether its position in the current image satisfies the image acquisition condition is analyzed; if so, an image is acquired. This automates the drone photography process without manual operation by the user. Further, the embodiments also provide several automatic composition schemes for the drone, obtaining pictures under various shooting effects and providing users with diverse choices to improve the user experience.
For a better understanding of the drone image acquisition method described in the above embodiments, an embodiment of the present application correspondingly provides a drone. As shown in FIG. 13, the drone 30 may include at least an instruction receiving module 310, an acquisition and storage module 320, a tracking acquisition module 330, and an analysis acquisition module 340, wherein:
the instruction receiving module 310 is configured to receive a take-off instruction;
the acquisition and storage module 320 is configured to acquire a target photographic subject and save its features;
the tracking acquisition module 330 is configured to track the target photographic subject according to its features and acquire the current image, the current image including the target photographic subject;
the analysis acquisition module 340 is configured to analyze the position of the target photographic subject in the current image and acquire an image if that position satisfies the image acquisition condition.
In an optional embodiment, as shown in FIG. 14, the instruction receiving module 310 includes a first detecting unit 3110 and a first take-off unit 3120, wherein:
the first detecting unit 3110 is configured to search for the target image after detecting an instruction that triggers take-off;
the first take-off unit 3120 is configured to cause the drone 30 to generate lift when the target image matches the preset image.
In an optional embodiment, as shown in FIG. 15, the instruction receiving module 310 includes a second detecting unit 3130 and a second take-off unit 3140, wherein:
the second detecting unit 3130 is configured to detect changes in the inertial measurement unit data of the drone 30;
the second take-off unit 3140 is configured to cause the drone 30 to generate lift if the change in the inertial measurement unit data satisfies the first preset condition.
In an optional embodiment, as shown in FIG. 16, the analysis acquisition module 340 includes an analysis determining unit 3410 and an acquisition unit 3420, wherein:
the analysis determining unit 3410 is configured to analyze the position of the target photographic subject in the current image and, if that position satisfies the image acquisition condition, determine whether an image acquisition instruction is received;
the acquisition unit 3420 is configured to acquire an image if the analysis determining unit 3410 determines that an image acquisition instruction is received.
In an optional embodiment, as shown in FIG. 17, the drone 30 further includes a position changing module 350, configured to change the position of the target photographic subject in the current image after the tracking acquisition module 330 tracks the subject according to its features and acquires the current image, and before the analysis acquisition module 340 analyzes the subject's position in the current image and acquires an image when that position satisfies the image acquisition condition.
In an optional embodiment, the position changing module 350 is specifically configured to adjust the distance to the target photographic subject, the distance including a horizontal distance and a vertical distance, and to change the subject's position in the current image through that distance.
In an optional embodiment, the position changing module 350 is specifically configured to adjust the distance to the target photographic subject, the distance including a horizontal distance and a vertical distance; to adjust the heading angle and/or pitch angle after that distance satisfies the second preset condition; and to change the subject's position in the current image through the heading angle and/or pitch angle.
In an optional embodiment, the analysis acquisition module 340 is specifically configured to analyze the position of the target photographic subject in the current image and, if that position satisfies the image acquisition condition, acquire at least two images, the coincidence rate between two adjacent images lying within a preset range.
As shown in FIG. 18, the drone 30 further includes an image stitching module 360, configured to stitch the at least two images after the analysis acquisition module 340 acquires them.
In an optional embodiment, the current image further includes a background image. As shown in FIG. 17, the drone 30 further includes a position changing module 350, configured to change the position of the target photographic subject in the current image after the tracking acquisition module 330 tracks the subject according to its features and acquires the current image, and before the analysis acquisition module 340 acquires an image upon determining that the subject's position satisfies the image acquisition condition. The analysis acquisition module 340 is specifically configured to analyze the subject's position in the current image and acquire an image if, according to the background image of the current image, that position is determined to satisfy the image acquisition condition.
In an optional embodiment, the position changing module 350 is specifically configured to adjust the relative position to the target photographic subject.
In an optional embodiment, the position changing module 350 is specifically configured to recognize the background image of the current image and adjust the distance to the target photographic subject according to it, the distance including a horizontal distance and a vertical distance.
In an optional embodiment, the position changing module 350 is specifically configured to recognize the background of the current image, adjust the distance to the target photographic subject according to it, the distance including a horizontal distance and a vertical distance, and adjust the heading angle and/or pitch angle after that distance satisfies the second preset condition.
In an optional embodiment, the position changing module 350 is specifically configured to recognize the background of the current image, adjust the distance to the target photographic subject according to it, the distance including a horizontal distance and a vertical distance; to adjust the relative position to the subject according to the background after that distance satisfies the second preset condition; and to adjust the heading angle and/or pitch angle after the relative position to the subject satisfies the third preset condition.
In an optional embodiment, as shown in FIG. 19, the drone 30 further includes, in addition to the instruction receiving module 310, the acquisition and storage module 320, the tracking acquisition module 330, and the analysis acquisition module 340, a lookup module 370, a parameter acquisition module 380, and an adjustment module 390, wherein:
the lookup module 370 is configured to find a comparison image whose similarity with the current image exceeds a first threshold;
the parameter acquisition module 380 is configured to acquire the shooting parameters of the comparison image, the shooting parameters including the distance, heading angle, and pitch angle relative to the target subject in the comparison image, the distance including a horizontal distance and a vertical distance;
the adjustment module 390 is configured to adjust the position of the target photographic subject in the current image according to the shooting parameters of the comparison image.
For the specific implementation of the drone's modules in the embodiments of the present application, reference may be made to the descriptions of the relevant content in the above method embodiments.
In the embodiments of the present application, the drone can be controlled to take off and acquire the target photographic subject, the subject tracked, and whether its position in the current image satisfies the image acquisition condition analyzed; if so, an image is acquired. This automates the drone photography process without manual operation by the user. Further, the embodiments also provide several automatic composition schemes for the drone, obtaining pictures under various shooting effects and providing users with diverse choices to improve the user experience.
Refer again to FIG. 20, a schematic structural diagram of another drone according to an embodiment of the present application. As shown in FIG. 20, the drone 40 may include at least a memory 410 and a processor 420, connected via a bus 430.
The memory 410 is configured to store a drone image acquisition program;
the processor 420 is configured to invoke the drone image acquisition program in the memory 410 and execute:
receiving a take-off instruction; acquiring a target photographic subject and saving its features; tracking the target photographic subject according to its features and acquiring a current image, the current image including the target photographic subject; analyzing the position of the target photographic subject in the current image and acquiring an image if that position satisfies the image acquisition condition.
In an optional embodiment, the processor 420 receiving the take-off instruction includes: searching for the target image after detecting an instruction that triggers take-off; and causing the drone to generate lift when the target image matches the preset image.
In an optional embodiment, the processor 420 receiving the take-off instruction includes: detecting changes in the drone's inertial measurement unit data; and causing the drone to generate lift if the change in the inertial measurement unit data satisfies the first preset condition.
In an optional embodiment, if the position of the target photographic subject in the current image satisfies the image acquisition condition, the processor 420 acquiring the image includes: determining whether an image acquisition instruction is received; and acquiring the image if one is received.
In an optional embodiment, after tracking the target photographic subject according to its features and acquiring the current image, and before analyzing the subject's position in the current image and acquiring an image when that position satisfies the image acquisition condition, the processor 420 is further configured to: change the position of the target photographic subject in the current image.
In an optional embodiment, the processor 420 changing the position of the target photographic subject in the current image includes: adjusting the distance to the subject, the distance including a horizontal distance and a vertical distance; and changing the subject's position in the current image through that distance.
In an optional embodiment, the processor 420 changing the position of the target photographic subject in the current image includes: adjusting the distance to the subject, the distance including a horizontal distance and a vertical distance; adjusting the heading angle and/or pitch angle after that distance satisfies the second preset condition; and changing the subject's position in the current image through the heading angle and/or pitch angle.
In an optional embodiment, if the position of the target photographic subject in the current image satisfies the image acquisition condition, the processor 420 acquiring the image includes: acquiring at least two images, the coincidence rate between two adjacent images lying within a preset range; after the at least two images are acquired upon determining that the subject's position satisfies the image acquisition condition, the processor 420 is further configured to: stitch the at least two images.
In an optional embodiment, the current image further includes a background image. After tracking the target photographic subject according to its features and acquiring the current image, and before acquiring an image when the subject's position in the current image satisfies the image acquisition condition, the processor 420 is further configured to: change the position of the target photographic subject in the current image; and if the subject's position in the current image satisfies the image acquisition condition, the processor 420 acquiring the image includes: acquiring the image if, according to the background image of the current image, the subject's position is determined to satisfy the image acquisition condition.
In an optional embodiment, the processor 420 changing the position of the target photographic subject in the current image includes: adjusting the relative position to the subject.
In an optional embodiment, the processor 420 changing the position of the target photographic subject in the current image includes: recognizing the background image of the current image and adjusting the distance to the subject according to it, the distance including a horizontal distance and a vertical distance.
In an optional embodiment, the processor 420 changing the position of the target photographic subject in the current image includes: recognizing the background of the current image, adjusting the distance to the subject according to it, the distance including a horizontal distance and a vertical distance; and adjusting the heading angle and/or pitch angle after that distance satisfies the second preset condition.
In an optional embodiment, the processor 420 changing the position of the target photographic subject in the current image includes: recognizing the background of the current image, adjusting the distance to the subject according to it, the distance including a horizontal distance and a vertical distance; adjusting the relative position to the subject according to the background after that distance satisfies the second preset condition; and adjusting the heading angle and/or pitch angle after the relative position to the subject satisfies the third preset condition.
In an optional embodiment, after tracking the target photographic subject according to its features and acquiring the current image, and before analyzing the subject's position in the current image and acquiring an image when that position satisfies the image acquisition condition, the processor 420 is further configured to: find a comparison image whose similarity with the current image exceeds a first threshold; acquire the comparison image's shooting parameters, the shooting parameters including the distance, heading angle, and pitch angle relative to the target subject in the comparison image, the distance including a horizontal distance and a vertical distance; and adjust the subject's position in the current image according to the comparison image's shooting parameters.
In the embodiments of the present application, the drone can be controlled to take off and acquire the target photographic subject, the subject tracked, and whether its position in the current image satisfies the image acquisition condition analyzed; if so, an image is acquired. This automates the drone photography process without manual operation by the user. Further, the embodiments also provide several automatic composition schemes for the drone, obtaining pictures under various shooting effects and providing users with diverse choices to improve the user experience.
A person of ordinary skill in the art will understand that all or part of the flows of the above method embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
What is disclosed above is merely a preferred embodiment of the present invention and certainly cannot limit the scope of its claims; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (29)

  1. A drone image acquisition method, comprising:
    receiving a take-off instruction;
    acquiring a target photographic subject and saving features of the target photographic subject;
    tracking the target photographic subject according to the features of the target photographic subject and acquiring a current image, wherein the current image comprises the target photographic subject;
    analyzing a position of the target photographic subject in the current image, and acquiring an image if the position of the target photographic subject in the current image satisfies an image acquisition condition.
  2. The method of claim 1, wherein receiving the take-off instruction comprises:
    searching for a target image after detecting an instruction that triggers take-off;
    causing the drone to generate lift when the target image matches a preset image.
  3. The method of claim 1, wherein receiving the take-off instruction comprises:
    detecting a change in inertial measurement unit data of the drone;
    causing the drone to generate lift if the change in the inertial measurement unit data satisfies a first preset condition.
  4. The method of claim 1, wherein acquiring the image if the position of the target photographic subject in the current image satisfies the image acquisition condition comprises:
    determining whether an image acquisition instruction is received if the position of the target photographic subject in the current image satisfies the image acquisition condition;
    acquiring the image if the image acquisition instruction is received.
  5. The method of claim 1 or 4, wherein after tracking the target photographic subject according to its features and acquiring the current image, and before analyzing the position of the target photographic subject in the current image and acquiring the image if that position satisfies the image acquisition condition, the method further comprises: changing the position of the target photographic subject in the current image.
  6. The method of claim 5, wherein changing the position of the target photographic subject in the current image comprises:
    adjusting a distance to the target photographic subject, wherein the distance comprises a horizontal distance and a vertical distance;
    changing the position of the target photographic subject in the current image through the distance to the target photographic subject.
  7. The method of claim 5, wherein changing the position of the target photographic subject in the current image comprises:
    adjusting a distance to the target photographic subject, wherein the distance comprises a horizontal distance and a vertical distance;
    adjusting a heading angle and/or a pitch angle after the distance to the target photographic subject satisfies a second preset condition;
    changing the position of the target photographic subject in the current image through the heading angle and/or the pitch angle.
  8. The method of claim 7, wherein acquiring the image if the position of the target photographic subject in the current image satisfies the image acquisition condition comprises:
    acquiring at least two images if the position of the target photographic subject in the current image satisfies the image acquisition condition, wherein a coincidence rate between two adjacent images lies within a preset range;
    after the at least two images are acquired upon determining that the position of the target photographic subject in the current image satisfies the image acquisition condition, the method further comprises: stitching the at least two images.
  9. The method of claim 1 or 4, wherein the current image further comprises a background image;
    after tracking the target photographic subject according to its features and acquiring the current image, and before analyzing the position of the target photographic subject in the current image and acquiring the image if that position satisfies the image acquisition condition, the method further comprises: changing the position of the target photographic subject in the current image;
    acquiring the image if the position of the target photographic subject in the current image satisfies the image acquisition condition comprises: acquiring the image if, according to the background image of the current image, the position of the target photographic subject in the current image is determined to satisfy the image acquisition condition.
  10. The method of claim 9, wherein changing the position of the target photographic subject in the current image comprises: adjusting a relative position to the target photographic subject.
  11. The method of claim 9, wherein changing the position of the target photographic subject in the current image comprises:
    identifying the background image of the current image and adjusting a distance to the target photographic subject according to the background image, wherein the distance comprises a horizontal distance and a vertical distance.
  12. The method of claim 9, wherein changing the position of the target photographic subject in the current image comprises:
    identifying a background of the current image and adjusting a distance to the target photographic subject according to the background, wherein the distance comprises a horizontal distance and a vertical distance;
    adjusting a heading angle and/or a pitch angle after the distance to the target photographic subject satisfies a second preset condition.
  13. The method of claim 9, wherein changing the position of the target photographic subject in the current image comprises:
    identifying a background of the current image and adjusting a distance to the target photographic subject according to the background, wherein the distance comprises a horizontal distance and a vertical distance;
    adjusting a relative position to the target photographic subject according to the background after the distance to the target photographic subject satisfies a second preset condition;
    adjusting a heading angle and/or a pitch angle after the relative position to the target photographic subject satisfies a third preset condition.
  14. The method of claim 1 or 4, wherein after tracking the target photographic subject according to its features and acquiring the current image, and before analyzing the position of the target photographic subject in the current image and acquiring the image if that position satisfies the image acquisition condition, the method further comprises:
    finding a comparison image whose similarity with the current image exceeds a first threshold;
    acquiring shooting parameters of the comparison image, wherein the shooting parameters comprise a distance, a heading angle, and a pitch angle relative to the target subject in the comparison image, and the distance comprises a horizontal distance and a vertical distance;
    adjusting the position of the target photographic subject in the current image according to the shooting parameters of the comparison image.
  15. A drone, comprising:
    a memory for storing a drone image acquisition program;
    a processor for invoking the drone image acquisition program in the memory and executing:
    receiving a take-off instruction;
    acquiring a target photographic subject and saving features of the target photographic subject;
    tracking the target photographic subject according to the features of the target photographic subject and acquiring a current image, wherein the current image comprises the target photographic subject;
    analyzing a position of the target photographic subject in the current image, and acquiring an image if the position of the target photographic subject in the current image satisfies an image acquisition condition.
  16. The drone of claim 15, wherein the processor receiving the take-off instruction comprises:
    searching for a target image after detecting an instruction that triggers take-off;
    causing the drone to generate lift when the target image matches a preset image.
  17. The drone of claim 15, wherein the processor receiving the take-off instruction comprises:
    detecting a change in inertial measurement unit data of the drone;
    causing the drone to generate lift if the change in the inertial measurement unit data satisfies a first preset condition.
  18. The drone of claim 15, wherein the processor acquiring the image if the position of the target photographic subject in the current image satisfies the image acquisition condition comprises:
    determining whether an image acquisition instruction is received if the position of the target photographic subject in the current image satisfies the image acquisition condition;
    acquiring the image if the image acquisition instruction is received.
  19. The drone of claim 15 or 18, wherein after tracking the target photographic subject according to its features and acquiring the current image, and before analyzing the position of the target photographic subject in the current image and acquiring the image if that position satisfies the image acquisition condition, the processor is further configured to: change the position of the target photographic subject in the current image.
  20. The drone of claim 19, wherein the processor changing the position of the target photographic subject in the current image comprises:
    adjusting a distance to the target photographic subject, wherein the distance comprises a horizontal distance and a vertical distance;
    changing the position of the target photographic subject in the current image through the distance to the target photographic subject.
  21. The drone of claim 19, wherein the processor changing the position of the target photographic subject in the current image comprises:
    adjusting a distance to the target photographic subject, wherein the distance comprises a horizontal distance and a vertical distance;
    adjusting a heading angle and/or a pitch angle after the distance to the target photographic subject satisfies a second preset condition;
    changing the position of the target photographic subject in the current image through the heading angle and/or the pitch angle.
  22. The drone of claim 21, wherein the processor acquiring the image if the position of the target photographic subject in the current image satisfies the image acquisition condition comprises:
    acquiring at least two images if the position of the target photographic subject in the current image satisfies the image acquisition condition, wherein a coincidence rate between two adjacent images lies within a preset range;
    after the at least two images are acquired upon determining that the position of the target photographic subject in the current image satisfies the image acquisition condition, the processor is further configured to: stitch the at least two images.
  23. The drone of claim 15 or 18, wherein the current image further comprises a background image;
    after tracking the target photographic subject according to its features and acquiring the current image, and before analyzing the position of the target photographic subject in the current image and acquiring the image if that position satisfies the image acquisition condition, the processor is further configured to: change the position of the target photographic subject in the current image;
    the processor acquiring the image if the position of the target photographic subject in the current image satisfies the image acquisition condition comprises: acquiring the image if, according to the background image of the current image, the position of the target photographic subject in the current image is determined to satisfy the image acquisition condition.
  24. The drone of claim 23, wherein the processor changing the position of the target photographic subject in the current image comprises: adjusting a relative position to the target photographic subject.
  25. The drone of claim 23, wherein the processor changing the position of the target photographic subject in the current image comprises:
    identifying the background image of the current image and adjusting a distance to the target photographic subject according to the background image, wherein the distance comprises a horizontal distance and a vertical distance.
  26. The drone of claim 23, wherein the processor changing the position of the target photographic subject in the current image comprises:
    identifying a background of the current image and adjusting a distance to the target photographic subject according to the background, wherein the distance comprises a horizontal distance and a vertical distance;
    adjusting a heading angle and/or a pitch angle after the distance to the target photographic subject satisfies a second preset condition.
  27. The drone of claim 23, wherein the processor changing the position of the target photographic subject in the current image comprises:
    identifying a background of the current image and adjusting a distance to the target photographic subject according to the background, wherein the distance comprises a horizontal distance and a vertical distance;
    adjusting a relative position to the target photographic subject according to the background after the distance to the target photographic subject satisfies a second preset condition;
    adjusting a heading angle and/or a pitch angle after the relative position to the target photographic subject satisfies a third preset condition.
  28. The UAV according to claim 15 or 18, wherein after the tracking of the target photographic subject according to the features of the target photographic subject and the acquiring of the current image, and before the analyzing of the position of the target photographic subject in the current image and the capturing of an image if the position of the target photographic subject in the current image satisfies the image capture condition, the processor is further configured to:
    search for a comparison image whose similarity to the current image exceeds a first threshold;
    acquire shooting parameters of the comparison image, wherein the shooting parameters include a distance, a heading angle and a pitch angle with respect to the target photographic subject in the comparison image, and the distance includes a horizontal distance and a vertical distance;
    adjust the position of the target photographic subject in the current image according to the shooting parameters of the comparison image.
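The claim above selects a comparison image similar to the current view and reuses its shooting parameters (distance, heading angle, pitch angle). One common similarity proxy is histogram intersection over normalised image histograms; the 0.8 threshold and the `(histogram, params)` library layout are illustrative assumptions, since the patent does not fix the similarity measure:

```python
def find_reference(current_hist, library, threshold=0.8):
    """Return the shooting parameters of the library image most similar
    to `current_hist`, or None if no similarity exceeds `threshold`.
    Histograms are assumed normalised (entries sum to 1), so histogram
    intersection yields a similarity in [0, 1]."""
    best_params, best_sim = None, threshold
    for hist, params in library:
        sim = sum(min(a, b) for a, b in zip(current_hist, hist))
        if sim > best_sim:
            best_params, best_sim = params, sim
    return best_params
```

In a real pipeline the histogram could be replaced by a learned embedding; the claim only requires that some similarity score against a reference set exceed a first threshold.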
  29. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the UAV image capture method according to any one of claims 1 to 14.
PCT/CN2017/103624 2017-09-27 2017-09-27 UAV image capture method and UAV WO2019061063A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780010140.9A CN108702448B (zh) 2017-09-27 2017-09-27 UAV image capture method, UAV, and computer-readable storage medium
PCT/CN2017/103624 WO2019061063A1 (zh) 2017-09-27 2017-09-27 UAV image capture method and UAV
CN202110304772.4A CN113038016B (zh) 2017-09-27 2017-09-27 UAV image capture method and UAV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/103624 WO2019061063A1 (zh) 2017-09-27 2017-09-27 UAV image capture method and UAV

Publications (1)

Publication Number Publication Date
WO2019061063A1 true WO2019061063A1 (zh) 2019-04-04

Family

ID=63843843

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103624 WO2019061063A1 (zh) 2017-09-27 2017-09-27 UAV image capture method and UAV

Country Status (2)

Country Link
CN (2) CN113038016B (zh)
WO (1) WO2019061063A1 (zh)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741271B (zh) * 2018-12-14 2021-11-19 陕西高速公路工程试验检测有限公司 Detection method and system
CN109709982A (zh) * 2018-12-29 2019-05-03 东南大学 UAV altitude-hold control system and method
CN110132049A (zh) * 2019-06-11 2019-08-16 南京森林警察学院 Automatic-aiming sniper rifle based on a UAV platform
CN110426970B (zh) * 2019-06-25 2021-05-25 西安爱生无人机技术有限公司 UAV photographing system and control method thereof
CN110971824A (zh) * 2019-12-04 2020-04-07 深圳市凯达尔科技实业有限公司 UAV shooting control method
CN111445455B (zh) * 2020-03-26 2023-04-07 北京润科通用技术有限公司 Image acquisition method and device
WO2022027596A1 (zh) * 2020-08-07 2022-02-10 深圳市大疆创新科技有限公司 Control method and apparatus for a movable platform, and computer-readable storage medium
CN111709949A (zh) * 2020-08-19 2020-09-25 武汉精测电子集团股份有限公司 Outdoor display screen detection and repair method, apparatus, device, and storage medium
CN113129468B (zh) * 2021-04-06 2022-10-28 深圳市艾赛克科技有限公司 UAV-based underground utility tunnel inspection method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105549614A (zh) * 2015-12-17 2016-05-04 北京猎鹰无人机科技有限公司 UAV target tracking method
WO2017038891A1 (ja) * 2015-09-04 2017-03-09 Necソリューションイノベータ株式会社 Flight control device, flight control method, and computer-readable recording medium
CN106909172A (zh) * 2017-03-06 2017-06-30 重庆零度智控智能科技有限公司 Orbiting tracking method and apparatus, and UAV
CN106991413A (zh) * 2017-05-04 2017-07-28 上海耐相智能科技有限公司 Unmanned aerial vehicle
CN107016367A (zh) * 2017-04-06 2017-08-04 北京精英智通科技股份有限公司 Tracking control method and tracking control system
CN107102647A (zh) * 2017-03-30 2017-08-29 中国人民解放军海军航空工程学院青岛校区 Image-based UAV target tracking control method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4894712B2 (ja) * 2007-10-17 2012-03-14 ソニー株式会社 Composition determination device, composition determination method, and program
WO2017060782A1 (en) * 2015-10-07 2017-04-13 Lee Hoi Hung Herbert Flying apparatus with multiple sensors and gesture-based operation
CN106331508B (zh) * 2016-10-19 2020-04-03 深圳市道通智能航空技术有限公司 Method and device for shooting composition
CN106354157B (zh) * 2016-11-28 2019-05-14 中山市昌源模型有限公司 UAV autonomous flight system


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110989649A (zh) * 2019-12-26 2020-04-10 中国航空工业集团公司沈阳飞机设计研究所 Flight maneuver control device and training method for highly maneuverable fixed-wing UAVs
CN110989649B (zh) * 2019-12-26 2023-07-25 中国航空工业集团公司沈阳飞机设计研究所 Flight maneuver control device and training method for highly maneuverable fixed-wing UAVs
CN111737604B (zh) * 2020-06-24 2023-07-21 中国银行股份有限公司 Target object search method and device
CN111737604A (zh) * 2020-06-24 2020-10-02 中国银行股份有限公司 Target object search method and device
CN113747071A (zh) * 2021-09-10 2021-12-03 深圳市道通智能航空技术股份有限公司 UAV shooting method and device, UAV, and storage medium
CN113747071B (zh) * 2021-09-10 2023-10-24 深圳市道通智能航空技术股份有限公司 UAV shooting method and device, UAV, and storage medium
CN114040107A (zh) * 2021-11-19 2022-02-11 智己汽车科技有限公司 Intelligent vehicle image shooting system and method, vehicle, and medium
CN114040107B (zh) * 2021-11-19 2024-04-16 智己汽车科技有限公司 Intelligent vehicle image shooting system and method, vehicle, and medium
CN114285996B (zh) * 2021-12-23 2023-08-22 中国人民解放军海军航空大学 Ground target coverage shooting method and system
CN114285996A (zh) * 2021-12-23 2022-04-05 中国人民解放军海军航空大学 Ground target coverage shooting method and system
CN116027798A (zh) * 2022-09-30 2023-04-28 三峡大学 UAV power-line inspection system and method based on image correction
CN116027798B (zh) * 2022-09-30 2023-11-17 三峡大学 UAV power-line inspection system and method based on image correction
CN116929306A (zh) * 2023-07-20 2023-10-24 深圳赛尔智控科技有限公司 Data acquisition method, device and equipment, and computer-readable storage medium
CN116929306B (zh) * 2023-07-20 2024-04-19 深圳赛尔智控科技有限公司 Data acquisition method, device and equipment, and computer-readable storage medium

Also Published As

Publication number Publication date
CN108702448B (zh) 2021-04-09
CN113038016A (zh) 2021-06-25
CN113038016B (zh) 2023-05-19
CN108702448A (zh) 2018-10-23

Similar Documents

Publication Publication Date Title
WO2019061063A1 (zh) UAV image capture method and UAV
US11120261B2 (en) Imaging control method and device
CN110692027B (zh) 用于提供无人机应用的易用的释放和自动定位的系统和方法
CN110494360B (zh) 用于提供自主摄影及摄像的系统和方法
US10587790B2 (en) Control method for photographing using unmanned aerial vehicle, photographing method using unmanned aerial vehicle, mobile terminal, and unmanned aerial vehicle
CN107087427B (zh) Aircraft control method, apparatus and device, and aircraft
WO2017075964A1 (zh) UAV shooting control method, UAV shooting method, mobile terminal, and UAV
JP6765917B2 (ja) Search device, imaging device therefor, and search method
WO2017020856A1 (zh) Device and method for automatic lock-on shooting of a moving object using a UAV
TWI386056B (zh) A composition determination means, a composition determination method, and a composition determination program
WO2018072717A1 (zh) Method and device for shooting composition, movable object, and computer-readable storage medium
KR101988152B1 (ko) Image generation from video
WO2019227441A1 (zh) Shooting control method and device for a movable platform
CN106973221B (zh) UAV photography method and system based on aesthetic evaluation
WO2019104569A1 (zh) Focusing method, device, and readable storage medium
WO2019227333A1 (zh) Group photo shooting method and device
CN108377328A (zh) Target shooting method and device for helicopter inspection operations
JP2017072986A (ja) Autonomous flight device, control method of autonomous flight device, and program
WO2022141956A1 (zh) Flight control method, video editing method, device, UAV, and storage medium
WO2021031159A1 (zh) Competition shooting method, electronic device, UAV, and storage medium
US20230359204A1 (en) Flight control method, video editing method, device, uav and storage medium
WO2021056411A1 (zh) Route adjustment method, ground-end device, UAV, system, and storage medium
WO2019227352A1 (zh) Flight control method and aircraft
CN114641642A (zh) Method for tracking a target object, and gimbal
JP6347299B2 (ja) Flight device, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17926414

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17926414

Country of ref document: EP

Kind code of ref document: A1