CN108702448B - Unmanned aerial vehicle image acquisition method, unmanned aerial vehicle and computer readable storage medium - Google Patents


Info

Publication number
CN108702448B
CN108702448B (application CN201780010140.9A)
Authority
CN
China
Prior art keywords
image
target
current image
distance
shooting object
Prior art date
Legal status
Active
Application number
CN201780010140.9A
Other languages
Chinese (zh)
Other versions
CN108702448A (en)
Inventor
张伟 (Zhang Wei)
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Priority to CN202110304772.4A (CN113038016B)
Publication of CN108702448A
Application granted
Publication of CN108702448B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide an unmanned aerial vehicle image acquisition method and an unmanned aerial vehicle. The method comprises the following steps: receiving a takeoff instruction; acquiring a target shooting object and storing the characteristics of the target shooting object; tracking the target shooting object according to its characteristics to obtain a current image, wherein the current image includes the target shooting object; and analyzing the position of the target shooting object in the current image, and acquiring the image if that position meets an image acquisition condition. This automates the shooting process of the unmanned aerial vehicle, requires no manual operation by the user, provides diversified choices for the user, and improves the user experience.

Description

Unmanned aerial vehicle image acquisition method, unmanned aerial vehicle and computer readable storage medium
The disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Technical Field
The application relates to the field of unmanned aerial vehicles, and in particular to an unmanned aerial vehicle image acquisition method, an unmanned aerial vehicle, and a computer readable storage medium.
Background
With the growth of unmanned aerial vehicle applications, a series of self-photographing unmanned aerial vehicles have appeared on the market; they are used to quickly take photos and record videos for sharing on social software such as WeChat Moments and microblogs. However, in the prior art the shooting process of the unmanned aerial vehicle is cumbersome: the user must first control the aircraft to take off, then adjust the position of the unmanned aerial vehicle through a remote controller or an application program to compose the shot, and only then can a picture be taken. If the user is not satisfied with the shot, the aircraft must be flown to another position and the shot composed again. Such a photographing process is poorly automated, requires many user operations, and the aircraft cannot provide diversified options.
Disclosure of Invention
In view of this, the application provides an unmanned aerial vehicle image acquisition method and an unmanned aerial vehicle, which can automate the photographing process of the unmanned aerial vehicle, do not need manual operation of a user, and provide diversified selections for the user.
The first aspect of the embodiment of the application provides an unmanned aerial vehicle image acquisition method, which comprises the following steps:
receiving a takeoff instruction;
acquiring a target shooting object and storing the characteristics of the target shooting object;
tracking the target shooting object according to the characteristics of the target shooting object to obtain a current image; wherein the current image includes a target photographic subject;
and analyzing the position of the target shooting object in the current image, and acquiring the image if the position of the target shooting object in the current image meets the image acquisition condition.
A second aspect of the embodiments of the present application provides an unmanned aerial vehicle, comprising:
a memory for storing an unmanned aerial vehicle image acquisition program; and
a processor for invoking the unmanned aerial vehicle image acquisition program in the memory and executing:
receiving a takeoff instruction;
acquiring a target shooting object and storing the characteristics of the target shooting object;
tracking the target shooting object according to the characteristics of the target shooting object to obtain a current image; wherein the current image includes a target photographic subject;
and analyzing the position of the target shooting object in the current image, and acquiring the image if the position of the target shooting object in the current image meets the image acquisition condition.
A third aspect of the embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, executes the method for capturing an image of a drone provided by the first aspect of the embodiments of the present application.
According to the unmanned aerial vehicle image acquisition method provided by the embodiments of the present application, the unmanned aerial vehicle receives a takeoff instruction; acquires a target shooting object and stores its characteristics; tracks the target shooting object according to those characteristics to obtain a current image that includes the target shooting object; and analyzes the position of the target shooting object in the current image, acquiring the image if that position meets the image acquisition condition. Because the target shooting object is tracked automatically and whether the shooting condition is met is judged from its position in the image, the shooting process of the unmanned aerial vehicle is automated and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a structural diagram of an unmanned aerial vehicle provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an image acquisition method for an unmanned aerial vehicle according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a takeoff method of an unmanned aerial vehicle according to an embodiment of the present application;
fig. 4 is a schematic flow chart of another takeoff method of an unmanned aerial vehicle according to an embodiment of the present application;
fig. 5 is a schematic flow chart of an image capturing method for an unmanned aerial vehicle according to another embodiment of the present application;
fig. 6 is a schematic distance diagram between an unmanned aerial vehicle and a target photographic object provided in the embodiment of the present application;
fig. 7 is a schematic diagram illustrating an effect of adjusting a distance according to an embodiment of the present application;
FIG. 8 is a schematic view illustrating an effect of adjusting a course angle according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a pitch angle adjusting effect provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of an overlay image provided by an embodiment of the present application;
fig. 11 is a schematic flowchart of an image stitching method according to an embodiment of the present application;
fig. 12 is a schematic diagram of a relative position between the drone and the target shooting object provided in the embodiment of the present application;
fig. 13 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of an instruction receiving module according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of another instruction receiving module according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an analysis and acquisition module according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of an unmanned aerial vehicle according to another embodiment of the present application;
fig. 18 is a schematic structural diagram of an unmanned aerial vehicle according to another embodiment of the present application;
fig. 19 is a schematic structural diagram of an unmanned aerial vehicle according to another embodiment of the present application;
fig. 20 is a schematic structural view of an unmanned aerial vehicle according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The embodiments of the present application provide an unmanned aerial vehicle; fig. 1 is a structural diagram of the unmanned aerial vehicle provided by an embodiment of the present application. As shown in fig. 1, the unmanned aerial vehicle in this embodiment may include:
a body 110;
a power system 120 mounted on the fuselage for providing flight power;
the cloud platform 130 and imaging device 140, imaging device 140 is carried on the fuselage 110 of unmanned aerial vehicle through cloud platform 130. The imaging device 140 is used for capturing images or videos in the flight process of the unmanned aerial vehicle, and includes but is not limited to a multispectral imager, a hyperspectral imager, a visible light camera, an infrared camera, and the like, the cradle head 130 is a multi-axis transmission and stability augmentation system, and the cradle head motor compensates the capturing angle of the imaging device 140 by adjusting the rotation angle of the rotation shaft, and prevents or reduces the jitter of the imaging device 140 by setting a proper buffer mechanism.
The unmanned aerial vehicle image acquisition method provided by the embodiment of the application is described next with reference to fig. 2-12.
Please refer to fig. 2 first. Fig. 2 is a schematic flow chart of an unmanned aerial vehicle image acquisition method provided in the embodiment of the present application. As shown in fig. 2, the unmanned aerial vehicle image capturing method at least includes the following steps:
s201: and receiving a takeoff instruction.
Specifically, the takeoff instruction may be input by the user through a control terminal matched with the unmanned aerial vehicle: via a control stick on the control terminal, via a takeoff button on the control panel of the control terminal, via a voice instruction, via face recognition ("face-sweeping"), or via a hand-launch ("toss-to-fly") mode.
Specifically, the method for inputting the takeoff instruction in a face-sweeping manner at least includes the following steps, as shown in fig. 3:
s2011: and receiving an instruction for triggering take-off.
Specifically, the instruction for triggering takeoff may be the user double-clicking or long-pressing a power key on the unmanned aerial vehicle, or double-clicking or long-pressing a power key on a control terminal matched with the unmanned aerial vehicle, and so on.
S2013: and searching for a target image.
Specifically, after receiving the instruction for triggering takeoff, the gimbal 130 steers the imaging device 140 to search for the target image in the picture; in particular, the imaging device 140 may be made to search for the target image by changing the heading angle or the pitch angle of the gimbal 130.
S2015: when the target image is matched with the preset image, the unmanned aerial vehicle is enabled to generate lift force.
In particular, the preset image may be an image previously saved by the user in the memory of the drone; it may be an image of the user's face or another image. When the target image found in the picture by the imaging device 140 matches the preset image saved in advance by the user, the power system 120 of the unmanned aerial vehicle is controlled to generate lift.
Specifically, when the similarity between the target image and the preset image exceeds a certain threshold, it may be considered that the target image matches the preset image. Wherein the threshold may be, for example, 80%, 90%, 95%, 100%, etc.
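The threshold test above can be sketched as follows. This is a minimal illustration, not the patent's actual matcher: it assumes the target and preset images have already been reduced to feature vectors (e.g. by a face-recognition network) and uses cosine similarity as a stand-in metric, with the 90% figure from the text as the default threshold.

```python
def images_match(target_vec, preset_vec, threshold=0.9):
    """Hypothetical match test: cosine similarity between the feature
    vector of the searched target image and that of the preset image
    must reach the threshold (e.g. 0.9 for the 90% example)."""
    dot = sum(a * b for a, b in zip(target_vec, preset_vec))
    norm_t = sum(a * a for a in target_vec) ** 0.5
    norm_p = sum(b * b for b in preset_vec) ** 0.5
    if norm_t == 0.0 or norm_p == 0.0:
        return False  # an empty feature vector can never match
    return dot / (norm_t * norm_p) >= threshold
```

In this sketch a passing match would trigger the power system 120 to generate lift; the choice of similarity metric and threshold is an assumption, since the patent leaves both open.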
In addition, the method for inputting the takeoff instruction in the tossing mode at least comprises the following steps, as shown in fig. 4:
s2017: detecting a change in inertial measurement unit data of the drone.
Specifically, the user holds the unmanned aerial vehicle and throws it outward, and the unmanned aerial vehicle detects the change in its inertial measurement unit data while it moves. An Inertial Measurement Unit (IMU) comprises three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect acceleration signals of the object along three independent axes of the carrier coordinate system, and the gyroscopes detect angular velocity signals of the carrier relative to the navigation coordinate system. From the measured angular velocity and acceleration of the object in three-dimensional space, the attitude of the object is solved. In one particular implementation, the IMU is used to measure the horizontal tilt and the acceleration that the drone generates at its current flight position.
S2019: if the change of the data of the inertia measurement unit meets a first preset condition, the unmanned aerial vehicle generates lift force.
Specifically, the first preset condition may be that the horizontal inclination angle of the drone does not exceed a first range while the acceleration falls within a second range. The horizontal inclination angle staying within the first range is used to determine whether the drone was thrown roughly level; the first range may be, for example, the interval from -30 degrees to 30 degrees. The acceleration is used to determine whether the drone was actually thrown; the second range may be, for example, the interval from -0.6 g to -1.2 g (where g is the gravitational acceleration).
In a specific implementation manner, the IMU may also be configured to detect other flight parameters that may reflect that the drone is thrown out, the first preset condition may also be a range related to the other flight parameters, and the first range and the second range may also be other reasonable ranges, which is not limited herein.
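The first preset condition can be sketched as a pair of interval tests on the IMU readings. Note the hedges: the text says the acceleration "exceeds" the second range while giving an interval, which this sketch reads as the acceleration falling inside the example interval of -1.2 g to -0.6 g; real firmware would tune both ranges and filter the raw IMU signal.

```python
def throw_takeoff_triggered(tilt_deg, accel_g,
                            tilt_range=(-30.0, 30.0),
                            accel_range=(-1.2, -0.6)):
    """Hypothetical check of the first preset condition:
    - tilt within the first range -> drone was thrown roughly level;
    - acceleration within the second range -> drone was actually thrown.
    Both default ranges are the example values from the text."""
    thrown_level = tilt_range[0] <= tilt_deg <= tilt_range[1]
    thrown_out = accel_range[0] <= accel_g <= accel_range[1]
    return thrown_level and thrown_out
```

When this returns true, the power system would be commanded to generate lift (step S2019); a production implementation would also debounce the readings over several IMU samples.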
S203: and acquiring the target shooting object and storing the characteristics of the target shooting object.
Specifically, the target photographic object may be a photographic object set manually by the user, or a photographic object in the picture found by the unmanned aerial vehicle during takeoff. After the photographic object is determined, its features are extracted; the feature-extraction algorithm may be, for example, a Convolutional Neural Network (CNN).
It is to be understood that the order of S201 and S203 is not limited herein. The shooting object in the picture can be taken as a target shooting object after the unmanned aerial vehicle takes off, or the user manually sets the target shooting object before taking off, and then controls the unmanned aerial vehicle to take off.
S205: and tracking the target shooting object according to the characteristics of the target shooting object to acquire a current image.
Specifically, the unmanned aerial vehicle tracks the target photographic object through the saved characteristics of the target photographic object. The current image includes a target photographic subject, i.e., a current picture previewed by the imaging device 140.
S207: and analyzing the position of the target shooting object in the current image, and acquiring the image if the position of the target shooting object in the current image meets the image acquisition condition.
Specifically, the unmanned aerial vehicle can automatically analyze the position of the target shooting object in the current image, and when the position of the target shooting object in the current image meets the image acquisition condition, the image is acquired.
Specifically, the unmanned aerial vehicle can acquire a current image and upload current image data to the server, the server analyzes the position of the target shooting object in the current image according to the data uploaded by the unmanned aerial vehicle, when the position of the target shooting object in the current image meets an image acquisition condition, the server sends an image acquisition instruction to the unmanned aerial vehicle, and the unmanned aerial vehicle acquires the image after receiving the image acquisition instruction sent by the server.
The image acquisition condition may be determined by the composition mode: different composition modes have different image acquisition conditions, and the finally acquired images may include several images under several composition modes.
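One plausible form of such a condition, offered here purely as an illustrative assumption since the patent does not fix one, is a tolerance test on where the target sits in the frame: each composition mode supplies a desired anchor point, and the condition is met when the target's center is close enough to it.

```python
def acquisition_condition_met(target_center, desired_point,
                              frame_size, tolerance=0.05):
    """Hypothetical image acquisition condition: the target's center
    must lie within `tolerance` (as a fraction of the frame dimensions)
    of the point the active composition mode calls for."""
    width, height = frame_size
    dx = abs(target_center[0] - desired_point[0]) / width
    dy = abs(target_center[1] - desired_point[1]) / height
    return dx <= tolerance and dy <= tolerance
```

A center-composition mode would pass the frame center as `desired_point`, while a rule-of-thirds mode would pass one of the thirds intersections; the 5% tolerance is an arbitrary example.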
In another implementation, after the position of the target shooting object in the current image is determined to satisfy the image acquisition condition, whether an image acquisition instruction sent by a user is received is further determined, and if the image acquisition instruction is received, the image is acquired.
The image acquisition instruction may be a specific gesture instruction or voice instruction issued by the user to inform the drone that it may start photographing, which lets the user strike a pose before the picture is taken and obtain a more satisfying photo. The gesture instruction or voice instruction is stored in advance in the storage device of the unmanned aerial vehicle. It should be understood that the way the image acquisition instruction is issued is not limited to a user gesture or voice instruction; other implementations are possible in actual use, and no limitation is made here.
In addition, after judging that the position of the target shooting object in the current image meets the image acquisition condition, and before the user sends the image acquisition instruction, the unmanned aerial vehicle may send a signal to the user indicating that the image acquisition condition is currently met, so that the user can send the image acquisition instruction promptly and accurately. The signal may be emitted through a signal lamp on the unmanned aerial vehicle, for example by making the lamp blink at a specific frequency. Of course, the signal source is not limited to a signal lamp, and the signal is not limited to a blinking frequency; other implementations are possible in actual use, and no limitation is made here.
According to the embodiments of the present application, the unmanned aerial vehicle is controlled to take off, acquires the target shooting object, tracks it, and analyzes whether the position of the target shooting object in the current image meets the image acquisition condition; if it does, the image is acquired. This automates the photographing process of the unmanned aerial vehicle, requires no manual operation by the user, provides diversified choices for the user, and improves the user experience.
In another embodiment, the current image further comprises a background image. After S205 and before S207, as shown in fig. 5, the unmanned aerial vehicle image capturing method may further include:
s206: the position of the target photographic object in the current image is changed.
Specifically, the position of the target photographic object in the current image may be changed by changing the distance between the drone and the target photographic object; this distance comprises a horizontal distance and a vertical distance, as shown in fig. 6. The vertical distance h between the unmanned aerial vehicle and the target photographic object may be obtained from the Inertial Navigation System (INS) of the unmanned aerial vehicle, and the horizontal distance s may then be obtained from h and θ as s = h·tan θ, where the angle θ is likewise obtained from the INS of the unmanned aerial vehicle.
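The geometry of fig. 6 reduces to one line of trigonometry. The sketch below assumes θ is reported in degrees (the patent does not say which unit the INS uses) and simply evaluates s = h·tan θ:

```python
import math

def horizontal_distance(h, theta_deg):
    """Horizontal distance s between drone and subject from the
    vertical distance h and the INS angle theta (fig. 6): s = h * tan(theta).
    Degrees are an assumed unit here."""
    return h * math.tan(math.radians(theta_deg))
```

For example, at a vertical distance of 10 m and θ = 45°, the horizontal distance comes out to roughly 10 m, as expected from the symmetric triangle.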
Specifically, the position of the target photographic object in the current image may also be changed by changing the course angle and/or the pitch angle of the drone, or the position of the target photographic object in the current image may be changed by changing the course angle and/or the pitch angle of the pan-tilt 130 mounted on the drone.
It should be noted that different composition schemes may change the position of the target photographic subject in the current image in different ways. Each composition scheme may change the position of the target photographic subject in the current image in a number of different ways.
In one possible implementation, the position of the target photographic subject in the current image may be changed by adjusting the distance to the target photographic subject; as above, the distance between the drone and the target photographic object includes a horizontal distance and a vertical distance. Adjusting the vertical distance changes the effect of the final picture: when the vertical distance to the target photographic subject is small, an upward (low-angle) shot is produced, and as the vertical distance increases, level and downward (high-angle) shots are produced. The user may preset the desired shooting effect, and the unmanned aerial vehicle composes the shot automatically during photographing to satisfy that setting; the unmanned aerial vehicle may also automatically take pictures under a variety of effects for the user to choose from, meeting diversified user needs. After the vertical distance to the target photographic object is adjusted, the horizontal distance is adjusted further. Adjusting the horizontal distance changes the proportion of the current picture occupied by the target photographic subject: at a close horizontal distance a half-length portrait of the user can be captured, and at a far horizontal distance a full-length portrait, as shown in fig. 7.
In a specific implementation, when composing according to a classical composition scheme, such as the rule of thirds or the nine-grid method, the target shooting object can be placed at an interest center by adjusting its position in the current image; the current picture is divided into three parts in both the horizontal and vertical directions, and the intersections of the dividing lines are the interest centers. In addition, composition may follow other classical schemes, such as diagonal composition or the golden-spiral method.
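The interest centers of the three-part division just described are straightforward to compute; this small sketch enumerates the four line intersections for a given frame size:

```python
def interest_centers(width, height):
    """The four intersections of the lines dividing the frame into
    thirds horizontally and vertically (the rule-of-thirds interest
    centers described in the text)."""
    xs = (width / 3.0, 2.0 * width / 3.0)
    ys = (height / 3.0, 2.0 * height / 3.0)
    return [(x, y) for x in xs for y in ys]
```

For a 1920x1080 frame this yields the points (640, 360), (640, 720), (1280, 360) and (1280, 720); a composition controller would steer the drone or gimbal until the target lands on the chosen one.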
Further, in another possible implementation, after the distance between the unmanned aerial vehicle and the target photographic object meets the second preset condition, that is, after that distance has been adjusted, the heading angle and/or pitch angle of the unmanned aerial vehicle or of the gimbal 130 may be changed to adjust the position of the target photographic object in the current image. The second preset condition may be a distance determined by a shooting effect the user wants to obtain and has set in advance, or a distance determined by the various shooting effects the unmanned aerial vehicle obtains automatically. For example, adjusting the heading angle changes the left-right position of the target photographic subject in the current picture, as shown in fig. 8, and adjusting the pitch angle changes its up-down position, as shown in fig. 9. As before, the user may preset the final desired shooting effect and the unmanned aerial vehicle composes automatically during photographing to satisfy it; the unmanned aerial vehicle may also automatically take pictures under various effects for the user to choose from, meeting diversified user needs.
In addition, in another possible implementation, when the position of the target shooting object in the current image meets the image acquisition condition, at least two images are acquired such that the overlap rate between adjacent images lies within a preset range, and the at least two images are then stitched together. As shown in fig. 10, the common portion of image 1 and image 2 is the overlap area, and the proportion of the overlap area to the whole image is the overlap rate.
Specifically, the preset range of the overlap rate between two adjacent images may be, for example, 20% to 30%.
Specifically, after the distances (horizontal and vertical) to the target shooting object are adjusted, several images are obtained by stepping the heading angle, and the overlap rate between adjacent images can be controlled by the size of the heading-angle step. After obtaining several images that satisfy the overlap rate, feature points are matched in the overlap areas of adjacent images, bundle adjustment (BA) is then optimized to make the relative positions between the images more accurate, exposure compensation is applied to the images, a stitching seam is searched for, and finally the images are warped and projected into a single image, stitching the multiple images together. The specific stitching method is shown in fig. 11. This stitching algorithm enlarges the shooting angle of the unmanned aerial vehicle, gives the user a wider shooting viewing angle, overcomes the slow panoramic shooting of the prior art, and provides a faster panoramic shooting method.
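How large a heading-angle step yields the desired overlap can be estimated with a simple geometric heuristic that is not part of the patent: assuming a roughly distant, flat scene, rotating the heading by Δψ leaves about (1 − Δψ/HFOV) of the previous frame shared, where HFOV is the camera's horizontal field of view. Inverting this gives a planning sketch:

```python
def heading_step_for_overlap(hfov_deg, overlap):
    """Heuristic heading-angle step (degrees) that yields roughly the
    requested overlap rate between adjacent shots, under the flat-scene
    small-rotation assumption: step = HFOV * (1 - overlap).
    This is a planning aid, not the patent's method."""
    if not 0.0 <= overlap < 1.0:
        raise ValueError("overlap must be in [0, 1)")
    return hfov_deg * (1.0 - overlap)
```

With an assumed 80° horizontal field of view and the 20%-30% overlap range from the text, the step would fall between 56° and 64°; real parallax and lens distortion would push a practical system toward smaller steps.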
In another embodiment, after S205 and before S207, the method for capturing images of a drone may further include:
S206: the position of the target photographic object in the current image is changed.
Specifically, the drone may have intelligent background recognition and segmentation algorithms, and can make full use of background features for composition.
In one possible implementation, the background image may be identified, and the position of the target photographic object in the current image changed by changing the relative position between the drone and the target photographic object; that is, the drone circles to different orientations around the target photographic object, so that photographs with different backgrounds can be obtained, as shown in fig. 12.
In another possible implementation, the background image of the current image may be identified and the distance between the drone and the target photographic object adjusted according to the background image, thereby changing the position of the target photographic object in the current image; the distance between the drone and the target photographic object includes a horizontal distance and a vertical distance, as shown in fig. 6.
For example, when the background image of the current image is identified as a seaside scene, composition can be performed by adjusting the horizontal distance and the vertical distance between the drone and the target photographic object so that the background is sufficiently wide. Specifically, the vertical distance may place the drone about one meter above the overall height of the target photographic object, and the horizontal distance between the drone and the target photographic object may be about forty-five meters.
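For illustration only, a quick sanity check of these example distances can be made by computing the gimbal pitch needed to aim at the subject's mid-height from that position; the 1.7-meter subject height below is a hypothetical value not stated in the disclosure:

```python
import math

def gimbal_pitch_to_center(subject_height_m, vertical_offset_m, horizontal_m):
    """Gimbal pitch (degrees, negative = tilt down) that aims the camera at
    the subject's mid-height from the position described in the text."""
    # Camera altitude per the text: subject height plus the vertical offset.
    camera_altitude = subject_height_m + vertical_offset_m
    height_drop = camera_altitude - subject_height_m / 2.0
    return -math.degrees(math.atan2(height_drop, horizontal_m))
```

For a hypothetical 1.7-meter subject, the camera sits about 2.7 meters up and 45 meters out, so only a slight downward tilt of a few degrees is needed; this keeps the horizon high in the frame, which is consistent with the goal of a sufficiently wide seaside background.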
In another possible implementation, the background of the current image may be identified and the distance between the drone and the target photographic object adjusted according to the background, the distance including a horizontal distance and a vertical distance. After the distance between the drone and the target photographic object meets the second preset condition, the position of the target photographic object in the current image is changed by changing the heading angle and/or pitch angle of the drone, or of the gimbal 130 carried on the drone. The second preset condition may be a preset distance between the drone and the target photographic object, determined according to the photographing effect the user wants to obtain as in the previous embodiment; it may also be a distance determined according to the various photographing effects the drone obtains automatically, as mentioned in the previous embodiment. When various photographing effects are required, after the distance between the drone and the target photographic object is determined for each effect, the position of the target photographic object in the current image is changed by changing the heading angle and/or pitch angle of the drone, so that pictures under the various effects can be obtained for the user to choose from, meeting the user's diversified needs.
For example, when the background image of the current image is recognized as containing a subject of interest, that is, a salient object in the background, composition needs to be performed by changing the heading angle and the pitch angle so that the salient scene is at the center of the picture and the person is to the side, so as to highlight the object in the scene.
In another possible implementation, the background of the current image may be identified and the distance between the drone and the target photographic object adjusted according to the background, the distance including a horizontal distance and a vertical distance. After the distance between the drone and the target photographic object meets the second preset condition, the relative position between the drone and the target photographic object is adjusted according to the background, that is, the drone circles to different orientations around the target photographic object. After the relative position between the drone and the target photographic object meets a third preset condition, the position of the target photographic object in the current image is changed by changing the heading angle and/or pitch angle of the drone, or of the gimbal 130 mounted on the drone. The second preset condition may be a preset distance between the drone and the target photographic object, determined according to the photographing effect the user wants to obtain as in the previous embodiment; it may also be a distance determined according to the various photographing effects the drone obtains automatically, as mentioned in the previous embodiment.
The third preset condition may be that the different background images obtained by changing the relative position between the drone and the target photographic object satisfy a requirement preset by the user; for example, the user may input in advance the orientations corresponding to a plurality of photographing positions. The third preset condition may also be a relative position to the target photographic object determined according to the various photographing backgrounds the drone obtains automatically. When various photographing effects are required, after the distances between the drone and the target photographic object are determined for each effect, the relative position between the drone and the target photographic object is adjusted, and finally the heading angle and/or pitch angle are adjusted to change the background image of the current image and the position of the target photographic object in it, so that pictures under various effects are obtained for the user to choose from, meeting the user's diversified needs.
For example, during photographing, the drone may keep following the target photographic object so that it is always in the picture, and then compose by adjusting the distance to the target photographic object according to the background: for a large scene, fly farther from the target photographic object; for a close-up, fly somewhat closer. The relative position between the drone and the target photographic object is then changed to bring highlight features of the background into the picture. Finally, the drone judges whether there is a key photographic object around the target photographic object: if so, the key object is placed at the center of the picture via the heading angle and/or pitch angle; if not, the target photographic object is placed at the center, and the picture is composed.
In another possible implementation, a comparison image whose similarity to the current image exceeds a first threshold may be searched for, and the shooting parameters of the comparison image obtained; the shooting parameters include the distance, heading angle, and pitch angle relative to the target photographic object in the comparison image, the distance including a horizontal distance and a vertical distance. The position of the target photographic object in the current image is then adjusted according to the shooting parameters of the comparison image. The first threshold may be, for example, 80%, 85%, or 90%. When more than one image has a similarity to the current image exceeding the first threshold, the image with the highest similarity may be selected as the comparison image.
Specifically, the best photographs of each scene type can be collected from the Internet, a convolutional neural network (CNN) can then be trained on these image scenes, and the trained model stored on the drone. When the drone is triggered to compose and shoot automatically, the comparison image most similar to the current image can be found directly through the CNN, and the picture is then composed by imitating the composition of the comparison image. In this way, the strengths of professional photographers can be combined to shoot exquisite images.
Specifically, the shooting parameters of the comparison image, including the distance, heading angle, and pitch angle relative to the photographic object, can be derived from the position of the photographic object in the comparison image. The distance, heading angle, and pitch angle between the drone and the target photographic object are then adjusted according to these parameters, yielding a composition similar to that of the comparison image and a better shooting effect.
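As an illustrative sketch (not part of the original disclosure), the retrieval step can be viewed as a nearest-neighbor search over CNN embeddings with a similarity threshold; the field names (`embedding`, `params`) and the toy two-dimensional vectors below are hypothetical stand-ins for real CNN feature vectors and stored shooting parameters:

```python
def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def find_comparison_image(current_embedding, library, threshold=0.8):
    """Return the library image most similar to the current frame, or None
    if even the best match falls below the first threshold."""
    best = max(library,
               key=lambda img: cosine_similarity(current_embedding, img["embedding"]))
    if cosine_similarity(current_embedding, best["embedding"]) < threshold:
        return None
    return best
```

The selected entry's stored `params` (distance, heading angle, pitch angle) would then drive the adjustment described above; when several entries clear the threshold, `max` naturally picks the most similar one.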
The embodiment of the present application provides a drone image acquisition method: the drone is controlled to take off, acquires the target photographic object, tracks it, analyzes whether the position of the target photographic object in the current image satisfies the image capturing condition, and captures an image if it does. The drone photographing process is thus automated, without manual operation by the user. Further, the embodiment of the present application also provides multiple automatic composition modes for the drone, so that pictures under various shooting effects can be obtained, providing the user with diversified choices and improving the user experience.
In order to better understand the drone image acquisition method described in the above embodiments, the embodiments of the present application also provide a corresponding drone. As shown in fig. 13, the drone 30 may include at least an instruction receiving module 310, an acquisition and storage module 320, a tracking acquisition module 330, and an analysis and acquisition module 340; wherein:
the instruction receiving module 310 is configured to receive a takeoff instruction.
The acquisition and storage module 320 is configured to acquire the target photographic object and store the characteristics of the target photographic object.
The tracking acquisition module 330 is configured to track the target photographic object according to characteristics of the target photographic object and acquire a current image; wherein the current image includes a target photographic subject.
The analysis and acquisition module 340 is configured to analyze a position of the target object in the current image, and acquire an image if the position of the target object in the current image meets an image acquisition condition.
In an alternative embodiment, as shown in fig. 14, the instruction receiving module 310 includes: a first detection unit 3110, a first flying unit 3120; wherein:
the first detection unit 3110 is configured to search for a target image after detecting an instruction to trigger takeoff.
The first flying unit 3120 is configured to cause the drone 30 to generate lift when the target image matches the preset image.
In an alternative embodiment, as shown in fig. 15, the instruction receiving module 310 includes: a second detection unit 3130, a second takeoff unit 3140; wherein:
the second detection unit 3130 is configured to detect a change in inertial measurement unit data of the drone 30;
the second takeoff unit 3140 is configured to enable the drone 30 to generate lift if the change in the inertial measurement unit data meets a first preset condition.
In an alternative embodiment, as shown in fig. 16, the analysis acquisition module 340 includes: an analysis determination unit 3410, an acquisition unit 3420; wherein:
the analysis and determination unit 3410 is configured to analyze a position of the target object in the current image, and if the position of the target object in the current image meets an image acquisition condition, determine whether an image acquisition instruction is received;
the acquisition unit 3420 is configured to acquire an image if the analysis and determination unit 3410 determines that the image acquisition instruction is received.
In an optional embodiment, the drone 30 further includes a position changing module 350, as shown in fig. 17, configured to change the position of the target photographic object in the current image after the tracking acquisition module 330 tracks the target photographic object according to its characteristics and obtains the current image, and before the analysis and acquisition module 340 analyzes the position of the target photographic object in the current image and captures an image if that position satisfies the image capturing condition.
In an alternative embodiment, the position changing module 350 is specifically configured to adjust the distance between the drone and the target photographic object, the distance including a horizontal distance and a vertical distance, and to change the position of the target photographic object in the current image through that distance.
In an alternative embodiment, the position changing module 350 is specifically configured to adjust the distance between the drone and the target photographic object, the distance including a horizontal distance and a vertical distance; to adjust the heading angle and/or pitch angle when the distance between the drone and the target photographic object meets the second preset condition; and to change the position of the target photographic object in the current image through the heading angle and/or pitch angle.
In an optional embodiment, the analysis and acquisition module 340 is specifically configured to analyze a position of the target photographic object in the current image, and acquire at least two images if the position of the target photographic object in the current image meets an image acquisition condition; wherein, the coincidence rate between two adjacent images is in a preset range.
The drone 30 further includes an image stitching module 360, as shown in fig. 18, for stitching the at least two images after the analysis acquisition module 340 acquires the at least two images.
In an alternative embodiment, the current image also includes a background image. The drone 30 further includes a position changing module 350, as shown in fig. 17, configured to change the position of the target photographic object in the current image after the tracking acquisition module 330 tracks the target photographic object according to its characteristics and obtains the current image, and before the analysis and acquisition module 340 captures an image. The analysis and acquisition module 340 is specifically configured to analyze the position of the target photographic object in the current image and to capture an image if it determines, according to the background image of the current image, that the position of the target photographic object in the current image satisfies the image capturing condition.
In an alternative embodiment, the position changing module 350 is specifically configured to adjust the relative position between the drone and the target photographic object.
In an alternative embodiment, the position changing module 350 is specifically configured to identify the background image of the current image and adjust the distance between the drone and the target photographic object according to the background image, the distance including a horizontal distance and a vertical distance.
In an alternative embodiment, the position changing module 350 is specifically configured to identify the background of the current image and adjust the distance between the drone and the target photographic object according to the background, the distance including a horizontal distance and a vertical distance, and to adjust the heading angle and/or pitch angle when the distance between the drone and the target photographic object meets the second preset condition.
In an alternative embodiment, the position changing module 350 is specifically configured to identify the background of the current image and adjust the distance between the drone and the target photographic object according to the background, the distance including a horizontal distance and a vertical distance; to adjust the relative position between the drone and the target photographic object according to the background when that distance meets the second preset condition; and to adjust the heading angle and/or pitch angle when the relative position between the drone and the target photographic object meets the third preset condition.
In an alternative embodiment, the drone 30 includes, in addition to the instruction receiving module 310, the acquisition and storage module 320, the tracking acquisition module 330, and the analysis acquisition module 340, a search module 370, a parameter acquisition module 380, and an adjustment module 390, as shown in fig. 19, where:
the searching module 370 is used for searching for a contrast image with similarity exceeding a first threshold with the current image.
The parameter obtaining module 380 is configured to obtain the shooting parameters of the comparison image; the shooting parameters include the distance, heading angle, and pitch angle relative to the target photographic object in the comparison image, the distance including a horizontal distance and a vertical distance.
The adjusting module 390 is configured to adjust a position of the target object in the current image according to the shooting parameters of the comparison image.
In this application embodiment, the specific implementation of each module of the unmanned aerial vehicle may refer to the description of the relevant content in each method embodiment described above.
With the drone of this embodiment, the drone is controlled to take off, acquires the target photographic object, tracks it, analyzes whether the position of the target photographic object in the current image satisfies the image capturing condition, and captures an image if it does. The drone photographing process is thus automated, without manual operation by the user. Further, the embodiment of the present application also provides multiple automatic composition modes for the drone, so that pictures under various shooting effects can be obtained, providing the user with diversified choices and improving the user experience.
Referring again to fig. 20, fig. 20 is a schematic structural diagram of another drone provided in an embodiment of the present application. As shown in fig. 20, the drone 40 may include at least a memory 410 and a processor 420, connected by a bus 430.
The memory 410 is used for storing the unmanned aerial vehicle image acquisition program;
a processor 420, configured to invoke the drone image capture program in the memory 410 and execute:
receiving a takeoff instruction; acquiring a target shooting object and storing the characteristics of the target shooting object; tracking the target shooting object according to the characteristics of the target shooting object to obtain a current image; wherein the current image includes a target photographic object; and analyzing the position of the target shooting object in the current image, and acquiring the image if the position of the target shooting object in the current image meets the image acquisition condition.
In an alternative embodiment, the processor 420 receiving the takeoff instruction comprises: searching a target image after detecting an instruction for triggering take-off; when the target image is matched with the preset image, the unmanned aerial vehicle is enabled to generate lift force.
In an alternative embodiment, the processor 420 receiving the takeoff instruction comprises: detecting the change of the data of an inertial measurement unit of the unmanned aerial vehicle; if the change of the data of the inertia measurement unit meets a first preset condition, the unmanned aerial vehicle generates lift force.
In an alternative embodiment, if the position of the target object in the current image satisfies the image capturing condition, the capturing the image by the processor 420 includes: if the position of the target shooting object in the current image meets the image acquisition condition, judging whether an image acquisition instruction is received; and if an image acquisition instruction is received, acquiring an image.
In an optional embodiment, after the target photographic object is tracked according to its characteristics and the current image is obtained, and before the position of the target photographic object in the current image is analyzed and an image is captured if that position satisfies the image capturing condition, the processor 420 is further configured to: change the position of the target photographic object in the current image.
In an alternative embodiment, the processor 420 changes the position of the target photographic object in the current image by: adjusting the distance between the drone and the target photographic object, the distance including a horizontal distance and a vertical distance; and changing the position of the target photographic object in the current image through that distance.
In an alternative embodiment, the processor 420 changes the position of the target photographic object in the current image by: adjusting the distance between the drone and the target photographic object, the distance including a horizontal distance and a vertical distance; adjusting the heading angle and/or pitch angle when that distance meets the second preset condition; and changing the position of the target photographic object in the current image through the heading angle and/or pitch angle.
In an alternative embodiment, if the position of the target photographic object in the current image satisfies the image capturing condition, capturing the image by the processor 420 includes: capturing at least two images, with the coincidence rate between two adjacent images within a preset range; after the at least two images are captured, the processor 420 is further configured to stitch them together.
In an optional embodiment, the current image further includes a background image. After the target photographic object is tracked according to its characteristics and the current image is obtained, and before the position of the target photographic object in the current image is analyzed and an image is captured if that position satisfies the image capturing condition, the processor 420 is further configured to: change the position of the target photographic object in the current image. If the position of the target photographic object in the current image satisfies the image capturing condition, capturing the image by the processor 420 includes: capturing the image if it is determined, according to the background image of the current image, that the position of the target photographic object in the current image satisfies the image capturing condition.
In an alternative embodiment, the processor 420 changes the position of the target photographic object in the current image by: adjusting the relative position between the drone and the target photographic object.
In an alternative embodiment, the processor 420 changes the position of the target photographic object in the current image by: identifying the background image of the current image and adjusting the distance between the drone and the target photographic object according to the background image, the distance including a horizontal distance and a vertical distance.
In an alternative embodiment, the processor 420 changes the position of the target photographic object in the current image by: identifying the background of the current image and adjusting the distance between the drone and the target photographic object according to the background, the distance including a horizontal distance and a vertical distance; and adjusting the heading angle and/or pitch angle when that distance meets the second preset condition.
In an alternative embodiment, the processor 420 changes the position of the target photographic object in the current image by: identifying the background of the current image and adjusting the distance between the drone and the target photographic object according to the background, the distance including a horizontal distance and a vertical distance; adjusting the relative position between the drone and the target photographic object according to the background when that distance meets the second preset condition; and adjusting the heading angle and/or pitch angle when the relative position between the drone and the target photographic object meets the third preset condition.
In an optional embodiment, after the target photographic object is tracked according to its characteristics and the current image is obtained, and before the position of the target photographic object in the current image is analyzed and an image is captured if that position satisfies the image capturing condition, the processor 420 is further configured to: search for a comparison image whose similarity to the current image exceeds a first threshold; obtain the shooting parameters of the comparison image, the shooting parameters including the distance, heading angle, and pitch angle relative to the target photographic object in the comparison image, the distance including a horizontal distance and a vertical distance; and adjust the position of the target photographic object in the current image according to the shooting parameters of the comparison image.
With the drone of this embodiment, the drone is controlled to take off, acquires the target photographic object, tracks it, analyzes whether the position of the target photographic object in the current image satisfies the image capturing condition, and captures an image if it does. The drone photographing process is thus automated, without manual operation by the user. Further, the embodiment of the present application also provides multiple automatic composition modes for the drone, so that pictures under various shooting effects can be obtained, providing the user with diversified choices and improving the user experience.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and of course cannot limit the scope of rights of the present invention; the scope of protection is defined by the appended claims.

Claims (29)

1. An unmanned aerial vehicle image acquisition method is characterized by comprising the following steps:
receiving a takeoff instruction;
acquiring a target shooting object and storing the characteristics of the target shooting object;
tracking the target shooting object according to the characteristics of the target shooting object to obtain a current image; wherein the current image comprises a target photographic object and a background image;
changing the position of the target photographic object in the current image;
the changing the position of the target photographic object in the current image comprises:
identifying a background image of the current image, and changing the position of a target photographic object in the current image by changing the relative distance between the unmanned aerial vehicle and the target photographic object according to the background image;
and analyzing the position of the target shooting object in the current image, and acquiring the image if the position of the target shooting object in the current image meets the image acquisition condition.
2. The method of claim 1, wherein the receiving a takeoff instruction comprises:
searching a target image after detecting an instruction for triggering take-off;
and when the target image is matched with a preset image, the unmanned aerial vehicle is enabled to generate a lift force.
3. The method of claim 1, wherein the receiving a takeoff instruction comprises:
detecting a change in inertial measurement unit data of the drone;
and if the change of the data of the inertia measurement unit meets a first preset condition, enabling the unmanned aerial vehicle to generate lift force.
4. The method of claim 1, wherein acquiring an image if the position of the target photographic subject in the current image satisfies an image acquisition condition comprises:
if the position of the target shooting object in the current image meets the image acquisition condition, judging whether an image acquisition instruction is received;
and if an image acquisition instruction is received, acquiring an image.
5. The method according to claim 1 or 4, wherein after the target photographic object is tracked according to the characteristics of the target photographic object and a current image is acquired, the position of the target photographic object in the current image is analyzed, and before the image is acquired if the position of the target photographic object in the current image meets an image acquisition condition, the method further comprises: changing the position of the target photographic object in the current image.
6. The method of claim 5, wherein changing the position of the target photographic object in the current image comprises:
adjusting the distance between the unmanned aerial vehicle and the target photographic object, wherein the distance comprises a horizontal distance and a vertical distance;
and changing the position of the target photographic object in the current image through the distance between the unmanned aerial vehicle and the target photographic object.
7. The method of claim 5, wherein changing the position of the target photographic object in the current image comprises:
adjusting the distance between the unmanned aerial vehicle and the target photographic object, wherein the distance comprises a horizontal distance and a vertical distance;
when the distance between the unmanned aerial vehicle and the target photographic object meets a second preset condition, adjusting a course angle and/or a pitch angle;
and changing the position of the target photographic object in the current image through the course angle and/or the pitch angle.
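The course-angle/pitch-angle adjustment of claim 7 can be sketched with a simple pinhole-camera mapping from the target's pixel offset to angle corrections. The field-of-view values and the linear mapping are assumptions for illustration, not taken from the patent:

```python
def angle_adjustment(cx, cy, frame_w, frame_h, hfov_deg=80.0, vfov_deg=50.0):
    """Map the target center's pixel offset from the frame center to
    course (yaw) and pitch corrections. Positive dx means the target is
    right of center; positive dy means below center. The assumed FOVs
    and the linear small-angle mapping are illustrative."""
    dx = cx / frame_w - 0.5   # normalized horizontal offset in [-0.5, 0.5]
    dy = cy / frame_h - 0.5   # normalized vertical offset in [-0.5, 0.5]
    yaw_deg = dx * hfov_deg     # turn toward the target horizontally
    pitch_deg = -dy * vfov_deg  # tilt the camera toward the target vertically
    return yaw_deg, pitch_deg
```

A centered target yields zero corrections; a target at the right frame edge of an 80-degree lens yields a 40-degree yaw command.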
8. The method of claim 7, wherein acquiring an image if the position of the target photographic object in the current image meets an image acquisition condition comprises:
if the position of the target photographic object in the current image meets the image acquisition condition, acquiring at least two images, wherein the overlap rate between two adjacent images is within a preset range;
and after the at least two images are acquired, the method further comprises: stitching the at least two images.
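The overlap ("coincidence") rate bound of claim 8 can be illustrated in one dimension: the fraction of shared horizontal footprint between two adjacent shots, kept within a preset band. The 20–50% band used here is a common stitching heuristic, not a value from the patent:

```python
def overlap_rate(offset_px, image_w):
    """Fraction of horizontal overlap between two adjacent shots whose
    viewpoints differ by `offset_px` pixels -- a 1-D simplification of
    the per-pair overlap checked before stitching."""
    return max(0.0, 1.0 - abs(offset_px) / image_w)

def within_preset_range(rate, lo=0.2, hi=0.5):
    """The claim only requires the overlap rate to fall in a preset
    range; this 20-50% band is an illustrative assumption."""
    return lo <= rate <= hi
```

Shots spaced so that each pair lands in this band give the stitcher enough shared features without excessive redundancy.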
9. The method of claim 1 or 4,
wherein acquiring the image if the position of the target photographic object in the current image meets the image acquisition condition comprises: acquiring the image if it is determined, according to the background image of the current image, that the position of the target photographic object in the current image meets the image acquisition condition.
10. The method of claim 9, wherein changing the position of the target photographic object in the current image comprises: adjusting the relative position of the target photographic object.
11. The method of claim 1, wherein the distance comprises a horizontal distance and a vertical distance.
12. The method of claim 1, wherein changing the distance between the drone and the target photographic object comprises:
the distance comprises a horizontal distance and a vertical distance;
and when the distance between the drone and the target photographic object meets a second preset condition, adjusting a course angle and/or a pitch angle.
13. The method of claim 1, wherein changing the distance between the drone and the target photographic object comprises:
the distance comprises a horizontal distance and a vertical distance;
after the distance between the drone and the target photographic object meets a second preset condition, adjusting the relative position between the target photographic object and the background;
and when the relative position between the target photographic object and the background meets a third preset condition, adjusting a course angle and/or a pitch angle.
14. The method according to claim 1 or 4, wherein after the target photographic object is tracked according to its characteristics and a current image is obtained, and before the position of the target photographic object in the current image is analyzed and an image is acquired if that position meets an image acquisition condition, the method further comprises:
searching for a comparison image whose similarity to the current image exceeds a first threshold;
acquiring shooting parameters of the comparison image, wherein the shooting parameters comprise the distance between the unmanned aerial vehicle and the target photographic object in the comparison image, the course angle, and the pitch angle, and the distance comprises a horizontal distance and a vertical distance;
and adjusting the position of the target photographic object in the current image according to the shooting parameters of the comparison image.
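The comparison-image search of claim 14 can be sketched as a similarity lookup over stored feature vectors, returning the shooting parameters to reuse for reframing. The feature representation, the cosine metric, and the 0.9 threshold are all illustrative assumptions:

```python
import math

def find_comparison_image(current_vec, library, threshold=0.9):
    """Search a library of (feature_vector, shooting_params) pairs for
    the first image whose cosine similarity to the current image
    exceeds the first threshold. Feature extraction itself is out of
    scope here; the vectors and threshold are illustrative."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    for vec, params in library:
        if cosine(current_vec, vec) > threshold:
            return params  # shooting parameters to reuse
    return None  # no stored image is similar enough
```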
15. An unmanned aerial vehicle, comprising:
the storage is used for storing an unmanned aerial vehicle image acquisition program;
a processor for invoking the unmanned aerial vehicle image acquisition program in the memory and executing:
receiving a takeoff instruction;
acquiring a target photographic object and storing characteristics of the target photographic object;
tracking the target photographic object according to the characteristics of the target photographic object to obtain a current image, wherein the current image comprises the target photographic object and a background image;
changing the position of the target photographic object in the current image;
wherein changing the position of the target photographic object in the current image comprises:
identifying a background image of the current image, and changing the position of the target photographic object in the current image by changing, according to the background image, the relative distance between the unmanned aerial vehicle and the target photographic object;
and analyzing the position of the target photographic object in the current image, and acquiring an image if the position of the target photographic object in the current image meets an image acquisition condition.
16. The drone of claim 15, wherein the processor receiving a takeoff instruction comprises:
searching for a target image after detecting an instruction that triggers takeoff;
and when the target image matches a preset image, causing the drone to generate lift.
17. The drone of claim 15, wherein the processor receiving a takeoff instruction comprises:
detecting a change in inertial measurement unit data of the drone;
and if the change in the inertial measurement unit data meets a first preset condition, causing the drone to generate lift.
18. The drone of claim 15, wherein the processor acquiring an image if the position of the target photographic object in the current image meets an image acquisition condition comprises:
if the position of the target photographic object in the current image meets the image acquisition condition, determining whether an image acquisition instruction has been received;
and if an image acquisition instruction has been received, acquiring an image.
19. The drone of claim 15 or 18, wherein after the target photographic object is tracked according to its characteristics and a current image is obtained, and before the position of the target photographic object in the current image is analyzed and an image is acquired if that position meets an image acquisition condition, the processor is further configured to: change the position of the target photographic object in the current image.
20. The drone of claim 19, wherein the processor changing the position of the target photographic object in the current image comprises:
adjusting the distance between the drone and the target photographic object, wherein the distance comprises a horizontal distance and a vertical distance;
and changing the position of the target photographic object in the current image through the distance between the drone and the target photographic object.
21. The drone of claim 19, wherein the processor changing the position of the target photographic object in the current image comprises:
adjusting the distance between the drone and the target photographic object, wherein the distance comprises a horizontal distance and a vertical distance;
when the distance between the drone and the target photographic object meets a second preset condition, adjusting a course angle and/or a pitch angle;
and changing the position of the target photographic object in the current image through the course angle and/or the pitch angle.
22. The drone of claim 21, wherein the processor acquiring an image if the position of the target photographic object in the current image meets an image acquisition condition comprises:
if the position of the target photographic object in the current image meets the image acquisition condition, acquiring at least two images, wherein the overlap rate between two adjacent images is within a preset range;
and after the at least two images are acquired, the processor is further configured to: stitch the at least two images.
23. The unmanned aerial vehicle of claim 15 or 18,
wherein the processor acquiring the image if the position of the target photographic object in the current image meets an image acquisition condition comprises: acquiring the image if it is determined, according to the background image of the current image, that the position of the target photographic object in the current image meets the image acquisition condition.
24. The drone of claim 23, wherein the processor changing the position of the target photographic object in the current image comprises: adjusting the relative position of the target photographic object.
25. The drone of claim 15, wherein the distance comprises a horizontal distance and a vertical distance.
26. The drone of claim 15, wherein the processor changing the distance between the drone and the target photographic object comprises:
the distance comprises a horizontal distance and a vertical distance;
and when the distance between the drone and the target photographic object meets a second preset condition, adjusting a course angle and/or a pitch angle.
27. The drone of claim 15, wherein the processor changing the distance between the drone and the target photographic object comprises:
the distance comprises a horizontal distance and a vertical distance;
after the distance between the drone and the target photographic object meets a second preset condition, adjusting the relative position between the target photographic object and the background;
and when the relative position between the target photographic object and the background meets a third preset condition, adjusting a course angle and/or a pitch angle.
28. The drone of claim 15 or 18, wherein after the target photographic object is tracked according to its characteristics and a current image is obtained, and before the position of the target photographic object in the current image is analyzed and an image is acquired if that position meets an image acquisition condition, the processor is further configured to:
search for a comparison image whose similarity to the current image exceeds a first threshold;
acquire shooting parameters of the comparison image, wherein the shooting parameters comprise the distance between the drone and the target photographic object in the comparison image, the course angle, and the pitch angle, and the distance comprises a horizontal distance and a vertical distance;
and adjust the position of the target photographic object in the current image according to the shooting parameters of the comparison image.
29. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the unmanned aerial vehicle image acquisition method of any one of claims 1 to 14.
CN201780010140.9A 2017-09-27 2017-09-27 Unmanned aerial vehicle image acquisition method, unmanned aerial vehicle and computer readable storage medium Active CN108702448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110304772.4A CN113038016B (en) 2017-09-27 2017-09-27 Unmanned aerial vehicle image acquisition method and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/103624 WO2019061063A1 (en) 2017-09-27 2017-09-27 Image collection method for unmanned aerial vehicle, and unmanned aerial vehicle

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110304772.4A Division CN113038016B (en) 2017-09-27 2017-09-27 Unmanned aerial vehicle image acquisition method and unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN108702448A CN108702448A (en) 2018-10-23
CN108702448B true CN108702448B (en) 2021-04-09

Family

ID=63843843

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201780010140.9A Active CN108702448B (en) 2017-09-27 2017-09-27 Unmanned aerial vehicle image acquisition method, unmanned aerial vehicle and computer readable storage medium
CN202110304772.4A Active CN113038016B (en) 2017-09-27 2017-09-27 Unmanned aerial vehicle image acquisition method and unmanned aerial vehicle

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110304772.4A Active CN113038016B (en) 2017-09-27 2017-09-27 Unmanned aerial vehicle image acquisition method and unmanned aerial vehicle

Country Status (2)

Country Link
CN (2) CN108702448B (en)
WO (1) WO2019061063A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741271B (en) * 2018-12-14 2021-11-19 陕西高速公路工程试验检测有限公司 Detection method and system
CN109709982A (en) * 2018-12-29 2019-05-03 东南大学 A kind of unmanned plane determines high control system and method
CN110132049A (en) * 2019-06-11 2019-08-16 南京森林警察学院 A kind of automatic aiming formula sniper rifle based on unmanned aerial vehicle platform
CN110426970B (en) * 2019-06-25 2021-05-25 西安爱生无人机技术有限公司 Unmanned aerial vehicle photographing system and control method thereof
CN110971824A (en) * 2019-12-04 2020-04-07 深圳市凯达尔科技实业有限公司 Unmanned aerial vehicle shooting control method
CN110989649B (en) * 2019-12-26 2023-07-25 中国航空工业集团公司沈阳飞机设计研究所 Flight action control device for high-maneuver fixed-wing unmanned aerial vehicle and training method
CN111445455B (en) * 2020-03-26 2023-04-07 北京润科通用技术有限公司 Image acquisition method and device
CN111737604B (en) * 2020-06-24 2023-07-21 中国银行股份有限公司 Target object searching method and device
CN113906360A (en) * 2020-08-07 2022-01-07 深圳市大疆创新科技有限公司 Control method and device for movable platform and computer readable storage medium
CN111709949A (en) * 2020-08-19 2020-09-25 武汉精测电子集团股份有限公司 Outdoor display screen detection and repair method, device, equipment and storage medium
CN113129468B (en) * 2021-04-06 2022-10-28 深圳市艾赛克科技有限公司 Underground pipe gallery inspection method based on unmanned aerial vehicle
CN113747071B (en) * 2021-09-10 2023-10-24 深圳市道通智能航空技术股份有限公司 Unmanned aerial vehicle shooting method and device, unmanned aerial vehicle and storage medium
CN114040107B (en) * 2021-11-19 2024-04-16 智己汽车科技有限公司 Intelligent automobile image shooting system, intelligent automobile image shooting method, intelligent automobile image shooting vehicle and intelligent automobile image shooting medium
CN114285996B (en) * 2021-12-23 2023-08-22 中国人民解放军海军航空大学 Ground target coverage shooting method and system
CN116027798B (en) * 2022-09-30 2023-11-17 三峡大学 Unmanned aerial vehicle power inspection system and method based on image correction
CN116929306B (en) * 2023-07-20 2024-04-19 深圳赛尔智控科技有限公司 Data acquisition method, device, equipment and computer readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4894712B2 (en) * 2007-10-17 2012-03-14 ソニー株式会社 Composition determination apparatus, composition determination method, and program
WO2017038891A1 (en) * 2015-09-04 2017-03-09 Necソリューションイノベータ株式会社 Flight control device, flight control method, and computer-readable recording medium
WO2017060782A1 (en) * 2015-10-07 2017-04-13 Lee Hoi Hung Herbert Flying apparatus with multiple sensors and gesture-based operation
CN105549614B (en) * 2015-12-17 2018-06-05 北京猎鹰无人机科技有限公司 Unmanned plane target tracking
CN106331508B (en) * 2016-10-19 2020-04-03 深圳市道通智能航空技术有限公司 Method and device for shooting composition
CN106354157B (en) * 2016-11-28 2019-05-14 中山市昌源模型有限公司 A kind of unmanned plane autonomous flight system
CN106909172A (en) * 2017-03-06 2017-06-30 重庆零度智控智能科技有限公司 Around tracking, device and unmanned plane
CN107102647A (en) * 2017-03-30 2017-08-29 中国人民解放军海军航空工程学院青岛校区 Unmanned plane target tracking and controlling method based on image
CN107016367B (en) * 2017-04-06 2021-02-26 北京精英路通科技有限公司 Tracking control method and tracking control system
CN106991413A (en) * 2017-05-04 2017-07-28 上海耐相智能科技有限公司 A kind of unmanned plane

Also Published As

Publication number Publication date
WO2019061063A1 (en) 2019-04-04
CN108702448A (en) 2018-10-23
CN113038016B (en) 2023-05-19
CN113038016A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN108702448B (en) Unmanned aerial vehicle image acquisition method, unmanned aerial vehicle and computer readable storage medium
US11120261B2 (en) Imaging control method and device
CN110494360B (en) System and method for providing autonomous photography and photography
CN105120146B (en) It is a kind of to lock filming apparatus and image pickup method automatically using unmanned plane progress moving object
US20220083078A1 (en) Method for controlling aircraft, device, and aircraft
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
CN108700890B (en) Unmanned aerial vehicle return control method, unmanned aerial vehicle and machine readable storage medium
WO2017075964A1 (en) Unmanned aerial vehicle photographing control method, unmanned aerial vehicle photographing method, mobile terminal and unmanned aerial vehicle
CN109241820B (en) Unmanned aerial vehicle autonomous shooting method based on space exploration
CN110799921A (en) Shooting method and device and unmanned aerial vehicle
CN110692027A (en) System and method for providing easy-to-use release and automatic positioning of drone applications
CN107690305B (en) Generating images from video
CN111527463A (en) Method and system for multi-target tracking
CN112154649A (en) Aerial survey method, shooting control method, aircraft, terminal, system and storage medium
CN114679540A (en) Shooting method and unmanned aerial vehicle
CN106973221B (en) Unmanned aerial vehicle camera shooting method and system based on aesthetic evaluation
CN108235815B (en) Imaging control device, imaging system, moving object, imaging control method, and medium
WO2019000325A1 (en) Augmented reality method for aerial photography of unmanned aerial vehicle, processor, and unmanned aerial vehicle
JP6817422B2 (en) Information processing device, aerial photography route generation method, aerial photography route generation system, program, and recording medium
CN107643758A (en) Shoot the autonomous system and method that include unmanned plane and earth station of mobile image
CN108377328A (en) A kind of helicopter makes an inspection tour the target image pickup method and device of operation
WO2019227333A1 (en) Group photograph photographing method and apparatus
CN113875222A (en) Shooting control method and device, unmanned aerial vehicle and computer readable storage medium
US20230359204A1 (en) Flight control method, video editing method, device, uav and storage medium
WO2020225979A1 (en) Information processing device, information processing method, program, and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant