WO2019140686A1 - Follow Control Method, Control Terminal, and Unmanned Aerial Vehicle - Google Patents

Follow Control Method, Control Terminal, and Unmanned Aerial Vehicle

Info

Publication number
WO2019140686A1
Authority
WO
WIPO (PCT)
Prior art keywords
following
drone
image
following object
instruction
Prior art date
Application number
PCT/CN2018/073626
Other languages
English (en)
French (fr)
Inventor
陈一
缪宝杰
郭灼
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201880032268.XA priority Critical patent/CN110622089A/zh
Priority to PCT/CN2018/073626 priority patent/WO2019140686A1/zh
Publication of WO2019140686A1 publication Critical patent/WO2019140686A1/zh
Priority to US16/935,875 priority patent/US11509809B2/en

Classifications

    • G05D1/0011: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, associated with a remote control arrangement
    • G05D1/0038: Control associated with a remote control arrangement by providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation
    • G05D1/0816: Control of attitude, i.e. control of roll, pitch, or yaw, specially adapted for aircraft to ensure stability
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/12: Target-seeking control
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control based on recognised objects where the recognised objects include parts of the human body
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the embodiments of the present invention relate to the field of drones, and in particular, to a follow control method, a control terminal, and a drone.
  • the drone can be used to intelligently follow the target object.
  • However, in a scene such as a multi-person sport, a multi-person game, or a group photo, it is impossible to follow a plurality of target objects, which limits the shooting scenes of the drone.
  • Embodiments of the present invention provide a follow control method, a control terminal, and a drone, so that the drone can be applied to more shooting scenes, such as multi-person sports, multi-person games, and group photos.
  • A first aspect of the embodiments of the present invention provides a follow control method, which is applied to a control terminal of a drone, and includes: receiving and displaying an image captured by a photographing device of the drone; detecting a user's selection operation on at least two following objects in the image; determining a follow instruction according to the detected selection operation; and
  • controlling the drone to follow the at least two following objects indicated by the follow instruction, so that the at least two following objects are in a shooting picture of the photographing device.
  • A second aspect of the embodiments of the present invention provides a follow control method, which is applied to a drone, and includes: acquiring an image captured by a photographing device mounted on the drone; sending the image to a control terminal of the drone; receiving a follow instruction sent by the control terminal of the drone, the follow instruction being used to indicate at least two following objects in the image; and
  • following the at least two following objects, so that the at least two following objects are in a shooting picture of the photographing device.
  • A third aspect of the embodiments of the present invention provides a follow control method, which is applied to a control terminal of a drone, and includes: receiving and displaying an image captured by a photographing device of the drone; receiving following object identification information sent by the drone, the following object identification information being used to indicate at least one following object recognized by the drone from the image; and identifying, in the image according to the following object identification information, the at least one following object recognized by the drone from the image.
  • A fourth aspect of the embodiments of the present invention provides a follow control method, which is applied to a drone, and includes: acquiring an image captured by a photographing device mounted on the drone; recognizing at least one following object in the image; and sending, to a control terminal of the drone, identification information of the at least one following object, where the identification information of the following object is used to indicate the at least one following object recognized by the drone from the image.
  • a fifth aspect of the embodiments of the present invention provides a control terminal for a drone, including: a communication interface and a processor;
  • the communication interface is configured to receive an image captured by a photographing device of the drone;
  • the processor is configured to: display the image; detect a user's selection operation on at least two following objects in the image; determine a follow instruction according to the detected selection operation; and
  • control the drone to follow the at least two following objects indicated by the follow instruction, so that the at least two following objects are in a shooting picture of the photographing device.
  • a sixth aspect of the embodiments of the present invention provides a drone, including:
  • a power system mounted to the fuselage for providing flight power
  • a photographing device mounted on the body for taking an image
  • the processor is configured to acquire an image captured by a camera mounted on the drone;
  • the communication interface is configured to: send the image to a control terminal of the drone; and receive a follow instruction sent by the control terminal of the drone, the follow instruction being used to indicate at least two following objects in the image;
  • the processor is further configured to: follow the at least two following objects such that the at least two following objects are in a shooting picture of the photographing device.
  • a seventh aspect of the present invention provides a control terminal for a drone, including: a communication interface and a processor;
  • the communication interface is used to: receive an image captured by a photographing device of the drone; and receive following object identification information sent by the drone, the following object identification information being used to indicate at least one following object recognized by the drone from the image;
  • the processor is used to: display the image; and identify, in the image according to the following object identification information, the at least one following object recognized by the drone from the image.
  • An eighth aspect of the embodiments of the present invention provides a drone, including:
  • a power system mounted to the fuselage for providing flight power
  • a photographing device mounted on the body for taking an image
  • the processor is used to: acquire an image captured by a photographing device mounted on the drone; and recognize at least one following object in the image;
  • the communication interface is used to: send the image to a control terminal of the drone; and send, to the control terminal, identification information of the at least one following object, where
  • the identification information of the following object is used to indicate the at least one following object recognized by the drone from the image.
  • In the follow control method, the control terminal, and the drone provided by the embodiments, the control terminal receives and displays the image captured by the photographing device of the drone, detects the user's selection operation on at least two following objects in the image, and determines a follow instruction according to the detected selection operation. The follow instruction indicates the at least two following objects, so that the drone follows the at least two following objects and keeps them in the shooting picture of the photographing device; the drone can thus be applied to more shooting scenes, such as multi-person sports, multi-person games, and group photos.
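  • To make this flow concrete, the following is a minimal, hypothetical Python sketch of the follow instruction a control terminal might assemble; the names FollowInstruction and build_follow_instruction, and the use of normalized image coordinates, are illustrative assumptions rather than the patented implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class FollowInstruction:
    # Normalized (x, y) positions in the image of each selected following object.
    targets: List[Tuple[float, float]]


def build_follow_instruction(selected_points: List[Tuple[float, float]]) -> FollowInstruction:
    """Turn the user's detected selection operations into a follow instruction.

    The method requires at least two following objects, so reject a
    selection that indicates fewer than two.
    """
    if len(selected_points) < 2:
        raise ValueError("a follow instruction must indicate at least two following objects")
    return FollowInstruction(targets=selected_points)
```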
  • FIG. 1 is a flowchart of a follow control method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a communication system according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an interaction interface according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an interaction interface according to another embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an interaction interface according to another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an interaction interface according to another embodiment of the present invention.
  • FIG. 8 is a schematic diagram of an interaction interface according to another embodiment of the present invention.
  • FIG. 9 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an interaction interface according to another embodiment of the present invention.
  • FIG. 11 is a schematic diagram of an interaction interface according to another embodiment of the present invention.
  • FIG. 12 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 13 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 14 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 15 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 16 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 17 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 18 is a flowchart of a follow control method according to another embodiment of the present invention.
  • FIG. 19 is a structural diagram of a control terminal of a drone according to an embodiment of the present invention.
  • FIG. 20 is a structural diagram of a drone according to an embodiment of the present invention.
  • FIG. 21 is a structural diagram of a control terminal of a drone according to another embodiment of the present invention.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intervening component may also be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component, or an intervening component may also be present.
  • FIG. 1 is a flowchart of a follow-up control method according to an embodiment of the present invention.
  • the following control method provided in this embodiment is applied to a control terminal of a drone. As shown in FIG. 1, the method in this embodiment may include:
  • Step S101 Receive and display an image captured by a camera of the drone.
  • the drone 21 is equipped with an imaging device 22, and the processor 23 in the drone 21 acquires an image captured by the imaging device 22, and the processor 23 may be a flight controller of the drone 21, or may be Other general purpose or dedicated processors.
  • After the processor 23 acquires the image captured by the photographing device 22, the image is transmitted to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21. The drone 21 and the control terminal 25 can communicate by wire or wirelessly; this embodiment is schematically illustrated with wireless communication.
  • the imaging device 22 is mounted on the drone 21 via the pan/tilt head 26.
  • the control terminal 25 may be a remote controller for controlling the drone 21, and may also be a smartphone, a tablet, a ground control station, a laptop, etc., and combinations thereof.
  • the control terminal 25 is provided with a display device such as a display screen, or the control terminal 25 can be connected to an external display device.
  • the control terminal 25 displays the image on the display device.
  • the control terminal 25 can display the image on the interactive interface, and the interactive interface is displayed on the display screen of the control terminal 25.
  • the display screen is a touch screen.
  • As shown in FIG. 3, 30 denotes the interactive interface that the control terminal 25 controls to display.
  • the image displayed by the interactive interface 30 includes a plurality of following objects, such as a follow object 31, a follow object 32, a follow object 33, and a follow object 34. This is only a schematic illustration and does not limit the number and type of objects to follow.
  • Step S102 Detect a user's selection operation of at least two following objects in the image.
  • the user can select at least two of the following objects from the plurality of following objects displayed by the interactive interface 30.
  • the selection operation of the at least two following objects by the user may be implemented by a voice selection operation, or may be implemented by a selection operation of at least two following objects on the interaction interface 30.
  • the control terminal 25 can detect a user's selection operation for at least two following objects.
  • detecting a user selection operation of at least two following objects in the image includes the following feasible implementation manners:
  • One possible implementation is to detect a user's voice selection operation on at least two of the following objects in the image.
  • For example, the following object 31 displayed on the interactive interface 30 wears red clothes, and the following object 32 wears yellow clothes; the user can issue to the control terminal 25 a voice selection operation such as "follow the objects in the red clothes and the yellow clothes". Accordingly, the control terminal 25 detects the user's voice selection operation on the following object 31 and the following object 32.
  • Alternatively, the following object 32 is located at the lower left corner of the interactive interface 30 and the following object 34 is located at the lower right corner; the user can issue to the control terminal 25 a voice selection operation such as "follow the objects in the lower left corner and the lower right corner".
  • the control terminal 25 detects a user's voice selection operation on the following object 32 and the following object 34.
  • Another possible implementation is to detect a user's selection operation on at least two following objects on the interactive interface displaying the image.
  • the user can perform a selection operation on at least two of the plurality of following objects displayed on the interactive interface 30, such as the following object 31 and the following object 32.
  • the control terminal 25 detects a user's selection operation on the interactive interface 30 for at least two following objects, such as the following object 31 and the following object 32, including but not limited to click, double click, frame selection, long press, and the like.
  • Step S103 Determine a follow instruction according to the detected selection operation.
  • the determining the following instruction according to the detected selection operation includes determining a follow instruction according to the detected voice selection operation.
  • Specifically, the control terminal 25 can determine the follow instruction according to the voice selection operation, the follow instruction including feature information of the following objects selected by the user, for example, the red clothes and the yellow clothes.
  • The control terminal 25 transmits the follow instruction to the drone 21, and the processor 23 of the drone 21 recognizes, from the image captured by the photographing device 22 and based on the feature information included in the follow instruction, the following object 31 wearing the red clothes and the following object 32 wearing the yellow clothes, and follows the following object 31 and the following object 32.
  • Similarly, when the user issues to the control terminal 25 a voice selection operation of "follow the objects in the lower left corner and the lower right corner", the control terminal 25 can determine a follow instruction according to the voice selection operation, the follow instruction including feature information of the following objects selected by the user, such as the lower left corner and the lower right corner.
  • The control terminal 25 transmits the follow instruction to the drone 21, and the processor 23 of the drone 21 recognizes, from the image captured by the photographing device 22 and based on the feature information included in the follow instruction, the following object 32 in the lower left corner and the following object 34 in the lower right corner, and follows the following object 32 and the following object 34.
  • Optionally, when the control terminal 25 detects a voice selection operation such as "follow the objects in the red clothes and the yellow clothes" or "follow the objects in the lower left corner and the lower right corner", the control terminal 25 may itself recognize, according to the voice selection operation, the feature information of the at least two following objects selected by the user, such as the red clothes and the yellow clothes, determine the position information of the at least two following objects in the image according to that feature information, and generate the follow instruction according to the position information; in this case, the follow instruction includes the position information of the at least two following objects in the image.
  • When the control terminal 25 detects, on the touch screen of the interactive interface 30, the user's selection operation on at least two following objects, such as the following object 31 and the following object 32, by clicking, double-clicking, frame-selecting, long-pressing, or the like, it may generate a follow instruction according to the user's selection operation, the follow instruction including the position information, in the image, of the at least two following objects selected by the user.
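  • As an illustration of how a touch selection could be turned into the position information carried by the follow instruction, here is a small Python sketch under the assumption that the terminal holds normalized bounding boxes of the recognized objects; hit_test and positions_for_selection are hypothetical helper names, not part of the patent.

```python
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), normalized to [0, 1]


def hit_test(tap: Tuple[float, float], boxes: List[Box]) -> Optional[int]:
    """Return the index of the recognized object whose box contains the tap, if any."""
    x, y = tap
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return i
    return None


def positions_for_selection(taps: List[Tuple[float, float]],
                            boxes: List[Box]) -> List[Tuple[float, float]]:
    """Collect the in-image positions (box centers) of every tapped following object."""
    positions = []
    for tap in taps:
        i = hit_test(tap, boxes)
        if i is not None:
            x0, y0, x1, y1 = boxes[i]
            positions.append(((x0 + x1) / 2, (y0 + y1) / 2))
    return positions
```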
  • Step S104 Control the drone to follow the at least two following objects indicated by the follow instruction, so that the at least two following objects are in the shooting picture of the photographing device.
  • Specifically, the drone 21 can be controlled to follow the at least two following objects indicated by the follow instruction, such as the following object 31 and the following object 32. The processor 23 of the drone 21 can adjust, according to information such as the moving directions of the following object 31 and the following object 32 and their positions in the image, at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21, so that the following object 31 and the following object 32 are always in the shooting picture of the photographing device 22.
  • This embodiment does not limit the manner in which the drone 21 follows the at least two following objects: the drone 21 may follow behind the at least two following objects, may follow in parallel at the side of the at least two following objects, or may keep its position unchanged while adjusting the body attitude and/or the gimbal attitude so that the at least two following objects are in the shooting picture of the photographing device.
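  • One simple way to realize "keep the following objects in the shooting picture" is a proportional correction that steers the centroid of the followed objects toward the frame center. The sketch below is an illustrative assumption, not the control law of the drone 21; the gains and sign conventions are arbitrary.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # normalized (x_min, y_min, x_max, y_max)


def framing_adjustment(target_boxes: List[Box], k_yaw: float = 1.0,
                       k_pitch: float = 1.0) -> Tuple[float, float]:
    """Return (yaw_rate, gimbal_pitch_rate) that nudges the combined centroid
    of the followed objects toward the frame center (0.5, 0.5).
    Assumes target_boxes is non-empty."""
    n = len(target_boxes)
    cx = sum((b[0] + b[2]) / 2 for b in target_boxes) / n
    cy = sum((b[1] + b[3]) / 2 for b in target_boxes) / n
    yaw_rate = k_yaw * (cx - 0.5)      # positive: targets drifted right, so yaw right
    pitch_rate = k_pitch * (cy - 0.5)  # positive: targets drifted down, so tilt the gimbal down
    return yaw_rate, pitch_rate
```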
  • In this embodiment, the control terminal receives and displays the image captured by the photographing device of the drone, detects the user's selection operation on at least two following objects in the image, and determines a follow instruction according to the detected selection operation. The follow instruction indicates the at least two following objects, so that the drone follows them and keeps them in the shooting picture of the photographing device; the drone can thus be applied to more shooting scenes, such as multi-person sports, multi-person games, and group photos.
  • FIG. 4 is a flowchart of a follow control method according to another embodiment of the present invention. As shown in FIG. 4, on the basis of the embodiment shown in FIG. 1, the method in this embodiment may further include:
  • Step S401 Receive following object identification information sent by the drone, where the following object identification information is used to indicate at least one following object recognized by the drone from the image.
  • the processor 23 can also identify the following object in the image.
  • Specifically, the processor 23 can use a neural network model to identify at least one of the contour, the size, and the category of an object in the image, and the distance between the object and the drone.
  • For example, the processor 23 can determine whether an object can serve as a following object according to the sharpness of the object's contour, the size of the object, or the category of the object;
  • the category of the object identified by the neural network model includes a person, an animal, a vehicle, etc., and the processor 23 may use the person identified by the neural network model as the following object.
  • the processor 23 may also use an object within a preset distance range from the drone as a follow object. This is merely a schematic illustration and does not limit the specific method by which the processor 23 recognizes the following objects.
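  • The contour, size, category, and distance checks described above amount to filtering raw detections down to valid following objects. A hedged Python sketch follows, with an assumed Detection record and arbitrary thresholds; none of these names or values come from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # normalized (x_min, y_min, x_max, y_max)
    category: str                           # e.g. "person", "animal", "vehicle"
    contour_sharpness: float                # 0..1, higher means a sharper contour
    distance_m: float                       # estimated distance from the drone


def candidate_follow_objects(detections: List[Detection],
                             allowed: Tuple[str, ...] = ("person",),
                             min_sharpness: float = 0.5,
                             min_area: float = 0.002,
                             max_distance_m: float = 50.0) -> List[Detection]:
    """Keep only detections that pass the contour, size, category, and distance checks."""
    kept = []
    for d in detections:
        w, h = d.box[2] - d.box[0], d.box[3] - d.box[1]
        if (d.category in allowed
                and d.contour_sharpness >= min_sharpness
                and w * h >= min_area
                and d.distance_m <= max_distance_m):
            kept.append(d)
    return kept
```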
  • Specifically, the following object identification information can be sent to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21; the following object identification information is used to indicate the at least one following object recognized by the drone 21 from the image. The at least one following object recognized by the drone 21 from the image may include a plurality of following objects recognized by the drone 21 from the image. As shown in FIG. 3, the following object 31, the following object 32, and the following object 34 are the following objects recognized by the drone 21 from the image, and the drone 21 transmits to the control terminal 25 the identification information of the following object 31, the identification information of the following object 32, and the identification information of the following object 34.
  • Step S402 Identify, in the image according to the following object identification information, the at least one following object recognized by the drone from the image.
  • After the control terminal 25 receives the identification information of the following object 31, the identification information of the following object 32, and the identification information of the following object 34 sent by the drone 21, it identifies the following object 31, the following object 32, and the following object 34 in the interactive interface 30, so that by viewing the image the user knows which objects in the image can be followed by the drone.
  • the following object identification information includes position information of the at least one following object recognized by the drone from the image in the image.
  • the identification information of the following object 31 transmitted by the drone 21 to the control terminal 25 is the position information of the following object 31 in the image
  • the identification information of the following object 32 is the position information of the following object 32 in the image
  • the identification information of the following object 34 is the position information of the following object 34 in the image.
  • Optionally, identifying, in the image according to the following object identification information, the at least one following object recognized by the drone from the image includes: displaying, in the image according to the following object identification information, an icon for identifying the at least one following object recognized by the drone from the image.
  • the control terminal 25 can display an icon for identifying the following object 31, such as the circular icon 35 in the following object 31, in the image based on the position information of the following object 31 in the image.
  • the control terminal 25 can display an icon for identifying the following object 32, such as the circular icon 35 in the following object 32, in the image based on the position information of the following object 32 in the image.
  • the control terminal 25 can display an icon for identifying the following object 34, such as the circular icon 35 in the following object 34, in the image based on the position information of the following object 34 in the image.
  • other icons besides the circular icon may be used to identify the following objects recognized by the drone.
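  • As a sketch of this identification overlay, assuming the control terminal renders frames with OpenCV and has the normalized center positions from the following object identification information; the function name and colors are illustrative assumptions.

```python
import cv2  # assumes the opencv-python package is available


def draw_follow_icons(frame, centers, selected_indices=frozenset()):
    """Draw a circular icon on each recognized following object; a selected
    object gets a different color, mirroring the to-be-selected icon 35 and
    the selected icon 36 described in the text."""
    h, w = frame.shape[:2]
    for i, (x, y) in enumerate(centers):
        color = (0, 255, 0) if i in selected_indices else (255, 255, 255)
        cv2.circle(frame, (int(x * w), int(y * h)), 12, color, 2)
    return frame
```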
  • detecting a user selection operation of at least two following objects in the image includes the following feasible implementation manners:
  • the at least one following object recognized by the drone from the image comprises a plurality of following objects recognized by the drone from the image.
  • One feasible implementation: detecting the user's selection operation on at least two following objects in the image includes detecting the user's selection operation on at least two following objects in the image that are recognized by the drone from the image.
  • For example, the following object 31, the following object 32, and the following object 34 are the following objects recognized by the drone 21 from the image. On the interactive interface 30, the user can select at least two of the following object 31, the following object 32, and the following object 34, for example by clicking, double-clicking, frame-selecting, or long-pressing the following objects themselves, or by performing such operations on the circular icons 35 in at least two of the following object 31, the following object 32, and the following object 34.
  • Accordingly, the control terminal 25 can detect the user's selection operation on at least two following objects in the image that are recognized by the drone from the image.
  • For example, when the user clicks on the following object 31, the following object 32, and the following object 34, the shape and/or color of the corresponding circular icon 35 is updated on the interactive interface 30; here, the color update is taken as an example for schematic illustration.
  • The color of the circular icon 36 is different from the color of the circular icon 35: the circular icon 35 indicates that a following object is in a to-be-selected state, and the circular icon 36 indicates that a following object is in a selected state.
  • Another feasible implementation: detecting the user's selection operation on at least two following objects in the image includes detecting the user's selection operation on at least two following objects in the image that are not recognized by the drone from the image.
  • For example, the following object 31, the following object 32, and the following object 34 are the following objects recognized by the drone 21 from the image, while the following object 33 and the following object 37 are not recognized by the drone 21 from the image.
  • The user can also select, in the image, at least two following objects that are not recognized by the drone 21, for example by clicking, double-clicking, frame-selecting, or long-pressing.
  • Accordingly, the control terminal 25 can detect the user's selection operation on at least two following objects in the image that are not recognized by the drone from the image.
  • For example, when the user frame-selects the following object 33 and the following object 37, icons for framing the following object 33 and the following object 37 are displayed on the interactive interface 30; as shown in FIG. 7, 38 denotes the icon framing the following object 33, and 39 denotes the icon framing the following object 37.
  • Yet another feasible implementation: detecting the user's selection operation on at least two following objects in the image includes detecting the user's selection operation on at least one following object in the image that is recognized by the drone from the image, and on at least one following object in the image that is not recognized by the drone from the image.
  • For example, the user may select at least one of the following objects recognized by the drone 21, such as the following object 31, the following object 32, and the following object 34, and at least one following object not recognized by the drone 21, such as the following object 33.
  • Accordingly, the control terminal 25 can detect the user's selection operation on the following object 31, the following object 32, and the following object 34, and the user's selection operation on the following object 33.
  • Optionally, the method further includes: receiving category information of the at least one following object sent by the drone; and identifying, in the image according to the category information of the at least one following object, the category of the at least one following object.
  • the processor 23 in the drone 21 may use a neural network model to identify the category of the following object in the image.
  • The category of a following object may be, for example, a person, an animal, or a vehicle. The processor 23 may further send the recognized category information of the following objects to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21, and the control terminal 25 can identify, in the interactive interface 30, the category of the at least one recognized following object, for example by displaying the category information around the following object 31, the following object 32, and the following object 34, respectively.
  • In this embodiment, the control terminal receives the following object identification information sent by the drone and, according to the following object identification information, identifies in the image the at least one following object recognized by the drone from the image, so that the user can know which objects in the image can be followed by the drone, which improves the user experience.
  • Embodiments of the present invention provide a follow control method.
  • FIG. 9 is a flowchart of a follow control method according to another embodiment of the present invention.
  • Optionally, the follow control method may further include: detecting a confirm-follow operation of the user. The controlling of the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device includes: after the user's confirm-follow operation is detected, controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device.
  • Optionally, the method further includes: displaying a confirm-follow icon; the detecting of the user's confirm-follow operation includes detecting the user's operation on the confirm-follow icon.
  • Specifically, the control terminal 25 further displays a confirm-follow icon 40 on the interactive interface 30; when the user clicks the confirm-follow icon 40, this indicates that the drone is to start following the following object 31, the following object 32, the following object 34, and the following object 33. After the control terminal 25 detects the user's operation on the confirm-follow icon 40, for example a click, it controls the drone 21 to follow the following object 31, the following object 32, the following object 34, and the following object 33.
  • In the following process, the processor 23 may adjust, according to the moving directions of the following object 31, the following object 32, the following object 34, and the following object 33 and the position of each following object in the image, at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21, so that the following object 31, the following object 32, the following object 34, and the following object 33 are always in the shooting picture of the photographing device 22.
  • In addition, the control terminal 25 can also update the circular icons 36 and the icons 38 in the interactive interface 30, for example to the icon 41, where the icon 41 indicates that a following object is in the following state. The color and shape of the icon 41 are not limited.
  • The control terminal 25 can also detect an end-follow operation of the user and generate an end-follow instruction according to the end-follow operation, thereby controlling the drone 21 to no longer follow the at least two following objects selected by the user, such as the following object 31.
  • Further, the drone 21 re-recognizes the following objects in the shooting picture of the photographing device 22 and transmits the identification information of the recognized following objects to the control terminal 25, and the control terminal 25 identifies the recognized following objects on the interactive interface in a manner similar to the identification shown in FIG. 3, and details are not described herein again.
  • This embodiment does not limit the user's end-follow operation. Specifically, the user can click an exit button on the interactive interface 30, or can control the control terminal 25 by voice, for example, by saying "end follow" to the control terminal 25.
  • the method further includes the steps shown in Figure 9:
  • Step S901 In the process of following the at least two following objects by the drone, detecting a flight control operation of the user.
  • Specifically, the interactive interface 30 can also display buttons for controlling the position of the drone 21, the attitude of the drone 21, or the attitude of the gimbal, and the user can operate these control buttons in the interactive interface 30.
  • Step S902 determining a flight control instruction according to the detected flight control operation.
  • When the control terminal 25 detects the user's operation on a button for controlling the position of the drone 21, the attitude of the drone 21, or the attitude of the gimbal, it generates a flight control instruction.
  • Step S903 Transmit the flight control instruction to the drone, so that the drone adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • Specifically, the control terminal 25 sends the flight control instruction to the drone 21, so that the drone 21 adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • When the drone follows the at least two following objects indicated by the follow instruction, the drone can adjust, according to information such as the moving directions and positions of the at least two following objects, at least one of the moving direction of the drone, the attitude of the drone, and the attitude of the gimbal of the drone, so that the at least two following objects are always in the shooting picture of the photographing device.
  • In this process, the drone may further receive a flight control instruction sent by the control terminal and adjust, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • For example, before the drone receives the flight control instruction sent by the control terminal, the drone follows behind the at least two following objects; if the flight control instruction is used to control the drone to follow at the side of the at least two following objects, then after the drone receives the flight control instruction, the drone adjusts its following manner, that is, it follows the at least two following objects at the side of the at least two following objects. A sketch of such a mode switch follows.
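  • The follow-mode switch described above can be sketched as a small state change driven by the received flight control instruction; FollowMode, FollowController, and the "follow_mode" field are hypothetical names, not part of the patent.

```python
from enum import Enum, auto


class FollowMode(Enum):
    BEHIND = auto()  # follow behind the following objects (the default here)
    SIDE = auto()    # parallel follow at the side of the following objects
    HOLD = auto()    # hold position; adjust only body attitude and/or gimbal attitude


class FollowController:
    def __init__(self) -> None:
        self.mode = FollowMode.BEHIND

    def on_flight_control_instruction(self, instruction: dict) -> None:
        """Apply a flight control instruction received mid-follow, e.g. switch
        from behind-follow to side-follow while still keeping the targets framed."""
        requested = instruction.get("follow_mode")
        if requested == "side":
            self.mode = FollowMode.SIDE
        elif requested == "behind":
            self.mode = FollowMode.BEHIND
        # position / attitude / gimbal adjustments would be issued here as well
```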
  • Optionally, the method further includes: detecting a composition selection operation of the user; and determining a composition instruction according to the detected composition selection operation. The controlling of the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device includes: controlling the drone to follow the at least two following objects indicated by the follow instruction, so that the at least two following objects are located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • the control terminal 25 can also control the drone 21 to follow at least two following objects so that at least two following objects are located at preset positions in the photographing screen of the photographing device.
  • Specifically, the interactive interface 30 can also display a composition selection icon 110.
  • As shown in FIG. 11, the interactive interface 30 displays a list 111, and the list 111 includes four options: the upper left corner, the lower left corner, the upper right corner, and the lower right corner.
  • the upper left corner, the lower left corner, the upper right corner, and the lower right corner respectively indicate the position of at least two following objects that the drone follows in the shooting screen.
  • When the user operates any one of the four options, the control terminal 25 generates a corresponding composition instruction. For example, when the user selects the upper right corner, the composition rule indicated by the composition instruction generated by the control terminal 25 causes the at least two following objects followed by the drone to be located in the upper right corner of the shooting picture of the photographing device. Similarly, when the user selects the upper left corner, the composition rule indicated by the generated composition instruction causes the at least two following objects followed by the drone to be located in the upper left corner of the shooting picture, and so on.
  • Specifically, the drone 21 can be controlled to follow the at least two following objects indicated by the follow instruction, such as the following object 31 and the following object 32, and the processor 23 can adjust, according to information such as the moving directions of the following object 31 and the following object 32 and their positions in the image, at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21, so that the following object 31 and the following object 32 are located in the upper right corner of the shooting picture of the photographing device 22.
  • In this embodiment, the control terminal detects the user's composition selection operation and determines a composition instruction according to the detected composition selection operation. When the drone is controlled to follow the at least two following objects indicated by the follow instruction, the at least two following objects are located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction, which improves the flexibility with which the drone follows at least two following objects.
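  • A composition rule of this kind can be reduced to a target point in the normalized shooting picture for each option in the list 111; the follow controller then drives the followed objects' centroid toward that point instead of the frame center. The mapping below is an illustrative assumption, not values from the patent.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # normalized (x_min, y_min, x_max, y_max)

# Hypothetical normalized anchor point for each option shown in the list 111.
COMPOSITION_TARGETS = {
    "upper_left":  (0.25, 0.25),
    "lower_left":  (0.25, 0.75),
    "upper_right": (0.75, 0.25),
    "lower_right": (0.75, 0.75),
}


def composition_error(target_boxes: List[Box], option: str) -> Tuple[float, float]:
    """Offset between the followed objects' combined centroid and the point the
    selected composition rule asks for; the follow controller drives this to zero.
    Assumes target_boxes is non-empty."""
    gx, gy = COMPOSITION_TARGETS[option]
    n = len(target_boxes)
    cx = sum((b[0] + b[2]) / 2 for b in target_boxes) / n
    cy = sum((b[1] + b[3]) / 2 for b in target_boxes) / n
    return cx - gx, cy - gy
```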
  • FIG. 12 is a flowchart of a follow control method according to another embodiment of the present invention.
  • the following control method provided by this embodiment is applied to a drone. As shown in FIG. 12, the method in this embodiment may include:
  • Step S1201 Acquire an image captured by the photographing device mounted on the drone.
  • the drone 21 is equipped with an imaging device 22, and the processor 23 in the drone 21 acquires an image captured by the imaging device 22, and the processor 23 may be a flight controller of the drone 21, or may be Other general purpose or dedicated processors.
  • the processor 23 acquires an image taken by the photographing device 22.
  • Step S1202 Send the image to a control terminal of the drone.
  • Specifically, after the processor 23 acquires the image captured by the photographing device 22, the image is transmitted to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21. The drone 21 and the control terminal 25 can communicate by wire or wirelessly; this embodiment is schematically illustrated with wireless communication.
  • the control terminal 25 receives and displays the image, and detects a user's selection operation of at least two following objects in the image, and further determines a follow instruction according to the detected selection operation.
  • The specific principles and implementation manners are consistent with those of the foregoing embodiments, and details are not described herein again.
  • Step S1203 Receive the following instruction sent by a control terminal of the drone, and the following instruction is used to indicate at least two following objects in the image.
  • After the control terminal 25 determines the follow instruction by the method described in the above embodiments, it sends the follow instruction to the drone 21; the follow instruction is used to indicate the at least two following objects selected by the user.
  • Step S1204 Follow the at least two following objects, so that the at least two following objects are in the shooting picture of the photographing device.
  • Specifically, the drone 21 follows the at least two following objects indicated by the follow instruction, such as the following object 31 and the following object 32. The processor 23 can adjust, based on information such as the moving directions of the following object 31 and the following object 32 and their positions in the image, at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21, so that the following object 31 and the following object 32 are in the shooting picture of the photographing device 22.
  • In this embodiment, the drone acquires the image captured by the photographing device mounted on it, sends the image to the control terminal of the drone, and follows, according to the follow instruction sent by the control terminal, the at least two following objects indicated by the follow instruction, so that the at least two following objects are in the shooting picture of the photographing device; the drone can thus be applied to more shooting scenes, such as multi-person sports, multi-person games, and group photos.
  • FIG. 13 is a flowchart of a follow control method according to another embodiment of the present invention.
  • the following control method provided by this embodiment is applied to a drone.
  • the method in this embodiment may further include:
  • Step S1301 Recognize a following object in the image.
  • Specifically, after the processor 23 in the drone 21 acquires the image captured by the photographing device 22, the processor 23 can also recognize the following objects in the image.
  • For example, the processor 23 can use a neural network model to identify at least one of the contour, the size, and the category of an object in the image, and the distance between the object and the drone.
  • The processor 23 can determine whether an object can serve as a following object according to the sharpness of the object's contour, the size of the object, or the category of the object;
  • the category of the object identified by the neural network model includes a person, an animal, a vehicle, etc., and the processor 23 may use the person identified by the neural network model as the following object.
  • the processor 23 may also use an object within a preset distance range from the drone as a follow object. This is merely a schematic illustration and does not limit the specific method by which the processor 23 recognizes the following objects.
  • Step S1302 Send, to the control terminal of the drone, the identification information of the at least one recognized following object, where the identification information of the following object is used to indicate the at least one following object recognized by the drone from the image.
  • Specifically, the identification information of the following object can be sent to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21; it is used to indicate the at least one following object recognized by the drone 21 from the image. The at least one following object recognized by the drone 21 from the image may include a plurality of following objects recognized by the drone 21 from the image. As shown in FIG. 3, the following object 31, the following object 32, and the following object 34 are the following objects recognized by the drone 21 from the image, and the drone 21 transmits to the control terminal 25 the identification information of the following object 31, the identification information of the following object 32, and the identification information of the following object 34.
  • the identification information of the following object includes position information of the at least one following object recognized by the drone from the image in the image.
  • the identification information of the following object 31 transmitted by the drone 21 to the control terminal 25 is the position information of the following object 31 in the image
  • the identification information of the following object 32 is the position information of the following object 32 in the image
  • the identification information of the following object 34 is the position information of the following object 34 in the image.
  • the method in this embodiment may further include the steps shown in FIG. 14 :
  • Step S1401 Identify the category of the following object in the image.
  • the processor 23 within the drone 21 may employ a neural network model to identify the category of the following object in the image, for example, the category of the following object may be a person, an animal, or a vehicle.
  • Step S1402 Send the identified category information of the at least one following object to the control terminal of the drone.
  • Specifically, the processor 23 can send the category information of the following objects recognized by the drone 21 to the control terminal 25 of the drone 21 through the communication interface 24, and the control terminal 25 can identify, in the interactive interface 30, the category of the at least one following object recognized by the drone, for example, by displaying the category information around the following object 31, the following object 32, and the following object 34, respectively.
  • Optionally, the follow control method may further include: receiving, in the process of following the at least two following objects, a flight control instruction sent by the control terminal of the drone; and adjusting, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • Specifically, the interactive interface 30 can also display buttons for controlling the position of the drone 21, the attitude of the drone 21, or the attitude of the gimbal, and the user can operate these control buttons in the interactive interface 30.
  • When the control terminal 25 detects the user's operation on such a button, it generates a flight control instruction and transmits the flight control instruction to the drone 21.
  • The drone 21 receives the flight control instruction sent by the control terminal 25 in the process of following the following object 31, the following object 32, the following object 34, and the following object 33, and adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • Optionally, the follow control method may further include: receiving a composition instruction sent by the control terminal of the drone. The following of the at least two following objects so that the at least two following objects are in the shooting picture of the photographing device includes: following the at least two following objects so that the at least two following objects are located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • Specifically, the drone 21 can also receive the composition instruction sent by the control terminal 25.
  • The specific principles and implementations of the composition instruction are the same as those in the foregoing embodiments and are not described here again.
  • Here, the case where the user selects the upper right corner, as shown in FIG. 11, is taken as an example for illustration.
  • After the drone 21 receives the follow instruction sent by the control terminal 25, the drone 21 follows the at least two following objects indicated by the follow instruction. In the process of following the at least two following objects, the processor 23 may adjust, according to information such as the moving directions of the at least two following objects and their positions in the image, at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21, so that the at least two following objects are located in the upper right corner of the shooting picture of the photographing device.
  • In this embodiment, when the drone receives the composition instruction sent by the control terminal, then while the at least two following objects indicated by the follow instruction are being followed, the at least two following objects are located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction, which improves the flexibility with which the drone follows at least two following objects.
  • FIG. 15 is a flowchart of a follow control method according to another embodiment of the present invention.
  • the following control method provided in this embodiment is applied to a control terminal of a drone. As shown in FIG. 15, the method in this embodiment may include:
  • Step S1501 Receive and display an image captured by the camera of the drone.
  • Step S1501 is consistent with the specific principles and implementation manners of step S101, and details are not described herein again.
  • Step S1502 Receive the following object identification information sent by the drone, and the following object identification information is used to indicate at least one following object recognized by the drone from the image.
  • Step S1502 is consistent with the specific principles and implementation manners of step S401, and details are not described herein again.
  • Step S1503 Identify, according to the following object identification information, the at least one following object identified by the drone from the image in the image.
  • Step S1503 is consistent with the specific principles and implementation manners of step S402, and details are not described herein again.
  • the following object identification information includes position information of the at least one following object recognized by the drone from the image in the image.
  • the identification information of the following object 31 transmitted by the drone 21 to the control terminal 25 is the position information of the following object 31 in the image
  • the identification information of the following object 32 is the position information of the following object 32 in the image
  • the identification information of the following object 34 is the position information of the following object 34 in the image.
  • Optionally, identifying, in the image according to the following object identification information, the at least one following object recognized by the drone from the image includes: displaying, in the image according to the following object identification information, an icon for identifying the at least one following object recognized by the drone from the image.
  • the control terminal 25 can display an icon for identifying the following object 31, such as the circular icon 35 in the following object 31, in the image based on the position information of the following object 31 in the image.
  • the control terminal 25 can display an icon for identifying the following object 32, such as the circular icon 35 in the following object 32, in the image based on the position information of the following object 32 in the image.
  • the control terminal 25 can display an icon for identifying the following object 34, such as the circular icon 35 in the following object 34, in the image based on the position information of the following object 34 in the image.
  • other icons besides the circular icon may be used to identify the following objects recognized by the drone.
  • Optionally, the method further includes: receiving category information of the at least one following object sent by the drone; and identifying, in the image according to the category information of the at least one following object, the category of the at least one following object.
  • the processor 23 in the drone 21 may use a neural network model to identify the category of the following object in the image.
  • the category of the following object may be a person, an animal, or a vehicle, etc., and the processor 23 may further pass
  • the communication interface 24 of the drone 21 transmits the identified category information of the following object to the control terminal 25 of the drone 21, and the control terminal 25 can also at least one of the followers 30 identified in the interactive interface 30.
  • the category of the object is identified, and the at least one following object includes at least two following objects.
  • the control terminal 25 displays the category information separately in the interactive interface 30 following the object 31, the following object 32, and the surrounding object 34.
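As a hedged illustration of the recognition results described above, the sketch below packages per-object identification and category information into a message for the control terminal; the detector interface, field names, and JSON transport are assumptions for illustration, not the patent's protocol.

```python
# Hypothetical sketch: packaging the drone-side recognition results into the
# identification/category messages sent to the control terminal.
from dataclasses import dataclass, asdict
import json

@dataclass
class FollowingObjectInfo:
    object_id: int   # identifier assigned by the drone
    cx: float        # normalized center x of the object in the image
    cy: float        # normalized center y of the object in the image
    category: str    # e.g. "person", "animal", "vehicle"

def build_identification_message(detections):
    """`detections` is assumed to be (object_id, cx, cy, category) tuples
    produced by a neural-network detector running on the drone."""
    infos = [FollowingObjectInfo(*d) for d in detections]
    return json.dumps({"type": "following_object_identification",
                       "objects": [asdict(i) for i in infos]})
```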
  • In this embodiment, the control terminal receives the following object identification information sent by the drone and, according to that information, identifies in the image the at least one following object recognized by the drone from the image, so that the user can know which objects in the image can be followed by the drone, which improves the user experience.
  • FIG. 16 is a flowchart of a follow control method according to another embodiment of the present invention.
  • the following control method provided in this embodiment is applied to a control terminal of a drone.
  • As shown in FIG. 16, the method in this embodiment may further include:
  • Step S1601: Detect a user's selection operation on at least one following object in the image.
  • the at least one following object includes at least two following objects.
  • As shown in FIG. 8, the interactive interface 30 of the control terminal 25 displays the following object 31, the following object 32, the following object 34, and the following object 33, and the user can select at least one of them; for example, the user can frame-select the following object 33 that is not recognized by the drone 21, or click at least one of the following object 31, the following object 32, and the following object 34 recognized by the drone 21.
  • The user may also select both a following object that is not recognized by the drone 21 and a following object that the drone 21 recognizes.
  • Step S1601 is consistent with the specific principle and implementation manner of step S102, and details are not described herein again.
  • detecting a user selection operation of at least one following object in the image includes the following feasible implementation manners:
  • One possible implementation is to detect a user's voice selection operation on at least one of the following objects in the image.
  • the at least one following object includes at least two following objects.
  • The specific principle and implementation manner of detecting the user's voice selection operation on at least one following object in the image are consistent with those of detecting the user's voice selection operation on at least two following objects in the image, and details are not described herein again.
  • Another possible implementation is to detect a user's selection operation on at least one of the following objects on the interactive interface displaying the image.
  • the at least one following object includes at least two following objects.
  • Yet another possible implementation is to detect the user's selection operation on at least one following object in the image that is recognized by the drone from the image.
  • the at least one following object includes at least two following objects.
  • As shown in FIG. 3, the following object 31, the following object 32, and the following object 34 are following objects recognized by the drone 21 from the image, and the user can select at least two of the following object 31, the following object 32, and the following object 34 on the interactive interface 30, for example by clicking, double-clicking, frame-selecting, or long-pressing at least two of them.
  • Yet another possible implementation is to detect the user's selection operation on at least one following object in the image that is not recognized by the drone from the image.
  • the at least one following object includes at least two following objects.
  • As shown in FIG. 6, the following object 31, the following object 32, and the following object 34 are following objects recognized by the drone 21 from the image, while the following object 33 and the following object 37 are following objects not recognized by the drone 21 from the image.
  • The user can also select at least two following objects in the image that are not recognized by the drone 21, for example by clicking, double-clicking, frame-selecting, or long-pressing, and the control terminal 25 can detect the user's selection operation on at least two following objects in the image that are not recognized by the drone from the image.
  • As shown in FIG. 8, the user may also select at least one following object recognized by the drone 21 in the image, such as the following object 31, the following object 32, or the following object 34, together with at least one following object in the image that is not recognized by the drone 21, such as the following object 33.
  • The control terminal 25 can detect the user's selection operations on the following object 31, the following object 32, and the following object 34, as well as the user's selection operation on the following object 33.
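A minimal sketch of how the control terminal might resolve such a click or frame selection to a following object, assuming the drone reports normalized bounding boxes; the message fields are hypothetical.

```python
# Hypothetical sketch: resolving a tap on the interactive interface to a
# following object by hit-testing against reported bounding boxes.
def hit_test(tap_x, tap_y, objects):
    """Return the object whose bounding box contains the tap, or None.

    `objects` is assumed to be dicts like
    {"object_id": 31, "x0": 0.2, "y0": 0.6, "x1": 0.3, "y1": 0.95}
    with normalized image coordinates.
    """
    for obj in objects:
        if obj["x0"] <= tap_x <= obj["x1"] and obj["y0"] <= tap_y <= obj["y1"]:
            return obj
    return None

# A frame selection can be resolved the same way by keeping every object
# whose box lies fully inside the user-drawn rectangle.
```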
  • Step S1602: Determine a follow instruction according to the detected selection operation.
  • Determining the follow instruction according to the detected selection operation includes: determining the follow instruction according to the detected voice selection operation.
  • For example, when the user issues a voice selection operation of "follow the objects in red clothes and yellow clothes" to the control terminal 25, the control terminal 25 can determine a follow instruction according to the voice selection operation, the follow instruction including the feature information of the following objects selected by the user, such as red clothes and yellow clothes. The control terminal 25 transmits the follow instruction to the drone 21, and the processor 23 of the drone 21 recognizes, from the image captured by the photographing device 22 and based on the feature information included in the follow instruction, the following object 31 wearing red clothes and the following object 32 wearing yellow clothes, and follows the following object 31 and the following object 32.
  • For another example, the user issues a voice selection operation of "follow the objects in the lower left corner and the lower right corner" to the control terminal 25, and the control terminal 25 can determine a follow instruction according to the voice selection operation, the follow instruction including the feature information of the following objects selected by the user, such as the lower left corner and the lower right corner. The control terminal 25 transmits the follow instruction to the drone 21, and the processor 23 of the drone 21 recognizes, from the image captured by the photographing device 22 and based on the feature information included in the follow instruction, the following object 32 in the lower left corner and the following object 34 in the lower right corner, and follows the following object 32 and the following object 34.
  • In other embodiments, when the control terminal 25 detects the voice selection operation of "follow the objects in red clothes and yellow clothes" or the voice selection operation of "follow the objects in the lower left corner and the lower right corner" issued by the user, the control terminal 25 may identify, according to the voice selection operation, the feature information of the at least two following objects selected by the user, such as red clothes and yellow clothes, determine the position information of the at least two following objects in the image according to that feature information, and generate a follow instruction according to the position information of the at least two following objects in the image; for example, the follow instruction includes the position information, in the image, of the at least two following objects selected by the user.
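A minimal sketch of how such a voice selection operation might be mapped to a follow instruction carrying the selected objects' feature information; the phrase vocabulary and the instruction fields are illustrative assumptions, not the patent's format.

```python
# Hypothetical sketch: turning a recognized voice transcript into a follow
# instruction carrying the selected objects' feature information.
def follow_instruction_from_voice(transcript):
    """Map a transcript such as "follow the objects in red clothes and
    yellow clothes" to feature descriptors the drone can match."""
    known_features = ["red clothes", "yellow clothes",
                      "lower left corner", "lower right corner"]
    selected = [f for f in known_features if f in transcript.lower()]
    if len(selected) < 2:
        raise ValueError("at least two following objects must be selected")
    return {"type": "follow_instruction", "features": selected}
```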
  • In some other embodiments, if the control terminal 25 detects the user's selection operation on at least two following objects, such as the following object 31 and the following object 32, on the interactive interface 30, for example a click, double-click, frame-select, or long-press on the touch screen, the control terminal 25 may generate a follow instruction according to the user's selection operation, the follow instruction including the position information, in the image, of the at least two following objects selected by the user.
  • Step S1603: Control the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device.
  • the at least one following object includes at least two following objects.
  • For example, the drone 21 can be controlled to follow the at least two following objects indicated by the follow instruction, such as the following object 31 and the following object 32. While the drone 21 follows the following object 31 and the following object 32, the processor 23 can adjust at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21 according to information such as the moving directions of the following object 31 and the following object 32 and their positions in the image, so that the following object 31 and the following object 32 remain in the shooting picture of the photographing device 22.
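The adjustment described above can be pictured as one step of a simple closed loop. The sketch below, under assumed gains and a normalized image coordinate system, steers the drone's yaw and the gimbal's pitch toward the centroid of the followed objects; it is an illustrative assumption, not the patent's control law.

```python
# Hypothetical sketch: one control step that keeps several following objects
# in the shooting picture by steering yaw and gimbal pitch toward the
# centroid of the selected objects. Gains are illustrative assumptions.
def keep_in_frame_step(tracked, yaw_gain=0.8, pitch_gain=0.8):
    """`tracked` is assumed to be normalized (cx, cy) centers, one per
    followed object. Returns (yaw_rate, gimbal_pitch_rate) commands."""
    cx = sum(p[0] for p in tracked) / len(tracked)
    cy = sum(p[1] for p in tracked) / len(tracked)
    # Error of the group centroid relative to the frame center (0.5, 0.5).
    yaw_rate = yaw_gain * (cx - 0.5)             # positive -> rotate right
    gimbal_pitch_rate = pitch_gain * (cy - 0.5)  # positive -> tilt down
    return yaw_rate, gimbal_pitch_rate
```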
  • In some embodiments, the method further includes: detecting a confirmation follow operation of the user. Controlling the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device includes: after detecting the user's confirmation follow operation, controlling the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device.
  • the at least one following object includes at least two following objects.
  • Optionally, the method further includes: displaying a confirmation follow icon; and detecting the user's confirmation follow operation includes: detecting the user's operation on the confirmation follow icon.
  • As shown in FIG. 8, after the user finishes selecting the following objects, the control terminal 25 further displays a confirmation follow icon 40 on the interactive interface 30; when the user clicks the confirmation follow icon 40, the drone is controlled to start following the following object 31, the following object 32, the following object 34, and the following object 33. Specifically, after the control terminal 25 detects the user's operation on the confirmation follow icon 40, for example a click, it controls the drone 21 to follow the following object 31, the following object 32, the following object 34, and the following object 33.
  • In the following process, the processor 23 may adjust at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21 according to information such as the moving directions of the following object 31, the following object 32, the following object 34, and the following object 33 and the position of each following object in the image, so that these following objects remain in the shooting picture of the photographing device 22.
  • In other embodiments, the method further includes the steps shown in FIG. 17:
  • Step S1701: In the process of the drone following the at least one following object, detect a flight control operation of the user.
  • Step S1701 is consistent with the specific principle and implementation manner of step S901, and details are not described herein again.
  • Step S1702: Determine a flight control instruction according to the detected flight control operation.
  • Step S1702 is consistent with the specific principle and implementation manner of step S902, and details are not described herein again.
  • Step S1703: Send the flight control instruction to the drone, so that the drone adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • Step S1703 is consistent with the specific principles and implementation manners of step S903, and details are not described herein again.
  • In some other embodiments, the method further includes: detecting a composition selection operation of the user; and determining a composition instruction according to the detected composition selection operation. In this case, controlling the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device includes: controlling the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • For example, the control terminal 25 can control the drone 21 to follow at least two following objects so that the at least two following objects are located at a preset position in the shooting picture of the photographing device; the specific implementation process is consistent with the embodiment shown in FIG. 11 above and is not described here again.
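A minimal sketch of how a composition instruction might be turned into a setpoint: the composition rule selects a normalized target point in the shooting picture, which can replace the frame-center setpoint in a keep-in-frame loop such as the one sketched earlier. The rule names and coordinates are illustrative assumptions.

```python
# Hypothetical sketch: mapping a composition rule (e.g. "upper right corner")
# to a normalized target point in the shooting picture.
COMPOSITION_TARGETS = {
    "upper_left":  (0.25, 0.25),
    "upper_right": (0.75, 0.25),
    "lower_left":  (0.25, 0.75),
    "lower_right": (0.75, 0.75),
}

def composition_setpoint(rule):
    """Return the normalized (x, y) point where the followed objects'
    centroid should be placed for the given composition rule."""
    return COMPOSITION_TARGETS[rule]
```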
  • In this embodiment, the control terminal receives and displays the image captured by the photographing device of the drone, detects the user's selection operation on at least two following objects in the image, and determines a follow instruction according to the detected selection operation. The follow instruction can indicate at least two following objects, so that the drone follows the at least two following objects and keeps them in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
  • In addition, the control terminal detects the user's composition selection operation, determines a composition instruction according to the detected composition selection operation, and, when controlling the drone to follow the at least two following objects indicated by the follow instruction, causes the at least two following objects to be located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction, which improves the flexibility with which the drone follows at least two following objects.
  • FIG. 18 is a flowchart of a follow control method according to another embodiment of the present invention.
  • the following control method provided by this embodiment is applied to a drone. As shown in FIG. 18, the method in this embodiment may include:
  • Step S1801: Acquire an image captured by the photographing device mounted on the drone.
  • Step S1801 is consistent with the specific principle and implementation manner of step S1201, and details are not described herein again.
  • Step S1802: Identify the following objects in the image.
  • Step S1802 is consistent with the specific principle and implementation manner of step S1301, and details are not described herein again.
  • Step S1803: Send, to the control terminal of the drone, identification information of the identified at least one following object, where the identification information of the following object is used to indicate at least one following object recognized by the drone from the image.
  • Step S1803 is consistent with the specific principle and implementation manner of step S1302, and details are not described herein again.
  • the at least one following object includes at least two following objects.
  • Optionally, the identification information of the following object includes the position information, in the image, of the at least one following object recognized by the drone from the image.
  • For example, the identification information of the following object 31 transmitted by the drone 21 to the control terminal 25 is the position information of the following object 31 in the image, the identification information of the following object 32 is the position information of the following object 32 in the image, and the identification information of the following object 34 is the position information of the following object 34 in the image.
  • Optionally, the method further includes: identifying the category of the following object in the image; and transmitting, to the control terminal of the drone, the category information of the identified at least one following object.
  • the at least one following object includes at least two following objects.
  • Identifying the category of the following object in the image is consistent with the specific principle and implementation manner of step S1401, and transmitting the identified category information of the at least one following object to the control terminal of the drone is consistent with the specific principle and implementation manner of step S1402; details are not described herein again.
  • In some embodiments, the method further includes: receiving a follow instruction sent by the control terminal of the drone; and following the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device.
  • the at least one following object includes at least two following objects.
  • For example, after the control terminal 25 determines the follow instruction by the method described in the above embodiments, it sends the follow instruction to the drone 21, where the follow instruction is used to indicate the at least two following objects selected by the user. The drone 21 follows the at least two following objects indicated by the follow instruction, such as the following object 31 and the following object 32, and in the process of following them, the processor 23 can adjust at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21 according to information such as the moving directions of the following object 31 and the following object 32 and their positions in the image, so that the following object 31 and the following object 32 remain in the shooting picture of the photographing device 22.
  • In some embodiments, the method further includes: receiving, in the process of following the at least one following object, a flight control instruction sent by the control terminal of the drone; and adjusting, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • For example, in the process of following the following object 31, the following object 32, the following object 34, and the following object 33, the drone 21 receives the flight control instruction sent by the control terminal 25 and adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • In some other embodiments, the method further includes: receiving a composition instruction sent by the control terminal of the drone. Following the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device includes: following the at least one following object indicated by the follow instruction so that the at least one following object is located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • The drone 21 can also receive the composition instruction sent by the control terminal 25; the specific principles and implementations of the composition instruction are the same as in the foregoing embodiments and are not described here again.
  • Taking the user selecting the upper right corner as shown in FIG. 11 as an example: after the drone 21 receives the follow instruction sent by the control terminal 25, the drone 21 follows the at least two following objects indicated by the follow instruction, and in the process of following them, the processor 23 may adjust at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21 according to information such as the moving directions of the at least two following objects and their positions in the image, so that the at least two following objects are located in the upper right corner of the shooting picture of the photographing device.
  • In this embodiment, the drone acquires the image captured by the photographing device mounted on it, sends the image to the control terminal of the drone, and follows, according to the follow instruction sent by the control terminal, the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
  • In addition, the drone receives a composition instruction sent by the control terminal and, when following the at least two following objects indicated by the follow instruction, causes the at least two following objects to be located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction, which improves the flexibility with which the drone follows at least two following objects.
  • FIG. 19 is a structural diagram of a control terminal of a drone according to an embodiment of the present invention.
  • the control terminal 190 of the drone includes a communication interface 191 and a processor 192.
  • The communication interface 191 is configured to receive the image captured by the photographing device of the drone; the processor 192 is configured to: display the image captured by the photographing device of the drone; detect the user's selection operation on at least two following objects in the image; determine a follow instruction according to the detected selection operation; and control the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device.
  • When detecting the selection operation on the at least two following objects in the image, the processor 192 is specifically configured to: detect the user's voice selection operation on at least two following objects in the image; and when determining the follow instruction according to the detected selection operation, the processor 192 is specifically configured to: determine the follow instruction according to the detected voice selection operation.
  • Alternatively, when detecting the selection operation on the at least two following objects in the image, the processor 192 is specifically configured to: detect the user's selection operation on at least two following objects on the interactive interface displaying the image.
  • Optionally, the communication interface 191 is further configured to: receive the following object identification information sent by the drone, where the following object identification information is used to indicate at least one following object recognized by the drone from the image.
  • the processor 192 is further configured to: in the image, identify the at least one following object identified by the drone from the image according to the following object identification information.
  • Optionally, the following object identification information includes the position information, in the image, of the at least one following object recognized by the drone from the image.
  • When identifying, in the image according to the following object identification information, the at least one following object recognized by the drone from the image, the processor 192 is specifically configured to: display, in the image according to the following object identification information, an icon for identifying the at least one following object recognized by the drone from the image.
  • Optionally, the at least one following object recognized by the drone from the image includes a plurality of following objects recognized by the drone from the image; when detecting the user's selection operation on at least two following objects in the image, the processor 192 is specifically configured to: detect the user's selection operation on at least two following objects in the image that are recognized by the drone from the image.
  • Alternatively, the processor 192 is specifically configured to: detect the user's selection operation on at least two following objects in the image that are not recognized by the drone from the image.
  • Alternatively, the processor 192 is specifically configured to: detect the user's selection operation on at least one following object in the image that is recognized by the drone from the image and on at least one following object that is not recognized by the drone from the image.
  • The communication interface 191 is further configured to: receive the category information of the at least one following object sent by the drone; and the processor 192 is further configured to: identify, in the image according to the category information of the at least one following object, the category of the at least one following object.
  • The processor 192 is further configured to: detect a confirmation follow operation of the user; when controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device, the processor 192 is specifically configured to: after detecting the user's confirmation follow operation, control the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device.
  • The processor 192 is further configured to: display a confirmation follow icon; when detecting the user's confirmation follow operation, the processor 192 is specifically configured to: detect the user's operation on the confirmation follow icon.
  • The processor 192 is further configured to: detect a flight control operation of the user while the drone follows the at least two following objects, and determine a flight control instruction according to the detected flight control operation; the communication interface 191 is further configured to: send the flight control instruction to the drone, so that the drone adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • The processor 192 is further configured to: detect a composition selection operation of the user and determine a composition instruction according to the detected composition selection operation; when controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device, the processor 192 is specifically configured to: control the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • The specific principles and implementation manners of the control terminal provided by this embodiment of the present invention are similar to those of the embodiments shown in FIG. 1, FIG. 4, and FIG. 9, and details are not described herein again.
  • In this embodiment, the control terminal receives and displays the image captured by the photographing device of the drone, detects the user's selection operation on at least two following objects in the image, and determines a follow instruction according to the detected selection operation. The follow instruction can indicate at least two following objects, so that the drone follows the at least two following objects and keeps them in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
  • Embodiments of the present invention provide a drone.
  • FIG. 20 is a structural diagram of a drone according to an embodiment of the present invention.
  • As shown in FIG. 20, the drone 200 includes: a body, a power system, a photographing device 201, a processor 202, and a communication interface 203.
  • The photographing device 201 is mounted on the body through the gimbal 204 and is configured to capture images.
  • The power system includes at least one of a motor 207, a propeller 206, and an electronic speed controller 217, and is mounted on the body to provide flight power.
  • The processor 202 is configured to acquire the image captured by the photographing device mounted on the drone; the communication interface 203 is configured to: send the image to the control terminal of the drone, and receive the follow instruction sent by the control terminal of the drone, where the follow instruction is used to indicate at least two following objects in the image; the processor 202 is further configured to: follow the at least two following objects so that the at least two following objects are in the shooting picture of the photographing device.
  • the processor 202 is further configured to: identify the following object in the image; the communication interface 203 is further configured to: send, to the control terminal of the drone, at least one following object identified by the processor 202. Identification information, the identification information of the following object is used to indicate at least one following object recognized by the drone from the image.
  • Optionally, the identification information of the following object includes the position information, in the image, of the at least one following object recognized by the drone from the image.
  • The processor 202 is further configured to: identify the category of the following object in the image; and the communication interface 203 is further configured to: send, to the control terminal of the drone, the category information of the at least one following object recognized by the processor 202.
  • The communication interface 203 is further configured to: receive, in the process of following the at least two following objects, the flight control instruction sent by the control terminal of the drone; and the processor 202 is further configured to: adjust, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • The communication interface 203 is further configured to: receive the composition instruction sent by the control terminal of the drone; and when following the at least two following objects so that the at least two following objects are in the shooting picture of the photographing device, the processor 202 is specifically configured to: follow the at least two following objects so that the at least two following objects are located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • The processor 202 may specifically be a flight controller.
  • In this embodiment, the drone acquires the image captured by the photographing device mounted on it, sends the image to the control terminal of the drone, and follows, according to the follow instruction sent by the control terminal, the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
  • FIG. 21 is a structural diagram of a control terminal of a drone according to another embodiment of the present invention; as shown in FIG. 21, the control terminal 210 includes a communication interface 211 and a processor 212.
  • The communication interface 211 is configured to: receive the image captured by the photographing device of the drone; and receive the following object identification information sent by the drone, where the following object identification information is used to indicate at least one following object recognized by the drone from the image.
  • The processor 212 is configured to: display the image captured by the photographing device of the drone; and identify, in the image according to the following object identification information, the at least one following object recognized by the drone from the image.
  • Optionally, the following object identification information includes the position information, in the image, of the at least one following object recognized by the drone from the image.
  • When identifying, in the image according to the following object identification information, the at least one following object recognized by the drone from the image, the processor 212 is specifically configured to: display, in the image according to the following object identification information, an icon for identifying the at least one following object recognized by the drone from the image.
  • The communication interface 211 is further configured to: receive the category information of the at least one following object sent by the drone; and the processor 212 is further configured to: identify, in the image according to the category information of the at least one following object, the category of the at least one following object.
  • The processor 212 is further configured to: detect the user's selection operation on at least one following object in the image; determine a follow instruction according to the detected selection operation; and control the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device.
  • When detecting the selection operation on the at least one following object in the image, the processor 212 is specifically configured to: detect the user's voice selection operation on at least one following object in the image, in which case, when determining the follow instruction according to the detected selection operation, the processor 212 is specifically configured to determine the follow instruction according to the detected voice selection operation; or detect the user's selection operation on at least one following object on the interactive interface displaying the image; or detect the user's selection operation on at least one following object in the image that is recognized by the drone from the image; or detect the user's selection operation on at least one following object in the image that is not recognized by the drone from the image.
  • The processor 212 is further configured to: detect a confirmation follow operation of the user; when controlling the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device, the processor 212 is specifically configured to: after detecting the user's confirmation follow operation, control the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device.
  • The processor 212 is further configured to: display a confirmation follow icon; when detecting the user's confirmation follow operation, the processor 212 is specifically configured to: detect the user's operation on the confirmation follow icon.
  • The processor 212 is further configured to: detect a flight control operation of the user while the drone follows the at least one following object, and determine a flight control instruction according to the detected flight control operation; the communication interface 211 is further configured to: send the flight control instruction to the drone, so that the drone adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • The processor 212 is further configured to: detect a composition selection operation of the user and determine a composition instruction according to the detected composition selection operation; when controlling the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device, the processor 212 is specifically configured to: control the drone to follow the at least one following object indicated by the follow instruction so that the at least one following object is located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • the at least one following object includes at least two following objects.
  • The specific principles and implementation manners of the control terminal provided by this embodiment of the present invention are similar to those of the embodiments shown in FIG. 15, FIG. 16, and FIG. 17, and details are not described herein again.
  • In this embodiment, the control terminal receives the following object identification information sent by the drone and, according to that information, identifies in the image the at least one following object recognized by the drone from the image, so that the user can know which objects in the image can be followed by the drone, which improves the user experience.
  • Embodiments of the present invention provide a drone.
  • FIG. 20 is a structural diagram of a drone according to an embodiment of the present invention.
  • As shown in FIG. 20, the drone 200 includes: a body, a power system, a photographing device 201, a processor 202, and a communication interface 203.
  • The photographing device 201 is mounted on the body through the gimbal 204 and is configured to capture images.
  • The power system includes at least one of a motor 207, a propeller 206, and an electronic speed controller 217, and is mounted on the body to provide flight power.
  • The processor 202 is configured to: acquire the image captured by the photographing device mounted on the drone; and identify the following objects in the image. The communication interface 203 is configured to: send, to the control terminal of the drone, identification information of the identified at least one following object, where the identification information of the following object is used to indicate at least one following object recognized by the drone from the image.
  • Optionally, the identification information of the following object includes the position information, in the image, of the at least one following object recognized by the drone from the image.
  • The processor 202 is further configured to: identify the category of the following object in the image; and the communication interface 203 is further configured to: send, to the control terminal of the drone, the category information of the identified at least one following object.
  • The communication interface 203 is further configured to: receive a follow instruction sent by the control terminal of the drone; and the processor 202 is further configured to: follow the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device.
  • The communication interface 203 is further configured to: receive, in the process of following the at least one following object, the flight control instruction sent by the control terminal of the drone; and the processor 202 is further configured to: adjust, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
  • The communication interface 203 is further configured to: receive the composition instruction sent by the control terminal of the drone; and when following the at least one following object indicated by the follow instruction so that the at least one following object is in the shooting picture of the photographing device, the processor 202 is specifically configured to: follow the at least one following object indicated by the follow instruction so that the at least one following object is located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
  • the at least one following object includes at least two following objects.
  • The processor 202 may specifically be a flight controller.
  • In this embodiment, the drone acquires the image captured by the photographing device mounted on it, sends the image to the control terminal of the drone, and follows, according to the follow instruction sent by the control terminal, the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
  • In addition, the drone receives a composition instruction sent by the control terminal and, when following the at least two following objects indicated by the follow instruction, causes the at least two following objects to be located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction, which improves the flexibility with which the drone follows at least two following objects.
  • In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative.
  • For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • The above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform part of the steps of the methods of the various embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

A follow control method, a control terminal and a drone. The method includes: receiving and displaying an image captured by a photographing device of a drone (S101); detecting a user's selection operation on at least two following objects in the image (S102); determining a follow instruction according to the detected selection operation (S103); and controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device (S104). In this method, the control terminal receives and displays the image captured by the photographing device of the drone, detects the user's selection operation on at least two following objects in the image, and determines a follow instruction according to the detected selection operation. The follow instruction can indicate at least two following objects, so that the drone follows the at least two following objects and keeps them in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes.

Description

Follow control method, control terminal and drone
TECHNICAL FIELD
Embodiments of the present invention relate to the field of drones, and in particular to a follow control method, a control terminal and a drone.
BACKGROUND
In the prior art, a drone can intelligently follow a target object. However, in scenes such as multi-player ball games, group play, and group photos, multiple target objects cannot be followed, which limits the shooting scenes of the drone.
SUMMARY
Embodiments of the present invention provide a follow control method, a control terminal and a drone, so that the drone can be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
A first aspect of the embodiments of the present invention provides a follow control method applied to a control terminal of a drone, including:
receiving and displaying an image captured by a photographing device of the drone;
detecting a user's selection operation on at least two following objects in the image;
determining a follow instruction according to the detected selection operation; and
controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device.
A second aspect of the embodiments of the present invention provides a follow control method applied to a drone, including:
acquiring an image captured by a photographing device mounted on the drone;
sending the image to a control terminal of the drone;
receiving a follow instruction sent by the control terminal of the drone, the follow instruction being used to indicate at least two following objects in the image; and
following the at least two following objects so that the at least two following objects are in the shooting picture of the photographing device.
A third aspect of the embodiments of the present invention provides a follow control method applied to a control terminal of a drone, including:
receiving and displaying an image captured by a photographing device of the drone;
receiving following object identification information sent by the drone, the following object identification information being used to indicate at least one following object recognized by the drone from the image; and
identifying, in the image according to the following object identification information, the at least one following object recognized by the drone from the image.
A fourth aspect of the embodiments of the present invention provides a follow control method applied to a drone, including:
acquiring an image captured by a photographing device mounted on the drone;
recognizing following objects in the image; and
sending, to a control terminal of the drone, identification information of the recognized at least one following object, the identification information of the following object being used to indicate at least one following object recognized by the drone from the image.
A fifth aspect of the embodiments of the present invention provides a control terminal of a drone, including a communication interface and a processor;
the communication interface is configured to receive an image captured by a photographing device of the drone;
the processor is configured to:
display the image captured by the photographing device of the drone;
detect a user's selection operation on at least two following objects in the image;
determine a follow instruction according to the detected selection operation; and
control the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device.
A sixth aspect of the embodiments of the present invention provides a drone, including:
a body;
a power system mounted on the body and configured to provide flight power;
a photographing device mounted on the body and configured to capture images; and
a processor and a communication interface;
the processor is configured to acquire an image captured by the photographing device mounted on the drone;
the communication interface is configured to send the image to a control terminal of the drone, and to receive a follow instruction sent by the control terminal of the drone, the follow instruction being used to indicate at least two following objects in the image;
the processor is further configured to follow the at least two following objects so that the at least two following objects are in the shooting picture of the photographing device.
A seventh aspect of the embodiments of the present invention provides a control terminal of a drone, including a communication interface and a processor;
the communication interface is configured to:
receive an image captured by a photographing device of the drone; and
receive following object identification information sent by the drone, the following object identification information being used to indicate at least one following object recognized by the drone from the image;
the processor is configured to:
display the image captured by the photographing device of the drone; and
identify, in the image according to the following object identification information, the at least one following object recognized by the drone from the image.
An eighth aspect of the embodiments of the present invention provides a drone, including:
a body;
a power system mounted on the body and configured to provide flight power;
a photographing device mounted on the body and configured to capture images; and
a processor and a communication interface;
the processor is configured to:
acquire an image captured by the photographing device mounted on the drone; and
recognize following objects in the image;
the communication interface is configured to:
send, to a control terminal of the drone, identification information of the recognized at least one following object, the identification information of the following object being used to indicate at least one following object recognized by the drone from the image.
According to the follow control method, control terminal and drone provided by these embodiments, the control terminal receives and displays the image captured by the photographing device of the drone, detects the user's selection operation on at least two following objects in the image, and determines a follow instruction according to the detected selection operation. The follow instruction can indicate at least two following objects, so that the drone follows the at least two following objects and keeps them in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings needed for describing the embodiments. Apparently, the accompanying drawings described below are some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a flowchart of a follow control method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a communication system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an interactive interface according to an embodiment of the present invention;
FIG. 4 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of an interactive interface according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of an interactive interface according to another embodiment of the present invention;
FIG. 7 is a schematic diagram of an interactive interface according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of an interactive interface according to another embodiment of the present invention;
FIG. 9 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 10 is a schematic diagram of an interactive interface according to another embodiment of the present invention;
FIG. 11 is a schematic diagram of an interactive interface according to another embodiment of the present invention;
FIG. 12 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 13 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 14 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 15 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 16 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 17 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 18 is a flowchart of a follow control method according to another embodiment of the present invention;
FIG. 19 is a structural diagram of a control terminal of a drone according to an embodiment of the present invention;
FIG. 20 is a structural diagram of a drone according to an embodiment of the present invention;
FIG. 21 is a structural diagram of a control terminal of a drone according to another embodiment of the present invention.
Reference numerals:
21 - drone          22 - photographing device    23 - processor
24 - communication interface    25 - control terminal    26 - gimbal
30 - interactive interface      31 - following object    32 - following object
33 - following object           34 - following object    35 - circular icon
36 - circular icon              37 - following object    38 - icon
39 - icon                       40 - confirmation follow icon    41 - icon
110 - composition selection icon    111 - list    190 - control terminal
191 - communication interface   192 - processor    200 - drone
201 - photographing device      202 - processor    203 - communication interface
204 - gimbal                    206 - propeller    207 - motor
217 - electronic speed controller    210 - control terminal    211 - communication interface
212 - processor
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that when a component is referred to as being "fixed to" another component, it may be directly on the other component, or an intermediate component may exist. When a component is regarded as being "connected to" another component, it may be directly connected to the other component, or an intermediate component may exist at the same time.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as those commonly understood by a person skilled in the technical field of the present invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and the features in the embodiments may be combined with each other in the case of no conflict.
An embodiment of the present invention provides a follow control method. FIG. 1 is a flowchart of a follow control method according to an embodiment of the present invention. The follow control method provided by this embodiment is applied to a control terminal of a drone. As shown in FIG. 1, the method in this embodiment may include:
Step S101: Receive and display an image captured by a photographing device of the drone.
As shown in FIG. 2, the drone 21 carries a photographing device 22, and the processor 23 in the drone 21 acquires the image captured by the photographing device 22. The processor 23 may be the flight controller of the drone 21 or another general-purpose or dedicated processor. After acquiring the image captured by the photographing device 22, the processor 23 sends the image to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21. The drone 21 and the control terminal 25 may communicate by wire or wirelessly; this embodiment uses wireless communication for schematic description. The photographing device 22 is mounted on the drone 21 through the gimbal 26.
The control terminal 25 may be a remote controller for controlling the drone 21, or may be a smartphone, a tablet computer, a ground control station, a laptop computer, or a combination thereof. Optionally, the control terminal 25 is provided with a display device such as a display screen, or the control terminal 25 may be connected to an external display device. After receiving the image sent by the drone 21, the control terminal 25 displays the image on the display device. Specifically, the control terminal 25 may display the image on an interactive interface, and the interactive interface is displayed on the display screen of the control terminal 25; optionally, the display screen is a touch screen. As shown in FIG. 3, 30 denotes the interactive interface displayed under the control of the control terminal 25, and the image displayed on the interactive interface 30 includes multiple following objects, for example, a following object 31, a following object 32, a following object 33, and a following object 34. This is only a schematic description and does not limit the number and categories of the following objects.
Step S102: Detect the user's selection operation on at least two following objects in the image.
The user can select at least two following objects from the multiple following objects displayed on the interactive interface 30. The user's selection operation on the at least two following objects may be implemented through a voice selection operation, or through a selection operation on the at least two following objects on the interactive interface 30. The control terminal 25 can detect the user's selection operation on the at least two following objects.
Specifically, detecting the user's selection operation on at least two following objects in the image includes the following feasible implementations:
One feasible implementation is: detecting the user's voice selection operation on at least two following objects in the image.
For example, the following object 31 displayed on the interactive interface 30 wears red clothes and the following object 32 wears yellow clothes, and the user can issue a voice selection operation of "follow the objects in red clothes and yellow clothes" to the control terminal 25. The control terminal 25 detects the user's voice selection operation on the following object 31 and the following object 32.
Alternatively, as shown in FIG. 3, the following object 32 is located in the lower left corner of the interactive interface 30 and the following object 34 is located in the lower right corner of the interactive interface 30, and the user can issue a voice selection operation of "follow the objects in the lower left corner and the lower right corner" to the control terminal 25. The control terminal 25 detects the user's voice selection operation on the following object 32 and the following object 34.
Another feasible implementation is: detecting the user's selection operation on at least two following objects on the interactive interface displaying the image.
As shown in FIG. 3, the user can perform a selection operation on at least two of the multiple following objects displayed on the interactive interface 30, for example the following object 31 and the following object 32. The control terminal 25 detects the user's selection operation on the at least two following objects, for example the following object 31 and the following object 32, on the interactive interface 30, and the selection operation includes but is not limited to clicking, double-clicking, frame-selecting, and long-pressing.
Step S103: Determine a follow instruction according to the detected selection operation.
Determining the follow instruction according to the detected selection operation includes: determining the follow instruction according to the detected voice selection operation.
For example, when the user issues a voice selection operation of "follow the objects in red clothes and yellow clothes" to the control terminal 25, the control terminal 25 can determine a follow instruction according to the voice selection operation, the follow instruction including the feature information of the following objects selected by the user, such as red clothes and yellow clothes. The control terminal 25 sends the follow instruction to the drone 21, and the processor 23 of the drone 21 recognizes, from the image captured by the photographing device 22 and according to the feature information included in the follow instruction, the following object 31 wearing red clothes and the following object 32 wearing yellow clothes, and follows the following object 31 and the following object 32.
For another example, the user issues a voice selection operation of "follow the objects in the lower left corner and the lower right corner" to the control terminal 25, and the control terminal 25 can determine a follow instruction according to the voice selection operation, the follow instruction including the feature information of the following objects selected by the user, such as the lower left corner and the lower right corner. The control terminal 25 sends the follow instruction to the drone 21, and the processor 23 of the drone 21 recognizes, from the image captured by the photographing device 22 and according to the feature information included in the follow instruction, the following object 32 in the lower left corner and the following object 34 in the lower right corner, and follows the following object 32 and the following object 34.
In other embodiments, when the control terminal 25 detects the voice selection operation of "follow the objects in red clothes and yellow clothes" or the voice selection operation of "follow the objects in the lower left corner and the lower right corner" issued by the user, the control terminal 25 may identify, according to the voice selection operation, the feature information of the at least two following objects selected by the user, such as red clothes and yellow clothes, determine the position information of the at least two following objects in the image according to the feature information of the at least two following objects, and generate a follow instruction according to the position information of the at least two following objects in the image; for example, the follow instruction includes the position information of the at least two following objects in the image.
In some other embodiments, if the control terminal 25 detects the user's selection operation, such as a click, double-click, frame-select, or long-press on the touch screen, on at least two following objects on the interactive interface 30, for example the following object 31 and the following object 32, the control terminal 25 may generate a follow instruction according to the user's selection operation, the follow instruction including the position information, in the image, of the at least two following objects selected by the user.
Step S104: Control the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device.
After the control terminal 25 determines the follow instruction in any of the above manners, it can control the drone 21 to follow the at least two following objects indicated by the follow instruction, for example the following object 31 and the following object 32. In the process of the drone 21 following the following object 31 and the following object 32, the processor 23 can adjust at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21 according to information such as the moving directions of the following object 31 and the following object 32 and their positions in the image, so that the following object 31 and the following object 32 always remain in the shooting picture of the photographing device 22.
In addition, this embodiment does not limit the manner in which the drone 21 follows the at least two following objects. Optionally, the drone 21 may follow behind the at least two following objects, or may follow in parallel at the side of the at least two following objects; alternatively, the position of the drone 21 may remain unchanged, and the drone 21 may adjust the body attitude and/or the gimbal attitude so that the at least two following objects are in the shooting picture of the photographing device.
In this embodiment, the control terminal receives and displays the image captured by the photographing device of the drone, detects the user's selection operation on at least two following objects in the image, and determines a follow instruction according to the detected selection operation. The follow instruction can indicate at least two following objects, so that the drone follows the at least two following objects and keeps them in the shooting picture of the photographing device, which allows the drone to be applied to more shooting scenes such as multi-player ball games, group play, and group photos.
An embodiment of the present invention provides a follow control method. FIG. 4 is a flowchart of a follow control method according to another embodiment of the present invention. As shown in FIG. 4, on the basis of the embodiment shown in FIG. 1, the method in this embodiment may further include:
Step S401: Receive following object identification information sent by the drone, where the following object identification information is used to indicate at least one following object recognized by the drone from the image.
In this embodiment, after the processor 23 in the drone 21 acquires the image captured by the photographing device 22, the processor 23 may further recognize following objects in the image. For example, the processor 23 may use a neural network model to recognize at least one of the contour, size, and category of an object in the image and the distance between the object and the drone. The processor 23 may determine whether an object can serve as a following object according to the clarity of the object's contour, according to the size of the object, or according to the category of the object; for example, the categories of objects recognized by the neural network model include person, animal, vehicle, and so on, and the processor 23 may take the persons recognized by the neural network model as following objects. In addition, the processor 23 may also take objects whose distance to the drone is within a preset distance range as following objects. This is only a schematic description and does not limit the specific method by which the processor 23 recognizes following objects.
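As a hedged illustration of this recognition step, the sketch below filters detected objects by the cues the text mentions (contour clarity, size, category, distance to the drone); all thresholds are invented for illustration and are not values from the disclosure.

```python
# Hypothetical sketch: deciding whether a detected object qualifies as a
# following object from contour clarity, size, category, and distance.
FOLLOWABLE_CATEGORIES = {"person", "animal", "vehicle"}

def is_followable(contour_clarity, area_ratio, category, distance_m):
    """contour_clarity and area_ratio are assumed to be normalized to [0, 1];
    distance_m is the estimated object-to-drone distance in meters.
    All thresholds below are illustrative assumptions."""
    return (contour_clarity >= 0.5
            and 0.01 <= area_ratio <= 0.5      # neither a speck nor the whole frame
            and category in FOLLOWABLE_CATEGORIES
            and 2.0 <= distance_m <= 50.0)     # assumed preset distance range
```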
Specifically, after the processor 23 recognizes the following objects, it may send the following object identification information to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21, where the following object identification information is used to indicate at least one following object recognized by the drone 21 from the image. The at least one following object recognized by the drone 21 from the image may include multiple following objects recognized by the drone 21 from the image. As shown in FIG. 3, the following object 31, the following object 32, and the following object 34 are following objects recognized by the drone 21 from the image, and the drone 21 sends the identification information of the following object 31, the identification information of the following object 32, and the identification information of the following object 34 to the control terminal 25.
Step S402: Identify, in the image according to the following object identification information, the at least one following object recognized by the drone from the image.
After the control terminal 25 receives the identification information of the following object 31, the identification information of the following object 32, and the identification information of the following object 34 sent by the drone 21, it identifies the following object 31, the following object 32, and the following object 34 respectively in the interactive interface 30. In this way, by observing the image, the user can know at least which objects in the image the drone can follow.
Specifically, the following object identification information includes the position information, in the image, of the at least one following object recognized by the drone from the image.
For example, the identification information of the following object 31 sent by the drone 21 to the control terminal 25 is the position information of the following object 31 in the image, the identification information of the following object 32 is the position information of the following object 32 in the image, and the identification information of the following object 34 is the position information of the following object 34 in the image.
Identifying, in the image according to the following object identification information, the at least one following object recognized by the drone from the image includes: displaying, in the image according to the following object identification information, an icon for identifying the at least one following object recognized by the drone from the image.
As shown in FIG. 3, the control terminal 25 can display, according to the position information of the following object 31 in the image, an icon for identifying the following object 31 in the image, for example the circular icon 35 on the following object 31. Similarly, the control terminal 25 can display, according to the position information of the following object 32 in the image, an icon for identifying the following object 32, for example the circular icon 35 on the following object 32, and can display, according to the position information of the following object 34 in the image, an icon for identifying the following object 34, for example the circular icon 35 on the following object 34. This is only a schematic description; in other embodiments, icons other than the circular icon may also be used to identify the following objects recognized by the drone.
Specifically, detecting the user's selection operation on at least two following objects in the image includes the following feasible implementations:
One feasible implementation is: the at least one following object recognized by the drone from the image includes multiple following objects recognized by the drone from the image, and detecting the user's selection operation on at least two following objects in the image includes: detecting the user's selection operation on at least two following objects in the image that are recognized by the drone from the image.
As shown in FIG. 3, the following object 31, the following object 32, and the following object 34 are following objects recognized by the drone 21 from the image. The user can select at least two of the following object 31, the following object 32, and the following object 34 on the interactive interface 30; for example, the user clicks, double-clicks, frame-selects, or long-presses at least two of them, or the user may also click, double-click, frame-select, or long-press the circular icons 35 in at least two of the following object 31, the following object 32, and the following object 34. The control terminal 25 can detect the user's selection operation on at least two following objects in the image that are recognized by the drone from the image.
For example, the user clicks the following object 31, the following object 32, and the following object 34. When the control terminal 25 detects the user's clicks on the following object 31, the following object 32, and the following object 34, it updates the shape and/or color of the circular icons 35 on the interactive interface 30. This embodiment takes a color update as an example for schematic description: as shown in FIG. 5, after the user clicks the following object 31, the following object 32, and the following object 34, the circular icon 35 is updated to the circular icon 36, and the color of the circular icon 35 differs from that of the circular icon 36; that is, the circular icon 35 indicates that a following object is in the candidate state, and the circular icon 36 indicates that a following object is in the selected state.
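A minimal sketch of the candidate/selected icon state change described above; the state names and colors are illustrative assumptions.

```python
# Hypothetical sketch: toggling an identification icon between the
# "candidate" and "selected" states when the user clicks a following object.
ICON_STYLE = {
    "candidate": {"color": (255, 255, 255)},  # e.g. the circular icon 35
    "selected":  {"color": (0, 200, 0)},      # e.g. the circular icon 36
}

def on_object_clicked(ui_state, object_id):
    """Flip the clicked object's icon state and return the style to redraw."""
    state = ui_state.setdefault(object_id, "candidate")
    ui_state[object_id] = "selected" if state == "candidate" else "candidate"
    return ICON_STYLE[ui_state[object_id]]
```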
Another feasible implementation is: detecting the user's selection operation on at least two following objects in the image includes: detecting the user's selection operation on at least two following objects in the image that are not recognized by the drone from the image.
As shown in FIG. 6, the following object 31, the following object 32, and the following object 34 are following objects recognized by the drone 21 from the image, while the following object 33 and the following object 37 are following objects not recognized by the drone 21 from the image. The user can also select at least two following objects in the image that are not recognized by the drone 21, and the selection may be a click, double-click, frame-select, long-press, and so on. The control terminal 25 can also detect the user's selection operation on at least two following objects in the image that are not recognized by the drone from the image.
For example, the user frame-selects the following object 33 and the following object 37. When the control terminal 25 detects the user's frame selection of the following object 33 and the following object 37, it displays, on the interactive interface 30, an icon for frame-selecting the following object 33 and an icon for frame-selecting the following object 37; as shown in FIG. 7, 38 denotes the icon for frame-selecting the following object 33, and 39 denotes the icon for frame-selecting the following object 37.
Yet another feasible implementation is: detecting the user's selection operation on at least two following objects in the image includes: detecting the user's selection operation on at least one following object in the image that is recognized by the drone from the image, and on at least one following object that is not recognized by the drone from the image.
As shown in FIG. 8, the user may also select at least one following object recognized by the drone 21 in the image, for example the following object 31, the following object 32, or the following object 34, and select at least one following object not recognized by the drone 21 in the image, for example the following object 33. The control terminal 25 can detect the user's selection operation on the following object 31, the following object 32, and the following object 34, as well as the user's selection operation on the following object 33.
In other embodiments, the method further includes: receiving category information of at least one following object sent by the drone; and identifying, in the image according to the category information of the at least one following object, the category of the at least one following object.
For example, the processor 23 in the drone 21 may use a neural network model to recognize the category of each following object in the image, where the category of a following object may be, for example, a person, an animal, or a vehicle. The processor 23 may further send the recognized category information of the following objects to the control terminal 25 of the drone 21 through the communication interface 24 of the drone 21, and the control terminal 25 may also identify, in the interactive interface 30, the category of the at least one following object recognized by the drone 21, for example by displaying the category information around the following object 31, the following object 32, and the following object 34 respectively.
In this embodiment, the control terminal receives the following object identification information sent by the drone and, according to that information, identifies in the image the at least one following object recognized by the drone from the image, so that the user can know at least which objects in the image the drone can follow, which improves the user experience.
An embodiment of the present invention provides a follow control method. FIG. 9 is a flowchart of a follow control method according to another embodiment of the present invention. On the basis of the above embodiments, the follow control method may further include: detecting a confirmation follow operation of the user; and controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device includes: after detecting the user's confirmation follow operation, controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device.
Optionally, the method further includes: displaying a confirmation follow icon; and detecting the user's confirmation follow operation includes: detecting the user's operation on the confirmation follow icon.
As shown in FIG. 8, after the user finishes selecting the following object 31, the following object 32, the following object 34, and the following object 33, the control terminal 25 further displays a confirmation follow icon 40 on the interactive interface 30. When the user clicks the confirmation follow icon 40, it indicates that the drone is controlled to start following the following object 31, the following object 32, the following object 34, and the following object 33. Specifically, after the control terminal 25 detects the user's operation on the confirmation follow icon 40, for example a click, it controls the drone 21 to follow the following object 31, the following object 32, the following object 34, and the following object 33. In the following process, the processor 23 can adjust at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21 according to information such as the moving directions of the following object 31, the following object 32, the following object 34, and the following object 33 and the position of each following object in the image, so that these following objects always remain in the shooting picture of the photographing device 22.
As shown in FIG. 10, after the user clicks the confirmation follow icon 40, the control terminal 25 may also update the circular icon 36 and the icon 38 in the interactive interface 30, for example to the icon 41 shown in FIG. 10, where the icon 41 indicates that a following object is in the followed state. This embodiment does not limit the color and shape of the icon 41.
In addition, the control terminal 25 can also detect an end-follow operation of the user and generate an end-follow instruction according to the end-follow operation, thereby controlling the drone 21 to stop following the at least two following objects selected by the user, for example the following object 31, the following object 32, the following object 34, and the following object 33 shown in FIG. 10. After the drone 21 ends the following, the drone 21 re-recognizes the following objects in the shooting picture of the photographing device 22 and sends the identification information of the following objects it recognizes to the control terminal 25, and the control terminal 25 identifies, on the interactive interface, the following objects newly recognized by the drone 21, similarly to the identification method shown in FIG. 3, which is not repeated here. This embodiment does not limit the user's end-follow operation; specifically, the user may click an exit button on the interactive interface 30, or may control the control terminal 25 by voice, for example by saying "end following" to the control terminal 25.
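A minimal sketch of how the control terminal might handle this end-follow operation; the instruction format and the transport object are hypothetical.

```python
# Hypothetical sketch: handling the end-follow operation on the control
# terminal. `link` stands in for whatever channel carries instructions to
# the drone; its send() method is an assumption for illustration.
def end_follow(link, ui_state):
    """Send an end-follow instruction and reset the selection state so the
    terminal can re-identify the objects reported by the drone afterwards."""
    link.send({"type": "end_follow_instruction"})
    ui_state.clear()  # icons return to the candidate state on the next update
```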
In other embodiments, the method further includes the steps shown in FIG. 9:
Step S901: In the process of the drone following the at least two following objects, detect a flight control operation of the user.
As shown in FIG. 8, while the drone 21 follows the following object 31, the following object 32, the following object 34, and the following object 33, the user can also operate the control buttons on the interactive interface 30; for example, the interactive interface 30 may also display buttons for controlling the position of the drone 21, the attitude of the drone 21, or the attitude of the gimbal.
Step S902: Determine a flight control instruction according to the detected flight control operation.
When the control terminal 25 detects the user's operation on a button for controlling the position of the drone 21, the attitude of the drone 21, or the attitude of the gimbal, it generates a flight control instruction.
Step S903: Send the flight control instruction to the drone, so that the drone adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
After generating the flight control instruction, the control terminal 25 sends it to the drone 21, so that the drone 21 adjusts, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone.
For example, when the drone follows the at least two following objects indicated by the follow instruction, the drone can adjust at least one of the moving direction of the drone, the attitude of the drone, and the attitude of the gimbal of the drone according to information such as the moving directions and positions of the at least two following objects, so that the at least two following objects always remain in the shooting picture of the photographing device. In the process of following the at least two following objects, the drone can also receive the flight control instruction sent by the control terminal and adjust, according to the flight control instruction, one or more of the position of the drone, the attitude of the drone, and the attitude of the gimbal of the drone. For example, before the drone receives the flight control instruction sent by the control terminal, the drone follows the at least two following objects from behind them, and the flight control instruction is used to control the drone to follow the at least two following objects at their side; therefore, after receiving the flight control instruction sent by the control terminal, the drone changes its following manner, that is, it follows the at least two following objects at their side.
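A minimal sketch of switching between following from behind and parallel following at the side, expressed as an offset relative to the followed objects' direction of motion; the mode names and geometry are illustrative assumptions, not the patent's method.

```python
# Hypothetical sketch: interpreting a flight control instruction that changes
# the follow mode while following continues.
import math

def follow_offset(mode, target_heading_rad, standoff_m=8.0):
    """Return the drone's desired horizontal offset from the followed group.

    "behind" keeps the drone behind the objects' direction of motion;
    "side" keeps it abeam of them for a parallel follow.
    """
    if mode == "behind":
        angle = target_heading_rad + math.pi        # directly behind
    elif mode == "side":
        angle = target_heading_rad + math.pi / 2.0  # off to one side
    else:
        raise ValueError(f"unknown follow mode: {mode}")
    return standoff_m * math.cos(angle), standoff_m * math.sin(angle)
```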
In some other embodiments, the method further includes: detecting a composition selection operation of the user; and determining a composition instruction according to the detected composition selection operation. Controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are in the shooting picture of the photographing device includes: controlling the drone to follow the at least two following objects indicated by the follow instruction so that the at least two following objects are located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction.
In this embodiment, the control terminal 25 can also control the drone 21 to follow at least two following objects so that the at least two following objects are located at a preset position in the shooting picture of the photographing device. As shown in FIG. 11, the interactive interface 30 may also display a composition selection icon 110. When the user clicks the composition selection icon 110, the interactive interface 30 displays a list 111, and the list 111 includes four options, for example upper left corner, lower left corner, upper right corner, and lower right corner, which respectively indicate the position, in the shooting picture, at which the at least two following objects followed by the drone are to be kept. When the user operates any one of the four options, the control terminal 25 generates a corresponding composition instruction. For example, when the user selects the upper right corner, the composition rule indicated by the composition instruction generated by the control terminal 25 causes the at least two following objects followed by the drone to be located in the upper right corner of the shooting picture of the photographing device. Similarly, when the user selects the upper left corner, the composition rule indicated by the composition instruction generated by the control terminal 25 causes the at least two following objects followed by the drone to be located in the upper left corner of the shooting picture of the photographing device, and so on.
Taking the user selecting the upper right corner as an example: after the control terminal 25 determines the follow instruction, it can control the drone 21 to follow the at least two following objects indicated by the follow instruction, for example the following object 31 and the following object 32. In the process of the drone 21 following the following object 31 and the following object 32, the processor 23 can adjust at least one of the moving direction of the drone 21, the attitude of the drone 21, and the attitude of the gimbal of the drone 21 according to information such as the moving directions of the following object 31 and the following object 32 and their positions in the image, so that the following object 31 and the following object 32 are located in the upper right corner of the shooting picture of the photographing device 22.
In this embodiment, the control terminal detects the user's composition selection operation, determines a composition instruction according to the detected composition selection operation, and, when controlling the drone to follow the at least two following objects indicated by the follow instruction, causes the at least two following objects to be located in the shooting picture of the photographing device according to the composition rule indicated by the composition instruction, which improves the flexibility with which the drone follows at least two following objects.
本发明实施例提供一种跟随控制方法。图12为本发明另一实施例提供的跟随控制方法的流程图。本实施例提供的跟随控制方法应用于无人机。如图12所示,本实施例中的方法,可以包括:
步骤S1201、获取所述无人机搭载的拍摄装置拍摄的图像。
如图2所示,无人机21搭载有拍摄装置22,无人机21内的处理器23获取拍摄装置22拍摄获取的图像,处理器23可以是无人机21的飞行控制器,也可以是其他通用或者专用的处理器。处理器23获取到拍摄装置22拍摄的图像。
步骤S1202、将所述图像发送给所述无人机的控制终端。
处理器23获取到拍摄装置22拍摄的图像后,通过无人机21的通讯接口24将该图像发送给无人机21的控制终端25,无人机21和控制终端25可以有线通信,也可以无线通信,本实施例以无线通信进行示意性说明。
控制终端25接收并显示该图像,并检测用户对该图像中至少两个跟随对象的选择操作,进一步根据检测到的所述选择操作确定跟随指令,具体原理和实现方式均与上述实施例一致,此处不再赘述。
步骤S1203、接收所述无人机的控制终端发送的所述跟随指令,所述跟随指令用于指示所述图像中的至少两个跟随对象。
当控制终端25通过上述实施例所述的方法确定出跟随指令后,将该跟随指令发送给无人机21,该跟随指令用于指示用户选择的至少两个跟随对象。
步骤S1204、对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
无人机21对该跟随指令指示的所述至少两个跟随对象例如跟随对象31和跟随对象32进行跟随,无人机21在对跟随对象31和跟随对象32进行跟随的过程中,处理器23可根据跟随对象31和跟随对象32的运动方向、以及跟随对象31和跟随对象32在图像中所处的位置等信息调整无人机21的运动方向、无人机21的姿态、无人机21的云台的姿态中的至少一个,以使跟随对象31和跟随对象32在拍摄装置22的拍摄画面中。
本实施例通过无人机获取其搭载的拍摄装置拍摄的图像,将该图像发送给所述无人机的控制终端,并根据该控制终端发送的跟随指令对该跟随指令指示的至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,使得无人机可应用于更多的拍摄场景例如多人球赛、多人游玩、合照集体照等。
本发明实施例提供一种跟随控制方法。图13为本发明另一实施例提供的跟随控制方法的流程图。本实施例提供的跟随控制方法应用于无人机。如图13所示,在图12所示实施例的基础上,本实施例中的方法,还可以包括:
步骤S1301、对所述图像中的跟随对象进行识别。
无人机21内的处理器23获取拍摄装置22拍摄获取的图像之后,处理器23还可以对该图像中的跟随对象进行识别,例如,处理器23可采用神经网络模型对该图像中物体的轮廓、大小、类别、以及物体与无人机之间的距离等信息中的至少一个进行识别。处理器23可根据物体轮廓的清晰度确定该物体是否可以作为跟随对象;也可以根据该物体的大小确定该物体是否可以作为跟随对象;还可以根据该物体的类别确定该物体是否可以作为跟随对象,例如,神经网络模型识别出的物体的类别包括人、动物、车辆等,处理器23可以将神经网络模型识别出的人作为跟随对象。另外,处理器23还可以将与无人机之间的距离在预设距离范围内的物体作为跟随对象。此处只是示意性说明,并不限定处理器23识别出跟随对象的具体方法。
步骤S1302、向所述无人机的控制终端发送识别出的至少一个跟随对象的标识信息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
具体的,当处理器23识别出跟随对象后,可通过无人机21的通讯接口24将跟随对象标识信息发送给无人机21的控制终端25,该跟随对象标识信息用于指示至少一个由无人机21从所述图像中识别出的跟随对象。至少一个由无人机21从所述图像中识别出的跟随对象包括多个由无人机21从所述图像中识别出的跟随对象。如图3所示,跟随对象31、跟随对象32、跟随对象34是无人机21从所述图像中识别出的跟随对象,无人机21向控制终端25发送跟随对象31的标识信息、跟随对象32的标识信息和跟随对象34的标识信息。
具体的,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
例如,无人机21向控制终端25发送的跟随对象31的标识信息是跟随对象31在所述图像中的位置信息,跟随对象32的标识信息是跟随对象32在所述图像中的位置信息,跟随对象34的标识信息是跟随对象34在所述图像中的位置信息。
另外,在图12或图13所示实施例的基础上,本实施例中的方法,还可以包括如图14所示的步骤:
步骤S1401、对所述图像中的跟随对象的类别进行识别。
例如,无人机21内的处理器23可采用神经网络模型对该图像中的跟随对象的类别进行识别,例如跟随对象的类别可以是人、动物、或车辆等。
步骤S1402、向所述无人机的控制终端发送识别出的至少一个跟随对象的类别信息。
处理器23进一步还可以通过无人机21的通讯接口24将其识别出的跟随对象的类别信息发送给无人机21的控制终端25,控制终端25还可以在交互界面30中对无人机21识别出的至少一个跟随对象的类别进行标识,例如,在跟随对象31、跟随对象32、跟随对象34的周围分别显示类别信息。
在其他实施例中,所述跟随控制方法还可以包括:在对所述至少两个跟随对象进行跟随的过程中,接收所述无人机的控制终端发送的飞行控制指令;根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
如图8所示,当无人机21对跟随对象31、跟随对象32、跟随对象34、跟随对象33进行跟随的过程中,用户还可以对交互界面30中的控制按键进行操作,例如,该交互界面30还可显示用于控制无人机21的位置、无人机21的姿态、或者云台的姿态的按键。当控制终端25检测到用户对按键的操作时生成飞行控制指令,并将该飞行控制指令发送给无人机21。
无人机21在对跟随对象31、跟随对象32、跟随对象34、跟随对象33进行跟随的过程中,接收控制终端25发送的飞行控制指令,并根据该飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
在另外一些实施例中,所述跟随控制方法还可以包括:接收所述无人 机的控制终端发送的构图指令;所述对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,包括:对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
无人机21还可以接收控制终端25发送的构图指令,构图指令的具体原理和实现方式均与上述实施例一致,此处不再赘述。
以用户选择如图11所示的右上角为例进行示意性说明,当无人机21接收到控制终端25发送的跟随指令后,无人机21对该跟随指令指示的所述至少两个跟随对象进行跟随,无人机21在对所述至少两个跟随对象进行跟随的过程中,处理器23可根据所述至少两个跟随对象的运动方向、以及所述至少两个跟随对象在图像中所处的位置等信息调整无人机21的运动方向、无人机21的姿态、无人机21的云台的姿态中的至少一个,以使所述至少两个跟随对象位于拍摄装置的拍摄画面的右上角。
本实施例通过无人机接收控制终端发送的构图指令,在对跟随指令指示的所述至少两个跟随对象进行跟随时,使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中,提高了无人机对至少两个跟随对象进行跟随的灵活性。
本发明实施例提供一种跟随控制方法。图15为本发明另一实施例提供的跟随控制方法的流程图。本实施例提供的跟随控制方法应用于无人机的控制终端。如图15所示,本实施例中的方法,可以包括:
步骤S1501、接收并显示无人机的拍摄装置拍摄获取的图像。
步骤S1501与步骤S101的具体原理和实现方式均一致,此处不再赘述。
步骤S1502、接收无人机发送的跟随对象标识信息,所述跟随对象标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
步骤S1502与步骤S401的具体原理和实现方式均一致,此处不再赘述。
步骤S1503、根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识。
步骤S1503与步骤S402的具体原理和实现方式均一致,此处不再赘述。
具体的,所述跟随对象标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
例如,无人机21向控制终端25发送的跟随对象31的标识信息是跟随对象31在所述图像中的位置信息,跟随对象32的标识信息是跟随对象32在所述图像中的位置信息,跟随对象34的标识信息是跟随对象34在所述图像中的位置信息。
所述根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识,包括:根据所述跟随对象标识信息,在所述图像中显示用于标识所述至少一个由所述无人机从所述图像中识别出的跟随对象的图标。
如图3所示,控制终端25可根据跟随对象31在所述图像中的位置信息,在所述图像中显示用于标识跟随对象31的图标例如跟随对象31中的圆形图标35。同理,控制终端25可根据跟随对象32在所述图像中的位置信息,在所述图像中显示用于标识跟随对象32的图标例如跟随对象32中的圆形图标35。控制终端25可根据跟随对象34在所述图像中的位置信息,在所述图像中显示用于标识跟随对象34的图标例如跟随对象34中的圆形图标35。此处只是示意性说明,在其他实施例中还可以采用除了圆形图标之外的其他图标来标识无人机识别出的跟随对象。
在其他实施例中,所述方法还包括:接收所述无人机发送的至少一个跟随对象的类别信息;根据所述至少一个跟随对象的类别信息,在所述图像中对所述至少一个跟随对象的类别进行标识。
例如,无人机21内的处理器23可采用神经网络模型对该图像中的跟随对象的类别进行识别,例如跟随对象的类别可以是人、动物、或车辆等,处理器23进一步还可以通过无人机21的通讯接口24将其识别出的跟随对象的类别信息发送给无人机21的控制终端25,控制终端25还可以在交互界面30中对无人机21识别出的至少一个跟随对象的类别进行标识,所述至少一个跟随对象包括至少两个跟随对象。例如,控制终端25在交互界面30中跟随对象31、跟随对象32、跟随对象34的周围分别显示类别 信息。
本实施例通过控制终端接收无人机发送的跟随对象标识信息,并根据该跟随对象标识信息,在图像中对跟随对象标识信息指示的至少一个由所述无人机从所述图像中识别出的跟随对象进行标识,这样,用户就可以知道无人机至少可以对图像中哪几个对象进行跟随,提升了用户体验。
本发明实施例提供一种跟随控制方法。图16为本发明另一实施例提供的跟随控制方法的流程图。本实施例提供的跟随控制方法应用于无人机的控制终端。如图16所示,在图15所示实施例的基础上,本实施例中的方法,还可以包括:
步骤S1601、检测用户对所述图像中至少一个跟随对象的选择操作。
所述至少一个跟随对象包括至少两个跟随对象。
如图8所示,控制终端25的交互界面30显示有跟随对象31、跟随对象32、跟随对象34、跟随对象33,用户可以对跟随对象31、跟随对象32、跟随对象34、跟随对象33中的至少一个跟随对象进行选择,例如,用户可以对无人机21未识别出的跟随对象33进行框选,或者对无人机21识别出的跟随对象31、跟随对象32、跟随对象34中的至少一个跟随对象进行点选。在其他实施例中,用户还可以对无人机21未识别出的跟随对象和无人机21识别出的跟随对象进行选择。
步骤S1601与步骤S102的具体原理和实现方式一致,此处不再赘述。
具体的,检测用户对所述图像中至少一个跟随对象的选择操作,包括如下几种可行的实现方式:
一种可行的实现方式是:检测用户对所述图像中至少一个跟随对象的语音选择操作。可选的,所述至少一个跟随对象包括至少两个跟随对象。
检测用户对所述图像中至少一个跟随对象的语音选择操作与上述实施例所述的检测用户对所述图像中至少两个跟随对象的语音选择操作的具体原理和实现方式一致,此处不再赘述。
另一种可行的实现方式是:检测用户在显示所述图像的交互界面上对至少一个跟随对象的选择操作。可选的,所述至少一个跟随对象包括至少两个跟随对象。
检测用户在显示所述图像的交互界面上对至少一个跟随对象的选择操作与上述实施例所述的检测用户在显示所述图像的交互界面上对至少两个跟随对象的选择操作的具体原理和实现方式一致,此处不再赘述。
再一种可行的实现方式是:检测用户对所述图像中至少一个由无人机从所述图像中识别出的跟随对象的选择操作。可选的,所述至少一个跟随对象包括至少两个跟随对象。
如图3所示,跟随对象31、跟随对象32、跟随对象34是无人机21从所述图像中识别出的跟随对象,用户可以在交互界面30上对跟随对象31、跟随对象32、跟随对象34中的至少两个跟随对象进行选择,例如,用户对跟随对象31、跟随对象32、跟随对象34中的至少两个跟随对象进行单击、双击、框选、长按等操作。
又一种可行的实现方式是:检测用户对所述图像中至少一个由无人机从所述图像中未识别出的跟随对象的选择操作。可选的,所述至少一个跟随对象包括至少两个跟随对象。
如图6所示,跟随对象31、跟随对象32、跟随对象34是无人机21从所述图像中识别出的跟随对象,跟随对象33和跟随对象37是无人机21从所述图像中未识别出的跟随对象。用户还可以对所述图像中至少两个无人机21未识别出的跟随对象进行选择,选择的方式可以是单击、双击、框选、长按等。控制终端25还可以检测用户对所述图像中至少两个由无人机从所述图像中未识别出的跟随对象的选择操作。
如图8所示,用户还可以对所述图像中至少一个由无人机21识别出的跟随对象例如跟随对象31、跟随对象32、跟随对象34进行选择,以及对所述图像中至少一个无人机21未识别出的跟随对象例如跟随对象33进行选择。控制终端25可检测用户对跟随对象31、跟随对象32、跟随对象34的选择操作,以及用户对跟随对象33的选择操作。
步骤S1602、根据检测到的所述选择操作确定跟随指令。
具体的,根据检测到的所述选择操作确定跟随指令,包括:根据检测到的所述语音选择操作确定跟随指令。
例如,用户向控制终端25发出“跟随红衣服和黄衣服的对象”的语音选择操作,控制终端25可根据该语音选择操作确定出跟随指令,该跟 随指令包括用户选择出的跟随对象的特征信息例如红衣服和黄衣服。控制终端25将该跟随指令发送给无人机21,无人机21的处理器23根据该跟随指令包括的特征信息,从拍摄装置22拍摄的图像中识别出穿着红衣服的跟随对象31和穿着黄衣服的跟随对象32,并对跟随对象31和跟随对象32进行跟随。
再例如,用户向控制终端25发出“跟随左下角和右下角的对象”的语音选择操作,控制终端25可根据该语音选择操作确定出跟随指令,该跟随指令包括用户选择出的跟随对象的特征信息例如左下角和右下角。控制终端25将该跟随指令发送给无人机21,无人机21的处理器23根据该跟随指令包括的特征信息,从拍摄装置22拍摄的图像中识别出左下角的跟随对象32和右下角的跟随对象34,并对跟随对象32和跟随对象34进行跟随。
在其他实施例中,当控制终端25检测到用户发出的“跟随红衣服和黄衣服的对象”的语音选择操作,或者“跟随左下角和右下角的对象”的语音选择操作时,控制终端25可根据该语音选择操作识别出用户选择的至少两个跟随对象的特征信息例如红衣服和黄衣服,并根据至少两个跟随对象的特征信息确定出至少两个跟随对象在图像中的位置信息,并根据该至少两个跟随对象在图像中的位置信息生成跟随指令,例如,该跟随指令包括用户选择的至少两个跟随对象在图像中的位置信息。
在另外一些实施例中,若控制终端25检测到用户在交互界面30上对至少两个跟随对象例如跟随对象31和跟随对象32的选择操作例如单击、双击、框选、长按等点触触摸屏的操作,则可根据该用户的选择操作生成跟随指令,该跟随指令包括用户选择的至少两个跟随对象在图像中的位置信息。
步骤S1603、控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
可选的,所述至少一个跟随对象包括至少两个跟随对象。
当控制终端25通过上述任一种方式确定出跟随指令后,可控制无人机21对该跟随指令指示的所述至少两个跟随对象例如跟随对象31和跟随对象32进行跟随,无人机21在对跟随对象31和跟随对象32进行跟随的 过程中,处理器23可根据跟随对象31和跟随对象32的运动方向、以及跟随对象31和跟随对象32在图像中所处的位置等信息调整无人机21的运动方向、无人机21的姿态、无人机21的云台的姿态中的至少一个,以使跟随对象31和跟随对象32在拍摄装置22的拍摄画面中。
在其他实施例中,所述方法还包括:检测用户的确认跟随操作;所述控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中,包括:在检测到用户的确认跟随操作后,控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。可选的,所述至少一个跟随对象包括至少两个跟随对象。
具体的,所述方法还包括:显示确认跟随图标;所述检测用户的确认跟随操作,包括:检测用户对所述确认跟随图标的操作。
如图8所示,当用户对跟随对象31、跟随对象32、跟随对象34、跟随对象33选择完成后,控制终端25进一步在交互界面30上显示确认跟随图标40,当用户点击确认跟随图标40时,表示控制无人机开始对跟随对象31、跟随对象32、跟随对象34、跟随对象33进行跟随。具体的,在控制终端25检测到用户对确认跟随图标40的操作例如点击后,控制无人机21对跟随对象31、跟随对象32、跟随对象34、跟随对象33进行跟随。无人机21在对跟随对象31和跟随对象32进行跟随的过程中,处理器23可根据跟随对象31、跟随对象32、跟随对象34、跟随对象33的运动方向、以及每个跟随对象在图像中所处的位置等信息调整无人机21的运动方向、无人机21的姿态、无人机21的云台的姿态中的至少一个,以使跟随对象31、跟随对象32、跟随对象34、跟随对象33在拍摄装置22的拍摄画面中。
在其他实施例中,所述方法还包括如图17所示的步骤:
步骤S1701、在所述无人机对所述至少一个跟随对象进行跟随的过程中,检测用户的飞行控制操作。
步骤S1701与步骤S901的具体原理和实现方式一致,此处不再赘述。
步骤S1702、根据检测到的飞行控制操作确定飞行控制指令。
步骤S1702与步骤S902的具体原理和实现方式一致,此处不再赘述。
步骤S1703、将所述飞行控制指令发送给无人机,以使所述无人机根据所述飞行控制指令,调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
步骤S1703与步骤S903的具体原理和实现方式一致,此处不再赘述。
在另外一些实施例中,所述方法还包括:检测用户的构图选择操作;根据检测到的构图选择操作确定构图指令;所述控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中,包括:控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
具体的,控制终端25还可以控制无人机21对至少两个跟随对象进行跟随以使至少两个跟随对象位于拍摄装置的拍摄画面中的预设位置,具体实现过程与上述图11所示的实施例一致,此处不再赘述。
本实施例通过控制终端接收并显示无人机的拍摄装置拍摄获取的图像,检测用户对所述图像中至少两个跟随对象的选择操作,并根据检测到的所述选择操作确定跟随指令,该跟随指令可指示至少两个跟随对象,以使无人机对至少两个跟随对象进行跟随,以使至少两个跟随对象在拍摄装置的拍摄画面中,使得无人机可应用于更多的拍摄场景例如多人球赛、多人游玩、合照集体照等;控制终端检测用户的构图选择操作;根据检测到的构图选择操作确定构图指令,在控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随时,使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中,提高了无人机对至少两个跟随对象进行跟随的灵活性。
本发明实施例提供一种跟随控制方法。图18为本发明另一实施例提供的跟随控制方法的流程图。本实施例提供的跟随控制方法应用于无人机。如图18所示,本实施例中的方法,可以包括:
步骤S1801、获取所述无人机搭载的拍摄装置拍摄的图像。
步骤S1801与步骤S1201的具体原理和实现方式一致,此处不再赘述。
步骤S1802、对所述图像中的跟随对象进行识别。
步骤S1802与步骤S1301的具体原理和实现方式一致,此处不再赘述。
步骤S1803、向所述无人机的控制终端发送识别出的至少一个跟随对象的标识信息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
步骤S1803与步骤S1302的具体原理和实现方式一致,此处不再赘述。
可选的,所述至少一个跟随对象包括至少两个跟随对象。
具体的,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
例如,无人机21向控制终端25发送的跟随对象31的标识信息是跟随对象31在所述图像中的位置信息,跟随对象32的标识信息是跟随对象32在所述图像中的位置信息,跟随对象34的标识信息是跟随对象34在所述图像中的位置信息。
在其他实施例中,所述方法还包括:对所述图像中的跟随对象的类别进行识别;向所述无人机的控制终端发送识别出的至少一个跟随对象的类别信息。可选的,所述至少一个跟随对象包括至少两个跟随对象。此处,对所述图像中的跟随对象的类别进行识别与步骤S1401的具体原理和实现方式一致;向所述无人机的控制终端发送识别出的至少一个跟随对象的类别信息与步骤S1402的具体原理和实现方式一致,此处不再赘述。
在其他实施例中,所述方法还包括:接收所述无人机的控制终端发送的跟随指令;对所述跟随指令指示的至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。可选的,所述至少一个跟随对象包括至少两个跟随对象。
例如,当控制终端25通过上述实施例所述的方法确定出跟随指令后,将该跟随指令发送给无人机21,该跟随指令用于指示用户选择的至少两个跟随对象;无人机21对该跟随指令指示的所述至少两个跟随对象例如跟随对象31和跟随对象32进行跟随,无人机21在对跟随对象31和跟随对象32进行跟随的过程中,处理器23可根据跟随对象31和跟随对象32的运动方向、以及跟随对象31和跟随对象32在图像中所处的位置等信息调整无人机21的运动方向、无人机21的姿态、无人机21的云台的姿态中的至少一个,以使跟随对象31和跟随对象32在拍摄装置22的拍摄画面 中。
进一步的,所述方法还包括:在对所述至少一个跟随对象进行跟随的过程中,接收所述无人机的控制终端发送的飞行控制指令;根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
例如,无人机21在对跟随对象31、跟随对象32、跟随对象34、跟随对象33进行跟随的过程中,接收控制终端25发送的飞行控制指令,并根据该飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
在另外一些实施例中,所述方法还包括:接收所述无人机的控制终端发送的构图指令;所述对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中,包括:对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
无人机21还可以接收控制终端25发送的构图指令,构图指令的具体原理和实现方式均与上述实施例一致,此处不再赘述。
以用户选择如图11所示的右上角为例进行示意性说明,当无人机21接收到控制终端25发送的跟随指令后,无人机21对该跟随指令指示的所述至少两个跟随对象进行跟随,无人机21在对所述至少两个跟随对象进行跟随的过程中,处理器23可根据所述至少两个跟随对象的运动方向、以及所述至少两个跟随对象在图像中所处的位置等信息调整无人机21的运动方向、无人机21的姿态、无人机21的云台的姿态中的至少一个,以使所述至少两个跟随对象位于拍摄装置的拍摄画面的右上角。
本实施例通过无人机获取其搭载的拍摄装置拍摄的图像,将该图像发送给所述无人机的控制终端,并根据该控制终端发送的跟随指令对该跟随指令指示的至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,使得无人机可应用于更多的拍摄场景例如多人球赛、多人游玩、合照集体照等;通过无人机接收控制终端发送的构图指令,在对跟随指令指示的所述至少两个跟随对象进行跟随时,使所述至少 两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中,提高了无人机对至少两个跟随对象进行跟随的灵活性。
本发明实施例提供一种无人机的控制终端。图19为本发明实施例提供的无人机的控制终端的结构图,如图19所示,无人机的控制终端190包括:通讯接口191和处理器192。
通讯接口191用于接收无人机的拍摄装置拍摄获取的图像;处理器192用于:显示无人机的拍摄装置拍摄获取的图像;检测用户对所述图像中至少两个跟随对象的选择操作;根据检测到的所述选择操作确定跟随指令;控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
可选的,处理器192检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:检测用户对所述图像中至少两个跟随对象的语音选择操作;处理器192根据检测到的所述选择操作确定跟随指令时,具体用于:根据检测到的所述语音选择操作确定跟随指令。
可选的,处理器192检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:检测用户在显示所述图像的交互界面上对至少两个跟随对象的选择操作。
可选的,通讯接口191还用于:接收所述无人机发送的跟随对象标识信息,所述跟随对象标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象;处理器192还用于:根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识。
可选的,所述跟随对象标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
可选的,处理器192根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识时,具体用于:根据所述跟随对象标识信息,在所述图像中显示用于标识所述至少一个由所述无人机从所述图像中识别出的跟随对象的图标。
可选的,所述至少一个由所述无人机从所述图像中识别出的跟随对象 包括多个由所述无人机从所述图像中识别出的跟随对象;处理器192检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:检测用户对所述图像中至少两个由无人机从所述图像中识别出的跟随对象的选择操作。
可选的,处理器192检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:检测用户对所述图像中至少两个由无人机从所述图像中未识别出的跟随对象的选择操作。
可选的,处理器192检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:检测用户对所述图像中至少一个由无人机从所述图像中识别出的跟随对象的选择操作,以及对至少一个由所述无人机从所述图像中未识别出的跟随对象的选择操作。
可选的,通讯接口191还用于:接收所述无人机发送的至少一个跟随对象的类别信息;处理器192还用于:根据所述至少一个跟随对象的类别信息,在所述图像中对所述至少一个跟随对象的类别进行标识。
可选的,处理器192还用于:检测用户的确认跟随操作;处理器192控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:在检测到用户的确认跟随操作后,控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
可选的,处理器192还用于:显示确认跟随图标;处理器192检测用户的确认跟随操作时,具体用于:检测用户对所述确认跟随图标的操作。
可选的,处理器192还用于:在所述无人机对所述至少两个跟随对象进行跟随的过程中,检测用户的飞行控制操作;根据检测到的飞行控制操作确定飞行控制指令;通讯接口191还用于:将所述飞行控制指令发送给无人机,以使所述无人机根据所述飞行控制指令,调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
可选的,处理器192还用于:检测用户的构图选择操作;根据检测到的构图选择操作确定构图指令;处理器192控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述 拍摄装置的拍摄画面中时,具体用于:控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
本发明实施例提供的控制终端的具体原理和实现方式均与图1、图4、图9所示实施例类似,此处不再赘述。
本实施例通过控制终端接收并显示无人机的拍摄装置拍摄获取的图像,检测用户对所述图像中至少两个跟随对象的选择操作,并根据检测到的所述选择操作确定跟随指令,该跟随指令可指示至少两个跟随对象,以使无人机对至少两个跟随对象进行跟随,以使至少两个跟随对象在拍摄装置的拍摄画面中,使得无人机可应用于更多的拍摄场景例如多人球赛、多人游玩、合照集体照等。通过控制终端接收无人机发送的跟随对象标识信息,并根据该跟随对象标识信息,在图像中对跟随对象标识信息指示的至少一个由所述无人机从所述图像中识别出的跟随对象进行标识,这样,用户就可以知道无人机至少可以对图像中哪几个对象进行跟随,提升了用户体验。通过控制终端检测用户的构图选择操作;根据检测到的构图选择操作确定构图指令,在控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随时,使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中,提高了无人机对至少两个跟随对象进行跟随的灵活性。
本发明实施例提供一种无人机。图20为本发明实施例提供的无人机的结构图,如图20所示,无人机200包括:机身、动力系统、拍摄装置201、处理器202和通讯接口203;其中,拍摄装置201通过云台204安装在所述机身,用于拍摄图像。所述动力系统包括如下至少一种:电机207、螺旋桨206和电子调速器217,动力系统安装在所述机身,用于提供飞行动力。
在本实施例中,处理器202用于获取无人机搭载的拍摄装置拍摄的图像;通讯接口203用于将所述图像发送给所述无人机的控制终端;接收所述无人机的控制终端发送的跟随指令,所述跟随指令用于指示所述图像中的至少两个跟随对象;处理器202还用于:对所述至少两个跟随对象进行 跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
可选的,处理器202还用于:对所述图像中的跟随对象进行识别;通讯接口203还用于:向所述无人机的控制终端发送处理器202识别出的至少一个跟随对象的标识信息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
可选的,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
可选的,处理器202还用于:对所述图像中的跟随对象的类别进行识别;通讯接口203还用于:向所述无人机的控制终端发送处理器202识别出的至少一个跟随对象的类别信息。
可选的,通讯接口203还用于:在所述无人机对所述至少两个跟随对象进行跟随的过程中,接收所述无人机的控制终端发送的飞行控制指令;处理器202还用于:根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
可选的,通讯接口203还用于:接收所述无人机的控制终端发送的构图指令;处理器202对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
在其他实施例中,处理器202具体可以是飞行控制器。
本发明实施例提供的无人机的具体原理和实现方式均与图12、图13、图14所示实施例类似,此处不再赘述。
本实施例通过无人机获取其搭载的拍摄装置拍摄的图像,将该图像发送给所述无人机的控制终端,并根据该控制终端发送的跟随指令对该跟随指令指示的至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,使得无人机可应用于更多的拍摄场景例如多人球赛、多人游玩、合照集体照等。通过无人机接收控制终端发送的构图指令,在对跟随指令指示的所述至少两个跟随对象进行跟随时,使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中,提高了无人机对至少两个跟随对象进行跟随的灵活性。
本发明实施例提供一种无人机的控制终端。图21为本发明另一实施例提供的无人机的控制终端的结构图;如图21所示,控制终端210包括:通讯接口211和处理器212。通讯接口211用于:接收无人机的拍摄装置拍摄获取的图像;接收无人机发送的跟随对象标识信息,所述跟随对象标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象;处理器212用于:显示无人机的拍摄装置拍摄获取的图像;根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识。
可选的,所述跟随对象标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
可选的,处理器212根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识时,具体用于:根据所述跟随对象标识信息,在所述图像中显示用于标识所述至少一个由所述无人机从所述图像中识别出的跟随对象的图标。
可选的,通讯接口211还用于:接收所述无人机发送的至少一个跟随对象的类别信息;处理器212还用于:根据所述至少一个跟随对象的类别信息,在所述图像中对所述至少一个跟随对象的类别进行标识。
可选的,处理器212还用于:检测用户对所述图像中至少一个跟随对象的选择操作;根据检测到的所述选择操作确定跟随指令;控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
可选的,处理器212检测用户对所述图像中至少一个跟随对象的选择操作时,具体用于:检测用户对所述图像中至少一个跟随对象的语音选择操作;处理器212根据检测到的所述选择操作确定跟随指令时,具体用于:根据检测到的所述语音选择操作确定跟随指令。
可选的,处理器212检测用户对所述图像中至少一个跟随对象的选择操作时,具体用于:检测用户在显示所述图像的交互界面上对至少一个跟随对象的选择操作。
可选的,处理器212检测用户对所述图像中至少一个跟随对象的选择 操作时,具体用于:检测用户对所述图像中至少一个由无人机从所述图像中识别出的跟随对象的选择操作。
可选的,处理器212检测用户对所述图像中至少一个跟随对象的选择操作时,具体用于:检测用户对所述图像中至少一个由无人机从所述图像中未识别出的跟随对象的选择操作。
可选的,处理器212还用于:检测用户的确认跟随操作;处理器212控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:在检测到用户的确认跟随操作后,控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
可选的,处理器212还用于:显示确认跟随图标;处理器212检测用户的确认跟随操作时,具体用于:检测用户对所述确认跟随图标的操作。
可选的,处理器212还用于:在所述无人机对所述至少一个跟随对象进行跟随的过程中,检测用户的飞行控制操作;根据检测到的飞行控制操作确定飞行控制指令;通讯接口211还用于:将所述飞行控制指令发送给无人机,以使所述无人机根据所述飞行控制指令,调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
可选的,处理器212还用于:检测用户的构图选择操作;根据检测到的构图选择操作确定构图指令;处理器212控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
可选的,所述至少一个跟随对象包括至少两个跟随对象。
本发明实施例提供的控制终端的具体原理和实现方式均与图15、图16、图17所示实施例类似,此处不再赘述。
本实施例通过控制终端接收无人机发送的跟随对象标识信息,并根据该跟随对象标识信息,在图像中对跟随对象标识信息指示的至少一个由所述无人机从所述图像中识别出的跟随对象进行标识,这样,用户就可以知 道无人机至少可以对图像中哪几个对象进行跟随,提升了用户体验。
本发明实施例提供一种无人机。图20为本发明实施例提供的无人机的结构图,如图20所示,无人机200包括:机身、动力系统、拍摄装置201、处理器202和通讯接口203;其中,拍摄装置201通过云台204安装在所述机身,用于拍摄图像。所述动力系统包括如下至少一种:电机207、螺旋桨206和电子调速器217,动力系统安装在所述机身,用于提供飞行动力。
在本实施例中,处理器202用于:获取无人机搭载的拍摄装置拍摄的图像;对所述图像中的跟随对象进行识别;通讯接口203用于:向所述无人机的控制终端发送识别出的至少一个跟随对象的标识信息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
可选的,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
可选的,处理器202还用于:对所述图像中的跟随对象的类别进行识别;通讯接口203还用于:向所述无人机的控制终端发送识别出的至少一个跟随对象的类别信息。
可选的,通讯接口203还用于:接收所述无人机的控制终端发送的跟随指令;处理器202还用于:对所述跟随指令指示的至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
可选的,通讯接口203还用于:在对所述至少一个跟随对象进行跟随的过程中,接收所述无人机的控制终端发送的飞行控制指令;处理器202还用于:根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
可选的,通讯接口203还用于:接收所述无人机的控制终端发送的构图指令;处理器202对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍 摄画面中。
可选的,所述至少一个跟随对象包括至少两个跟随对象。
在其他实施例中,处理器202具体可以是飞行控制器。
本发明实施例提供的无人机的具体原理和实现方式均与图18所示实施例类似,此处不再赘述。
本实施例通过无人机获取其搭载的拍摄装置拍摄的图像,将该图像发送给所述无人机的控制终端,并根据该控制终端发送的跟随指令对该跟随指令指示的至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,使得无人机可应用于更多的拍摄场景例如多人球赛、多人游玩、合照集体照等;通过无人机接收控制终端发送的构图指令,在对跟随指令指示的所述至少两个跟随对象进行跟随时,使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中,提高了无人机对至少两个跟随对象进行跟随的灵活性。
在本发明所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
上述以软件功能单元的形式实现的集成的单元,可以存储在一个计 算机可读取存储介质中。上述软件功能单元存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本发明各个实施例所述方法的部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (82)

  1. 一种跟随控制方法,应用于无人机的控制终端,其特征在于,包括:
    接收并显示无人机的拍摄装置拍摄获取的图像;
    检测用户对所述图像中至少两个跟随对象的选择操作;
    根据检测到的所述选择操作确定跟随指令;
    控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
  2. 根据权利要求1所述的方法,其特征在于,所述检测用户对所述图像中至少两个跟随对象的选择操作,包括:
    检测用户对所述图像中至少两个跟随对象的语音选择操作;
    所述根据检测到的所述选择操作确定跟随指令,包括:
    根据检测到的所述语音选择操作确定跟随指令。
  3. 根据权利要求1所述的方法,其特征在于,所述检测用户对所述图像中至少两个跟随对象的选择操作,包括:
    检测用户在显示所述图像的交互界面上对至少两个跟随对象的选择操作。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述方法还包括:
    接收所述无人机发送的跟随对象标识信息,所述跟随对象标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象;
    根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识。
  5. 根据权利要求4所述的方法,其特征在于,所述跟随对象标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  6. 根据权利要求4或5所述的方法,其特征在于,所述根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识,包括:
    根据所述跟随对象标识信息,在所述图像中显示用于标识所述至少一 个由所述无人机从所述图像中识别出的跟随对象的图标。
  7. 根据权利要求4-6任一项所述的方法,其特征在于,所述至少一个由所述无人机从所述图像中识别出的跟随对象包括多个由所述无人机从所述图像中识别出的跟随对象;
    所述检测用户对所述图像中至少两个跟随对象的选择操作,包括:
    检测用户对所述图像中至少两个由无人机从所述图像中识别出的跟随对象的选择操作。
  8. 根据权利要求4-6任一项所述的方法,其特征在于,所述检测用户对所述图像中至少两个跟随对象的选择操作,包括:
    检测用户对所述图像中至少两个由无人机从所述图像中未识别出的跟随对象的选择操作。
  9. 根据权利要求4-6任一项所述的方法,其特征在于,所述检测用户对所述图像中至少两个跟随对象的选择操作,包括:
    检测用户对所述图像中至少一个由无人机从所述图像中识别出的跟随对象的选择操作,以及对至少一个由所述无人机从所述图像中未识别出的跟随对象的选择操作。
  10. 根据权利要求4-9任一项所述的方法,其特征在于,所述方法还包括:
    接收所述无人机发送的至少一个跟随对象的类别信息;
    根据所述至少一个跟随对象的类别信息,在所述图像中对所述至少一个跟随对象的类别进行标识。
  11. 根据权利要求1-10任一项所述的方法,其特征在于,所述方法还包括:
    检测用户的确认跟随操作;
    所述控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,包括:
    在检测到用户的确认跟随操作后,控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
  12. 根据权利要求11所述的方法,其特征在于,所述方法还包括:
    显示确认跟随图标;
    所述检测用户的确认跟随操作,包括:
    检测用户对所述确认跟随图标的操作。
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述方法还包括:
    在所述无人机对所述至少两个跟随对象进行跟随的过程中,检测用户的飞行控制操作;
    根据检测到的飞行控制操作确定飞行控制指令;
    将所述飞行控制指令发送给无人机,以使所述无人机根据所述飞行控制指令,调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
  14. 根据权利要求1-13任一项所述的方法,其特征在于,所述方法还包括:
    检测用户的构图选择操作;
    根据检测到的构图选择操作确定构图指令;
    所述控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,包括:
    控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  15. 一种跟随控制方法,应用于无人机,其特征在于,包括:
    获取所述无人机搭载的拍摄装置拍摄的图像;
    将所述图像发送给所述无人机的控制终端;
    接收所述无人机的控制终端发送的所述跟随指令,所述跟随指令用于指示所述图像中的至少两个跟随对象;
    对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
  16. 根据权利要求15所述的方法,其特征在于,所述方法还包括:
    对所述图像中的跟随对象进行识别;
    向所述无人机的控制终端发送识别出的至少一个跟随对象的标识信 息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
  17. 根据权利要求16所述的方法,其特征在于,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  18. 根据权利要求16或17所述的方法,其特征在于,所述方法还包括:
    对所述图像中的跟随对象的类别进行识别;
    向所述无人机的控制终端发送识别出的至少一个跟随对象的类别信息。
  19. 根据权利要求15-18任一项所述的方法,其特征在于,所述方法还包括:
    在对所述至少两个跟随对象进行跟随的过程中,接收所述无人机的控制终端发送的飞行控制指令;
    根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
  20. 根据权利要求15-19任一项所述的方法,其特征在于,所述方法还包括:
    接收所述无人机的控制终端发送的构图指令;
    所述对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中,包括:
    对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  21. 一种跟随控制方法,应用于无人机的控制终端,其特征在于,包括:
    接收并显示无人机的拍摄装置拍摄获取的图像;
    接收无人机发送的跟随对象标识信息,所述跟随对象标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象;
    根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识。
  22. 根据权利要求21所述的方法,其特征在于,所述跟随对象标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  23. 根据权利要求21或22所述的方法,其特征在于,所述根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识,包括:
    根据所述跟随对象标识信息,在所述图像中显示用于标识所述至少一个由所述无人机从所述图像中识别出的跟随对象的图标。
  24. 根据权利要求21-23任一项所述的方法,其特征在于,所述方法还包括:
    接收所述无人机发送的至少一个跟随对象的类别信息;
    根据所述至少一个跟随对象的类别信息,在所述图像中对所述至少一个跟随对象的类别进行标识。
  25. 根据权利要求21-24任一项所述的方法,其特征在于,所述方法还包括:
    检测用户对所述图像中至少一个跟随对象的选择操作;
    根据检测到的所述选择操作确定跟随指令;
    控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
  26. 根据权利要求25所述的方法,其特征在于,所述检测用户对所述图像中至少一个跟随对象的选择操作,包括:
    检测用户对所述图像中至少一个跟随对象的语音选择操作;
    所述根据检测到的所述选择操作确定跟随指令,包括:
    根据检测到的所述语音选择操作确定跟随指令。
  27. 根据权利要求25所述的方法,其特征在于,所述检测用户对所述图像中至少一个跟随对象的选择操作,包括:
    检测用户在显示所述图像的交互界面上对至少一个跟随对象的选择操作。
  28. 根据权利要求25-27任一项所述的方法,其特征在于,所述检测用户对所述图像中至少一个跟随对象的选择操作,包括:
    检测用户对所述图像中至少一个由无人机从所述图像中识别出的跟随对象的选择操作。
  29. 根据权利要求25-28任一项所述的方法,其特征在于,所述检测用户对所述图像中至少一个跟随对象的选择操作,包括:
    检测用户对所述图像中至少一个由无人机从所述图像中未识别出的跟随对象的选择操作。
  30. 根据权利要求25-29任一项所述的方法,其特征在于,所述方法还包括:
    检测用户的确认跟随操作;
    所述控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中,包括:
    在检测到用户的确认跟随操作后,控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
  31. 根据权利要求30所述的方法,其特征在于,所述方法还包括:
    显示确认跟随图标;
    所述检测用户的确认跟随操作,包括:
    检测用户对所述确认跟随图标的操作。
  32. 根据权利要求25-31任一项所述的方法,其特征在于,所述方法还包括:
    在所述无人机对所述至少一个跟随对象进行跟随的过程中,检测用户的飞行控制操作;
    根据检测到的飞行控制操作确定飞行控制指令;
    将所述飞行控制指令发送给无人机,以使所述无人机根据所述飞行控制指令,调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
  33. 根据权利要求25-32任一项所述的方法,其特征在于,所述方法还包括:
    检测用户的构图选择操作;
    根据检测到的构图选择操作确定构图指令;
    所述控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中,包括:
    控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  34. 根据权利要求24-33任一项所述的方法,其特征在于,所述至少一个跟随对象包括至少两个跟随对象。
  35. 一种跟随控制方法,应用于无人机,其特征在于,包括:
    获取所述无人机搭载的拍摄装置拍摄的图像;
    对所述图像中的跟随对象进行识别;
    向所述无人机的控制终端发送识别出的至少一个跟随对象的标识信息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
  36. 根据权利要求35所述的方法,其特征在于,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  37. 根据权利要求35或36所述的方法,其特征在于,所述方法还包括:
    对所述图像中的跟随对象的类别进行识别;
    向所述无人机的控制终端发送识别出的至少一个跟随对象的类别信息。
  38. 根据权利要求35-37任一项所述的方法,其特征在于,所述方法还包括:
    接收所述无人机的控制终端发送的跟随指令;
    对所述跟随指令指示的至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
  39. 根据权利要求38所述的方法,其特征在于,所述方法还包括:
    在对所述至少一个跟随对象进行跟随的过程中,接收所述无人机的控制终端发送的飞行控制指令;
    根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、 以及所述无人机的云台的姿态中的一个或多个。
  40. 根据权利要求38或39所述的方法,其特征在于,所述方法还包括:
    接收所述无人机的控制终端发送的构图指令;
    所述对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中,包括:
    对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  41. 根据权利要求35-40任一项所述的方法,其特征在于,所述至少一个跟随对象包括至少两个跟随对象。
  42. 一种无人机的控制终端,其特征在于,包括:通讯接口和处理器;
    所述通讯接口,用于接收无人机的拍摄装置拍摄获取的图像;
    所述处理器,用于:
    显示无人机的拍摄装置拍摄获取的图像;
    检测用户对所述图像中至少两个跟随对象的选择操作;
    根据检测到的所述选择操作确定跟随指令;
    控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
  43. 根据权利要求42所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:
    检测用户对所述图像中至少两个跟随对象的语音选择操作;
    所述处理器根据检测到的所述选择操作确定跟随指令时,具体用于:
    根据检测到的所述语音选择操作确定跟随指令。
  44. 根据权利要求42所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:
    检测用户在显示所述图像的交互界面上对至少两个跟随对象的选择操作。
  45. 根据权利要求42-44任一项所述的控制终端,其特征在于,所述通讯接口还用于:接收所述无人机发送的跟随对象标识信息,所述跟随对 象标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象;
    所述处理器还用于:根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识。
  46. 根据权利要求45所述的控制终端,其特征在于,所述跟随对象标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  47. 根据权利要求45或46所述的控制终端,其特征在于,所述处理器根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识时,具体用于:
    根据所述跟随对象标识信息,在所述图像中显示用于标识所述至少一个由所述无人机从所述图像中识别出的跟随对象的图标。
  48. 根据权利要求45-47任一项所述的控制终端,其特征在于,所述至少一个由所述无人机从所述图像中识别出的跟随对象包括多个由所述无人机从所述图像中识别出的跟随对象;
    所述处理器检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:
    检测用户对所述图像中至少两个由无人机从所述图像中识别出的跟随对象的选择操作。
  49. 根据权利要求45-47任一项所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:
    检测用户对所述图像中至少两个由无人机从所述图像中未识别出的跟随对象的选择操作。
  50. 根据权利要求45-47任一项所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少两个跟随对象的选择操作时,具体用于:
    检测用户对所述图像中至少一个由无人机从所述图像中识别出的跟随对象的选择操作,以及对至少一个由所述无人机从所述图像中未识别出的跟随对象的选择操作。
  51. 根据权利要求45-50任一项所述的控制终端,其特征在于,所述通讯接口还用于:接收所述无人机发送的至少一个跟随对象的类别信息;
    所述处理器还用于:根据所述至少一个跟随对象的类别信息,在所述图像中对所述至少一个跟随对象的类别进行标识。
  52. 根据权利要求42-51任一项所述的控制终端,其特征在于,所述处理器还用于:检测用户的确认跟随操作;
    所述处理器控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:
    在检测到用户的确认跟随操作后,控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
  53. 根据权利要求52所述的控制终端,其特征在于,所述处理器还用于:
    显示确认跟随图标;
    所述处理器检测用户的确认跟随操作时,具体用于:
    检测用户对所述确认跟随图标的操作。
  54. 根据权利要求42-53任一项所述的控制终端,其特征在于,所述处理器还用于:
    在所述无人机对所述至少两个跟随对象进行跟随的过程中,检测用户的飞行控制操作;
    根据检测到的飞行控制操作确定飞行控制指令;
    所述通讯接口还用于:将所述飞行控制指令发送给无人机,以使所述无人机根据所述飞行控制指令,调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
  55. 根据权利要求42-54任一项所述的控制终端,其特征在于,所述处理器还用于:
    检测用户的构图选择操作;
    根据检测到的构图选择操作确定构图指令;
    所述处理器控制无人机对所述跟随指令指示的所述至少两个跟随对 象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:
    控制无人机对所述跟随指令指示的所述至少两个跟随对象进行跟随以使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  56. 一种无人机,其特征在于,包括:
    机身;
    动力系统,安装在所述机身,用于提供飞行动力;
    拍摄装置,安装在所述机身,用于拍摄图像;
    处理器和通讯接口;
    所述处理器用于获取无人机搭载的拍摄装置拍摄的图像;
    所述通讯接口用于将所述图像发送给所述无人机的控制终端;接收所述无人机的控制终端发送的跟随指令,所述跟随指令用于指示所述图像中的至少两个跟随对象;
    所述处理器还用于:对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中。
  57. 根据权利要求56所述的无人机,其特征在于,所述处理器还用于:对所述图像中的跟随对象进行识别;
    所述通讯接口还用于:向所述无人机的控制终端发送所述处理器识别出的至少一个跟随对象的标识信息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
  58. 根据权利要求57所述的无人机,其特征在于,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  59. 根据权利要求57或58所述的无人机,其特征在于,所述处理器还用于:对所述图像中的跟随对象的类别进行识别;
    所述通讯接口还用于:向所述无人机的控制终端发送所述处理器识别出的至少一个跟随对象的类别信息。
  60. 根据权利要求56-59任一项所述的无人机,其特征在于,所述通讯接口还用于:在所述无人机对所述至少两个跟随对象进行跟随的过程 中,接收所述无人机的控制终端发送的飞行控制指令;
    所述处理器还用于:根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
  61. 根据权利要求56-60任一项所述的无人机,其特征在于,所述通讯接口还用于:接收所述无人机的控制终端发送的构图指令;
    所述处理器对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:
    对所述至少两个跟随对象进行跟随以使所述至少两个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  62. 一种无人机的控制终端,其特征在于,包括:通讯接口和处理器;
    所述通讯接口用于:
    接收无人机的拍摄装置拍摄获取的图像;
    接收无人机发送的跟随对象标识信息,所述跟随对象标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象;
    所述处理器用于:
    显示无人机的拍摄装置拍摄获取的图像;
    根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识。
  63. 根据权利要求62所述的控制终端,其特征在于,所述跟随对象标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  64. 根据权利要求62或63所述的控制终端,其特征在于,所述处理器根据所述跟随对象标识信息,在所述图像中对所述至少一个由所述无人机从所述图像中识别出的跟随对象进行标识时,具体用于:
    根据所述跟随对象标识信息,在所述图像中显示用于标识所述至少一个由所述无人机从所述图像中识别出的跟随对象的图标。
  65. 根据权利要求62-64任一项所述的控制终端,其特征在于,所述通讯接口还用于:接收所述无人机发送的至少一个跟随对象的类别信息;
    所述处理器还用于:根据所述至少一个跟随对象的类别信息,在所述图像中对所述至少一个跟随对象的类别进行标识。
  66. 根据权利要求62-65任一项所述的控制终端,其特征在于,所述处理器还用于:
    检测用户对所述图像中至少一个跟随对象的选择操作;
    根据检测到的所述选择操作确定跟随指令;
    控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
  67. 根据权利要求66所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少一个跟随对象的选择操作时,具体用于:
    检测用户对所述图像中至少一个跟随对象的语音选择操作;
    所述处理器根据检测到的所述选择操作确定跟随指令时,具体用于:
    根据检测到的所述语音选择操作确定跟随指令。
  68. 根据权利要求66所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少一个跟随对象的选择操作时,具体用于:
    检测用户在显示所述图像的交互界面上对至少一个跟随对象的选择操作。
  69. 根据权利要求66-68任一项所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少一个跟随对象的选择操作时,具体用于:
    检测用户对所述图像中至少一个由无人机从所述图像中识别出的跟随对象的选择操作。
  70. 根据权利要求66-69任一项所述的控制终端,其特征在于,所述处理器检测用户对所述图像中至少一个跟随对象的选择操作时,具体用于:
    检测用户对所述图像中至少一个由无人机从所述图像中未识别出的跟随对象的选择操作。
  71. 根据权利要求66-70任一项所述的控制终端,其特征在于,所述处理器还用于:
    检测用户的确认跟随操作;
    所述处理器控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中时, 具体用于:
    在检测到用户的确认跟随操作后,控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
  72. 根据权利要求71所述的控制终端,其特征在于,所述处理器还用于:
    显示确认跟随图标;
    所述处理器检测用户的确认跟随操作时,具体用于:
    检测用户对所述确认跟随图标的操作。
  73. 根据权利要求66-72任一项所述的控制终端,其特征在于,所述处理器还用于:
    在所述无人机对所述至少一个跟随对象进行跟随的过程中,检测用户的飞行控制操作;
    根据检测到的飞行控制操作确定飞行控制指令;
    所述通讯接口还用于:将所述飞行控制指令发送给无人机,以使所述无人机根据所述飞行控制指令,调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
  74. 根据权利要求66-73任一项所述的控制终端,其特征在于,所述处理器还用于:
    检测用户的构图选择操作;
    根据检测到的构图选择操作确定构图指令;
    所述处理器控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:
    控制无人机对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  75. 根据权利要求65-74任一项所述的控制终端,其特征在于,所述至少一个跟随对象包括至少两个跟随对象。
  76. 一种无人机,其特征在于,包括:
    机身;
    动力系统,安装在所述机身,用于提供飞行动力;
    拍摄装置,安装在所述机身,用于拍摄图像;
    处理器和通讯接口;
    所述处理器用于:
    获取无人机搭载的拍摄装置拍摄的图像;
    对所述图像中的跟随对象进行识别;
    所述通讯接口用于:
    向所述无人机的控制终端发送识别出的至少一个跟随对象的标识信息,所述跟随对象的标识信息用于指示至少一个由所述无人机从所述图像中识别出的跟随对象。
  77. 根据权利要求76所述的无人机,其特征在于,所述跟随对象的标识信息包括所述至少一个由所述无人机从所述图像中识别出的跟随对象在所述图像中的位置信息。
  78. 根据权利要求76或77所述的无人机,其特征在于,所述处理器还用于:
    对所述图像中的跟随对象的类别进行识别;
    所述通讯接口还用于:向所述无人机的控制终端发送识别出的至少一个跟随对象的类别信息。
  79. 根据权利要求76-78任一项所述的无人机,其特征在于,所述通讯接口还用于:接收所述无人机的控制终端发送的跟随指令;
    所述处理器还用于:对所述跟随指令指示的至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中。
  80. 根据权利要求79所述的无人机,其特征在于,所述通讯接口还用于:在对所述至少一个跟随对象进行跟随的过程中,接收所述无人机的控制终端发送的飞行控制指令;
    所述处理器还用于:根据所述飞行控制指令调整所述无人机的位置、所述无人机的姿态、以及所述无人机的云台的姿态中的一个或多个。
  81. 根据权利要求79或80所述的无人机,其特征在于,所述通讯接口还用于:接收所述无人机的控制终端发送的构图指令;
    所述处理器对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象在所述拍摄装置的拍摄画面中时,具体用于:
    对所述跟随指令指示的所述至少一个跟随对象进行跟随以使所述至少一个跟随对象按照所述构图指令指示的构图规则位于所述拍摄装置的拍摄画面中。
  82. 根据权利要求76-81任一项所述的无人机,其特征在于,所述至少一个跟随对象包括至少两个跟随对象。
PCT/CN2018/073626 2018-01-22 2018-01-22 跟随控制方法、控制终端及无人机 WO2019140686A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201880032268.XA CN110622089A (zh) 2018-01-22 2018-01-22 跟随控制方法、控制终端及无人机
PCT/CN2018/073626 WO2019140686A1 (zh) 2018-01-22 2018-01-22 跟随控制方法、控制终端及无人机
US16/935,875 US11509809B2 (en) 2018-01-22 2020-07-22 Following control method, control terminal, and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/073626 WO2019140686A1 (zh) 2018-01-22 2018-01-22 跟随控制方法、控制终端及无人机

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/935,875 Continuation US11509809B2 (en) 2018-01-22 2020-07-22 Following control method, control terminal, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2019140686A1 true WO2019140686A1 (zh) 2019-07-25

Family

ID=67301969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/073626 WO2019140686A1 (zh) 2018-01-22 2018-01-22 跟随控制方法、控制终端及无人机

Country Status (3)

Country Link
US (1) US11509809B2 (zh)
CN (1) CN110622089A (zh)
WO (1) WO2019140686A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112154656A (zh) * 2019-09-25 2020-12-29 深圳市大疆创新科技有限公司 一种拍摄方法和拍摄设备
WO2021026784A1 (zh) * 2019-08-13 2021-02-18 深圳市大疆创新科技有限公司 跟随拍摄、云台控制方法、拍摄装置、手持云台和拍摄系统
CN113260942A (zh) * 2020-09-22 2021-08-13 深圳市大疆创新科技有限公司 手持云台控制方法、手持云台、系统及可读存储介质
CN114710623A (zh) * 2019-08-13 2022-07-05 深圳市大疆创新科技有限公司 基于手持云台的拍摄方法、手持云台及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022061508A1 (zh) * 2020-09-22 2022-03-31 深圳市大疆创新科技有限公司 拍摄控制方法、装置、系统及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009010044A2 (de) * 2007-07-14 2009-01-22 Hans Peter Pankiewicz 2d- oder 3d-zielobjekt, insbesondere für bogen-, armbrust- oder dartpfeile
CN105676862A (zh) * 2016-04-01 2016-06-15 成都云图秀色科技有限公司 一种飞行装置控制系统及控制方法
CN106708089A (zh) * 2016-12-20 2017-05-24 北京小米移动软件有限公司 跟随式的飞行控制方法及装置、无人机
CN106774301A (zh) * 2016-10-25 2017-05-31 纳恩博(北京)科技有限公司 一种避障跟随方法和电子设备
CN106970627A (zh) * 2017-05-17 2017-07-21 深圳市元时科技有限公司 一种智能跟随系统

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3862837B1 (en) * 2014-07-30 2023-05-03 SZ DJI Technology Co., Ltd. Systems and methods for target tracking
US10587790B2 (en) * 2015-11-04 2020-03-10 Tencent Technology (Shenzhen) Company Limited Control method for photographing using unmanned aerial vehicle, photographing method using unmanned aerial vehicle, mobile terminal, and unmanned aerial vehicle
CN105554480B (zh) * 2016-03-01 2018-03-16 深圳市大疆创新科技有限公司 无人机拍摄图像的控制方法、装置、用户设备及无人机
CN106161953A (zh) * 2016-08-12 2016-11-23 零度智控(北京)智能科技有限公司 一种跟踪拍摄方法和装置
KR102236339B1 (ko) * 2016-10-24 2021-04-02 에스지 디제이아이 테크놀러지 코., 엘티디 이미징 기기로 캡처한 이미지를 제어하기 위한 시스템 및 방법
WO2018098784A1 (zh) * 2016-12-01 2018-06-07 深圳市大疆创新科技有限公司 无人机的控制方法、装置、设备和无人机的控制系统
CN107505951B (zh) * 2017-08-29 2020-08-21 深圳市道通智能航空技术有限公司 一种目标跟踪方法、无人机和计算机可读存储介质
US10719087B2 (en) * 2017-08-29 2020-07-21 Autel Robotics Co., Ltd. Target tracking method, unmanned aerial vehicle, and computer readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009010044A2 (de) * 2007-07-14 2009-01-22 Hans Peter Pankiewicz 2d- oder 3d-zielobjekt, insbesondere für bogen-, armbrust- oder dartpfeile
CN105676862A (zh) * 2016-04-01 2016-06-15 成都云图秀色科技有限公司 一种飞行装置控制系统及控制方法
CN106774301A (zh) * 2016-10-25 2017-05-31 纳恩博(北京)科技有限公司 一种避障跟随方法和电子设备
CN106708089A (zh) * 2016-12-20 2017-05-24 北京小米移动软件有限公司 跟随式的飞行控制方法及装置、无人机
CN106970627A (zh) * 2017-05-17 2017-07-21 深圳市元时科技有限公司 一种智能跟随系统

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021026784A1 (zh) * 2019-08-13 2021-02-18 深圳市大疆创新科技有限公司 跟随拍摄、云台控制方法、拍摄装置、手持云台和拍摄系统
CN114710623A (zh) * 2019-08-13 2022-07-05 深圳市大疆创新科技有限公司 基于手持云台的拍摄方法、手持云台及存储介质
CN112154656A (zh) * 2019-09-25 2020-12-29 深圳市大疆创新科技有限公司 一种拍摄方法和拍摄设备
WO2021056260A1 (zh) * 2019-09-25 2021-04-01 深圳市大疆创新科技有限公司 一种拍摄方法和拍摄设备
CN112154656B (zh) * 2019-09-25 2022-10-11 深圳市大疆创新科技有限公司 一种拍摄方法和拍摄设备
CN113260942A (zh) * 2020-09-22 2021-08-13 深圳市大疆创新科技有限公司 手持云台控制方法、手持云台、系统及可读存储介质

Also Published As

Publication number Publication date
US11509809B2 (en) 2022-11-22
CN110622089A (zh) 2019-12-27
US20200358940A1 (en) 2020-11-12

Similar Documents

Publication Publication Date Title
WO2019140686A1 (zh) 跟随控制方法、控制终端及无人机
US10674062B2 (en) Control method for photographing using unmanned aerial vehicle, photographing method using unmanned aerial vehicle, mobile terminal, and unmanned aerial vehicle
CN106598071B (zh) 跟随式的飞行控制方法及装置、无人机
JP6851470B2 (ja) 無人機の制御方法、頭部装着式表示メガネおよびシステム
WO2018227350A1 (zh) 无人机返航控制方法、无人机和机器可读存储介质
JP7059937B2 (ja) 可動型撮像装置の制御装置、可動型撮像装置の制御方法及びプログラム
WO2020014987A1 (zh) 移动机器人的控制方法、装置、设备及存储介质
KR20200063136A (ko) 표본점을 매핑하는 계획 방법, 장치, 제어 단말기 및 기억 매체
WO2019061159A1 (zh) 定位故障光伏板的方法、设备及无人机
KR20220070292A (ko) 자동화된 안경류 디바이스 공유 시스템
WO2019051832A1 (zh) 可移动物体控制方法、设备及系统
JP2020005146A (ja) 出力制御装置、表示端末、情報処理装置、移動体、遠隔制御システム、出力制御方法、プログラムおよび撮影制御装置
US11736802B2 (en) Communication management apparatus, image communication system, communication management method, and recording medium
JP2022050979A (ja) 通信端末、画像通信システム、画像表示方法およびプログラム
JP2022040434A (ja) 通信端末、画像通信システム、画像表示方法およびプログラム
CN110139038A (zh) 一种自主环绕拍摄方法、装置以及无人机
WO2019205070A1 (zh) 无人机的控制方法、控制设备及无人机
CN113412479A (zh) 混合现实显示装置和混合现实显示方法
JP7379785B2 (ja) 3dツアーの比較表示システム及び方法
KR101358064B1 (ko) 사용자 이미지를 이용한 원격 제어 방법 및 시스템
CN111242107B (zh) 用于设置空间中的虚拟对象的方法和电子设备
JP2023121636A (ja) 情報処理システム、通信システム、画像共有方法、プログラム
KR102517324B1 (ko) 메타버스-현실 융합 시스템
WO2023102696A1 (zh) 云台、无线通信设备、云台的控制方法、设备和云台系统
US11765454B2 (en) Image control method and device, and mobile platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18901784

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18901784

Country of ref document: EP

Kind code of ref document: A1