CN113168188A - Unmanned aerial vehicle control method, motion information determination method and device and unmanned aerial vehicle

Info

Publication number
CN113168188A
Authority
CN
China
Prior art keywords
objects
unmanned aerial vehicle
determining
different moments
Prior art date
Legal status
Pending
Application number
CN202080006476.XA
Other languages
Chinese (zh)
Inventor
周游
薛连杰
叶长春
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN113168188A

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones

Abstract

An unmanned aerial vehicle control method, a motion information determination method and device, and an unmanned aerial vehicle are provided. The method includes the following steps: acquiring a first environment image and a second environment image acquired by a vision sensor at different moments (S101); determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to the first environment image and the second environment image acquired at different moments (S102); when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have a collision risk, adjusting the flight trajectory of the unmanned aerial vehicle (S103); and controlling the unmanned aerial vehicle to fly according to the adjusted flight trajectory so that the unmanned aerial vehicle avoids the at least one moving object (S104). The method improves the flight safety of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle control method, motion information determination method and device and unmanned aerial vehicle
Technical Field
The application relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle control method, a motion information determining method and device and an unmanned aerial vehicle.
Background
With the growing range of unmanned aerial vehicle applications, the scenarios in which unmanned aerial vehicles are used have become increasingly diverse. An unmanned aerial vehicle carries an obstacle avoidance system; during flight, the obstacle avoidance system allows the unmanned aerial vehicle to avoid obstacles around it and thereby ensures its flight safety. However, current obstacle avoidance systems can only sense static objects, for example mountain peaks, trees and buildings, so that the unmanned aerial vehicle can avoid static objects on its flight route; the obstacle avoidance system cannot accurately sense moving objects, so the unmanned aerial vehicle cannot avoid them and its flight safety cannot be guaranteed. Therefore, how to improve the flight safety of the unmanned aerial vehicle is a problem to be solved urgently.
Disclosure of Invention
Based on this, the present application provides an unmanned aerial vehicle control method, a motion information determination method and device, and an unmanned aerial vehicle, with the aim of improving the flight safety of the unmanned aerial vehicle.
In a first aspect, the present application provides a method for controlling a drone, the method being applied to a drone including a vision sensor, the method including:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments;
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight track of the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
In a second aspect, the present application further provides a motion information determination method, where the method is applied to a drone, where the drone includes a visual sensor, and the method includes:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to a first environment image and a second environment image which are acquired at different moments;
determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
In a third aspect, the present application further provides a drone control apparatus, which is applied to a drone including a vision sensor, the drone control apparatus including a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments;
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight track of the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
In a fourth aspect, the present application further provides a motion information determination apparatus, which is applied to an unmanned aerial vehicle, the unmanned aerial vehicle including a vision sensor, the motion information determination apparatus including a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to a first environment image and a second environment image which are acquired at different moments;
determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
In a fifth aspect, the present application further provides a drone, the drone comprising a vision sensor, a memory, and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments;
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight track of the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
In a sixth aspect, the present application further provides a drone, the drone comprising a vision sensor, a memory, and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to a first environment image and a second environment image which are acquired at different moments;
determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
In a seventh aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the unmanned aerial vehicle control method or the motion information determination method provided in this specification.
The embodiments of the application provide an unmanned aerial vehicle control method, a motion information determination method and device, and an unmanned aerial vehicle. A first environment image and a second environment image acquired by a vision sensor at different moments are acquired, and whether at least one moving object exists in the environment where the unmanned aerial vehicle is located is determined according to the first environment image and the second environment image acquired at different moments. When it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have a collision risk, the flight trajectory of the unmanned aerial vehicle is adjusted, and the unmanned aerial vehicle is controlled to fly according to the adjusted flight trajectory so that the unmanned aerial vehicle avoids the at least one moving object, which greatly improves the flight safety of the unmanned aerial vehicle.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
fig. 1 is a schematic view of a scene in which an unmanned aerial vehicle implements the unmanned aerial vehicle control method or the motion information determination method provided in the embodiments of the present application;
fig. 2 is a flowchart illustrating steps of a method for controlling an unmanned aerial vehicle according to an embodiment of the present application;
fig. 3 is a flow diagram illustrating sub-steps of the drone controlling method of fig. 2;
fig. 4 is a flow diagram illustrating sub-steps of the drone controlling method of fig. 2;
fig. 5 is a flowchart illustrating steps of a motion information determination method according to an embodiment of the present application;
fig. 6 is a block diagram schematically illustrating a structure of an unmanned aerial vehicle control device according to an embodiment of the present application;
fig. 7 is a block diagram schematically illustrating a structure of a motion information determination apparatus according to an embodiment of the present application;
fig. 8 is a schematic block diagram of a structure of an unmanned aerial vehicle provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
To address the above problem, this specification provides an unmanned aerial vehicle control method applied to an unmanned aerial vehicle. As shown in fig. 1, the unmanned aerial vehicle 100 includes an unmanned aerial vehicle body 110 and a vision sensor 120 mounted on the unmanned aerial vehicle body 110. The vision sensor 120 acquires a first environment image and a second environment image of the environment where the unmanned aerial vehicle 100 is located at different moments. From the first environment image and the second environment image acquired at different moments, it can be determined whether a moving object exists in the environment where the unmanned aerial vehicle 100 is located. When it is determined that a moving object exists in that environment and the unmanned aerial vehicle 100 and the moving object have a collision risk, the unmanned aerial vehicle 100 automatically adjusts its flight trajectory and flies according to the adjusted flight trajectory so as to avoid the moving object posing the collision risk.
The present specification further provides a motion information determination method applied to an unmanned aerial vehicle. As shown in fig. 1, the unmanned aerial vehicle 100 includes an unmanned aerial vehicle body 110 and a vision sensor 120 mounted on the unmanned aerial vehicle body 110. A first environment image and a second environment image of the environment where the unmanned aerial vehicle 100 is located are acquired at different moments through the vision sensor 120; a plurality of objects in that environment at different moments are determined from the first environment image and the second environment image acquired at different moments; target three-dimensional position coordinates of the plurality of objects at different moments are then determined from the first environment image and the second environment image acquired at different moments; and the motion information of the plurality of objects is determined according to the target three-dimensional position coordinates of the plurality of objects at different moments. In this way, the unmanned aerial vehicle 100 can sense both static objects and moving objects, which facilitates the subsequent tracking of static and moving objects.
The vision sensor 120 may be a binocular vision device or another type of vision device, and the mounting position and the number of vision sensors 120 on the unmanned aerial vehicle body 110 may be set according to the actual situation, which is not specifically limited in this application. For example, the drone 100 includes one vision sensor 120, and the vision sensor 120 is mounted to a front area of the drone body 110 for sensing objects in front of the drone 100. For another example, the drone 100 includes two vision sensors 120, the two vision sensors 120 being respectively mounted to a front area and a rear area of the drone body 110 for sensing objects in front of and behind the drone 100. For another example, the drone 100 includes four vision sensors 120, and the four vision sensors 120 are respectively installed in a front area, a rear area, a left area, and a right area of the drone body 110 for sensing objects in front of, behind, to the left of, and to the right of the drone 100. For another example, the drone 100 includes six vision sensors 120, and the six vision sensors 120 are respectively installed in a front area, a rear area, a left area, a right area, an upper area, and a lower area of the drone body 110 for sensing objects in front of, behind, to the left of, to the right of, above, and below the drone 100.
The drone 100 may have one or more propulsion units to allow the drone 100 to fly in the air. The one or more propulsion units may cause the drone 100 to move with one or more, two or more, three or more, four or more, five or more, or six or more degrees of freedom. In some cases, the drone 100 may rotate about one, two, three, or more axes of rotation. The axes of rotation may be perpendicular to each other. The axes of rotation may be maintained perpendicular to each other throughout the flight of the drone 100. The axes of rotation may include a pitch axis, a roll axis, and/or a yaw axis. The drone 100 may be movable in one or more dimensions. For example, the drone 100 can move upward due to the lift generated by one or more rotors. In some cases, the drone 100 may be movable along a Z-axis (which may be upward with respect to the orientation of the drone 100), an X-axis, and/or a Y-axis (which may be lateral). The drone 100 may be movable along one, two, or three axes that are perpendicular to each other.
The drone 100 may be a rotorcraft. In some cases, the drone 100 may be a multi-rotor aircraft that includes multiple rotors. The plurality of rotors may rotate to generate lift for the drone 100. The rotors may be propulsion units that allow the drone 100 to move freely in the air. The rotors may rotate at the same rate and/or may produce the same amount of lift or thrust. Alternatively, the rotors may rotate at different rates, generate different amounts of lift or thrust, and/or allow the drone 100 to rotate. In some cases, one, two, three, four, five, six, seven, eight, nine, ten, or more rotors may be provided on the drone 100. The rotors may be arranged with their axes of rotation parallel to each other. In some cases, the axes of rotation of the rotors may be at any angle relative to each other, which may affect the motion of the drone 100.
The drone 100 may have multiple rotors. The rotor may be connected to the body of the drone 100, which may include a control unit, an Inertial Measurement Unit (IMU), a processor, a battery, a power source, and/or other sensors. The rotor may be connected to the body by one or more arms or extensions that branch off from a central portion of the body. For example, one or more arms may extend radially from the central body of the drone 100 and may have rotors at or near the ends of the arms.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating steps of a method for controlling an unmanned aerial vehicle according to an embodiment of the present application. Specifically, as shown in fig. 2, the drone control method includes steps S101 to S104.
S101, acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments.
During flight, the unmanned aerial vehicle controls the vision sensor to acquire a first environment image and a second environment image of the environment where the unmanned aerial vehicle is located at intervals of a preset time, so as to obtain first environment images and second environment images acquired at different moments. The vision sensor includes a binocular vision device; the first environment image is the environment image acquired by a first vision device in the binocular vision device, and the second environment image is the environment image acquired by a second vision device in the binocular vision device. The preset time may be set based on the actual situation, which is not specifically limited in this specification. For example, if the preset time is 100 milliseconds, the first vision device acquires a first environment image of the environment where the unmanned aerial vehicle is located every 100 milliseconds, yielding 10 first environment images within 1 second; meanwhile, the second vision device acquires a second environment image of that environment every 100 milliseconds, yielding 10 second environment images within 1 second, and the first environment image and the second environment image acquired at the same moment correspond to each other one to one.
The mounting position and the number of vision sensors on the unmanned aerial vehicle may be set according to the actual situation, which is not specifically limited in this specification. For example, the drone includes one vision sensor mounted in the front area of the drone for capturing images of the environment in front of the drone. For another example, the drone includes two vision sensors mounted to the front and rear areas of the drone, respectively, for capturing images of the environment in front of and behind the drone. For another example, the drone includes four vision sensors, which are respectively installed in the front area, the rear area, the left area, and the right area of the drone, for acquiring images of the environment in front of, behind, to the left of, and to the right of the drone. For another example, the drone includes six vision sensors, which are respectively installed in the front area, the rear area, the left area, the right area, the upper area, and the lower area of the drone, for acquiring images of the environment in front of, behind, to the left of, to the right of, above, and below the drone.
S102, determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to the first environment image and the second environment image acquired at different moments.
From the first environment image and the second environment image acquired at different moments, the states of a plurality of objects in the environment where the unmanned aerial vehicle is located can be determined, and from the states of the plurality of objects it can be determined whether at least one moving object exists in that environment.
In one embodiment, the flight height of the unmanned aerial vehicle is obtained; when the flight height of the unmanned aerial vehicle is less than or equal to a preset height, whether at least one moving object exists in the environment where the unmanned aerial vehicle is located is determined according to the first environment image and the second environment image acquired at different moments. The preset height may be set according to the actual situation, which is not specifically limited in this application; for example, the preset height is 6 meters. That is, when the unmanned aerial vehicle is flying at low altitude, it determines from the first environment image and the second environment image acquired at different moments whether at least one moving object exists in its environment, which makes it easier for the unmanned aerial vehicle to sense moving objects in its environment during low-altitude flight.
In one embodiment, as shown in fig. 3, step S102 includes sub-steps S1021 to S1023.
S1021, determining target three-dimensional position coordinates of a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to the first environment image and the second environment image acquired at different moments.
The depth maps of the environment where the unmanned aerial vehicle is located at different moments can be generated through the first environment image and the second environment image which are acquired at different moments; the depth information of a plurality of objects at different moments can be determined through the depth maps of the environment where the unmanned aerial vehicle is located at different moments, and the target three-dimensional position coordinates of the plurality of objects at different moments are determined according to the depth information of the plurality of objects at different moments. And the target three-dimensional position coordinate is a three-dimensional position coordinate of the object in a world coordinate system.
In one embodiment, feature point matching pairs of a plurality of spatial points on a plurality of objects at different time points are determined from a first environment image and a second environment image acquired at different time points respectively; determining depth information of a plurality of objects at different moments according to feature point matching pairs of a plurality of space points on the plurality of objects at different moments respectively; and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments. The target three-dimensional position coordinates of the multiple objects at different moments can be determined through the depth information of the multiple objects at different moments, so that the states of the multiple objects can be determined conveniently in a follow-up mode.
In an embodiment, the manner of determining, from the first environment image and the second environment image, the feature point matching pairs respectively corresponding to a plurality of spatial points on an object is specifically as follows: extracting first feature points corresponding to the plurality of spatial points on the object from the first environment image according to a preset feature point extraction algorithm, and determining second feature points matched with the first feature points from the corresponding second environment image based on a preset feature point tracking algorithm, to obtain the feature point matching pairs respectively corresponding to the plurality of spatial points on the object; or, extracting first feature points corresponding to the plurality of spatial points on the object from the second environment image based on the preset feature point extraction algorithm, and determining second feature points matched with the first feature points from the first environment image based on the preset feature point tracking algorithm, to obtain the feature point matching pairs respectively corresponding to the plurality of spatial points on the object. The preset feature point extraction algorithm includes at least one of the following: the Harris corner detection algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, and the FAST (Features from Accelerated Segment Test) feature point detection algorithm; the preset feature point tracking algorithm includes, but is not limited to, the KLT (Kanade-Lucas-Tomasi feature tracker) algorithm.
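As an illustration of this step, the following Python sketch extracts corner feature points from the first environment image and tracks them into the second environment image with the KLT algorithm using OpenCV; the helper name and all parameter values are assumptions for illustration, not values specified in this application.

    import cv2
    import numpy as np

    def extract_matching_pairs(first_image_gray, second_image_gray, max_corners=200):
        # Corner detection in the first environment image (Shi-Tomasi corners,
        # a corner measure derived from Harris corner detection).
        first_points = cv2.goodFeaturesToTrack(
            first_image_gray, maxCorners=max_corners, qualityLevel=0.01, minDistance=7)
        if first_points is None:
            return []
        # KLT (Kanade-Lucas-Tomasi) optical-flow tracking into the second image.
        second_points, status, _err = cv2.calcOpticalFlowPyrLK(
            first_image_gray, second_image_gray, first_points, None)
        # Keep only the successfully tracked feature point matching pairs.
        return [(p0.ravel(), p1.ravel())
                for p0, p1, ok in zip(first_points, second_points, status.ravel())
                if ok == 1]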
In an embodiment, the manner of determining, from the first environmental image and the second environmental image acquired at different times, feature point matching pairs of a plurality of spatial points on a plurality of objects at different times is specifically: acquiring first target image areas of a plurality of objects from first environment images acquired at different moments to obtain the first target image areas of the plurality of objects at different moments; acquiring second target image areas of a plurality of objects from second environment images acquired at different moments to obtain the second target image areas of the plurality of objects at different moments; and extracting feature point matching pairs of a plurality of spatial points on the plurality of objects at different moments respectively from the first target image area and the second target image area of the plurality of objects at different moments. The first target image area and the second target image area of the plurality of objects at different moments are determined, and then the feature point matching pairs of the plurality of space points of the plurality of objects at different moments are extracted from the first target image area and the second target image area of the plurality of objects at different moments, so that the calculation amount can be reduced, and the processing speed can be improved.
In an embodiment, the determining the depth information of the plurality of objects at different times according to the feature point matching pairs of the plurality of spatial points on the plurality of objects at different times includes: determining feature point matching pairs of central space points of a plurality of objects at different moments in a fitting manner according to the feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively; and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments. The method comprises the steps of determining a feature point matching pair corresponding to a central space point of an object according to a fitting mode based on feature point matching pairs corresponding to a plurality of space points on the object, accurately determining the feature point matching pair corresponding to the central space point of the object, further accurately determining depth information of the object, and facilitating subsequent accurate determination of three-dimensional position coordinates of the object.
In some embodiments, taking a single object and a single moment as an example, the depth information of the multiple objects at different moments is determined according to the feature point matching pairs of the multiple spatial points on the multiple objects at different moments. Specifically, the depth information of the multiple spatial points on the object (the distances between the multiple spatial points and the unmanned aerial vehicle) is determined according to the feature point matching pairs corresponding to the multiple spatial points on the object; the average depth information is then determined from the depth information of the multiple spatial points on the object and taken as the depth information of the object. Similarly, in the above manner, the depth information of multiple objects at different moments can be determined. Using the average depth information as the depth information of the object improves the accuracy of the depth information, which facilitates the subsequent accurate determination of the three-dimensional position coordinates of the object.
In some embodiments, the manner of determining the depth information of a spatial point from its feature point matching pair is specifically as follows: determining the pixel difference of the feature point matching pair according to the pixels of the two feature points in the feature point matching pair; acquiring a preset focal length and a preset binocular distance of the binocular vision device; and determining the depth information of the spatial point according to the preset focal length, the preset binocular distance and the pixel difference corresponding to the feature point matching pair. The preset focal length is determined by calibrating the focal length of the binocular vision device, and the preset binocular distance is determined according to the installation positions of the first vision device and the second vision device in the binocular vision device.
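A minimal numerical sketch of this relationship, assuming a rectified binocular vision device: the depth of a spatial point equals the preset focal length (in pixels) times the preset binocular distance, divided by the pixel difference (disparity) of its feature point matching pair. The function name and the example numbers are illustrative, not taken from this application.

    def depth_from_pair(first_pixel_x, second_pixel_x, focal_length_px, baseline_m):
        # Pixel difference (disparity) between the two feature points of the pair.
        disparity = abs(first_pixel_x - second_pixel_x)
        if disparity == 0:
            return float("inf")  # the spatial point is effectively at infinite distance
        # Depth of the spatial point: Z = f * B / d.
        return focal_length_px * baseline_m / disparity

    # Example: focal length 640 px, binocular distance 0.12 m, disparity 16 px -> 4.8 m.
    print(depth_from_pair(300.0, 284.0, 640.0, 0.12))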
In some embodiments, the determining, according to the depth information of the multiple objects at different time instances, the target three-dimensional position coordinates of the multiple objects at different time instances is specifically: determining three-dimensional position coordinates of the plurality of objects under a camera coordinate system at different moments according to the depth information of the plurality of objects at different moments; the method comprises the steps of obtaining position information and posture information of the unmanned aerial vehicle at different moments, converting three-dimensional position coordinates of a plurality of objects under a camera coordinate system at different moments into three-dimensional position coordinates under a world coordinate system at different moments according to the position information and the posture information of the unmanned aerial vehicle at different moments, and obtaining target three-dimensional position coordinates of the plurality of objects at different moments.
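A short sketch, under the assumption that the drone's attitude information has already been expressed as a camera-to-world rotation matrix, of converting a three-dimensional position coordinate from the camera coordinate system into the world coordinate system using the drone's position information; the function and argument names are illustrative.

    import numpy as np

    def camera_to_world(point_camera, rotation_world_from_camera, drone_position_world):
        # Rotate the camera-frame coordinate into the world frame using the
        # attitude-derived rotation, then translate by the drone's world position.
        point_camera = np.asarray(point_camera, dtype=float)
        return rotation_world_from_camera @ point_camera + np.asarray(drone_position_world, dtype=float)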
S1022, determining states of the multiple objects according to target three-dimensional position coordinates of the multiple objects at different moments, wherein the states include a motion state and a static state.
Specifically, three-dimensional position coordinates of each object at two adjacent moments are obtained, and the speed of each object is determined according to the three-dimensional position coordinates of each object at two adjacent moments; and determining the state of each object according to the speed of each object, namely comparing the speed of each object with a preset speed, if the speed of the object is greater than or equal to the preset speed, determining that the state of the object is a moving state, namely the object is a moving object, and if the speed of the object is less than the preset speed, determining that the state of the object is a static state, namely the object is a static object. The preset speed may be set according to an actual situation, which is not specifically limited in the present application, for example, the preset speed is 0.5 meters per second.
In one embodiment, the speed of the plurality of objects between every two adjacent moments is determined according to the target three-dimensional position coordinates of the plurality of objects at different moments; determining the states of the plurality of objects according to the speeds of the plurality of objects between every two adjacent moments, namely comparing the speed of each object between every two adjacent moments with a preset speed, if at least one speed of the objects between every two adjacent moments is greater than or equal to the preset speed, determining that the state of the object is a moving state, namely the object is a moving object, and if the speeds of the objects between every two adjacent moments are less than the preset speed, determining that the state of the object is a static state, namely the object is a static object.
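The following sketch illustrates this state determination for one object: the speed between every two adjacent moments is estimated from the target three-dimensional position coordinates, and the object is regarded as moving if any of these speeds reaches the preset speed. The 0.5 m/s threshold is the example value mentioned above; the function name is an assumption.

    import numpy as np

    def object_state(positions, timestamps, preset_speed=0.5):
        # positions: target three-dimensional position coordinates of one object
        # at successive moments; timestamps: the corresponding times in seconds.
        for p0, p1, t0, t1 in zip(positions, positions[1:], timestamps, timestamps[1:]):
            speed = np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)) / (t1 - t0)
            if speed >= preset_speed:
                return "moving"   # at least one adjacent-moment speed reaches the preset speed
        return "static"           # all adjacent-moment speeds are below the preset speed

    # Example: an object that moves 0.3 m within 0.1 s (3 m/s) is classified as moving.
    print(object_state([(0.0, 0.0, 5.0), (0.3, 0.0, 5.0)], [0.0, 0.1]))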
S1023, when at least one object with the state being a motion state exists in the plurality of objects, determining that at least one moving object exists in the environment where the unmanned aerial vehicle is located.
After the states of the multiple objects in the environment where the unmanned aerial vehicle is located are determined, when it is determined that at least one object in which the state is the motion state exists in the multiple objects, it can be determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located. Whether at least one moving object exists within the environment in which the drone is located can be determined by determining whether the state of each object is a moving state or a stationary state.
S103, when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight trajectory of the unmanned aerial vehicle.
When it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located, it is determined whether the unmanned aerial vehicle and the at least one moving object have a collision risk; when it is determined that a collision risk exists, the flight trajectory of the unmanned aerial vehicle is adjusted so that the unmanned aerial vehicle avoids the moving object when flying according to the adjusted flight trajectory. When no moving object exists in the environment where the unmanned aerial vehicle is located, the unmanned aerial vehicle is controlled to avoid obstacles according to the obstacle information acquired by its perception sensor, so that the unmanned aerial vehicle can avoid static objects.
In an embodiment, determining whether there is a collision risk between the unmanned aerial vehicle and the at least one moving object is specifically as follows: acquiring a first description equation of the flight trajectory of the unmanned aerial vehicle, and acquiring a second description equation of the motion trajectory of each moving object; respectively solving a plurality of equation sets formed by the first description equation and each second description equation to obtain the solution results of the plurality of equation sets; when the solution result of at least one equation set is greater than zero, determining that the unmanned aerial vehicle and at least one moving object have a collision risk, and when none of the equation sets has a solution, determining that the unmanned aerial vehicle and the moving objects have no collision risk. The first description equation is an expression of a Bezier curve between the position of the unmanned aerial vehicle and time, and the second description equation is an expression of a Bezier curve between the position of the moving object and time. Whether a collision risk exists between the unmanned aerial vehicle and a moving object can be accurately determined by solving the equation sets.
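The closed-form solution of the equation set depends on the exact Bezier descriptions, which this application does not spell out; the sketch below therefore approximates the check numerically by sampling both cubic Bezier position-versus-time curves and testing whether the drone and the moving object ever come within a safety radius. The control points, the 2 m radius and the sampling density are illustrative assumptions, not values from this application.

    import numpy as np

    def cubic_bezier(control_points, t):
        # Evaluate a cubic Bezier position-versus-time curve at normalised time t in [0, 1].
        p0, p1, p2, p3 = [np.asarray(p, dtype=float) for p in control_points]
        return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

    def collision_risk(drone_control_points, object_control_points,
                       safety_radius=2.0, samples=200):
        # True if, at some shared moment, the two trajectories come within the safety radius.
        for t in np.linspace(0.0, 1.0, samples):
            drone_pos = cubic_bezier(drone_control_points, t)
            object_pos = cubic_bezier(object_control_points, t)
            if np.linalg.norm(drone_pos - object_pos) < safety_radius:
                return True
        return False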
In one embodiment, as shown in fig. 4, step S103 specifically includes sub-steps S1031 to S1033.
And S1031, obtaining the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle.
Specifically, three-dimensional position coordinates of at least one moving object at different moments are determined according to a first environment image and a second environment image which are acquired at different moments; and determining the motion track of at least one moving object according to the three-dimensional position coordinates of the at least one moving object at different moments. The detailed process of determining the three-dimensional position coordinates of at least one moving object at different times according to the first environment image and the second environment image acquired at different times refers to the foregoing embodiments, which are not described herein again.
In one embodiment, when the unmanned aerial vehicle flies according to a preset flight trajectory, the current waypoint of the unmanned aerial vehicle on the preset flight trajectory and the ending waypoint of the preset flight trajectory are acquired; the remaining flight trajectory of the unmanned aerial vehicle is intercepted from the preset flight trajectory according to the current waypoint of the unmanned aerial vehicle on the preset flight trajectory and the ending waypoint of the preset flight trajectory, and the intercepted remaining flight trajectory is taken as the flight trajectory of the unmanned aerial vehicle. When the unmanned aerial vehicle is controlled by a remote pilot, a plurality of historical position coordinates of the unmanned aerial vehicle are acquired, together with the current flight speed and flight direction of the unmanned aerial vehicle; and the flight trajectory of the unmanned aerial vehicle is determined according to the plurality of historical position coordinates of the unmanned aerial vehicle and its current flight speed and flight direction. The preset flight trajectory may be determined according to the flight task actually executed by the unmanned aerial vehicle, which is not specifically limited in this specification.
S1032, acquiring the intersection position of the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle.
Specifically, a first description equation of the flight path of the unmanned aerial vehicle is obtained; acquiring a second description equation of the motion trail of at least one moving object; solving an equation set formed by the first description equation and a second description equation of the motion trail of at least one moving object to obtain a solution result; and determining the intersection position of the motion track of at least one moving object and the flight track of the unmanned aerial vehicle according to the solving result. The first description equation is an expression of a bezier curve between the position of the unmanned aerial vehicle and time, and the second description equation is an expression of a bezier curve between the position of the moving object and time.
S1033, adjusting the flight path of the unmanned aerial vehicle according to at least one intersection position.
Specifically, the orientation information of the at least one moving object relative to the unmanned aerial vehicle is determined; the target position coordinates of the unmanned aerial vehicle at the at least one intersection position are determined according to the orientation information; and the flight trajectory of the unmanned aerial vehicle is adjusted according to the at least one target position coordinate. The orientation information of a moving object relative to the unmanned aerial vehicle can be determined according to the unmanned aerial vehicle's perception sensor, and includes at least one of in front of, behind, to the left of, to the right of, above and below the unmanned aerial vehicle. By determining the target position coordinates of the unmanned aerial vehicle at the at least one intersection position based on the orientation information, the flight trajectory of the unmanned aerial vehicle can be accurately adjusted, so that when the unmanned aerial vehicle flies according to the adjusted flight trajectory it can avoid the moving object, which ensures the flight safety of the unmanned aerial vehicle.
In an embodiment, the manner of determining the target position coordinates of the unmanned aerial vehicle at an intersection position according to the orientation information is specifically as follows: acquiring the position coordinate of the intersection position on the flight trajectory of the unmanned aerial vehicle, and determining a target adjustment strategy for the position coordinate according to the orientation information; and adjusting the position coordinate according to the target adjustment strategy, and taking the adjusted position coordinate as the target position coordinate of the unmanned aerial vehicle at the intersection position, where a preset distance lies between the target position coordinate and the original position coordinate. The target adjustment strategy for the position coordinate is to translate the position coordinate forward, backward, to the left, to the right, upward and/or downward by a preset distance, and the preset distance may be set based on the actual situation, which is not specifically limited in this application; for example, the preset distance is 3 meters.
In an embodiment, the determining the target adjustment policy of the position coordinate according to the orientation information specifically includes: and determining at least one translation direction of the position coordinate according to the orientation information, acquiring an adjustment strategy corresponding to the at least one translation direction, and taking the adjustment strategy corresponding to the at least one translation direction as a target adjustment strategy of the position coordinate. For example, if the unmanned aerial vehicle and a moving object are in collision risk and the moving object is located to the left of the unmanned aerial vehicle, the translation direction of the position coordinate is any one of above, below, front and back, or at least one of above and front, or at least one of below and back, or at least one of above and back, or at least one of below and front, and the like. For another example, if the unmanned aerial vehicle and two moving objects are in collision risk and the two moving objects are located at the left and above the unmanned aerial vehicle, respectively, the translation direction of the position coordinate is front or rear, or right and front, or right and rear, etc.
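As a concrete illustration of the target adjustment strategy, the sketch below translates the position coordinate at an intersection position by the preset distance (3 meters, as in the example above) along a direction that, according to the orientation information, is not occupied by a moving object; the direction table and function name are assumptions for illustration.

    import numpy as np

    UNIT_DIRECTIONS = {"front": np.array([1.0, 0.0, 0.0]),  "rear":  np.array([-1.0, 0.0, 0.0]),
                       "left":  np.array([0.0, 1.0, 0.0]),  "right": np.array([0.0, -1.0, 0.0]),
                       "above": np.array([0.0, 0.0, 1.0]),  "below": np.array([0.0, 0.0, -1.0])}

    def adjust_position(position, occupied_directions, preset_distance=3.0):
        # Translate the intersection-position coordinate by the preset distance along the
        # first direction in which, per the orientation information, no moving object lies.
        position = np.asarray(position, dtype=float)
        for name, direction in UNIT_DIRECTIONS.items():
            if name not in occupied_directions:
                return position + preset_distance * direction
        return position

    # Example: a moving object to the left of the drone -> translate the coordinate forward.
    print(adjust_position([10.0, 2.0, 5.0], occupied_directions={"left"}))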
In an embodiment, the adjusting the flight trajectory of the drone according to the at least one target position coordinate is specifically: acquiring the current position coordinate of the unmanned aerial vehicle and the position coordinate of the end waypoint of the unmanned aerial vehicle on the flight track; planning a first flight track of the unmanned aerial vehicle according to the current position coordinates and the target position coordinates of the unmanned aerial vehicle, and planning a second flight track of the unmanned aerial vehicle according to the target position coordinates and the position coordinates of the ending waypoint; and splicing the first flight track and the second flight track to obtain the adjusted flight track. The planning mode of the first flight trajectory and the second flight trajectory may be set according to an actual situation, which is not specifically limited in the present application.
In one embodiment, when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have a collision risk, the motion trajectory of the at least one moving object and the flight trajectory of the unmanned aerial vehicle are acquired; the collision moment of the at least one moving object with the unmanned aerial vehicle is determined according to the motion trajectory of the at least one moving object and the flight trajectory of the unmanned aerial vehicle; and the flight trajectory of the unmanned aerial vehicle is adjusted according to the collision moment of the at least one moving object with the unmanned aerial vehicle. Using the collision moment of the moving object with the unmanned aerial vehicle, the flight trajectory of the unmanned aerial vehicle can be accurately adjusted, so that when the unmanned aerial vehicle flies according to the adjusted flight trajectory it can avoid the moving object.
In an embodiment, the mode of determining the collision time of the at least one moving object with the unmanned aerial vehicle according to the motion trajectory of the at least one moving object and the flight trajectory of the unmanned aerial vehicle is specifically as follows: acquiring a first description equation of the flight track of the unmanned aerial vehicle; acquiring a second description equation of the motion trail of at least one moving object; and solving an equation set formed by the first description equation and a second description equation of the motion trail of the at least one moving object to obtain the collision time of the at least one moving object and the unmanned aerial vehicle. The first description equation is an expression of a Bezier curve between the position of the unmanned aerial vehicle and the time, and the second description equation is an expression of a Bezier curve between the position of the moving object and the time.
In an embodiment, the manner of adjusting the flight trajectory of the unmanned aerial vehicle according to the collision moment of the at least one moving object with the unmanned aerial vehicle is specifically as follows: determining a target collision moment according to the collision moment of the at least one moving object with the unmanned aerial vehicle and the current system time, that is, determining the time difference between each collision moment and the current system time and taking the collision moments whose absolute time difference is less than or equal to a preset difference as target collision moments; and adjusting the flight trajectory of the unmanned aerial vehicle according to the position coordinate of the unmanned aerial vehicle at the target collision moment, that is, acquiring the orientation information of the moving object posing the collision risk relative to the unmanned aerial vehicle, adjusting the position coordinate of the unmanned aerial vehicle at the target collision moment according to the orientation information to obtain a target position coordinate, and adjusting the flight trajectory of the unmanned aerial vehicle according to the target position coordinate.
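A small sketch of selecting the target collision moments described above: each predicted collision moment is compared with the current system time, and those whose absolute time difference is within the preset difference are kept. The 5-second preset difference is an illustrative assumption.

    def target_collision_moments(collision_times, current_time, preset_difference=5.0):
        # Keep the collision moments whose absolute time difference from the
        # current system time is less than or equal to the preset difference.
        return [t for t in collision_times if abs(t - current_time) <= preset_difference]

    # Example: with the current system time at 100.0 s, the collisions predicted at
    # 102.5 s and 104.0 s are target collision moments, while 120.0 s is not.
    print(target_collision_moments([102.5, 104.0, 120.0], 100.0))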
And S104, controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
After the flight track of the unmanned aerial vehicle is adjusted, the unmanned aerial vehicle is controlled to fly according to the adjusted flight track, so that the unmanned aerial vehicle can avoid a moving object with collision risk, and the flight safety of the unmanned aerial vehicle is ensured.
In the unmanned aerial vehicle control method provided in this specification, the first environment image and the second environment image acquired by the vision sensor at different moments are obtained, and whether at least one moving object exists in the environment where the unmanned aerial vehicle is located is determined according to the first environment image and the second environment image acquired at different moments. When it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have a collision risk, the flight trajectory of the unmanned aerial vehicle is adjusted, and the unmanned aerial vehicle is controlled to fly according to the adjusted flight trajectory so that it avoids the at least one moving object, which greatly improves the flight safety of the unmanned aerial vehicle.
Referring to fig. 5, fig. 5 is a flowchart illustrating the steps of a motion information determination method according to an embodiment of the present application. Specifically, as shown in fig. 5, the motion information determination method includes steps S201 to S204.
S201, acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments.
During flight, the unmanned aerial vehicle controls the vision sensor to acquire a first environment image and a second environment image of the environment where the unmanned aerial vehicle is located at intervals of a preset time, so as to obtain first environment images and second environment images acquired at different moments. The vision sensor includes a binocular vision device; the first environment image is the environment image acquired by a first vision device in the binocular vision device, and the second environment image is the environment image acquired by a second vision device in the binocular vision device. The preset time may be set based on the actual situation, which is not specifically limited in this specification; for example, the preset time is 100 milliseconds.
S202, determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to the first environment image and the second environment image acquired at different moments.
Specifically, the first environment images acquired at different moments are processed based on a preset edge detection algorithm to obtain first edge images corresponding to the first environment images at different moments; based on a preset connected domain detection algorithm, connected domain detection is performed on the first edge images at different moments to determine the connected domains in the first edge images at different moments, and the determined connected domains in the first edge images at different moments are taken as the plurality of objects in the environment where the unmanned aerial vehicle is located. The preset edge detection algorithm includes any one of the Sobel operator algorithm, the Canny algorithm, the Laplacian algorithm and the like, and the preset connected domain detection algorithm includes the Flood Fill algorithm.
Similarly, processing the second environment images acquired at different moments based on a preset edge detection algorithm to obtain second edge images corresponding to the second environment images at different moments; and based on a preset connected domain detection algorithm, carrying out connected domain detection on the second edge images at different moments to determine connected domains in the second edge images at different moments, and taking the determined connected domains in the second edge images at different moments as a plurality of objects in the environment where the unmanned aerial vehicle is located.
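An illustrative sketch of this object segmentation on a single environment image, assuming OpenCV: Canny edge detection produces the edge image, and connected-domain detection over the non-edge regions (here via cv2.connectedComponents rather than repeated flood fills) yields one label per candidate object. All threshold values are assumptions.

    import cv2

    def detect_objects(environment_image_gray, low_threshold=50, high_threshold=150):
        # Edge detection on the environment image.
        edges = cv2.Canny(environment_image_gray, low_threshold, high_threshold)
        # Connected-domain detection: regions separated by edges become distinct labels.
        non_edge = cv2.bitwise_not(edges)
        num_labels, labels = cv2.connectedComponents(non_edge)
        # Label 0 corresponds to the edge pixels; the remaining labels are candidate objects.
        return num_labels - 1, labels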
In one embodiment, according to a first environment image and a second environment image acquired at different times, generating depth maps of environments where the unmanned aerial vehicle is located at different times; based on a preset connected domain detection algorithm, the connected domain detection is carried out on the depth maps of the environments where the unmanned aerial vehicles are located at different moments, so that the connected domains in the depth maps of the environments where the unmanned aerial vehicles are located at different moments are determined, and the determined connected domains in the depth maps of the environments where the unmanned aerial vehicles are located at different moments are used as a plurality of objects in the environments where the unmanned aerial vehicles are located.
In one embodiment, depth maps of the environment where the unmanned aerial vehicle is located at different moments are generated according to the first environment image and the second environment image acquired at different moments; three-dimensional point cloud data of the environment at different moments are determined based on the depth maps at different moments; planes are extracted from the three-dimensional point cloud data at different moments, and a plurality of objects in the environment where the unmanned aerial vehicle is located are determined according to the extracted planes and a preset connected domain detection algorithm. The planes can be extracted according to the RANSAC (Random Sample Consensus) algorithm.
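The plane-extraction step can be illustrated with a minimal RANSAC plane fit over the three-dimensional point cloud; the sketch below removes the dominant plane (typically the ground) so that the remaining points can be grouped into objects. The distance threshold and iteration count are illustrative assumptions, not values taken from this application.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.05, iterations=200, rng=None):
    """Extract the dominant plane from an N x 3 point cloud with RANSAC.
    Returns the plane inliers and the remaining (non-plane) points."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)

    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from the three sampled points.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers

    plane_points = points[best_inliers]   # e.g. the ground plane, discarded
    remaining = points[~best_inliers]     # candidate object points
    return plane_points, remaining
```

The non-plane points would then be grouped by the preset connected domain detection algorithm to obtain the plurality of objects.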
S203, determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image acquired at different moments.
Specifically, feature point matching pairs of a plurality of spatial points on a plurality of objects at different moments are determined from a first environment image and a second environment image acquired at different moments; determining depth information of a plurality of objects at different moments according to feature point matching pairs of a plurality of space points on the plurality of objects at different moments respectively; and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments.
In one embodiment, according to feature point matching pairs of a plurality of spatial points on a plurality of objects at different time instants, feature point matching pairs of central spatial points of the plurality of objects at different time instants are determined in a fitting manner; and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments.
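By way of a hedged example, the sketch below shows how a matched pair of feature points from a rectified binocular pair yields depth and a target three-dimensional position coordinate; the central matching pair of an object is approximated here by averaging its matched feature points, which is only one possible fitting manner. The intrinsics fx, fy, cx, cy and the baseline are assumed to be known from calibration.

```python
import numpy as np

def triangulate_center(matches_left, matches_right, fx, fy, cx, cy, baseline_m):
    """matches_left / matches_right: N x 2 pixel coordinates of one object's
    matched feature points in the first and second environment images of a
    rectified binocular pair. Returns the 3-D coordinate of a fitted central
    space point in the camera frame (averaging is an illustrative fit)."""
    left = np.asarray(matches_left, dtype=np.float64)
    right = np.asarray(matches_right, dtype=np.float64)

    # Central matching pair of the object, obtained by fitting (here: mean).
    u_l, v_l = left.mean(axis=0)
    u_r, _ = right.mean(axis=0)

    disparity = u_l - u_r
    if disparity <= 0:
        return None  # invalid match, depth cannot be recovered

    z = fx * baseline_m / disparity  # depth of the central space point
    x = (u_l - cx) * z / fx
    y = (v_l - cy) * z / fy
    return np.array([x, y, z])  # target three-dimensional position coordinate
```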
And S204, determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
The motion information comprises the state, the motion track and the motion speed of the object, and the state comprises a motion state and a static state.
In an embodiment, the plurality of objects can be tracked according to the motion information of the plurality of objects in combination with the two-dimensional image information of the plurality of objects, thereby achieving tracking of a main target, avoiding the problems of target loss and mistaken tracking, and improving tracking accuracy.
Specifically, according to target three-dimensional position coordinates of a plurality of objects at different moments, determining the speed of the plurality of objects between every two adjacent moments; the states of the plurality of objects are determined based on the velocities of the plurality of objects between each two adjacent time instants.
In one embodiment, three-dimensional position coordinates of the object in the motion state at different moments are acquired; and determining the motion trail of the object in the motion state according to the three-dimensional position coordinates of the object in the motion state at different moments.
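As an illustrative sketch of step S204 above, the following Python code computes the speed of one object between every two adjacent moments from its target three-dimensional position coordinates, classifies the object as moving or static with a hypothetical speed threshold, and keeps the motion trajectory of objects in the motion state. The 0.3 m/s threshold and the use of the median speed are assumptions made for the example.

```python
import numpy as np

def classify_motion(positions, timestamps, speed_thresh=0.3):
    """positions: target 3-D coordinates of one object at successive moments;
    timestamps: the matching acquisition times in seconds.
    Returns per-interval speeds, a moving/static state, and the trajectory
    of the object when it is in the motion state. Threshold is illustrative."""
    pos = np.asarray(positions, dtype=np.float64)
    t = np.asarray(timestamps, dtype=np.float64)

    # Speed between every two adjacent moments.
    displacements = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    speeds = displacements / np.diff(t)

    moving = bool(np.median(speeds) > speed_thresh)
    trajectory = pos if moving else None  # trajectory kept only for moving objects
    return speeds, ("moving" if moving else "static"), trajectory
```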
According to the above motion information determining method, a first environment image and a second environment image acquired by the vision sensor at different moments are obtained; a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments are determined according to the first environment image and the second environment image acquired at different moments; target three-dimensional position coordinates of the plurality of objects at different moments are then determined according to the first environment image and the second environment image acquired at different moments; and the motion information of the plurality of objects is determined according to the target three-dimensional position coordinates of the plurality of objects at different moments, so that the unmanned aerial vehicle can perceive both moving objects and static objects.
Referring to fig. 6, fig. 6 is a schematic block diagram of a structure of an unmanned aerial vehicle control device according to an embodiment of the present application. As shown in fig. 6, the drone control device 300 is applied to a drone, the drone includes a vision sensor, the drone control device 300 includes a processor 301 and a memory 302, the processor 301 and the memory 302 are connected by a bus 303, and the bus 303 is, for example, an I2C (Inter-integrated Circuit) bus.
Specifically, the Processor 301 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 302 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash disk, or a removable hard disk.
Wherein the processor 301 is configured to run a computer program stored in the memory 302, and when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments;
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight track of the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
Optionally, the processor is configured to, when determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to the first environment image and the second environment image acquired at different times, implement:
determining target three-dimensional position coordinates of a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to the first environment image and the second environment image which are acquired at different moments;
determining states of the plurality of objects according to target three-dimensional position coordinates of the plurality of objects at different moments, wherein the states comprise a motion state and a static state;
and when at least one object in the plurality of objects in the motion state exists, determining that at least one moving object exists in the environment where the unmanned aerial vehicle is located.
Optionally, the processor is configured to, when determining target three-dimensional position coordinates of a plurality of objects in an environment where the unmanned aerial vehicle is located at different times according to the first environment image and the second environment image acquired at different times, implement:
determining feature point matching pairs of a plurality of spatial points on the plurality of objects at different moments respectively from a first environment image and a second environment image acquired at different moments;
determining depth information of the plurality of objects at different moments according to feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments.
Optionally, the processor is configured to determine depth information of the plurality of objects at different times according to feature point matching pairs of the plurality of spatial points on the plurality of objects at different times, and is configured to:
determining feature point matching pairs of central space points of the plurality of objects at different moments in a fitting manner according to the feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments.
Optionally, the processor is configured to, when determining the states of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different time instants, implement:
determining the speed of the plurality of objects between every two adjacent moments according to the target three-dimensional position coordinates of the plurality of objects at different moments;
determining the states of the plurality of objects according to the speeds of the plurality of objects between every two adjacent moments.
Optionally, the processor, when implementing adjusting the flight trajectory of the drone, is configured to implement:
acquiring a motion track of the at least one moving object and a flight track of the unmanned aerial vehicle;
acquiring the intersection position of the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle;
adjusting the flight trajectory of the unmanned aerial vehicle according to at least one of the intersection positions.
Optionally, when the processor is implemented to acquire the motion trajectory of the at least one moving object, the processor is configured to implement:
determining three-dimensional position coordinates of the at least one moving object at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion track of the at least one moving object according to the three-dimensional position coordinates of the at least one moving object at different moments.
Optionally, the processor, when implementing to acquire an intersection position of the motion trajectory of the at least one moving object and the flight trajectory of the unmanned aerial vehicle, is configured to implement:
acquiring a first description equation of the flight track of the unmanned aerial vehicle;
acquiring a second description equation of the motion trail of the at least one moving object;
solving an equation set formed by the first description equation and a second description equation of the motion trail of at least one moving object to obtain a solution result;
and determining the intersection position of the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle according to the solving result.
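Purely as an illustration of the equation-set step above, the sketch below assumes that both the flight trajectory of the unmanned aerial vehicle and the motion trajectory of the moving object can be described by linear constant-velocity parametric equations p(t) = p0 + v·t; solving the resulting system in the least-squares sense gives the time and position at which the two trajectories intersect. The linear-trajectory assumption and the function names are hypothetical.

```python
import numpy as np

def trajectory_intersection(p_uav, v_uav, p_obj, v_obj):
    """Solve p_uav + v_uav * t = p_obj + v_obj * t for t (least squares)
    and return the intersection time and position, or None if the
    crossing would lie in the past. Linear trajectories are assumed."""
    p_uav, v_uav = np.asarray(p_uav, float), np.asarray(v_uav, float)
    p_obj, v_obj = np.asarray(p_obj, float), np.asarray(v_obj, float)

    a = (v_uav - v_obj).reshape(3, 1)  # coefficient of the unknown t
    b = (p_obj - p_uav).reshape(3)     # right-hand side of the equation set
    t, _, _, _ = np.linalg.lstsq(a, b, rcond=None)
    t = float(t[0])

    if t < 0:
        return None  # the trajectories only crossed in the past
    return t, p_uav + v_uav * t  # intersection time and position
```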
Optionally, the processor is configured to, when adjusting the flight trajectory of the drone according to at least one of the intersection positions, implement:
determining orientation information of the at least one moving object relative to the drone;
determining target position coordinates of the unmanned aerial vehicle at least one of the intersection positions according to the orientation information;
and adjusting the flight track of the unmanned aerial vehicle according to at least one target position coordinate.
Optionally, the processor is further configured to implement the following steps:
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, acquiring a motion track of the at least one moving object and a flight track of the unmanned aerial vehicle;
determining the collision moment of the at least one moving object and the unmanned aerial vehicle according to the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle;
and adjusting the flight track of the unmanned aerial vehicle according to the collision moment of the at least one moving object and the unmanned aerial vehicle.
Optionally, the processor is configured to, when adjusting a flight trajectory of the unmanned aerial vehicle according to a collision time of the at least one moving object with the unmanned aerial vehicle, implement:
determining a target collision moment according to the collision moment of the at least one moving object and the unmanned aerial vehicle and the current system moment;
and adjusting the flight track of the unmanned aerial vehicle according to the position coordinate of the unmanned aerial vehicle at the target collision moment.
Optionally, before determining whether there is at least one moving object in the environment where the unmanned aerial vehicle is located according to the first environment image and the second environment image acquired at different times, the processor is further configured to:
acquiring the flight height of the unmanned aerial vehicle;
when the flying height of the unmanned aerial vehicle is smaller than or equal to a preset height, determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments.
Optionally, the vision sensor includes a binocular vision device, the first environment image is an environment image collected by a first vision device of the binocular vision device, and the second environment image is an environment image collected by a second vision device of the binocular vision device.
It should be noted that, as will be clearly understood by those skilled in the art, for convenience and brevity of description, the specific working process of the unmanned aerial vehicle control device described above may refer to the corresponding process in the foregoing unmanned aerial vehicle control method embodiment, and details are not described herein again.
Referring to fig. 7, fig. 7 is a block diagram schematically illustrating a structure of a motion information determining apparatus according to an embodiment of the present application. As shown in fig. 7, the motion information determination apparatus 400 is applied to a drone, the drone includes a vision sensor, the motion information determination apparatus 400 includes a processor 401 and a memory 402, the processor 401 and the memory 402 are connected by a bus 403, and the bus 403 is, for example, an I2C (Inter-integrated Circuit) bus.
Specifically, the Processor 401 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 402 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash disk, or a removable hard disk.
Wherein the processor 401 is configured to run a computer program stored in the memory 402, and when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to a first environment image and a second environment image which are acquired at different moments;
determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
Optionally, the processor is configured to determine target three-dimensional position coordinates of the plurality of objects at different times according to the first environment image and the second environment image acquired at different times, and is configured to:
determining feature point matching pairs of a plurality of spatial points on the plurality of objects at different moments respectively from a first environment image and a second environment image acquired at different moments;
determining depth information of the plurality of objects at different moments according to feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments.
Optionally, the processor is configured to determine depth information of the plurality of objects at different times according to feature point matching pairs of the plurality of spatial points on the plurality of objects at different times, and is configured to:
determining feature point matching pairs of central space points of the plurality of objects at different moments in a fitting manner according to the feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments.
Optionally, the processor is configured to, when determining the motion information of the multiple objects according to the target three-dimensional position coordinates of the multiple objects at different time instants, implement:
determining the speed of the plurality of objects between every two adjacent moments according to the target three-dimensional position coordinates of the plurality of objects at different moments;
determining states of the plurality of objects according to the speeds of the plurality of objects between every two adjacent time instants, wherein the states comprise a motion state and a static state.
Optionally, the processor is further configured to, after determining the states of the plurality of objects according to the speeds of the plurality of objects between each two adjacent time instants, further:
acquiring three-dimensional position coordinates of the object in the motion state at different moments;
and determining the motion trail of the object in the motion state according to the three-dimensional position coordinates of the object in the motion state at different moments.
It should be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the motion information determining apparatus described above may refer to the corresponding process in the foregoing embodiment of the motion information determining method, and is not described herein again.
Please refer to fig. 8, fig. 8 is a schematic block diagram of a structure of an unmanned aerial vehicle according to an embodiment of the present application.
As shown in fig. 8, the drone 500 includes a vision sensor 501, a processor 502, and a memory 503, and the vision sensor 501, the processor 502, and the memory 503 are connected by a bus 504, such as an I2C (Inter-integrated Circuit) bus 504.
Specifically, the vision sensor 501 may be a binocular vision device or another type of vision device, and the mounting position and quantity of the vision sensors 501 on the drone 500 may be set according to actual conditions, which is not specifically limited in this application. For example, the drone 500 includes one vision sensor 501, and the vision sensor 501 is mounted in a front area of the drone for sensing objects in front of the drone 500. For another example, the drone 500 includes two vision sensors 501, the two vision sensors 501 being mounted to a front area and a rear area of the drone 500, respectively, for sensing objects in front of and behind the drone 500. For another example, the drone 500 includes four vision sensors 501, and the four vision sensors 501 are respectively installed at a front area, a rear area, a left area, and a right area of the drone 500, for sensing objects in front of, behind, to the left of, and to the right of the drone 500. For another example, the drone 500 includes six vision sensors 501, and the six vision sensors 501 are respectively installed at a front area, a rear area, a left area, a right area, an upper area, and a lower area of the drone 500, for sensing objects in front of, behind, to the left of, to the right of, above, and below the drone 500.
The drone 500 may have one or more propulsion units to allow the drone 500 to fly in the air. The one or more propulsion units may enable the drone 500 to move with one or more, two or more, three or more, four or more, five or more, or six or more degrees of freedom. In some cases, the drone 500 may rotate about one, two, three, or more axes of rotation. The axes of rotation may be perpendicular to each other. The axes of rotation may remain perpendicular to each other throughout the flight of the drone 500. The axes of rotation may include a pitch axis, a roll axis, and/or a yaw axis. The drone 500 may be movable in one or more dimensions. For example, the drone 500 can move upward due to the lift generated by one or more rotors. In some cases, the drone 500 may be movable along a Z-axis (which may be upward relative to the orientation of the drone 500), an X-axis, and/or a Y-axis (which may be lateral). The drone 500 may move along one, two, or three axes that are perpendicular to each other.
The drone 500 may be a rotorcraft. In some cases, the drone 500 may be a multi-rotor aircraft that includes multiple rotors. The plurality of rotors may rotate to generate lift for the drone 500. The rotors may be propulsion units that allow the drone 500 to move freely in the air. The rotors may rotate at the same rate and/or may produce the same amount of lift or thrust. The rotors may optionally rotate at different rates, generating different amounts of lift or thrust and/or allowing the drone 500 to rotate. In some cases, one, two, three, four, five, six, seven, eight, nine, ten, or more rotors may be provided on the drone 500. The rotors may be arranged with their axes of rotation parallel to each other. In some cases, the axes of rotation of the rotors may be at any angle relative to each other, which may affect the motion of the drone 500.
The drone 500 may have multiple rotors. The rotor may be connected to the body of the drone 500, which may include a control unit, an Inertial Measurement Unit (IMU), a processor, a battery, a power source, and/or other sensors. The rotor may be connected to the body by one or more arms or extensions that branch off from a central portion of the body. For example, one or more arms may extend radially from the central body of the drone 500 and may have rotors at or near the ends of the arms.
Specifically, the Processor 502 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 503 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash disk, or a removable hard disk.
The processor 502 is configured to run a computer program stored in the memory 503, and when executing the computer program, implement the unmanned aerial vehicle control method or the motion information determination method according to any one of the embodiments provided in this specification.
It should be noted that, as will be clearly understood by those skilled in the art, for convenience and brevity of description, the specific working process of the above-described unmanned aerial vehicle may refer to the corresponding process in the foregoing embodiment of the unmanned aerial vehicle control method or the motion information determination method, and details are not described herein again.
In an embodiment of the present application, a computer-readable storage medium is further provided, where the computer-readable storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to implement the unmanned aerial vehicle control method or the motion information determination method provided in the foregoing embodiments.
The computer-readable storage medium may be an internal storage unit of the drone according to any of the foregoing embodiments, such as a hard disk or a memory of the drone. The computer-readable storage medium may also be an external storage device of the drone, such as a plug-in hard disk equipped on the drone, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), and the like.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (39)

1. A drone control method, characterized in that it is applied to a drone comprising a vision sensor, said method comprising:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments;
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight track of the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
2. The drone controlling method according to claim 1, wherein the determining whether at least one moving object exists in the environment where the drone is located according to the first environment image and the second environment image acquired at different times includes:
determining target three-dimensional position coordinates of a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to the first environment image and the second environment image which are acquired at different moments;
determining states of the plurality of objects according to target three-dimensional position coordinates of the plurality of objects at different moments, wherein the states comprise a motion state and a static state;
and when at least one object in the plurality of objects in the motion state exists, determining that at least one moving object exists in the environment where the unmanned aerial vehicle is located.
3. The drone controlling method according to claim 2, wherein determining target three-dimensional position coordinates of a plurality of objects in an environment where the drone is located at different times according to the first environment image and the second environment image acquired at different times includes:
determining feature point matching pairs of a plurality of spatial points on the plurality of objects at different moments respectively from a first environment image and a second environment image acquired at different moments;
determining depth information of the plurality of objects at different moments according to feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments.
4. The unmanned aerial vehicle control method of claim 3, wherein the determining depth information of the plurality of objects at different times according to the matched pairs of feature points of the plurality of spatial points on the plurality of objects at different times respectively comprises:
determining feature point matching pairs of central space points of the plurality of objects at different moments in a fitting manner according to the feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments.
5. The drone controlling method of claim 2, wherein the determining the state of the plurality of objects from the target three-dimensional position coordinates of the plurality of objects at different times includes:
determining the speed of the plurality of objects between every two adjacent moments according to the target three-dimensional position coordinates of the plurality of objects at different moments;
determining the states of the plurality of objects according to the speeds of the plurality of objects between every two adjacent moments.
6. The drone controlling method according to any one of claims 1 to 5, wherein the adjusting the flight trajectory of the drone includes:
acquiring a motion track of the at least one moving object and a flight track of the unmanned aerial vehicle;
acquiring the intersection position of the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle;
adjusting the flight trajectory of the unmanned aerial vehicle according to at least one of the intersection positions.
7. The drone controlling method of claim 6, wherein the obtaining of the motion trajectory of the at least one moving object comprises:
determining three-dimensional position coordinates of the at least one moving object at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion track of the at least one moving object according to the three-dimensional position coordinates of the at least one moving object at different moments.
8. The drone controlling method according to claim 6, wherein the obtaining of the intersection position of the motion trajectory of the at least one moving object and the flight trajectory of the drone includes:
acquiring a first description equation of the flight track of the unmanned aerial vehicle;
acquiring a second description equation of the motion trail of the at least one moving object;
solving an equation set formed by the first description equation and a second description equation of the motion trail of at least one moving object to obtain a solution result;
and determining the intersection position of the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle according to the solving result.
9. The drone controlling method of claim 6, wherein the adjusting the flight trajectory of the drone according to at least one of the intersection locations comprises:
determining orientation information of the at least one moving object relative to the drone;
determining target position coordinates of the unmanned aerial vehicle at least one of the intersection positions according to the orientation information;
and adjusting the flight track of the unmanned aerial vehicle according to at least one target position coordinate.
10. The drone controlling method according to any one of claims 1 to 5, characterized in that the method further comprises:
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, acquiring a motion track of the at least one moving object and a flight track of the unmanned aerial vehicle;
determining the collision moment of the at least one moving object and the unmanned aerial vehicle according to the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle;
and adjusting the flight track of the unmanned aerial vehicle according to the collision moment of the at least one moving object and the unmanned aerial vehicle.
11. The drone controlling method according to claim 10, wherein the adjusting the flight trajectory of the drone according to the collision time of the at least one moving object with the drone includes:
determining a target collision moment according to the collision moment of the at least one moving object and the unmanned aerial vehicle and the current system moment;
and adjusting the flight track of the unmanned aerial vehicle according to the position coordinate of the unmanned aerial vehicle at the target collision moment.
12. The drone controlling method according to any one of claims 1 to 5, wherein before determining whether at least one moving object exists in the environment where the drone is located according to the first environment image and the second environment image acquired at different times, the method further comprises:
acquiring the flight height of the unmanned aerial vehicle;
when the flying height of the unmanned aerial vehicle is smaller than or equal to a preset height, determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments.
13. The drone controlling method of any one of claims 1 to 5, wherein the vision sensor includes a binocular vision device, the first environmental image is an environmental image captured by a first one of the binocular vision devices, and the second environmental image is an environmental image captured by a second one of the binocular vision devices.
14. A motion information determination method, applied to a drone comprising a visual sensor, the method comprising:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to a first environment image and a second environment image which are acquired at different moments;
determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
15. The method for determining motion information according to claim 14, wherein determining the target three-dimensional position coordinates of the plurality of objects at different time points based on the first environment image and the second environment image acquired at different time points comprises:
determining feature point matching pairs of a plurality of spatial points on the plurality of objects at different moments respectively from a first environment image and a second environment image acquired at different moments;
determining depth information of the plurality of objects at different moments according to feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments.
16. The method according to claim 15, wherein the determining depth information of the plurality of objects at different time points according to the matched pairs of feature points of the plurality of spatial points on the plurality of objects at different time points comprises:
determining feature point matching pairs of central space points of the plurality of objects at different moments in a fitting manner according to the feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments.
17. The method according to claim 14, wherein the determining motion information of the plurality of objects based on the target three-dimensional position coordinates of the plurality of objects at different time instants comprises:
determining the speed of the plurality of objects between every two adjacent moments according to the target three-dimensional position coordinates of the plurality of objects at different moments;
determining states of the plurality of objects according to the speeds of the plurality of objects between every two adjacent time instants, wherein the states comprise a motion state and a static state.
18. The motion information determining method according to claim 17, wherein after determining the states of the plurality of objects based on the velocities of the plurality of objects between each two adjacent time instants, further comprising:
acquiring three-dimensional position coordinates of the object in the motion state at different moments;
and determining the motion trail of the object in the motion state according to the three-dimensional position coordinates of the object in the motion state at different moments.
19. An unmanned aerial vehicle control apparatus, wherein the unmanned aerial vehicle control apparatus is applied to an unmanned aerial vehicle, the unmanned aerial vehicle comprises a vision sensor, the unmanned aerial vehicle control apparatus comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments;
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight track of the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
20. The drone controlling device of claim 19, wherein the processor, when determining whether at least one moving object exists in the environment where the drone is located according to the first environment image and the second environment image acquired at different times, is configured to:
determining target three-dimensional position coordinates of a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to the first environment image and the second environment image which are acquired at different moments;
determining states of the plurality of objects according to target three-dimensional position coordinates of the plurality of objects at different moments, wherein the states comprise a motion state and a static state;
and when at least one object in the plurality of objects in the motion state exists, determining that at least one moving object exists in the environment where the unmanned aerial vehicle is located.
21. The drone controlling device according to claim 20, wherein the processor is configured to determine target three-dimensional position coordinates of a plurality of objects in the environment where the drone is located at different times according to the first and second environment images acquired at different times, and is configured to:
determining feature point matching pairs of a plurality of spatial points on the plurality of objects at different moments respectively from a first environment image and a second environment image acquired at different moments;
determining depth information of the plurality of objects at different moments according to feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments.
22. The drone controlling device of claim 21, wherein the processor, when determining the depth information of the plurality of objects at different times according to the matched pairs of feature points of the plurality of spatial points on the plurality of objects at different times, is configured to:
determining feature point matching pairs of central space points of the plurality of objects at different moments in a fitting manner according to the feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments.
23. The drone controlling device of claim 20, wherein the processor, when enabled to determine the state of the plurality of objects based on the target three-dimensional position coordinates of the plurality of objects at different times, is configured to enable:
determining the speed of the plurality of objects between every two adjacent moments according to the target three-dimensional position coordinates of the plurality of objects at different moments;
determining the states of the plurality of objects according to the speeds of the plurality of objects between every two adjacent moments.
24. A drone controlling device according to any one of claims 19 to 23, wherein the processor, when effecting adjustment of the flight trajectory of the drone, is configured to effect:
acquiring a motion track of the at least one moving object and a flight track of the unmanned aerial vehicle;
acquiring the intersection position of the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle;
adjusting the flight trajectory of the unmanned aerial vehicle according to at least one of the intersection positions.
25. The drone controlling device of claim 24, wherein the processor, when enabled to obtain the motion trajectory of the at least one moving object, is configured to enable:
determining three-dimensional position coordinates of the at least one moving object at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion track of the at least one moving object according to the three-dimensional position coordinates of the at least one moving object at different moments.
26. The drone control device of claim 24, wherein the processor, when enabled to obtain the intersection location of the motion trajectory of the at least one moving object and the flight trajectory of the drone, is configured to enable:
acquiring a first description equation of the flight track of the unmanned aerial vehicle;
acquiring a second description equation of the motion trail of the at least one moving object;
solving an equation set formed by the first description equation and a second description equation of the motion trail of at least one moving object to obtain a solution result;
and determining the intersection position of the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle according to the solving result.
27. The drone controlling device of claim 24, wherein the processor, when effecting adjustment of the flight trajectory of the drone according to at least one of the intersection positions, is adapted to effect:
determining orientation information of the at least one moving object relative to the drone;
determining target position coordinates of the unmanned aerial vehicle at least one of the intersection positions according to the orientation information;
and adjusting the flight track of the unmanned aerial vehicle according to at least one target position coordinate.
28. A drone control device according to any one of claims 19 to 23, wherein the processor is further configured to implement the steps of:
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, acquiring a motion track of the at least one moving object and a flight track of the unmanned aerial vehicle;
determining the collision moment of the at least one moving object and the unmanned aerial vehicle according to the motion track of the at least one moving object and the flight track of the unmanned aerial vehicle;
and adjusting the flight track of the unmanned aerial vehicle according to the collision moment of the at least one moving object and the unmanned aerial vehicle.
29. The drone controlling device of claim 28, wherein the processor, when enabling adjusting the flight trajectory of the drone according to the time of collision of the at least one moving object with the drone, is configured to enable:
determining a target collision moment according to the collision moment of the at least one moving object and the unmanned aerial vehicle and the current system moment;
and adjusting the flight track of the unmanned aerial vehicle according to the position coordinate of the unmanned aerial vehicle at the target collision moment.
30. A drone controlling device according to any one of claims 19 to 23, wherein the processor is further configured to, before determining whether there is at least one moving object in the environment in which the drone is located, from the first and second environment images acquired at different times:
acquiring the flight height of the unmanned aerial vehicle;
when the flying height of the unmanned aerial vehicle is smaller than or equal to a preset height, determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments.
31. The drone control device of any one of claims 19 to 23, wherein the vision sensor comprises a binocular vision device, the first environmental image is an environmental image captured by a first one of the binocular vision devices, and the second environmental image is an environmental image captured by a second one of the binocular vision devices.
32. A motion information determination apparatus, applied to a drone including a visual sensor, the motion information determination apparatus including a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to a first environment image and a second environment image which are acquired at different moments;
determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
33. The motion information determining apparatus of claim 32, wherein the processor is configured to determine the target three-dimensional position coordinates of the plurality of objects at different times based on the first environmental image and the second environmental image acquired at different times, and is configured to:
determining feature point matching pairs of a plurality of spatial points on the plurality of objects at different moments respectively from a first environment image and a second environment image acquired at different moments;
determining depth information of the plurality of objects at different moments according to feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining target three-dimensional position coordinates of the plurality of objects at different moments according to the depth information of the plurality of objects at different moments.
34. The motion information determining apparatus according to claim 33, wherein the processor is configured to determine the depth information of the plurality of objects at different time instants according to the matched pairs of feature points at different time instants of the plurality of spatial points on the plurality of objects, and is configured to:
determining feature point matching pairs of central space points of the plurality of objects at different moments in a fitting manner according to the feature point matching pairs of the plurality of space points on the plurality of objects at different moments respectively;
and determining the depth information of the plurality of objects at different moments according to the feature point matching pairs of the central space points of the plurality of objects at different moments.
35. The motion information determining apparatus of claim 32, wherein the processor is configured to, when determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different time instances, perform:
determining the speed of the plurality of objects between every two adjacent moments according to the target three-dimensional position coordinates of the plurality of objects at different moments;
determining states of the plurality of objects according to the speeds of the plurality of objects between every two adjacent time instants, wherein the states comprise a motion state and a static state.
36. The motion information determining apparatus of claim 35, wherein the processor, after determining the states of the plurality of objects based on the velocities of the plurality of objects between each two adjacent time instances, is further configured to:
acquiring three-dimensional position coordinates of the object in the motion state at different moments;
and determining the motion trail of the object in the motion state according to the three-dimensional position coordinates of the object in the motion state at different moments.
37. A drone, the drone comprising a vision sensor, a memory, and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining whether at least one moving object exists in the environment where the unmanned aerial vehicle is located according to a first environment image and a second environment image which are acquired at different moments;
when it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and the unmanned aerial vehicle and the at least one moving object have collision risks, adjusting the flight track of the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to fly according to the adjusted flight track so that the unmanned aerial vehicle avoids the at least one moving object.
38. A drone, the drone comprising a vision sensor, a memory, and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a first environment image and a second environment image which are acquired by the vision sensor at different moments;
determining a plurality of objects in the environment where the unmanned aerial vehicle is located at different moments according to a first environment image and a second environment image which are acquired at different moments;
determining target three-dimensional position coordinates of the plurality of objects at different moments according to the first environment image and the second environment image which are acquired at different moments;
and determining the motion information of the plurality of objects according to the target three-dimensional position coordinates of the plurality of objects at different moments.
39. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, causes the processor to implement the drone controlling method of any one of claims 1 to 13 or the movement information determining method of any one of claims 14 to 18.
CN202080006476.XA 2020-04-28 2020-04-28 Unmanned aerial vehicle control method, motion information determination method and device and unmanned aerial vehicle Pending CN113168188A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087592 WO2021217451A1 (en) 2020-04-28 2020-04-28 Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN113168188A true CN113168188A (en) 2021-07-23

Family

ID=76879283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080006476.XA Pending CN113168188A (en) 2020-04-28 2020-04-28 Unmanned aerial vehicle control method, motion information determination method and device and unmanned aerial vehicle

Country Status (2)

Country Link
CN (1) CN113168188A (en)
WO (1) WO2021217451A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194339A (en) * 2017-05-15 2017-09-22 武汉星巡智能科技有限公司 Obstacle recognition method, equipment and unmanned vehicle
CN107980138A (en) * 2016-12-28 2018-05-01 深圳前海达闼云端智能科技有限公司 A kind of false-alarm obstacle detection method and device
CN109634304A (en) * 2018-12-13 2019-04-16 中国科学院自动化研究所南京人工智能芯片创新研究院 Unmanned plane during flying paths planning method, device and storage medium
CN110362098A (en) * 2018-03-26 2019-10-22 北京京东尚科信息技术有限公司 Unmanned plane vision method of servo-controlling, device and unmanned plane
CN110799921A (en) * 2018-07-18 2020-02-14 深圳市大疆创新科技有限公司 Shooting method and device and unmanned aerial vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106767682A (en) * 2016-12-01 2017-05-31 腾讯科技(深圳)有限公司 A kind of method and aircraft for obtaining flying height information
US10459445B2 (en) * 2017-09-28 2019-10-29 Intel IP Corporation Unmanned aerial vehicle and method for operating an unmanned aerial vehicle
CN110689578A (en) * 2019-10-11 2020-01-14 南京邮电大学 Unmanned aerial vehicle obstacle identification method based on monocular vision

Also Published As

Publication number Publication date
WO2021217451A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
CN109144097B (en) Obstacle or ground recognition and flight control method, device, equipment and medium
US11237572B2 (en) Collision avoidance system, depth imaging system, vehicle, map generator and methods thereof
Barry et al. High‐speed autonomous obstacle avoidance with pushbroom stereo
CN108419446B (en) System and method for laser depth map sampling
EP2615580B1 (en) Automatic scene calibration
Sa et al. Outdoor flight testing of a pole inspection UAV incorporating high-speed vision
WO2020113423A1 (en) Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
EP3837492A1 (en) Distance measuring method and device
CN110874100A (en) System and method for autonomous navigation using visual sparse maps
CN111326023A (en) Unmanned aerial vehicle route early warning method, device, equipment and storage medium
WO2015116993A1 (en) Augmented three dimensional point collection of vertical structures
CN107665508B (en) Method and system for realizing augmented reality
CN110570463B (en) Target state estimation method and device and unmanned aerial vehicle
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
CN112561941A (en) Cliff detection method and device and robot
Williams et al. Feature and pose constrained visual aided inertial navigation for computationally constrained aerial vehicles
Shuai et al. Power lines extraction and distance measurement from binocular aerial images for power lines inspection using UAV
CN110382358A Gimbal attitude correction method, gimbal attitude correction device, gimbal, gimbal system and unmanned aerial vehicle
CN115686073B (en) Unmanned aerial vehicle-based transmission line inspection control method and system
CN113168188A (en) Unmanned aerial vehicle control method, motion information determination method and device and unmanned aerial vehicle
WO2020079309A1 (en) Obstacle detection
CN111433819A (en) Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle
CN113168532A (en) Target detection method and device, unmanned aerial vehicle and computer readable storage medium
CN111780744A (en) Mobile robot hybrid navigation method, equipment and storage device
Bauer et al. Real flight application of a monocular image-based aircraft collision decision method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination