WO2021217451A1 - Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle - Google Patents

Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle Download PDF

Info

Publication number
WO2021217451A1
WO2021217451A1 · PCT/CN2020/087592 · CN2020087592W
Authority
WO
WIPO (PCT)
Prior art keywords
drone
different moments
multiple objects
environment image
moving object
Prior art date
Application number
PCT/CN2020/087592
Other languages
English (en)
Chinese (zh)
Inventor
周游
薛连杰
叶长春
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/087592 priority Critical patent/WO2021217451A1/fr
Priority to CN202080006476.XA priority patent/CN113168188A/zh
Publication of WO2021217451A1 publication Critical patent/WO2021217451A1/fr

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106: Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones

Definitions

  • This application relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle control method, a motion information determination method and device, and an unmanned aerial vehicle.
  • UAVs are equipped with obstacle avoidance systems.
  • When UAVs are flying, they can avoid obstacles around the UAV through the obstacle avoidance system, which ensures the flight safety of the UAV.
  • However, current obstacle avoidance systems can only sense static objects, such as mountains, trees, and buildings, so that UAVs can avoid static objects on their flight routes; the UAV's obstacle avoidance system cannot accurately sense moving objects, so the UAV cannot avoid moving objects, and the flight safety of the UAV cannot be guaranteed. Therefore, how to improve the flight safety of UAVs is a problem that needs to be solved urgently.
  • this application provides an unmanned aerial vehicle control method, a motion information determination method, device, and an unmanned aerial vehicle, aiming to improve the flight safety of an unmanned aerial vehicle.
  • this application provides a drone control method, the method is applied to the drone, the drone includes a vision sensor, and the method includes:
  • this application also provides a method for determining motion information, the method is applied to an unmanned aerial vehicle, the unmanned aerial vehicle includes a visual sensor, and the method includes:
  • the motion information of the multiple objects is determined according to the target three-dimensional position coordinates of the multiple objects at different moments.
  • the present application also provides a drone control device; the drone control device is applied to the drone, the drone includes a vision sensor, and the drone control device includes a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • the present application also provides a motion information determining device, the motion information determining device is applied to a drone, the drone includes a vision sensor, and the motion information determining device includes a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • the motion information of the multiple objects is determined according to the target three-dimensional position coordinates of the multiple objects at different moments.
  • this application also provides an unmanned aerial vehicle, the unmanned aerial vehicle including a vision sensor, a memory, and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • the present application also provides an unmanned aerial vehicle, the unmanned aerial vehicle including a vision sensor, a memory, and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • the motion information of the multiple objects is determined according to the target three-dimensional position coordinates of the multiple objects at different moments.
  • this application also provides a computer-readable storage medium that stores a computer program that, when executed by a processor, causes the processor to implement any of the unmanned aerial vehicle control methods or motion information determination methods provided in the specification of this application.
  • the embodiments of the present application provide a drone control method, a motion information determination method, a device, and a drone.
  • the first environment image and the second environment image collected by the visual sensor at different times are acquired;
  • according to the first environment image and the second environment image collected at different times, it is determined whether there is at least one moving object in the environment where the drone is located;
  • when it is determined that there is at least one moving object in the environment where the drone is located and there is a risk of collision between the drone and a moving object, the flight trajectory of the drone is adjusted and the drone is controlled to fly according to the adjusted flight trajectory, so that the drone avoids the at least one moving object, which greatly improves the flight safety of the drone.
  • FIG. 1 is a schematic diagram of a scene of an unmanned aerial vehicle that implements the unmanned aerial vehicle control method or the motion information determination method provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of steps of a drone control method provided by an embodiment of the present application
  • Fig. 3 is a schematic flowchart of sub-steps of the drone control method in Fig. 2;
  • FIG. 4 is a schematic flowchart of sub-steps of the drone control method in FIG. 2;
  • FIG. 5 is a schematic flowchart of steps of a motion information determination method provided by an embodiment of the present application.
  • FIG. 6 is a schematic block diagram of the structure of an unmanned aerial vehicle control device provided by an embodiment of the present application.
  • FIG. 7 is a schematic block diagram of the structure of a motion information determining device provided by an embodiment of the present application.
  • FIG. 8 is a schematic block diagram of the structure of an unmanned aerial vehicle provided by an embodiment of the present application.
  • the specification of this application provides a drone control method, which is applied to the drone.
  • the drone 100 includes a drone body 110 and a vision sensor 120, and the vision sensor 120 is installed on the drone body 110.
  • the first environment image and the second environment image of the environment in which the drone 100 is located at different moments can be collected through the visual sensor 120, and from the first environment image and the second environment image collected at different moments it can be determined whether there are moving objects in the environment where the drone 100 is located.
  • when a moving object poses a collision risk, the drone 100 automatically adjusts its flight trajectory and flies according to the adjusted flight trajectory, so that the drone 100 avoids the moving objects that are at risk of collision.
  • an unmanned aerial vehicle 100 includes an unmanned aerial vehicle body 110 and a vision sensor 120.
  • the vision sensor 120 is installed on the unmanned aerial vehicle.
  • the first environment image and the second environment image of the environment where the drone 100 is located at different moments can be collected through the visual sensor 120; through the first environment image and the second environment image collected at different moments, the multiple objects in the environment where the drone 100 is located at different moments can be determined, and then the target three-dimensional position coordinates of the multiple objects at different moments can be determined through the first environment image and the second environment image collected at different moments; according to the target three-dimensional position coordinates of the multiple objects at different times, the motion information of the multiple objects is determined, so that the UAV 100 can sense stationary objects and moving objects, which is convenient for subsequent tracking of stationary objects and moving objects.
  • the vision sensor 120 may be a binocular vision device or other vision devices.
  • the installation position and number of the vision sensors 120 on the drone body 110 can be set according to the actual situation, and this application does not specifically limit this.
  • the drone 100 includes a vision sensor 120, and the vision sensor 120 is installed in the front area of the drone body 110 for sensing objects in front of the drone 100.
  • the drone 100 includes two vision sensors 120, which are respectively installed in the front area and the rear area of the drone body 110, and are used to sense objects in front of and behind the drone 100 .
  • the drone 100 includes four vision sensors 120, which are respectively installed in the front, rear, left, and right regions of the drone body 110 for sensing objects in front of, behind, to the left of, and to the right of the drone 100.
  • the drone 100 includes six vision sensors 120, which are respectively installed in the front area, the rear area, the left area, the right area, the upper area, and the lower area of the drone body 110, and are used to sense objects in front of, behind, to the left of, to the right of, above, and below the drone 100.
  • the drone 100 may have one or more propulsion units to allow the drone 100 to fly in the air.
  • the one or more propulsion units can enable the drone 100 to move with one or more, two or more, three or more, four or more, five or more, or six or more degrees of freedom.
  • the drone 100 can rotate around one, two, three, or more rotation axes.
  • the rotation axes may be perpendicular to each other.
  • the rotation axes can be maintained perpendicular to each other during the entire flight of the drone 100.
  • the rotation axis may include a pitch axis, a roll axis, and/or a yaw axis.
  • the drone 100 can move in one or more dimensions.
  • the drone 100 can move upward due to the lifting force generated by one or more rotors.
  • the drone 100 can move along the Z axis (which can be upward relative to the drone 100), the X axis, and/or the Y axis (which can be lateral).
  • the drone 100 can move along one, two, or three axes that are perpendicular to each other.
  • the drone 100 may be a rotary wing aircraft.
  • the drone 100 may be a multi-rotor aircraft that may include multiple rotors.
  • the multiple rotors can rotate to generate lifting force for the drone 100.
  • the rotor may be a propulsion unit, which allows the drone 100 to move freely in the air.
  • the rotor can rotate at the same rate and/or can generate the same amount of lift or thrust.
  • the rotors can rotate at different rates as desired, generate different amounts of lifting force or thrust, and/or allow the drone 100 to rotate.
  • one, two, three, four, five, six, seven, eight, nine, ten or more rotors may be provided on the drone 100.
  • These rotors can be arranged such that their rotation axes are parallel to each other.
  • the rotation axis of the rotors can be at any angle relative to each other, which can affect the movement of the drone 100.
  • the drone 100 may have multiple rotors.
  • the rotor may be connected to the main body of the drone 100, and the main body may include a control unit, an inertial measurement unit (IMU), a processor, a battery, a power supply, and/or other sensors.
  • the rotor may be connected to the body by one or more arms or extensions branching from the central part of the body.
  • one or more arms may extend radially from the central body of the drone 100, and may have a rotor at or near the end of the arm.
  • FIG. 2 is a schematic flowchart of steps of a drone control method provided by an embodiment of the present application. Specifically, as shown in FIG. 2, the drone control method includes steps S101 to S104.
  • S101 Acquire a first environment image and a second environment image collected by the vision sensor at different moments.
  • the vision sensor is controlled to collect the first environment image and the second environment image of the environment where the drone is located at preset time intervals, so as to obtain the first environment image and the second environment image collected at different times.
  • the vision sensor includes a binocular vision device
  • the first environment image is the environment image collected by the first vision device in the binocular vision device
  • the second environment image is the environment image collected by the second vision device in the binocular vision device
  • the preset time can be set based on actual conditions. This application specification does not specifically limit this.
  • for example, the preset time is 100 milliseconds: the first vision device collects the first environment image of the environment where the drone is located at 100 millisecond intervals, obtaining 10 first environment images within 1 second, and the second vision device collects the second environment image of the environment where the drone is located at 100 millisecond intervals, obtaining 10 second environment images within 1 second.
  • the first environment images and the second environment images collected at the same time correspond one-to-one.
  • the installation position and quantity of the vision sensor on the drone can be set according to the actual situation, which is not specifically limited in the specification of this application.
  • the UAV includes a vision sensor, and the vision sensor is installed in the front area of the UAV to collect the environment image in front of the UAV.
  • the drone includes two vision sensors, which are respectively installed in the front area and the rear area of the drone, and are used to collect environmental images in front of and behind the drone.
  • the drone includes four vision sensors, which are respectively installed in the front, rear, left, and right areas of the drone, and are used to collect environmental images in front of, behind, to the left of, and to the right of the drone.
  • the drone includes six vision sensors, which are respectively installed in the front area, rear area, left area, right area, upper area, and lower area of the drone, to collect environment images in front of, behind, to the left of, to the right of, above, and below the drone.
  • S102 Determine whether there is at least one moving object in the environment where the drone is located according to the first environment image and the second environment image collected at different moments.
  • according to the first environment image and the second environment image collected at different moments, the states of multiple objects in the environment where the drone is located can be determined, and from the states of the multiple objects it can be determined whether there is at least one moving object in the environment where the drone is located.
  • the flying height of the drone is acquired; when the flying height of the drone is less than or equal to the preset height, it is determined, according to the first environment image and the second environment image collected at different times, whether there is at least one moving object in the environment where the drone is located.
  • the preset height can be set according to the actual situation, and this application does not specifically limit it.
  • for example, the preset height is 6 meters; that is, when the drone is flying at low altitude, it determines, based on the first environment image and the second environment image collected at different times, whether there is at least one moving object in the environment where the drone is located, so that the drone can perceive moving objects in its environment when flying at low altitude.
  • step S102 includes sub-steps S1021 to S1023.
  • S1021 According to the first environment image and the second environment image collected at different moments, determine the target three-dimensional position coordinates of multiple objects in the environment where the drone is located at different moments.
  • the depth maps of the environment where the drone is located at different times can be generated; through these depth maps, the depth information of multiple objects at different times can be determined, and according to the depth information of the multiple objects at different times, the target three-dimensional position coordinates of the multiple objects at different times are determined (see the depth-map sketch below).
  • the target three-dimensional position coordinates are the three-dimensional position coordinates of the object in the world coordinate system.
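  • As an illustration only (not part of the patent disclosure), the following minimal Python/OpenCV sketch shows one way such a depth map could be computed from a rectified binocular image pair; the calibration values FOCAL_PX and BASELINE_M are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical calibration values; real values come from stereo calibration.
FOCAL_PX = 700.0     # preset focal length, in pixels
BASELINE_M = 0.12    # preset binocular distance (camera baseline), in metres

def depth_map_from_stereo(left_gray, right_gray):
    """Compute a per-pixel depth map (metres) from a rectified stereo pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,  # search range; must be divisible by 16
        blockSize=9,
    )
    # StereoSGBM returns disparities as 16.4 fixed-point values.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan        # mask invalid matches
    return FOCAL_PX * BASELINE_M / disparity  # Z = f * B / d
```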
  • the matching pairs of feature points of multiple spatial points on multiple objects at different moments are determined; according to the matching pairs of feature points of the multiple spatial points on the multiple objects at different times, the depth information of the multiple objects at different times is determined; the target three-dimensional position coordinates of the multiple objects at different times are determined according to the depth information of the multiple objects at different times.
  • the target three-dimensional position coordinates of multiple objects at different times can be determined, which is convenient for subsequent determination of the states of multiple objects.
  • the method for determining the feature point matching pairs corresponding to the multiple spatial points on the object from the first environment image and the second environment image is specifically: according to a preset feature point extraction algorithm, extract the first feature points corresponding to multiple spatial points on the object from the first environment image; determine the second feature points matching the first feature points from the corresponding second environment image based on a preset feature point tracking algorithm, and obtain the feature point matching pairs corresponding to the multiple spatial points on the object; alternatively, the first feature points corresponding to multiple spatial points on the object can be extracted from the second environment image based on the preset feature point extraction algorithm, the second feature points matching the first feature points are determined from the corresponding first environment image, and the feature point matching pairs corresponding to the multiple spatial points on the object are obtained (see the tracking sketch after the algorithm list below).
  • the preset feature point extraction algorithm includes at least one of the following: the corner detection algorithm (Harris Corner Detection), the scale-invariant feature transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, and the FAST (Features From Accelerated Segment Test) feature point detection algorithm; the preset feature point tracking algorithm includes but is not limited to the KLT (Kanade-Lucas-Tomasi feature tracker) algorithm.
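  • As a hedged illustration (the patent names the algorithms above but provides no code), the sketch below pairs Shi-Tomasi corner extraction with KLT optical-flow tracking, standing in for any of the listed extraction and tracking algorithms, to build feature point matching pairs between the first and second environment images.

```python
import cv2
import numpy as np

def match_feature_points(first_img, second_img, max_corners=200):
    """Extract feature points in the first environment image and track them
    into the second environment image, returning matched point pairs."""
    g1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    # Shi-Tomasi corners stand in for the preset feature point extraction step.
    p1 = cv2.goodFeaturesToTrack(g1, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    if p1 is None:  # no corners found
        return np.empty((0, 2)), np.empty((0, 2))
    # KLT optical flow finds the matching point in the second image.
    p2, status, _err = cv2.calcOpticalFlowPyrLK(g1, g2, p1, None)
    ok = status.ravel() == 1
    return p1[ok].reshape(-1, 2), p2[ok].reshape(-1, 2)
```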
  • the method for determining the feature point matching pairs of multiple spatial points on multiple objects at different moments from the first environment image and the second environment image collected at different times is specifically as follows: acquire the first target image areas of the multiple objects in the first environment images collected at different times, obtaining the first target image areas of the multiple objects at different times; acquire the second target image areas of the multiple objects from the second environment images collected at different times, obtaining the second target image areas of the multiple objects at different times; and extract, from the first target image areas and the second target image areas of the multiple objects at different times, the feature point matching pairs of the multiple spatial points at different moments.
  • the method of determining the depth information of multiple objects at different times according to the feature point matching pairs of multiple spatial points on multiple objects at different times is specifically: according to the matching pairs of feature points of the multiple spatial points on the multiple objects at different moments, determine by fitting the feature point matching pairs of the central space points of the multiple objects at different moments; according to the feature point matching pairs of the central space points of the multiple objects at different moments, determine the depth information of the multiple objects at different times.
  • the feature point matching pair corresponding to the central space point of the object is determined by fitting, which makes it possible to accurately determine the feature point matching pair corresponding to the central space point of the object, and then accurately determine the depth information of the object, facilitating the subsequent accurate determination of the three-dimensional position coordinates of the object.
  • in the following, a single object and a single moment are taken as an example to explain how the depth information of multiple objects at different moments is determined according to the feature point matching pairs of multiple spatial points on multiple objects at different moments. Specifically, according to the feature point matching pairs corresponding to the multiple spatial points on the object, the depth information of the multiple spatial points on the object (the distances between the spatial points and the drone) is determined; according to the depth information of the multiple spatial points on the object, the average depth information is determined, and the average depth information is used as the depth information of the object.
  • the depth information of multiple objects at different moments can be determined.
  • in this way, the accuracy of the depth information can be improved, which facilitates the subsequent accurate determination of the three-dimensional position coordinates of the object.
  • the method of determining the depth information of a spatial point from its feature point matching pair is specifically: determining the pixel difference of the feature point matching pair according to the pixels of the two feature points in the pair; obtaining the preset focal length and the preset binocular distance of the binocular vision device; and determining the depth information of the spatial point according to the preset focal length, the preset binocular distance, and the pixel difference corresponding to the feature point matching pair (see the sketch below).
  • the preset focal length is determined by calibrating the focal length of the binocular vision device, and the preset binocular distance is determined according to the installation positions of the first camera and the second camera in the binocular vision device.
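  • For illustration, a minimal sketch of the depth computation just described, assuming a rectified pair where the pixel difference is taken along the horizontal axis; focal_px and baseline_m correspond to the preset focal length and preset binocular distance.

```python
import math

def depth_from_match(x_first, x_second, focal_px, baseline_m):
    """Depth (metres) of one spatial point from its feature point matching pair.

    x_first / x_second: horizontal pixel coordinates of the matched feature
    in the first and second (rectified) environment images."""
    disparity = x_first - x_second  # pixel difference of the matching pair
    if disparity <= 0:
        return math.nan             # no valid match for this pair
    return focal_px * baseline_m / disparity
```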
  • the method of determining the target three-dimensional position coordinates of multiple objects at different moments is specifically as follows: according to the depth information of the multiple objects at different moments, determine the three-dimensional position coordinates of the multiple objects in the camera coordinate system at different times; obtain the position information and attitude information of the UAV at different times, and according to the position information and attitude information of the UAV at different times, convert the three-dimensional position coordinates of the multiple objects in the camera coordinate system at different times into three-dimensional position coordinates in the world coordinate system at different times, obtaining the target three-dimensional position coordinates of the multiple objects at different times (a coordinate-transform sketch follows below).
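  • The camera-to-world conversion could look like the following sketch, assuming the UAV's position and attitude at each moment are available as a rotation matrix R_wc and translation t_wc (hypothetical names); backproject recovers camera-frame coordinates from a pixel and its depth under a pinhole model.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with known depth -> 3-D point in the camera coordinate
    system (pinhole model with intrinsics fx, fy, cx, cy)."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def camera_to_world(p_cam, R_wc, t_wc):
    """Convert a 3-D point from the camera coordinate system to the world
    coordinate system using the UAV's attitude (R_wc) and position (t_wc)
    at the capture moment, e.g. from fused GPS/IMU output."""
    return R_wc @ np.asarray(p_cam) + np.asarray(t_wc)
```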
  • the state of each object is then determined: the speed of each object is compared with the preset speed; if the speed of the object is greater than or equal to the preset speed, the state of the object is determined to be a moving state, that is to say, the object is a moving object; if the speed of the object is less than the preset speed, it is determined that the state of the object is a stationary state, that is, the object is a stationary object.
  • the preset speed can be set according to actual conditions, which is not specifically limited in this application. For example, the preset speed is 0.5 meters per second.
  • the speed of the multiple objects between every two adjacent moments is determined; according to the speed of the multiple objects between every two adjacent moments, the state of the multiple objects is determined. That is, the speed of each object between every two adjacent moments is compared with the preset speed: if at least one of the object's speeds between two adjacent moments is greater than or equal to the preset speed, the state of the object is determined to be a moving state, that is, the object is a moving object; if every speed of the object between two adjacent moments is less than the preset speed, the state of the object is determined to be a stationary state, which means that the object is a stationary object (see the sketch below).
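  • A minimal sketch of this state decision, assuming each object's target three-dimensional position coordinates and timestamps are already available; PRESET_SPEED mirrors the 0.5 m/s example above.

```python
import numpy as np

PRESET_SPEED = 0.5  # m/s, the example threshold mentioned above

def classify_object(world_positions, timestamps):
    """Label an object 'moving' if its speed between any two adjacent
    moments reaches the preset speed, otherwise 'static'."""
    for (p0, p1), (t0, t1) in zip(
            zip(world_positions, world_positions[1:]),
            zip(timestamps, timestamps[1:])):
        speed = np.linalg.norm(np.asarray(p1) - np.asarray(p0)) / (t1 - t0)
        if speed >= PRESET_SPEED:
            return "moving"
    return "static"
```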
  • when it is determined that there is at least one moving object in the environment where the drone is located, determine whether there is a risk of collision between the drone and the at least one moving object; when it is determined that there is a risk of collision between the drone and the at least one moving object, adjust the flight trajectory of the drone so that the drone avoids the moving objects when flying according to the adjusted flight trajectory. When there are no moving objects in the environment where the drone is located, the drone is controlled to perform obstacle avoidance based on the obstacle information collected by its sensors, so that the drone avoids stationary objects.
  • the method of determining whether there is a risk of collision between the drone and the at least one moving object is specifically: obtaining a first description equation of the flight trajectory of the drone, and obtaining a second description equation of the motion trajectory of each moving object; solving the multiple equation systems formed by the first description equation and each second description equation to obtain the solution results of the multiple equation systems; when at least one equation system has a solution, it is determined that there is a risk of collision between the drone and at least one moving object; when none of the equation systems has a solution, it is determined that there is no risk of collision between the drone and the moving objects (a numerical sketch follows after the Bezier definitions below).
  • the first description equation is the expression of the Bezier curve between the position of the drone and the time
  • the second description equation is the expression of the Bezier curve between the position of the moving object and the time.
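  • The patent solves the equation system analytically; as a hedged stand-in, the sketch below checks collision risk numerically by sampling both third-order Bezier curves over a shared time window and testing whether the two positions ever come within a hypothetical safety radius.

```python
import numpy as np

def bezier3(ctrl, t):
    """Evaluate a third-order Bezier curve (four control points) at t in [0, 1]."""
    c = np.asarray(ctrl, dtype=float)
    u = 1.0 - t
    return u**3 * c[0] + 3*u**2*t * c[1] + 3*u*t**2 * c[2] + t**3 * c[3]

def collision_risk(uav_ctrl, obj_ctrl, safety_radius=3.0, samples=100):
    """Return (risk, t): risk is True if the UAV and the moving object come
    within safety_radius at the same curve parameter t; t maps back to the
    collision moment over the shared time window."""
    for t in np.linspace(0.0, 1.0, samples):
        if np.linalg.norm(bezier3(uav_ctrl, t) - bezier3(obj_ctrl, t)) < safety_radius:
            return True, t
    return False, None
```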
  • step S103 specifically includes sub-steps S1031 to S1033.
  • S1031 Acquire the motion trajectory of the at least one moving object. Specifically, the three-dimensional position coordinates of the at least one moving object at different times are determined according to the first environment image and the second environment image collected at different times; the motion trajectory of the at least one moving object is determined according to the three-dimensional position coordinates of the at least one moving object at different moments.
  • the current waypoint of the drone on the preset flight trajectory is acquired, and the end waypoint of the preset flight trajectory is acquired; according to the current waypoint on the preset flight trajectory and the end waypoint of the preset flight trajectory, the remaining flight trajectory of the drone is intercepted from the preset flight trajectory, and the intercepted remaining flight trajectory is used as the flight trajectory of the drone.
  • when the drone is controlled by the pilot to fly, multiple historical position coordinates of the drone are obtained, and the current flight speed and flight direction of the drone are obtained; the flight trajectory of the drone is determined according to the multiple historical position coordinates of the drone and the current flight speed and flight direction of the drone.
  • the preset flight trajectory can be determined according to the flight mission actually performed by the drone, which is not specifically limited in the specification of this application.
  • S1032 Obtain the intersection position of the motion trajectory of the at least one moving object and the flight trajectory of the drone.
  • the first description equation of the flight trajectory of the drone is acquired; the second description equation of the motion trajectory of the at least one moving object is acquired; the equation system formed by the first description equation and the second description equation of the motion trajectory of the at least one moving object is solved to obtain the solution result; according to the solution result, the intersection position of the motion trajectory of the at least one moving object and the flight trajectory of the UAV is determined.
  • the first description equation is the expression of the Bezier curve between the position of the drone and time, and the second description equation is the expression of the Bezier curve between the position of the moving object and time. It is understandable that the order of the Bezier curves can be set according to the actual situation.
  • for example, a third-order Bezier curve is used to express the relationship between the position of the drone and time, and a third-order Bezier curve represents the relationship between the position of the moving object and time (see the sketch below).
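  • Continuing the sketch above (and reusing its bezier3 helper), the spatial intersection of the two paths could be approximated numerically as the closest sample pair between the curves; tol is a hypothetical acceptance threshold, not a value from the patent.

```python
import numpy as np

def path_intersection(uav_ctrl, obj_ctrl, tol=1.0, samples=100):
    """Approximate where the UAV's flight trajectory and the moving object's
    trajectory cross in space: take the closest pair of curve samples and
    accept it as the intersection position if it is within tol metres."""
    ts = np.linspace(0.0, 1.0, samples)
    uav_pts = np.array([bezier3(uav_ctrl, t) for t in ts])  # bezier3 as above
    obj_pts = np.array([bezier3(obj_ctrl, t) for t in ts])
    # Pairwise distances between all samples of the two curves.
    d = np.linalg.norm(uav_pts[:, None, :] - obj_pts[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return uav_pts[i] if d[i, j] < tol else None
```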
  • S1033 Adjust the flight trajectory of the drone according to at least one of the intersection positions. Specifically, the position information of the at least one moving object relative to the drone is determined; the target position coordinates of the drone at the at least one intersection position are determined according to the position information; the flight trajectory of the drone is adjusted according to the at least one target position coordinate.
  • the position information of the moving object relative to the UAV can be determined from the UAV's perception sensors, and the position information includes at least one of: in front of, behind, to the left of, to the right of, above, and below the UAV.
  • the method of determining the target position coordinates of the drone at the intersection position according to the position information is specifically: acquiring the position coordinates of the intersection position on the flight trajectory of the drone; determining the target adjustment strategy of the position coordinates according to the position information; and adjusting the position coordinates according to the target adjustment strategy, using the adjusted position coordinates as the target position coordinates of the UAV at the intersection position, where there is a preset distance between the target position coordinates and the original position coordinates.
  • the target adjustment strategy of the position coordinates is to translate the position coordinates by the preset distance in at least one of the forward, backward, left, right, upward, and downward directions.
  • the preset distance can be set based on the actual situation. This application does not specifically limit this, for example, the preset distance is 3 meters.
  • the method of determining the target adjustment strategy of the position coordinate according to the position information is specifically: determining at least one translation direction of the position coordinate according to the position information, and obtaining the adjustment strategy corresponding to the at least one translation direction, And the adjustment strategy corresponding to at least one translation direction is used as the target adjustment strategy of the position coordinate.
  • the translation direction of the position coordinates is any one of up, down, forward, and backward, or a combination such as up and forward, or down and backward, or up and backward, or down and forward, or the like.
  • alternatively, the translation direction of the position coordinates is forward or backward, or right and forward, or right and backward, etc. (see the sketch below).
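  • A minimal sketch of the translation-based adjustment, assuming a world frame with x forward, y left, z up (a hypothetical convention) and the 3-metre preset distance from the example above.

```python
import numpy as np

# Unit offsets per translation direction (hypothetical world-frame convention).
DIRECTIONS = {
    "forward":  np.array([ 1.0,  0.0,  0.0]),
    "backward": np.array([-1.0,  0.0,  0.0]),
    "left":     np.array([ 0.0,  1.0,  0.0]),
    "right":    np.array([ 0.0, -1.0,  0.0]),
    "up":       np.array([ 0.0,  0.0,  1.0]),
    "down":     np.array([ 0.0,  0.0, -1.0]),
}
PRESET_DISTANCE = 3.0  # metres, the example value from the text

def target_position(intersection_xyz, translation_directions):
    """Shift the intersection position by the preset distance along the
    (non-empty) combination of directions chosen from the moving object's
    position relative to the UAV."""
    offset = sum(DIRECTIONS[d] for d in translation_directions)
    offset = offset / np.linalg.norm(offset)  # keep the preset distance exact
    return np.asarray(intersection_xyz) + PRESET_DISTANCE * offset
```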
  • the method of adjusting the flight trajectory of the drone according to the at least one target position coordinate is specifically: obtaining the current position coordinates of the drone and the position coordinates of the end waypoint of the drone on the flight trajectory; planning the first flight trajectory of the UAV according to the UAV's current position coordinates and the target position coordinates, and planning the second flight trajectory of the UAV according to the target position coordinates and the position coordinates of the end waypoint; and splicing the first flight trajectory with the second flight trajectory to obtain the adjusted flight trajectory.
  • the planning method of the first flight trajectory and the second flight trajectory can be set according to the actual situation, which is not specifically limited in this application.
  • the movement trajectory of the at least one moving object and the flight trajectory of the drone are acquired; according to the movement trajectory of the at least one moving object and the flight trajectory of the drone, the collision moment of the at least one moving object with the drone is determined; the flight trajectory of the drone is adjusted according to the collision moment of the at least one moving object with the drone.
  • the flight trajectory of the UAV can be accurately adjusted through the collision time between the moving object and the UAV, so that the UAV can avoid the moving object when the UAV follows the adjusted flight trajectory.
  • the method for determining the collision moment of the at least one moving object with the drone is specifically: obtain the first description equation of the flight trajectory of the drone; obtain the second description equation of the trajectory of the at least one moving object; solve the equation set formed by the first description equation and the second description equation of the trajectory of the at least one moving object to obtain the collision moment of the at least one moving object with the drone.
  • the first description equation is the expression of the Bezier curve between the position of the drone and the time
  • the second description equation is the expression of the Bezier curve between the position of the moving object and the time.
  • the method of adjusting the flight trajectory of the UAV according to the collision moment is specifically: determining the target collision moment according to the collision moment of the at least one moving object with the UAV and the current system time, that is, determining the time difference between the collision moment of the at least one moving object with the UAV and the current system time, and taking the collision moment whose absolute time difference is less than or equal to the preset difference as the target collision moment; then adjusting the flight trajectory of the UAV according to the position coordinates of the UAV at the target collision moment, that is, obtaining the position information of the moving object at risk of collision relative to the UAV, adjusting the position coordinates of the UAV at the target collision moment according to the position information to obtain the target position coordinates, and adjusting the flight trajectory of the UAV according to the target position coordinates.
  • S104 Control the drone to fly according to the adjusted flight trajectory, so that the drone avoids the at least one moving object.
  • the UAV control method provided in the specification of this application obtains the first environment image and the second environment image collected by the vision sensor at different times; determines, according to the first environment image and the second environment image collected at different times, whether there is at least one moving object in the environment where the drone is located; adjusts the flight trajectory of the drone when it is determined that there is at least one moving object in the environment where the drone is located and there is a risk of collision between the drone and the at least one moving object; and controls the drone to fly according to the adjusted flight trajectory, so that the drone avoids the at least one moving object, which greatly improves the flight safety of the drone.
  • FIG. 5 is a schematic flowchart of steps of a motion information determination method provided by an embodiment of the present application. Specifically, as shown in FIG. 5, the motion information determination method includes steps S201 to S204.
  • S201 Acquire a first environment image and a second environment image collected by the vision sensor at different moments.
  • the vision sensor is controlled to collect the first environment image and the second environment image of the environment where the drone is located at predetermined intervals, so as to obtain the first environment image and the second environment image collected at different times.
  • the vision sensor includes a binocular vision device
  • the first environment image is the environment image collected by the first vision device in the binocular vision device
  • the second environment image is the environment image collected by the second vision device in the binocular vision device
  • the preset time can be set based on the actual situation, which is not specifically limited in the specification of this application, for example, the preset time is 100 milliseconds.
  • S202 Determine multiple objects in the environment where the drone is located at different moments according to the first environment image and the second environment image collected at different moments.
  • the first environment images collected at different times are processed based on a preset edge detection algorithm to obtain the first edge images corresponding to the first environment images at different times; based on a preset connected domain detection algorithm, connected domain detection is performed on the first edge images at different moments to determine the connected domains in the first edge images at different times, and the determined connected domains in the first edge images at different times are used as multiple objects in the environment where the drone is located.
  • the preset edge detection algorithm includes any one of the Sobel operator algorithm, the Canny algorithm, and the Laplacian algorithm
  • the preset connected domain detection algorithm includes the Flood Fill algorithm.
  • similarly, the second environment images collected at different times are processed based on the preset edge detection algorithm to obtain the second edge images corresponding to the second environment images at different moments; based on the preset connected domain detection algorithm, connected domain detection is performed on the second edge images at different moments to determine the connected domains in the second edge images at different moments, and the determined connected domains in the second edge images at different moments are used as multiple objects in the environment where the drone is located (see the sketch below).
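  • For illustration only, the sketch below combines Canny edge detection with OpenCV's connected-component analysis (standing in for the Flood Fill algorithm named above) to turn an environment image into candidate object regions; min_area is a hypothetical filter.

```python
import cv2

def object_regions(env_img, min_area=200):
    """Propose object regions: edge detection, then connected domains;
    each sufficiently large connected domain is treated as one object."""
    gray = cv2.cvtColor(env_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Close small gaps so each object forms a single connected domain.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _centroids = cv2.connectedComponentsWithStats(edges)
    # stats rows: [x, y, width, height, area]; label 0 is the background.
    return [stats[i] for i in range(1, n)
            if stats[i][cv2.CC_STAT_AREA] >= min_area]
```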
  • the depth maps of the environment of the drone at different moments are generated; based on the preset connected domain detection algorithm, connected domain detection is performed on the depth maps of the environment where the drone is located at different moments to determine the connected domains in the depth maps of the environment where the drone is located at different moments, and the determined connected domains in the depth maps at different moments are used as multiple objects in the environment of the drone.
  • the depth maps of the environment of the drone at different moments are generated; based on the depth maps of the environment of the drone at different moments, the three-dimensional point cloud data of the environment where the drone is located at different moments is determined; planes are extracted from the three-dimensional point cloud data of the environment where the drone is located at different moments, and the extracted planes are combined with the preset connected domain detection algorithm to determine the multiple objects in the environment where the drone is located.
  • the planes can be extracted according to the RANSAC (Random Sample Consensus) algorithm, for example as in the sketch below.
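  • As a hedged sketch of the plane extraction step, a basic RANSAC plane fit over an (N, 3) point cloud might look as follows; the iteration count and inlier threshold are illustrative, not values from the patent.

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.05, seed=0):
    """Fit one dominant plane n.x + d = 0 to an (N, 3) point cloud with
    RANSAC; returns ((normal, d), inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```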
  • S203 Determine the target three-dimensional position coordinates of the multiple objects at different times according to the first environment image and the second environment image collected at different times.
  • the feature point matching pairs of multiple spatial points on the multiple objects at different moments are determined; according to the feature point matching pairs of the multiple spatial points on the multiple objects at different times, the depth information of the multiple objects at different times is determined; the target three-dimensional position coordinates of the multiple objects at different times are determined according to the depth information of the multiple objects at different times.
  • the feature point matching pairs of the central spatial points of the multiple objects at different moments are determined in a fitting manner; the depth information of the multiple objects at different times is determined according to the matching pairs of feature points of the central space points of the multiple objects at different times.
  • S204 Determine the motion information of the multiple objects according to the target three-dimensional position coordinates of the multiple objects at different moments.
  • the motion information includes the state, motion trajectory, and motion speed of the object
  • the state includes the motion state and the static state.
  • multiple objects can be tracked based on the motion information of the multiple objects in combination with the two-dimensional image information of the multiple objects, so as to track one main target among them and avoid the problems of losing the tracked target or tracking the wrong target, thereby improving the accuracy of tracking.
  • the speed of the multiple objects between every two adjacent moments is determined; according to the speed of the multiple objects between every two adjacent moments, the state of the multiple objects is determined.
  • the state of each object is determined according to the target three-dimensional position coordinates of multiple objects at different moments.
  • the three-dimensional position coordinates of the object in the moving state at different times are acquired; the three-dimensional position coordinates of the object in the moving state at different times are used to determine the movement trajectory of the object in the moving state.
  • the motion information determination method provided by the present application obtains the first environment image and the second environment image collected by the vision sensor at different moments; determines, according to the first environment image and the second environment image collected at different moments, the multiple objects in the environment where the drone is located at different moments; determines the target three-dimensional position coordinates of the multiple objects at different times according to the first environment image and the second environment image collected at different times; and determines the motion information of the multiple objects according to the target three-dimensional position coordinates of the multiple objects at different times, so that the UAV can perceive moving objects and stationary objects.
  • FIG. 6 is a schematic block diagram of the structure of an unmanned aerial vehicle control device provided by an embodiment of the present application.
  • the drone control device 300 is applied to a drone.
  • the drone includes a vision sensor.
  • the drone control device 300 includes a processor 301 and a memory 302.
  • the processor 301 and the memory 302 are connected by a bus 303.
  • the bus 303 is, for example, an I2C (Inter-integrated Circuit) bus.
  • the processor 301 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 302 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk.
  • the processor 301 is configured to run a computer program stored in the memory 302, and implement the following steps when the computer program is executed:
  • when the processor determines, according to the first environment image and the second environment image collected at different moments, whether there is at least one moving object in the environment where the drone is located, the processor is used to implement:
  • when the processor determines, according to the first environment image and the second environment image collected at different moments, the target three-dimensional position coordinates of multiple objects in the environment where the drone is located at different moments, it is used to implement:
  • the target three-dimensional position coordinates of the multiple objects at different moments are determined.
  • when the processor determines the depth information of the multiple objects at different moments according to the feature point matching pairs of the multiple spatial points on the multiple objects at different moments, it is used to implement:
  • when the processor determines the state of the multiple objects according to the target three-dimensional position coordinates of the multiple objects at different times, it is used to implement:
  • the state of the plurality of objects is determined according to the speed of the plurality of objects between every two adjacent moments.
  • when the processor implements adjusting the flight trajectory of the drone, it is used to implement:
  • the flight trajectory of the drone is adjusted according to at least one of the intersection positions.
  • when the processor implements the acquisition of the motion trajectory of the at least one moving object, it is used to implement:
  • the motion trajectory of the at least one moving object is determined according to the three-dimensional position coordinates of the at least one moving object at different moments.
  • when the processor implements the acquisition of the intersection position of the motion trajectory of the at least one moving object and the flight trajectory of the drone, it is used to implement:
  • the intersection position of the motion trajectory of the at least one moving object and the flight trajectory of the drone is determined.
  • when the processor implements adjusting the flight trajectory of the drone according to at least one of the intersection positions, it is used to implement:
  • the processor is further configured to implement the following steps:
  • the flight trajectory of the drone is adjusted according to the moment of collision between the at least one moving object and the drone.
  • when the processor adjusts the flight trajectory of the drone according to the moment of collision between the at least one moving object and the drone, it is used to implement:
  • before the processor determines, according to the first environment image and the second environment image collected at different moments, whether there is at least one moving object in the environment where the drone is located, it is further used to implement:
  • when the flying height of the drone is less than or equal to the preset height, determine, according to the first environment image and the second environment image collected at different times, whether there is at least one moving object in the environment where the drone is located.
  • the vision sensor includes a binocular vision device
  • the first environment image is an environment image collected by a first vision device in the binocular vision device
  • the second environment image is the environment image collected by the second vision device in the binocular vision device.
  • FIG. 7 is a schematic block diagram of a structure of an apparatus for determining motion information according to an embodiment of the present application.
  • the motion information determining device 400 is applied to a drone.
  • the drone includes a vision sensor.
  • the motion information determining device 400 includes a processor 401 and a memory 402.
  • the processor 401 and the memory 402 are connected by a bus 403.
  • the bus 403 is, for example, an I2C (Inter-integrated Circuit) bus.
  • the processor 401 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 402 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk.
  • the processor 401 is configured to run a computer program stored in the memory 402, and implement the following steps when the computer program is executed:
  • the motion information of the multiple objects is determined according to the target three-dimensional position coordinates of the multiple objects at different moments.
  • when the processor determines the target three-dimensional position coordinates of the multiple objects at different times according to the first environment image and the second environment image collected at different times, it is used to implement:
  • the target three-dimensional position coordinates of the multiple objects at different moments are determined.
  • when the processor determines the depth information of the multiple objects at different moments according to the feature point matching pairs of the multiple spatial points on the multiple objects at different moments, it is used to implement:
  • when the processor determines the motion information of the multiple objects according to the target three-dimensional position coordinates of the multiple objects at different times, it is used to implement:
  • the state of the plurality of objects is determined according to the speed of the plurality of objects between every two adjacent moments, where the state includes a moving state and a stationary state.
  • after the processor determines the state of the plurality of objects according to the speeds of the plurality of objects between every two adjacent moments, it is further used to implement:
  • the motion trajectory of the object in the motion state is determined.
  • FIG. 8 is a schematic block diagram of the structure of an unmanned aerial vehicle according to an embodiment of the present application.
  • the drone 500 includes a vision sensor 501, a processor 502, and a memory 503.
  • the vision sensor 501, the processor 502, and the memory 503 are connected by a bus 504, which is, for example, an I2C (Inter-integrated Circuit) bus.
  • the vision sensor 501 may be a binocular vision device, or other vision devices.
  • the installation position and quantity of the vision sensors 501 on the drone 500 can be set according to actual conditions, and this application does not specifically limit this.
  • the drone 500 includes a vision sensor 501, and the vision sensor 501 is installed in the front area of the drone to sense objects in front of the drone 500.
  • the drone 500 includes two vision sensors 501, which are respectively installed in the front area and the rear area of the drone 500, and are used to sense objects in front of and behind the drone 500.
  • the drone 500 includes four vision sensors 501, and the four vision sensors 501 are respectively installed in the front, rear, left, and right regions of the drone 500 to sense objects in front of, behind, to the left of, and to the right of the drone 500.
  • the drone 500 includes six vision sensors 501, and the six vision sensors 501 are respectively installed in the front, rear, left, right, upper, and lower areas of the drone 500, and are used to sense objects in front of, behind, to the left of, to the right of, above, and below the drone 500.
  • the drone 500 may have one or more propulsion units to allow the drone 500 to fly in the air.
  • the one or more propulsion units can enable the drone 500 to move with one or more, two or more, three or more, four or more, five or more, or six or more degrees of freedom.
  • the drone 500 can rotate around one, two, three, or more rotation axes.
  • the rotation axes may be perpendicular to each other.
  • the rotation axes can be maintained perpendicular to each other during the entire flight of the drone 500.
  • the rotation axis may include a pitch axis, a roll axis, and/or a yaw axis.
  • the drone 500 can move in one or more dimensions.
  • the drone 500 can move upward due to the lifting force generated by one or more rotors.
  • the drone 500 can move along the Z axis (which can be upward relative to the drone 500), the X axis, and/or the Y axis (which can be lateral).
  • the drone 500 can move along one, two, or three axes that are perpendicular to each other.
  • the drone 500 may be a rotary wing aircraft.
  • the drone 500 may be a multi-rotor aircraft that may include multiple rotors.
  • the multiple rotors can rotate to generate lifting force for the UAV 500.
  • the rotor may be a propulsion unit, which allows the drone 500 to move freely in the air.
  • the rotor can rotate at the same rate and/or can generate the same amount of lift or thrust.
  • the rotors can rotate at different rates as desired, generating different amounts of lifting force or thrust, and/or allowing the drone 500 to rotate.
  • one, two, three, four, five, six, seven, eight, nine, ten or more rotors may be provided on the drone 500.
  • These rotors can be arranged such that their rotation axes are parallel to each other.
  • the rotation axis of the rotors can be at any angle with respect to each other, which can affect the movement of the drone 500.
  • the drone 500 may have multiple rotors.
  • the rotor may be connected to the main body of the drone 500, and the main body may include a control unit, an inertial measurement unit (IMU), a processor, a battery, a power supply, and/or other sensors.
  • the rotor may be connected to the body by one or more arms or extensions branching from the central part of the body.
  • one or more arms may extend radially from the central body of the drone 500, and may have rotors at or near the end of the arms.
  • the processor 502 may be a micro-controller unit (MCU), a central processing unit (Central Processing Unit, CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 503 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk.
  • the processor 502 is configured to run a computer program stored in the memory 503, and when executing the computer program, implement any of the drone control methods or motion information determination methods provided in the specification of this application.
  • the embodiments of the present application also provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the drone control method or the motion information determination method provided in the foregoing embodiments.
  • the computer-readable storage medium may be the internal storage unit of the drone described in any of the foregoing embodiments, such as the hard disk or memory of the drone.
  • the computer-readable storage medium may also be an external storage device of the drone, such as a plug-in hard disk equipped on the drone, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
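
As a reading aid only, the degrees of freedom enumerated above (translation along up to three mutually perpendicular axes, rotation about the pitch, roll, and yaw axes) can be bundled into a single motion-state record. The sketch below is a minimal illustration under that framing; the class name MotionState and its field conventions are assumptions and do not come from the application.

```python
# Illustrative only: one way to represent the six degrees of freedom
# described above (translation along X/Y/Z, rotation about
# roll/pitch/yaw). Names and axis conventions are assumptions.
from dataclasses import dataclass

@dataclass
class MotionState:
    x: float = 0.0      # forward translation (m)
    y: float = 0.0      # lateral translation (m)
    z: float = 0.0      # vertical translation (m), positive upward
    roll: float = 0.0   # rotation about the longitudinal axis (rad)
    pitch: float = 0.0  # rotation about the lateral axis (rad)
    yaw: float = 0.0    # rotation about the vertical axis (rad)

# A vehicle free in all six degrees of freedom uses every field;
# more constrained vehicles hold some of them at zero.
climb = MotionState(z=1.0)  # pure upward motion, e.g. from rotor lift
```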

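The embodiments also note that rotors turning at the same rate generate equal lift, while different rotor rates generate different amounts of lift or thrust and allow the aircraft to rotate. The following minimal sketch shows one common way such a mapping is written for a four-rotor aircraft; the function name mix_quadrotor, the "X" layout, and the sign conventions are illustrative assumptions, not the application's control law (sign conventions in particular differ between autopilots).

```python
# Illustrative only: map collective thrust and body-axis commands onto
# four rotor outputs for an "X" quadrotor. Scaling and signs are
# assumptions; real mixers also handle saturation more carefully.

def mix_quadrotor(thrust, roll, pitch, yaw):
    """thrust in [0, 1]; roll/pitch/yaw commands in [-1, 1].

    Rotor layout (top view):
        m0 front-left   m1 front-right
        m2 rear-left    m3 rear-right
    """
    m0 = thrust + roll + pitch - yaw  # front-left
    m1 = thrust - roll + pitch + yaw  # front-right
    m2 = thrust + roll - pitch + yaw  # rear-left
    m3 = thrust - roll - pitch - yaw  # rear-right
    # Clamp each output to the valid actuator range.
    return [min(max(m, 0.0), 1.0) for m in (m0, m1, m2, m3)]

# Equal outputs hover; a yaw command speeds up one spin direction and
# slows the other, rotating the airframe without changing net lift.
print(mix_quadrotor(thrust=0.5, roll=0.0, pitch=0.0, yaw=0.2))
```
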
Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Provided are an unmanned aerial vehicle control method, a motion information determination method and device, and an unmanned aerial vehicle. The unmanned aerial vehicle control method comprises: acquiring a first environment image and a second environment image collected at different moments by a vision sensor (S101); determining, according to the first environment image and the second environment image collected at different moments, whether at least one moving object exists in the environment where the unmanned aerial vehicle is located (S102); if it is determined that at least one moving object exists in the environment where the unmanned aerial vehicle is located and there is a risk of the unmanned aerial vehicle colliding with the at least one moving object, adjusting the flight trajectory of the unmanned aerial vehicle (S103); and controlling the unmanned aerial vehicle to fly according to the adjusted flight trajectory, so that the unmanned aerial vehicle stays away from the at least one moving object (S104). The method improves the flight safety of the unmanned aerial vehicle.
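
Read as an algorithm, the abstract describes a sense-and-avoid loop: capture environment images at different moments, decide from the image pair whether at least one moving object is present, and re-plan the flight trajectory when a collision is possible. The following is a minimal, purely illustrative Python sketch of such a loop; every name in it (avoidance_loop, detect_moving_objects, collision_risk, adjust_trajectory, and the methods on the drone object) is a hypothetical placeholder, not an API from the application or from any real autopilot library.

```python
# Hypothetical sketch of the loop summarized in the abstract. The
# placeholder functions stand in for the vision, risk-assessment, and
# planning stages; none of them are real library calls.
import time

def detect_moving_objects(first_image, second_image):
    """Placeholder: compare two environment images collected at
    different moments and return the objects judged to be moving."""
    return []

def collision_risk(trajectory, moving_objects):
    """Placeholder: True if any moving object may cross the trajectory."""
    return bool(moving_objects)

def adjust_trajectory(trajectory, moving_objects):
    """Placeholder: return a trajectory that keeps clear of the movers."""
    return trajectory

def avoidance_loop(drone, period_s=0.1):
    first_image = drone.capture_image()        # first environment image
    while drone.is_flying():
        time.sleep(period_s)
        second_image = drone.capture_image()   # image at a later moment
        movers = detect_moving_objects(first_image, second_image)
        if movers and collision_risk(drone.trajectory(), movers):
            # Re-plan so the drone stays away from the moving objects,
            # then fly the adjusted trajectory.
            drone.follow(adjust_trajectory(drone.trajectory(), movers))
        first_image = second_image             # slide the time window
```

A real implementation would replace the placeholders with the stages the application claims: matching objects across the two images, estimating their three-dimensional positions and motion, and planning the adjusted trajectory.
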
PCT/CN2020/087592 2020-04-28 2020-04-28 Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle WO2021217451A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/087592 WO2021217451A1 (fr) Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle
CN202080006476.XA CN113168188A (zh) Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087592 WO2021217451A1 (fr) Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2021217451A1 true WO2021217451A1 (fr) 2021-11-04

Family

ID=76879283

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087592 WO2021217451A1 (fr) Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle

Country Status (2)

Country Link
CN (1) CN113168188A (fr)
WO (1) WO2021217451A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106767682A (zh) * 2016-12-01 2017-05-31 腾讯科技(深圳)有限公司 Method for acquiring flight altitude information, and aircraft
CN107194339A (zh) * 2017-05-15 2017-09-22 武汉星巡智能科技有限公司 Obstacle recognition method, device, and unmanned aerial vehicle
CN107980138A (zh) * 2016-12-28 2018-05-01 深圳前海达闼云端智能科技有限公司 False-alarm obstacle detection method and device
US20190094861A1 (en) * 2017-09-28 2019-03-28 Intel IP Corporation Unmanned aerial vehicle and method for operating an unmanned aerial vehicle
CN110689578A (zh) * 2019-10-11 2020-01-14 南京邮电大学 Monocular-vision-based obstacle recognition method for an unmanned aerial vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110362098B (zh) * 2018-03-26 2022-07-05 北京京东尚科信息技术有限公司 Unmanned aerial vehicle visual servo control method and device, and unmanned aerial vehicle
CN110799921A (zh) * 2018-07-18 2020-02-14 深圳市大疆创新科技有限公司 Photographing method and device, and unmanned aerial vehicle
CN109634304B (zh) * 2018-12-13 2022-07-15 中国科学院自动化研究所南京人工智能芯片创新研究院 Unmanned aerial vehicle flight path planning method and device, and storage medium

Also Published As

Publication number Publication date
CN113168188A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
Barry et al. High‐speed autonomous obstacle avoidance with pushbroom stereo
US20190138029A1 (en) Collision avoidance system, depth imaging system, vehicle, map generator and methods thereof
Forster et al. Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles
Shen et al. Vision-based state estimation for autonomous rotorcraft MAVs in complex environments
WO2018086133A1 (fr) Methods and systems for selective sensor fusion
Saha et al. A real-time monocular vision-based frontal obstacle detection and avoidance for low cost UAVs in GPS denied environment
Eynard et al. UAV altitude estimation by mixed stereoscopic vision
Qi et al. Autonomous landing solution of low-cost quadrotor on a moving platform
CN110570463B (zh) Target state estimation method and device, and unmanned aerial vehicle
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
CN112363528B (zh) Airborne-vision-based anti-interference cluster formation control method for unmanned aerial vehicles
Zheng et al. Robust and accurate monocular visual navigation combining IMU for a quadrotor
Vetrella et al. RGB-D camera-based quadrotor navigation in GPS-denied and low light environments using known 3D markers
Williams et al. Feature and pose constrained visual aided inertial navigation for computationally constrained aerial vehicles
WO2018068193A1 (fr) Control method, control device, flight control system, and multi-rotor unmanned aerial vehicle
CN109764864B (zh) Color-recognition-based indoor unmanned aerial vehicle pose acquisition method and system
Jurado et al. Vision‐based trajectory tracking system for an emulated quadrotor UAV
WO2021217450A1 (fr) Target tracking method and device, and storage medium
WO2021217451A1 (fr) Unmanned aerial vehicle control method, motion information determination method and device, and unmanned aerial vehicle
Ha et al. Vision-based Obstacle Avoidance Based on Monocular SLAM and Image Segmentation for UAVs.
CN108733076B (zh) Method and device for grabbing a target object by an unmanned aerial vehicle, and electronic device
Park et al. 3D shape mapping of obstacle using stereo vision sensor on quadrotor UAV
Dubey et al. DROAN - disparity-space representation for obstacle avoidance: Enabling wire mapping & avoidance
Gao et al. Altitude information acquisition of UAV based on monocular vision and MEMS
Qin et al. A 3D rotating laser-based navigation solution for micro aerial vehicles in dynamic environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933985

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933985

Country of ref document: EP

Kind code of ref document: A1