WO2019127518A1 - Obstacle avoidance method and device, and movable platform (避障方法、装置及可移动平台) - Google Patents

Obstacle avoidance method and device, and movable platform (避障方法、装置及可移动平台)

Info

Publication number
WO2019127518A1
WO2019127518A1 (application PCT/CN2017/120249)
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
optical flow
moving object
obstacle avoidance
flow vector
Prior art date
Application number
PCT/CN2017/120249
Other languages
English (en)
French (fr)
Inventor
周游
杜劼熹
刘洁
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201780029125.9A (publication CN109196556A)
Priority to PCT/CN2017/120249 (publication WO2019127518A1)
Publication of WO2019127518A1
Priority to US16/910,890 (publication US20210103299A1)

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0004Transmission of traffic-related information to or from an aircraft
    • G08G5/0013Transmission of traffic-related information to or from an aircraft with a ground station
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0017Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
    • G08G5/0021Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located in the aircraft
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0017Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
    • G08G5/0026Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located on the ground
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0047Navigation or guidance aids for a single aircraft
    • G08G5/0069Navigation or guidance aids for a single aircraft specially adapted for an unmanned aircraft
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0073Surveillance aids
    • G08G5/0078Surveillance aids for monitoring traffic from the aircraft
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/04Anti-collision systems
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/04Anti-collision systems
    • G08G5/045Navigation or guidance aids, e.g. determination of anti-collision manoeuvers
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • The present application relates to the field of drone technology, and in particular to an obstacle avoidance method and device and a movable platform.
  • The embodiments of the invention provide an obstacle avoidance method and device and a movable platform, so as to realize obstacle avoidance based on moving-object detection and improve the safety and user experience of the movable platform's motion.
  • A first aspect of the embodiments of the present invention provides an obstacle avoidance method, including:
  • acquiring a depth map captured by a photographing device mounted on a movable platform; identifying a moving object based on the depth map; determining a motion velocity vector of the moving object; and determining, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
  • A second aspect of the embodiments of the present invention provides an obstacle avoidance device disposed on a movable platform. The obstacle avoidance device includes a processor and a photographing device, and the processor is communicatively connected to the photographing device;
  • the photographing device is configured to capture a depth map;
  • the processor is configured to: acquire the depth map captured by the photographing device, identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
  • A third aspect of the embodiments of the present invention provides a movable platform, including:
  • a fuselage;
  • a power system, mounted on the fuselage, for powering the movable platform;
  • and the obstacle avoidance device of the second aspect above.
  • A fourth aspect of the embodiments of the present invention provides an obstacle avoidance device disposed on a ground station. The obstacle avoidance device includes a processor and a communication interface, and the processor is communicatively connected to the communication interface;
  • the communication interface is configured to: acquire a depth map captured by a photographing device mounted on the movable platform;
  • the processor is configured to: identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
  • With the obstacle avoidance method and device and the movable platform provided by the embodiments of the present invention, a depth map captured by a photographing device mounted on the movable platform is acquired, a moving object is identified based on the depth map, the motion velocity vector of the moving object is determined, and the region where the moving object and the movable platform may collide is determined based on that motion velocity vector, so that the movable platform is controlled to execute an obstacle avoidance strategy in the region where the collision may occur.
  • Because the embodiments of the present invention can identify a dynamic object and determine, according to its motion velocity vector, the region where the moving object and the movable platform may collide, the movable platform is able to evade the moving object. Especially when the movable platform is a car or another movable object moving on the ground, or a drone flying near the ground, the influence of moving objects on the movable platform can be effectively eliminated, improving the safety and user experience of the movable platform's motion.
  • FIG. 1 is a flowchart of an obstacle avoidance method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for acquiring an optical flow vector of a target feature point according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a method for identifying a moving object according to an embodiment of the present invention
  • FIG. 4a is a schematic diagram of optical flow vectors of target feature points according to an embodiment of the present invention;
  • FIG. 4b is a schematic diagram of the result of clustering the optical flow vectors in FIG. 4a;
  • FIG. 5 is a schematic structural diagram of an obstacle avoidance device 10 according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an obstacle avoidance device 30 according to an embodiment of the present invention.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intervening component may also be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component, or an intervening component may be present at the same time.
  • One way is to use ultrasonic or infrared ranging modules. This approach is relatively simple: the platform brakes whenever an object is detected. It therefore suffers from a large amount of measurement noise, easily causes false braking, and has a short measuring range, so it cannot be used at all for a fast-moving movable platform.
  • Another way is to use machine vision algorithms to acquire images through vision cameras and compute depth maps, or to obtain depth maps using modules such as RGB-D or TOF, and then implement the obstacle avoidance strategy from the depth maps.
  • Current depth-map-based computation, however, makes a static assumption, i.e., it assumes that all objects appearing in the image are stationary with respect to the earth.
  • For a movable platform flying at high altitude this assumption generally holds, but for a movable platform flying near the ground it does not, and effective obstacle avoidance cannot be performed for vehicles or pedestrians.
  • Self-driving movable platforms generally perform obstacle avoidance strategies based on lidar, which is expensive and heavy.
  • In view of this, an embodiment of the present invention provides an obstacle avoidance method that acquires a depth map captured by a photographing device mounted on a movable platform, identifies a moving object based on the depth map, determines the motion velocity vector of the moving object, and determines, based on that motion velocity vector, the region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
  • This embodiment can recognize a moving object based on the depth map and control the movable platform to avoid it. Especially when the movable platform is a car or another movable object moving on the ground, or a drone flying near the ground, it can effectively avoid moving obstacles and improve the safety and user experience of the movable platform's motion.
  • The camera model is

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[R\mid T]\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

  • where R is the rotation matrix of the camera;
  • [u v 1]^T represents a two-dimensional (2D) point in pixel coordinates;
  • [x_w y_w z_w 1]^T represents a three-dimensional (3D) point in the earth (world) coordinate system;
  • the matrix K is called the camera calibration matrix, i.e., the intrinsic parameters of each camera;
  • for a finite projective camera, the matrix K contains five intrinsic parameters:

$$K = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

  • α_x = f·m_x and α_y = f·m_y, where f is the focal length and m_x and m_y are the scale factors (pixels per unit distance) in the x and y directions;
  • γ is the skew parameter between the x axis and the y axis (in a CCD camera the pixels are not square);
  • (u_0, v_0) is the principal point.
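As a concrete illustration of the model just described, the following Python sketch projects one world point into pixel coordinates; the values chosen for K, R, T and the world point p_w are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np

# Assumed intrinsics: alpha_x = alpha_y = 700 px, principal point (320, 240), gamma = 0.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera rotation (assumed identity)
T = np.zeros((3, 1))                   # camera translation (assumed zero)
p_w = np.array([1.0, 0.5, 5.0, 1.0])   # homogeneous world point (assumed)

p = K @ np.hstack([R, T]) @ p_w        # s * [u, v, 1]^T
u, v = p[:2] / p[2]                    # divide out the scale factor s
print(u, v)                            # pixel coordinates of the projection
```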
  • FIG. 1 is a flowchart of an obstacle avoidance method according to an embodiment of the present invention.
  • The method may be performed by an obstacle avoidance device, which may be installed on a movable platform or on a ground station.
  • the method includes:
  • Step 101: Acquire a depth map captured by a photographing device mounted on the movable platform.
  • the movable platform involved in this embodiment includes, but is not limited to, any one of the following: a drone, a car, VR glasses, and AR glasses.
  • the photographing apparatus includes at least one camera.
  • For example, this embodiment may simultaneously acquire two images of the same scene with two cameras spaced a preset distance apart, and process the two images with a stereo matching algorithm to obtain a depth map. This is of course only an example and not the sole limitation of the present invention;
  • in fact, the depth map may be obtained by any method in the prior art.
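A minimal sketch of the stereo route mentioned above, using OpenCV's semi-global block matcher to turn a rectified image pair into a depth map; the file names, focal length in pixels, and camera baseline are assumptions for illustration.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=128,    # must be divisible by 16
                                blockSize=5)
# compute() returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

focal_px = 700.0    # focal length in pixels (assumed)
baseline_m = 0.12   # distance between the two cameras in meters (assumed)
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d
```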
  • Step 102 Identify a moving object based on the depth map.
  • the optical flow vector of the target feature point on the depth map is obtained, and the moving object is identified based on the optical flow vector of the target feature point.
  • the method for acquiring the optical flow vector of the target feature point includes at least the following:
  • Since most objects in an ordinary shooting environment are stationary and only a few are moving, the feature points on stationary objects account for the vast majority of all feature points on the depth map, while the feature points on moving objects account for only a very small fraction, and the directions of their optical flow vectors differ from those of the feature points on most stationary objects. Therefore, the optical flow vectors of the feature points on the depth map can be clustered, and the optical flow vectors of that small fraction of feature points taken as the optical flow vectors of the target feature points.
  • Alternatively, the feature points on stationary objects can be screened out, based on the visual odometry (VO) algorithm carried on the movable platform, from the depth map captured by the photographing device mounted on the movable platform, yielding the optical flow vectors of the feature points other than those on stationary objects, i.e., the optical flow vectors of the target feature points.
  • FIG. 2 is a flowchart of a method for acquiring an optical flow vector of a target feature point according to an embodiment of the present invention. As shown in FIG. 2, the method includes:
  • Step 1011: Extract feature points, based on a preset corner detection algorithm, from the depth map captured by the photographing device mounted on the movable platform.
  • To reduce the amount of computation, this embodiment adopts a sparse approach, first extracting feature points from the depth map captured by the photographing device. Without loss of generality, this embodiment selects corners on the depth map as the feature points.
  • Optional corner detection algorithms include, but are not limited to: FAST (features from accelerated segment test), SUSAN, and the Harris corner detection algorithm. This embodiment takes the Harris corner detection algorithm as an example:
  • First, define the matrix A as the structure tensor of the depth map:

$$A = \sum_{u}\sum_{v} w(u,v)\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

  • where I_x and I_y are the gradients of a point on the depth map in the x and y directions, respectively. Based on the matrix A, the corner response value M_c of the point on the depth map can be obtained:

$$M_c = \lambda_1\lambda_2 - k(\lambda_1 + \lambda_2)^2 = \det(A) - k\,\operatorname{trace}^2(A)$$

  • where det(A) is the determinant of the matrix A, trace(A) is the trace of the matrix A, and k is a tunable sensitivity parameter. A threshold M_th is set, and when M_c > M_th, the point is determined to be a feature point.
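A minimal sketch of the M_c > M_th rule above with OpenCV's Harris response; the sensitivity k = 0.04 and the threshold at 1% of the maximum response are common defaults assumed here, not values fixed by the patent.

```python
import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
mc = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)  # corner response M_c
m_th = 0.01 * mc.max()                                     # threshold M_th (assumed)
ys, xs = np.where(mc > m_th)                               # keep points with M_c > M_th
feature_points = np.stack([xs, ys], axis=1).astype(np.float32)
```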
  • Step 1012: Determine the optical flow vector of a feature point by tracking the relative position of the feature point across two frames of images.
  • Optionally, the relative positions of the feature points obtained above may be tracked in two or more frames, and the optical flow vector of each feature point determined from the offset of its relative position between frames. Taking two frames as an example, after tracking the relative positions of a feature point in the two frames, this embodiment can iteratively obtain the offset h of the feature point between the two frames; in the standard Lucas-Kanade formulation (the original expression is given only as an image in the filing) the update takes the form

$$h \approx \frac{\sum_x w(x)\,F'(x)\,\left[G(x) - F(x)\right]}{\sum_x w(x)\,F'(x)^2}$$

  • This embodiment may perform two offset detections for each feature point: first let the later frame be F(x) and the earlier frame be G(x), and iterate to obtain the first offset h; then swap the frames to obtain the second offset h′. If the relationship between the two offsets satisfies a preset first prior condition (for example h = −h′, or h = −h′ + a, where a is a preset constant error), the optical flow vector of the feature point is determined to be h.
  • Of course, each feature point may also have its offset determined only once; the determination method may refer to the above example and is not repeated here.
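A minimal sketch of the two-way offset check described above, using OpenCV's pyramidal Lucas-Kanade tracker: the forward track gives h, the backward track gives h′, and points are kept only where h ≈ −h′ within a preset error a (assumed here to be 0.5 pixels).

```python
import cv2
import numpy as np

prev_img = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # earlier frame
next_img = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # later frame
pts = feature_points.reshape(-1, 1, 2)                     # corners from the step above

fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None)  # offset h
bwd, st2, _ = cv2.calcOpticalFlowPyrLK(next_img, prev_img, fwd, None)  # offset h'

fb_error = np.linalg.norm(bwd - pts, axis=2).ravel()       # |h + h'| per point
keep = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_error < 0.5)
tracked_pts = pts.reshape(-1, 2)[keep]                     # positions in the earlier frame
flow_vectors = (fwd - pts).reshape(-1, 2)[keep]            # optical flow vectors h
```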
  • Step 1013: Based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  • Optionally, the position and attitude (camera pose) of the photographing device when capturing the two frames may be determined based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames, and based on that pose, a random sample consensus (RANSAC) algorithm may be used to acquire the optical flow vectors of the feature points in the depth map other than those on stationary objects, using the projection relation

$$s\,p_c = K\,[R\mid T]\,p_w$$

  • where R is the rotation matrix with the camera pose at capture time as a prior;
  • p_c is the two-dimensional coordinate of a feature point on the depth map;
  • p_w is the three-dimensional coordinate of the feature point in the geodetic coordinate system.
  • Alternatively, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames may be determined according to a preset second prior condition, and then, based on the computed essential matrix, a random sample consensus (RANSAC) algorithm is used to acquire the optical flow vectors of the feature points in the depth map other than those on stationary objects.
  • For example, if the three-dimensional coordinates of a feature point in the geodetic coordinate system corresponding to the two frames are y and y′, the essential matrix E can be determined according to the second prior condition

$$(y')^{T} E\, y = 0$$
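A minimal sketch of screening with the epipolar constraint (y′)ᵀEy = 0: RANSAC inliers of the essential-matrix model are treated as feature points on stationary objects, and the outliers are kept as target feature points. Here K, tracked_pts and flow_vectors are the assumed outputs of the sketches above.

```python
import cv2
import numpy as np

E, inlier_mask = cv2.findEssentialMat(tracked_pts,
                                      tracked_pts + flow_vectors,
                                      K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
static = inlier_mask.ravel() == 1        # consistent with a rigid static scene
target_pts = tracked_pts[~static]        # target feature points
target_flow = flow_vectors[~static]      # their optical flow vectors
```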
  • The moving object is then identified based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.
  • FIG. 3 is a flowchart of a method for identifying a moving object according to an embodiment of the present invention. As shown in FIG. 3, a moving object may be identified according to the following steps:
  • Step 1021: Perform clustering on the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, where the direction deviation between the optical flow vectors in the same group is smaller than a first preset threshold and the difference between the lengths of the optical flow vectors is smaller than a second preset threshold.
  • FIG. 4a is a schematic diagram of the optical flow vectors of target feature points according to an embodiment of the present invention.
  • The optical flow vectors in FIG. 4a can be expressed with the Lucas-Kanade algorithm as follows:

$$\begin{bmatrix} I_x(q_1) & I_y(q_1) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix}\begin{bmatrix} V_x \\ V_y \end{bmatrix} = -\begin{bmatrix} I_t(q_1) \\ \vdots \\ I_t(q_n) \end{bmatrix}$$

  • where q_i is a point in the neighborhood of a point P on the depth map. The size of the neighborhood of P can be set as needed, which is not limited in this embodiment; for example, when the neighborhood of P is a 5×5 window, it contains 25 points.
  • The optical flow vector corresponding to the q_i is [V_x V_y]^T; I_x and I_y are the gradients of q_i on the depth map in the x and y directions, respectively; and I_t is the change in light intensity of q_i between the two frames.
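A minimal sketch of solving the system above for a single point P: stack I_x and I_y over the 5×5 neighborhood (25 points q_i) and solve for [V_x V_y]ᵀ in the least-squares sense. The gradient arrays are assumed to be precomputed, e.g. with cv2.Sobel and a frame difference.

```python
import numpy as np

def lk_flow_at(Ix, Iy, It, x, y, r=2):
    """Ix, Iy: spatial gradients; It: temporal intensity change; (x, y): point P."""
    win = slice(y - r, y + r + 1), slice(x - r, x + r + 1)    # (2r+1)^2 neighborhood
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)  # n x 2 gradient matrix
    b = -It[win].ravel()                                      # right-hand side
    (vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)          # [Vx, Vy]
    return vx, vy
```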
  • On this basis, this example uses a clustering algorithm from unsupervised machine learning (such as, but not limited to, the K-Means++ algorithm) to cluster the optical flow vectors of the target feature points.
  • The position [u v]^T of each target feature point on the depth map, the color and/or light intensity of the target feature point, and the optical flow vector [V_x V_y]^T are used as the clustering basis, yielding at least one optical flow vector group as shown in FIG. 4b. The clustered optical flow vector groups can be expressed as:

$$[u\ \ v\ \ V_x\ \ V_y\ \ I(u,v,t)]^{T}$$
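A minimal sketch of the clustering step, feeding [u v V_x V_y I(u,v,t)]ᵀ vectors to scikit-learn's k-means with k-means++ initialization; the number of clusters is an assumed parameter, since the patent does not fix it.

```python
import numpy as np
from sklearn.cluster import KMeans

# target_pts: N x 2 positions [u, v]; target_flow: N x 2 flow [Vx, Vy];
# intensity: N light-intensity samples I(u, v, t) (assumed available)
features = np.hstack([target_pts, target_flow, intensity[:, None]])
kmeans = KMeans(n_clusters=3, init="k-means++", n_init=10).fit(features)
groups = [features[kmeans.labels_ == c] for c in range(3)]  # optical flow vector groups
```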
  • Step 1022: Identify the moving object from the at least one optical flow vector group based on the depth information and visual information of each of the target feature points.
  • Optionally, the depth information of each of the acquired target feature points, together with the color and/or light intensity of each feature point, is used with a seed filling (flood fill) algorithm to identify the moving object from the at least one optical flow vector group obtained above.
  • For the specific implementation, reference may be made to the existing flood fill algorithm, which is not repeated here.
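A minimal sketch of growing a clustered feature point into a connected moving-object region with OpenCV's flood fill on the depth map; the ±0.3 m depth tolerance is an assumed parameter.

```python
import cv2
import numpy as np

# depth: float32 depth map; target_pts: clustered moving-object feature points
mask = np.zeros((depth.shape[0] + 2, depth.shape[1] + 2), np.uint8)
seed = tuple(int(c) for c in np.round(target_pts[0]))   # seed at one clustered point
cv2.floodFill(depth, mask, seedPoint=seed, newVal=0,
              loDiff=0.3, upDiff=0.3,                   # depth tolerance (assumed)
              flags=cv2.FLOODFILL_MASK_ONLY | 8)
moving_object_mask = mask[1:-1, 1:-1]                   # 1 where the region grew
```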
  • Step 103: Determine the motion velocity vector of the moving object.
  • For example, the motion velocity vector of the moving object may be determined based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames of images.
  • For the specific implementation, reference may be made to the prior art, which is not elaborated here.
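A minimal sketch of estimating the motion velocity vector from the object's geodetic coordinates over a preset number of frames, here with a simple finite difference; the frame interval dt and the sample coordinates are assumptions.

```python
import numpy as np

def velocity_vector(world_coords, dt):
    """world_coords: (n, 3) object position per frame; dt: frame interval in seconds."""
    n = len(world_coords)
    return (world_coords[-1] - world_coords[0]) / ((n - 1) * dt)

# e.g. three frames at 30 fps -> roughly 3 m/s along x, -3 m/s along z
v_obj = velocity_vector(np.array([[0.0, 0.0, 5.0],
                                  [0.1, 0.0, 4.9],
                                  [0.2, 0.0, 4.8]]), dt=1 / 30)
```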
  • Step 104: Determine, based on the motion velocity vector of the moving object, the region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
  • Optionally, this embodiment may determine the movement trajectory of the movable platform based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform, and determine, based on that trajectory, the region where the moving object and the movable platform may collide.
  • The embodiment of the present invention may project the movement trajectory of the movable platform onto the depth map, determine on the depth map, according to the motion trajectory of the moving object, the region where the moving object and the movable platform may collide, and, based on the coordinate information of each region on the depth map in the geodetic coordinate system, determine the three-dimensional coordinates in the geodetic coordinate system of the region where the collision may occur.
  • The movable platform is then controlled to execute a preset obstacle avoidance strategy at those three-dimensional coordinates. For example, after the region where a collision may occur is determined, the movable platform may be controlled to move in the direction opposite to its current movement direction, or the movement trajectory of the movable platform may be adjusted so that it bypasses the region where the collision may occur, or the movable platform may be controlled to stop moving for a preset length of time, so as to achieve obstacle avoidance.
  • This is merely an example and not the sole limitation of the obstacle avoidance strategy in the present invention; in fact, a corresponding obstacle avoidance strategy can be set according to specific needs.
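A minimal sketch of locating a region where a collision may occur, assuming constant velocity for both the moving object and the movable platform over a short horizon; the safety radius, horizon and time step are assumed parameters, and the patent's own trajectory expressions are given only as figures.

```python
import numpy as np

def predict_collision(p_obj, v_obj, p_plat, v_plat,
                      horizon=3.0, step=0.1, safety_radius=1.5):
    """Return the first platform position whose distance to the object
    drops below safety_radius within the horizon, or None."""
    for t in np.arange(0.0, horizon, step):
        gap = (p_obj + v_obj * t) - (p_plat + v_plat * t)
        if np.linalg.norm(gap) < safety_radius:
            return p_plat + v_plat * t   # region where the collision may occur
    return None

region = predict_collision(np.array([0.0, 0.0, 5.0]), v_obj,
                           np.zeros(3), np.array([0.0, 0.0, 1.5]))
```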
  • To increase interaction with the user and improve the user experience, this embodiment may also display the movement trajectory of the movable platform, or display the region where the collision may occur to the user, so that the user can take obstacle avoidance measures in time.
  • With the obstacle avoidance method and device and the movable platform provided by this embodiment, the optical flow vectors of the target feature points on the depth map captured by the photographing device mounted on the movable platform, and the movement velocity vector of the movable platform, are acquired; the moving object is identified from the depth map based on the acquired optical flow vectors of the target feature points and the depth information and visual information of the target feature points; the motion velocity vector of the moving object is determined by tracking the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames; and the region where the moving object and the movable platform may collide is determined based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform, so that the movable platform is controlled to execute an obstacle avoidance strategy in the region where the collision may occur.
  • Because the embodiment of the present invention can identify a dynamic object and determine, according to its motion velocity vector, the region where the moving object and the movable platform may collide, the movable platform is able to evade the moving object. Especially when the movable platform is a car or another movable object moving on the ground, or a drone flying near the ground, the influence of moving objects on the movable platform can be effectively eliminated, improving the safety and user experience of the movable platform's motion.
  • Moreover, since the intermediate results of the visual odometry (VO) algorithm in an existing movable platform already produce the optical flow vectors of the target feature points required by this embodiment, this embodiment can directly use the intermediate results of the existing VO algorithm to detect moving objects, which effectively reduces the amount of computation, improves the efficiency of obstacle avoidance detection, and further improves its real-time performance.
  • FIG. 5 is a schematic structural diagram of an obstacle avoidance device 10 according to an embodiment of the present invention. As shown in FIG. 5, the obstacle avoidance device 10 is disposed on a movable platform 20.
  • The obstacle avoidance device 10 includes a processor 11 and a photographing device 21, the processor 11 is communicatively connected to the photographing device 21, and the photographing device is configured to capture a depth map. The processor 11 is configured to: acquire the depth map captured by the photographing device 21, identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
  • Optionally, the processor 11 is configured to: acquire optical flow vectors of target feature points on the depth map, where the target feature points do not include feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.
  • Optionally, the processor 11 is configured to: acquire the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
  • Optionally, the processor 11 is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the photographing device mounted on the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and screen out, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  • Optionally, the processor 11 is configured to: determine, based on the relative positions of a feature point in the two frames of images, a first offset of the feature point's relative position in the later frame with respect to its relative position in the earlier frame, and a second offset of the feature point's relative position in the earlier frame with respect to its relative position in the later frame; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
  • Optionally, the processor 11 is configured to: determine, according to a preset second prior condition, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, where the second prior condition is a conditional relationship between the three-dimensional coordinates of a feature point in the geodetic coordinate system in the two frames of images and the essential matrix; and, based on the essential matrix, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  • Optionally, the processor 11 is configured to: determine, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the position and attitude of the photographing device when capturing the two frames; and, based on that position and attitude, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  • Optionally, the processor 11 is configured to: identify the moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.
  • Optionally, the processor 11 is configured to: perform clustering on the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, where the direction deviation between the optical flow vectors in the same group is smaller than a first preset threshold and the difference between the lengths of the optical flow vectors is smaller than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and visual information of each of the target feature points.
  • Optionally, the processor 11 is configured to: identify the moving object from the at least one optical flow vector group using a seed filling (flood fill) algorithm, based on the depth information of each of the target feature points and the visual information of each feature point.
  • Optionally, the processor 11 is configured to: determine the motion velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames of images.
  • Optionally, the processor 11 is configured to: determine a movement trajectory of the movable platform based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the movement trajectory, the region where the moving object and the movable platform may collide.
  • Optionally, the processor 11 is configured to: project the movement trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates in the geodetic coordinate system of the region where the collision may occur.
  • the processor 11 is configured to: control the movable platform to move in a direction opposite to a current moving direction in an area where the collision may occur.
  • the processor 11 is configured to: adjust a movement trajectory of the movable platform, so that the movable platform bypasses an area where the collision may occur.
  • the processor 11 is configured to: control the movable platform to stop moving for a preset time length to avoid the moving object.
  • the photographing device comprises at least one camera.
  • the obstacle avoidance device provided in this embodiment can perform the obstacle avoidance method described in the foregoing embodiment, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • This embodiment provides a movable platform, including:
  • a fuselage;
  • a power system, mounted on the fuselage, for powering the movable platform;
  • and the obstacle avoidance device described in the above embodiment.
  • The movable platform includes any one of the following: a drone, a car, VR glasses, AR glasses.
  • FIG. 6 is a schematic structural diagram of an obstacle avoidance device 30 according to an embodiment of the present invention.
  • The obstacle avoidance device 30 is disposed on a ground station 40 and includes a processor 31 and a communication interface 32, the processor 31 being communicatively connected to the communication interface 32. The communication interface 32 is configured to: acquire a depth map captured by a photographing device 51 mounted on a movable platform 50. The processor 31 is configured to: identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
  • Optionally, the processor 31 is configured to: acquire optical flow vectors of target feature points on the depth map, where the target feature points do not include feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.
  • Optionally, the processor 31 is configured to: acquire the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
  • Optionally, the processor 31 is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the photographing device mounted on the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and screen out, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  • Optionally, the processor 31 is configured to: determine, based on the relative positions of a feature point in the two frames of images, a first offset of the feature point's relative position in the later frame with respect to its relative position in the earlier frame, and a second offset of the feature point's relative position in the earlier frame with respect to its relative position in the later frame; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
  • Optionally, the processor 31 is configured to: determine, according to a preset second prior condition, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, where the second prior condition is a conditional relationship between the three-dimensional coordinates of a feature point in the geodetic coordinate system in the two frames of images and the essential matrix; and, based on the essential matrix, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  • Optionally, the processor 31 is configured to: determine, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the position and attitude of the photographing device when capturing the two frames; and, based on that position and attitude, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  • Optionally, the processor 31 is configured to: identify the moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.
  • Optionally, the processor 31 is configured to: perform clustering on the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, where the direction deviation between the optical flow vectors in the same group is smaller than a first preset threshold and the difference between the lengths of the optical flow vectors is smaller than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and visual information of each of the target feature points.
  • Optionally, the processor 31 is configured to: identify the moving object from the at least one optical flow vector group using a seed filling (flood fill) algorithm, based on the depth information of each of the target feature points and the visual information of each feature point.
  • Optionally, the processor 31 is configured to: determine the motion velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames of images.
  • Optionally, the processor 31 is configured to: determine a movement trajectory of the movable platform based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the movement trajectory, the region where the moving object and the movable platform may collide.
  • Optionally, the processor 31 is configured to: project the movement trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates in the geodetic coordinate system of the region where the collision may occur.
  • the obstacle avoidance device further includes a display component 33, the display component 33 is communicatively coupled to the processor 31; and the display component 33 is configured to: display a movement trajectory of the movable platform.
  • the display component 33 is configured to: display the depth map, and mark the area where the collision may occur on the depth map.
  • the processor 31 is configured to: control the movable platform to move in an opposite direction of the current moving direction in the area where the collision may occur.
  • the processor 31 is configured to: adjust a movement trajectory of the movable platform, so that the movable platform bypasses an area where the collision may occur.
  • the processor 31 is configured to: control the movable platform to stop moving for a preset length of time to avoid the moving object.
  • the photographing device comprises at least one camera.
  • the obstacle avoidance device provided in this embodiment can perform the obstacle avoidance method described in the foregoing embodiment, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • the disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as the units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • The above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

An embodiment of the present invention provides an obstacle avoidance method and device and a movable platform. The method includes: acquiring a depth map captured by a photographing device mounted on a movable platform; identifying a moving object based on the depth map; determining a motion velocity vector of the moving object; and determining, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur. Embodiments of the present invention can improve the safety and user experience of the movable platform's motion.

Description

Obstacle avoidance method and device, and movable platform

Technical Field

The present application relates to the field of drone technology, and in particular to an obstacle avoidance method and device and a movable platform.

Background Art

As drones become more and more popular, more people are taking up drone aerial photography. For users who have never used a drone before, however, operation is a problem, and a slight mistake can easily cause a crash. These users therefore need assisted-piloting measures to help them avoid obstacles.
Summary of the Invention

Embodiments of the present invention provide an obstacle avoidance method and device and a movable platform, so as to realize obstacle avoidance based on moving-object detection and improve the safety and user experience of the movable platform's motion.

A first aspect of the embodiments of the present invention provides an obstacle avoidance method, including:

acquiring a depth map captured by a photographing device mounted on a movable platform;

identifying a moving object based on the depth map;

determining a motion velocity vector of the moving object;

determining, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
A second aspect of the embodiments of the present invention provides an obstacle avoidance device disposed on a movable platform. The obstacle avoidance device includes a processor and a photographing device, and the processor is communicatively connected to the photographing device;

the photographing device is configured to capture a depth map;

the processor is configured to: acquire the depth map captured by the photographing device, identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
A third aspect of the embodiments of the present invention provides a movable platform, including:

a fuselage;

a power system, mounted on the fuselage, for powering the movable platform;

and the obstacle avoidance device of the second aspect above.
A fourth aspect of the embodiments of the present invention provides an obstacle avoidance device disposed on a ground station. The obstacle avoidance device includes a processor and a communication interface, and the processor is communicatively connected to the communication interface;

the communication interface is configured to: acquire a depth map captured by a photographing device mounted on a movable platform;

the processor is configured to: identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.
With the obstacle avoidance method and device and the movable platform provided by the embodiments of the present invention, a depth map captured by a photographing device mounted on the movable platform is acquired, a moving object is identified based on the depth map, the motion velocity vector of the moving object is determined, and the region where the moving object and the movable platform may collide is determined based on that motion velocity vector, so that the movable platform is controlled to execute an obstacle avoidance strategy in the region where the collision may occur. Because the embodiments of the present invention can identify a dynamic object and determine, according to its motion velocity vector, the region where the moving object and the movable platform may collide, the movable platform is able to evade the moving object. Especially when the movable platform is a car or another movable object moving on the ground, or a drone flying near the ground, the influence of moving objects on the movable platform can be effectively eliminated, improving the safety and user experience of the movable platform's motion.
Brief Description of the Drawings

FIG. 1 is a flowchart of an obstacle avoidance method according to an embodiment of the present invention;

FIG. 2 is a flowchart of a method for acquiring optical flow vectors of target feature points according to an embodiment of the present invention;

FIG. 3 is a flowchart of a method for identifying a moving object according to an embodiment of the present invention;

FIG. 4a is a schematic diagram of optical flow vectors of target feature points according to an embodiment of the present invention;

FIG. 4b is a schematic diagram of the result of clustering the optical flow vectors in FIG. 4a;

FIG. 5 is a schematic structural diagram of an obstacle avoidance device 10 according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of an obstacle avoidance device 30 according to an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

It should be noted that when a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intervening component may also be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component, or an intervening component may be present at the same time.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used herein in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.

Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments below and the features in the embodiments can be combined with each other as long as there is no conflict.
In the prior art, movable platforms such as cars and drones generally perform obstacle avoidance in the following ways:

One way is to use ultrasonic or infrared ranging modules. This approach is relatively simple: the platform brakes whenever an object is detected. It therefore suffers from a large amount of measurement noise, easily causes false braking, and has a short measuring range, so it cannot be used at all for a fast-moving movable platform.

Another way is to use machine vision algorithms: images are acquired by vision cameras and depth maps are computed, or depth maps are obtained with modules such as RGB-D or TOF, and the obstacle avoidance strategy is then implemented from the depth maps. Current depth-map-based computation, however, makes a static assumption, i.e., it assumes that all objects appearing in the image are stationary with respect to the earth. For a movable platform flying at high altitude (such as, but not limited to, a drone) this assumption generally holds, but for a movable platform flying near the ground it does not, and effective obstacle avoidance cannot be performed for vehicles or pedestrians.

Self-driving movable platforms (such as, but not limited to, self-driving cars) generally perform obstacle avoidance strategies based on lidar, which is expensive and heavy.
In view of the above technical problems in the prior art, an embodiment of the present invention provides an obstacle avoidance method that acquires a depth map captured by a photographing device mounted on a movable platform, identifies a moving object based on the depth map, determines the motion velocity vector of the moving object, and determines, based on the motion velocity vector of the moving object, the region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur. This embodiment can recognize a moving object based on the depth map and control the movable platform to avoid it. Especially when the movable platform is a car or another movable object moving on the ground, or a drone flying near the ground, it can effectively avoid moving obstacles and improve the safety and user experience of the movable platform's motion.
Since the embodiments of the present invention involve the processing of depth maps, to facilitate understanding of the technical solution of the present invention, some technical parameters of the photographing device are first described below, taking a camera as an example:

Camera model:

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[R\mid T]\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where: R is the rotation matrix of the camera;

[u v 1]^T represents a two-dimensional (2D) point in pixel coordinates;

[x_w y_w z_w 1]^T represents a three-dimensional (3D) point in the geodetic (world) coordinate system;

the matrix K is called the camera calibration matrix, i.e., the intrinsic parameters of each camera;

for a finite projective camera, the matrix K contains five intrinsic parameters:

$$K = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where α_x = f·m_x and α_y = f·m_y, f is the focal length, and m_x and m_y are the numbers of pixels per unit distance (scale factors) in the x and y directions; γ is the skew parameter between the x axis and the y axis (in a CCD camera the pixels are not square); and (u_0, v_0) is the principal point.
Specifically, FIG. 1 is a flowchart of an obstacle avoidance method according to an embodiment of the present invention. The method may be executed by an obstacle avoidance device, which may be installed on a movable platform or on a ground station. As shown in FIG. 1, the method includes:

Step 101: Acquire a depth map captured by a photographing device mounted on a movable platform.

Optionally, the movable platform involved in this embodiment includes, but is not limited to, any one of the following: a drone, a car, VR glasses, and AR glasses.

The photographing device involved in this embodiment includes at least one camera.

For example, this embodiment may simultaneously acquire two images of the same scene with two cameras spaced a preset distance apart and process the two images with a stereo matching algorithm to obtain a depth map. This is of course only an example and not the sole limitation of the present invention; in fact, the depth map may be obtained by any method in the prior art, which is not specifically limited in this embodiment.
Step 102: Identify a moving object based on the depth map.

Optionally, this embodiment may acquire the optical flow vectors of target feature points on the depth map and identify the moving object based on the optical flow vectors of the target feature points.

Specifically, in this embodiment, the methods for acquiring the optical flow vectors of the target feature points include at least the following:

In one possible implementation, since most objects in an ordinary shooting environment are stationary and only a few are moving, the feature points on stationary objects account for the vast majority of all feature points on the captured depth map, while the feature points on moving objects account for only a very small fraction, and the directions of their optical flow vectors differ from those of the feature points on most stationary objects. Therefore, the optical flow vectors of the feature points on the depth map can be clustered, and the optical flow vectors of that very small fraction of feature points taken as the optical flow vectors of the target feature points. For the specific implementation, reference may be made to the prior art, which is not repeated here.

In another possible implementation, based on the visual odometry (VO) algorithm carried on the movable platform, the optical flow vectors of the feature points other than the feature points on stationary objects (i.e., the optical flow vectors of the target feature points) can be selected from the depth map captured by the photographing device mounted on the movable platform. Specifically, FIG. 2 is a flowchart of a method for acquiring the optical flow vectors of the target feature points according to an embodiment of the present invention. As shown in FIG. 2, the method includes:
Step 1011: Extract feature points, based on a preset corner detection algorithm, from the depth map captured by the photographing device mounted on the movable platform.

Here, to reduce the amount of computation, this embodiment adopts a sparse approach and first extracts feature points from the depth map captured by the photographing device. Without loss of generality, this embodiment selects corners on the depth map as the feature points. Optional corner detection algorithms include, but are not limited to: FAST (features from accelerated segment test), SUSAN, and the Harris corner detection algorithm. This embodiment takes the Harris corner detection algorithm as an example:

First, define the matrix A as the structure tensor of the depth map:

$$A = \sum_{u}\sum_{v} w(u,v)\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

where I_x and I_y are the gradients of a point on the depth map in the x and y directions, respectively. Based on the matrix A, the corner response value M_c of the point on the depth map can be obtained:

$$M_c = \lambda_1\lambda_2 - k(\lambda_1 + \lambda_2)^2 = \det(A) - k\,\operatorname{trace}^2(A)$$

where det(A) is the determinant of the matrix A, trace(A) is the trace of the matrix A, and k is a tunable sensitivity parameter. A threshold M_th is set, and when M_c > M_th, the point is determined to be a feature point.
Step 1012: Determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images.

Optionally, this embodiment may track the relative positions of the feature points obtained above in two or more frames of images and determine the optical flow vector of each feature point according to the offset of its relative position between frames. Taking two frames as an example, after tracking the relative positions of a feature point in the two frames, this embodiment can iteratively obtain the offset h of the feature point between the two frames; in the standard Lucas-Kanade formulation (the original expression is given only as an image in the filing) the update takes the form

$$h \approx \frac{\sum_x w(x)\,F'(x)\,\left[G(x) - F(x)\right]}{\sum_x w(x)\,F'(x)^2}$$

Here, for each feature point, this embodiment may perform two offset detections: first let the later frame be F(x) and the earlier frame be G(x), and iterate the above formula to obtain the first offset h of the feature point's relative position in the later frame with respect to its relative position in the earlier frame; then, conversely, let the earlier frame be F(x) and the later frame be G(x) to obtain the second offset h′ of the feature point's relative position in the earlier frame with respect to its relative position in the later frame. If the relationship between the first offset and the second offset satisfies a preset first prior condition, the optical flow vector of the feature point is determined to be h. The first prior condition may be specified as h = −h′, but is not limited to h = −h′; in fact, it may also be specified as h = −h′ + a, where a is a preset error and is a constant.

Of course, the above is only an example and not the sole limitation of the present invention. In fact, each feature point may also have its offset determined only once, and the determination method may refer to the above example, which is not repeated here.
Step 1013: Based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.

Optionally, in one possible implementation, the position and attitude (camera pose) of the photographing device when capturing the two frames may be determined based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames, and based on that pose, a random sample consensus (RANSAC) algorithm may be used to acquire the optical flow vectors of the feature points in the depth map other than the feature points on stationary objects. The specific algorithm is as follows:

$$s\,p_c = K\,[R\mid T]\,p_w$$

Figure PCTCN2017120249-appb-000005

where R is the rotation matrix with the position and attitude of the photographing device at capture time as a prior, p_c is the two-dimensional coordinate of a feature point on the depth map, and p_w is the three-dimensional coordinate of the feature point in the geodetic coordinate system.

In another possible implementation, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames may be determined according to a preset second prior condition, and then, based on the computed essential matrix, a random sample consensus (RANSAC) algorithm is used to acquire the optical flow vectors of the feature points in the depth map other than the feature points on stationary objects.

For example, assuming the three-dimensional coordinates of a feature point in the geodetic coordinate system corresponding to the two frames are y and y′, the essential matrix E can be determined according to the second prior condition shown below:

$$(y')^{T} E\, y = 0$$

Further, through the random sample consensus (RANSAC) algorithm, the optical flow vectors of the feature points in the depth map other than the feature points on stationary objects, i.e., the optical flow vectors of the target feature points, can be obtained. For the principle and implementation of the RANSAC algorithm, reference may be made to the prior art, which is not repeated in this embodiment.
Further, after the optical flow vectors of the target feature points are obtained, the moving object is identified based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.

Specifically, FIG. 3 is a flowchart of a method for identifying a moving object according to an embodiment of the present invention. As shown in FIG. 3, the moving object may be identified according to the following steps:

Step 1021: Perform clustering on the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, where the direction deviation between the optical flow vectors in the same group is smaller than a first preset threshold and the difference between the lengths of the optical flow vectors is smaller than a second preset threshold.

For example, FIG. 4a is a schematic diagram of the optical flow vectors of target feature points according to an embodiment of the present invention. The optical flow vectors in FIG. 4a can be expressed with the Lucas-Kanade algorithm as follows:

$$\begin{bmatrix} I_x(q_1) & I_y(q_1) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix}\begin{bmatrix} V_x \\ V_y \end{bmatrix} = -\begin{bmatrix} I_t(q_1) \\ \vdots \\ I_t(q_n) \end{bmatrix}$$

where q_i is a point in the neighborhood of a point P on the depth map. The size of the neighborhood of P can be set as needed and is not limited in this embodiment; for example, when the neighborhood of P is a 5×5 neighborhood, it contains 25 points. The optical flow vector corresponding to q_i is [V_x V_y]^T; I_x and I_y are the gradients of q_i on the depth map in the x and y directions, respectively; and I_t is the change in light intensity of q_i between the two frames.

Further, on the basis of the above optical flow representation, this example uses a clustering algorithm from unsupervised machine learning (such as, but not limited to, the K-Means++ algorithm) to cluster the optical flow vectors of the target feature points, taking the position [u v]^T of each target feature point on the depth map, the color and/or light intensity of the target feature point, and the optical flow vector [V_x V_y]^T as the clustering basis, to obtain at least one optical flow vector group as shown in FIG. 4b. The clustered optical flow vector groups can be expressed as:

$$[u\ \ v\ \ V_x\ \ V_y\ \ I(u,v,t)]^{T}$$

Of course, the above is only an example and not the sole limitation of the present invention.

Step 1022: Identify the moving object from the at least one optical flow vector group based on the depth information and visual information of each of the target feature points.

Optionally, this embodiment uses the depth information of each of the acquired target feature points, together with the color and/or light intensity of each feature point, with a seed filling (flood fill) algorithm to identify the moving object from the at least one optical flow vector group obtained above. For the specific implementation, reference may be made to the existing flood fill algorithm, which is not repeated here.
Step 103: Determine the motion velocity vector of the moving object.

For example, in one possible implementation, the motion velocity vector of the moving object may be determined based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames of images. For the specific implementation, reference may be made to the prior art, which is not elaborated in this embodiment.
Step 104: Based on the motion velocity vector of the moving object, determine the region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.

Optionally, this embodiment may determine the movement trajectory of the movable platform based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform using the following expressions, and determine, based on the movement trajectory of the movable platform, the region where the moving object and the movable platform may collide:

Figure PCTCN2017120249-appb-000007

Figure PCTCN2017120249-appb-000008

Optionally, after the movement trajectory of the movable platform is obtained, the embodiment of the present invention may project the movement trajectory of the movable platform onto the depth map, determine on the depth map, according to the motion trajectory of the moving object, the region where the moving object and the movable platform may collide, and determine, based on the coordinate information of each region on the depth map in the geodetic coordinate system, the three-dimensional coordinates in the geodetic coordinate system of the region where the moving object and the movable platform may collide. The movable platform is thereby controlled to execute a preset obstacle avoidance strategy at those three-dimensional coordinates. For example, after the region where a collision may occur is determined, the movable platform may be controlled to move in the direction opposite to the current movement direction, or the movement trajectory of the movable platform may be adjusted so that the movable platform bypasses the region where the collision may occur, or the movable platform may be controlled to stop moving for a preset length of time, so as to achieve obstacle avoidance. Of course, this is only an example and not the sole limitation of the obstacle avoidance strategy in the present invention; in fact, a corresponding obstacle avoidance strategy can be set according to specific needs.

Optionally, to increase interaction with the user and improve the user experience, this embodiment may also display the movement trajectory of the movable platform, or display the region where the collision may occur to the user, so that the user can take obstacle avoidance measures in time.
With the obstacle avoidance method and device and the movable platform provided by this embodiment, the optical flow vectors of the target feature points on the depth map captured by the photographing device mounted on the movable platform, and the movement velocity vector of the movable platform, are acquired; the moving object is identified from the depth map based on the acquired optical flow vectors of the target feature points and the depth information and visual information of the target feature points; the motion velocity vector of the moving object is determined by tracking the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames; and the region where the moving object and the movable platform may collide is determined based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform, so that the movable platform is controlled to execute an obstacle avoidance strategy in the region where the collision may occur. Because the embodiment of the present invention can identify a dynamic object and determine, according to its motion velocity vector, the region where the moving object and the movable platform may collide, the movable platform is able to evade the moving object. Especially when the movable platform is a car or another movable object moving on the ground, or a drone flying near the ground, the influence of moving objects on the movable platform can be effectively eliminated, improving the safety and user experience of the movable platform's motion. In addition, since the intermediate results of the visual odometry (VO) algorithm in an existing movable platform already produce the optical flow vectors of the target feature points required by this embodiment, this embodiment can directly use the intermediate results of the existing VO algorithm to detect moving objects, which effectively reduces the amount of computation, improves the efficiency of obstacle avoidance detection, and further improves its real-time performance.
An embodiment of the present invention provides an obstacle avoidance device. FIG. 5 is a schematic structural diagram of an obstacle avoidance device 10 according to an embodiment of the present invention. As shown in FIG. 5, the obstacle avoidance device 10 is disposed on a movable platform 20 and includes a processor 11 and a photographing device 21, the processor 11 being communicatively connected to the photographing device 21. The photographing device is configured to capture a depth map; the processor 11 is configured to: acquire the depth map captured by the photographing device 21, identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.

Optionally, the processor 11 is configured to: acquire optical flow vectors of target feature points on the depth map, where the target feature points do not include feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.

Optionally, the processor 11 is configured to: acquire the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.

Optionally, the processor 11 is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the photographing device mounted on the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and screen out, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.

Optionally, the processor 11 is configured to: determine, based on the relative positions of a feature point in the two frames of images, a first offset of the feature point's relative position in the later frame with respect to its relative position in the earlier frame, and a second offset of the feature point's relative position in the earlier frame with respect to its relative position in the later frame; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.

Optionally, the processor 11 is configured to: determine, according to a preset second prior condition, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, where the second prior condition is a conditional relationship between the three-dimensional coordinates of a feature point in the geodetic coordinate system in the two frames of images and the essential matrix; and, based on the essential matrix, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.

Optionally, the processor 11 is configured to: determine, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the position and attitude of the photographing device when capturing the two frames; and, based on that position and attitude, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.

Optionally, the processor 11 is configured to: identify the moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.

Optionally, the processor 11 is configured to: perform clustering on the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, where the direction deviation between the optical flow vectors in the same group is smaller than a first preset threshold and the difference between the lengths of the optical flow vectors is smaller than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and visual information of each of the target feature points.

Optionally, the processor 11 is configured to: identify the moving object from the at least one optical flow vector group using a seed filling algorithm, based on the depth information of each of the target feature points and the visual information of each feature point.

Optionally, the processor 11 is configured to: determine the motion velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames of images.

Optionally, the processor 11 is configured to: determine a movement trajectory of the movable platform based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the movement trajectory, the region where the moving object and the movable platform may collide.

Optionally, the processor 11 is configured to: project the movement trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates in the geodetic coordinate system of the region where the collision may occur.

Optionally, the processor 11 is configured to: control the movable platform to move, in the region where the collision may occur, in the direction opposite to the current movement direction.

Optionally, the processor 11 is configured to: adjust the movement trajectory of the movable platform so that the movable platform bypasses the region where the collision may occur.

Optionally, the processor 11 is configured to: control the movable platform to stop moving for a preset length of time, so as to avoid the moving object.

Optionally, the photographing device includes at least one camera.

The obstacle avoidance device provided in this embodiment can execute the obstacle avoidance method described in the above embodiments; its execution manner and beneficial effects are similar and are not repeated here.
This embodiment provides a movable platform, including:

a fuselage;

a power system, mounted on the fuselage, for powering the movable platform;

and the obstacle avoidance device described in the above embodiments. The movable platform includes any one of the following: a drone, a car, VR glasses, AR glasses.
An embodiment of the present invention provides an obstacle avoidance device. FIG. 6 is a schematic structural diagram of an obstacle avoidance device 30 according to an embodiment of the present invention. As shown in FIG. 6, the obstacle avoidance device 30 is disposed on a ground station 40 and includes a processor 31 and a communication interface 32, the processor 31 being communicatively connected to the communication interface 32. The communication interface 32 is configured to: acquire a depth map captured by a photographing device 51 mounted on a movable platform 50; the processor 31 is configured to: identify a moving object based on the depth map, determine a motion velocity vector of the moving object, and determine, based on the motion velocity vector of the moving object, a region where the moving object and the movable platform may collide, thereby controlling the movable platform to execute an obstacle avoidance strategy in the region where the collision may occur.

Optionally, the processor 31 is configured to: acquire optical flow vectors of target feature points on the depth map, where the target feature points do not include feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.

Optionally, the processor 31 is configured to: acquire the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.

Optionally, the processor 31 is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the photographing device mounted on the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and screen out, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.

Optionally, the processor 31 is configured to: determine, based on the relative positions of a feature point in the two frames of images, a first offset of the feature point's relative position in the later frame with respect to its relative position in the earlier frame, and a second offset of the feature point's relative position in the earlier frame with respect to its relative position in the later frame; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.

Optionally, the processor 31 is configured to: determine, according to a preset second prior condition, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, where the second prior condition is a conditional relationship between the three-dimensional coordinates of a feature point in the geodetic coordinate system in the two frames of images and the essential matrix; and, based on the essential matrix, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.

Optionally, the processor 31 is configured to: determine, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two frames of images, the position and attitude of the photographing device when capturing the two frames; and, based on that position and attitude, use a random sample consensus algorithm to screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.

Optionally, the processor 31 is configured to: identify the moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.

Optionally, the processor 31 is configured to: perform clustering on the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, where the direction deviation between the optical flow vectors in the same group is smaller than a first preset threshold and the difference between the lengths of the optical flow vectors is smaller than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and visual information of each of the target feature points.

Optionally, the processor 31 is configured to: identify the moving object from the at least one optical flow vector group using a seed filling algorithm, based on the depth information of each of the target feature points and the visual information of each feature point.

Optionally, the processor 31 is configured to: determine the motion velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of frames of images.

Optionally, the processor 31 is configured to: determine a movement trajectory of the movable platform based on the motion velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the movement trajectory, the region where the moving object and the movable platform may collide.

Optionally, the processor 31 is configured to: project the movement trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates in the geodetic coordinate system of the region where the collision may occur.

Optionally, the obstacle avoidance device further includes a display component 33 communicatively connected to the processor 31; the display component 33 is configured to: display the movement trajectory of the movable platform.

Optionally, the display component 33 is configured to: display the depth map and mark on the depth map the region where the collision may occur.

Optionally, the processor 31 is configured to: control the movable platform to move, in the region where the collision may occur, in the direction opposite to the current movement direction.

Optionally, the processor 31 is configured to: adjust the movement trajectory of the movable platform so that the movable platform bypasses the region where the collision may occur.

Optionally, the processor 31 is configured to: control the movable platform to stop moving for a preset length of time, so as to avoid the moving object.

Optionally, the photographing device includes at least one camera.

The obstacle avoidance device provided in this embodiment can execute the obstacle avoidance method described in the above embodiments; its execution manner and beneficial effects are similar and are not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional modules above is only an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (57)

  1. An obstacle avoidance method, characterized by comprising:
    acquiring a depth map captured by a shooting device carried by a movable platform;
    identifying a moving object based on the depth map;
    determining a velocity vector of the moving object;
    based on the velocity vector of the moving object, determining a region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the potential collision region.
  2. The method according to claim 1, characterized in that identifying the moving object based on the depth map comprises:
    acquiring optical flow vectors of target feature points on the depth map, the target feature points excluding feature points on stationary objects;
    identifying the moving object based on the optical flow vectors of the target feature points.
  3. The method according to claim 2, characterized in that acquiring the optical flow vectors of the target feature points on the depth map comprises:
    acquiring the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
  4. The method according to claim 3, characterized in that acquiring the optical flow vectors of the target feature points on the depth map based on the visual odometry (VO) algorithm comprises:
    extracting feature points, based on a preset corner detection algorithm, from the depth map captured by the shooting device carried by the movable platform;
    determining the optical flow vectors of the feature points by tracking the relative positions of the feature points in two image frames;
    based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, filtering out the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  5. The method according to claim 4, characterized in that determining the optical flow vectors of the feature points by tracking the relative positions of the feature points in the two image frames comprises:
    based on the relative positions of a feature point in the two image frames, respectively determining a first offset of the relative position of the feature point in the later frame with respect to its relative position in the earlier frame, and a second offset of the relative position of the feature point in the earlier frame with respect to its relative position in the later frame;
    if the relationship between the first offset and the second offset satisfies a preset first prior condition, determining the optical flow vector of the feature point based on the first offset or the second offset.
  6. The method according to claim 4, characterized in that filtering out, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points, comprises:
    based on a preset second prior condition, determining an essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, wherein the second prior condition is the conditional relationship between the three-dimensional coordinates of feature points in the geodetic coordinate system in two image frames and the essential matrix;
    based on the essential matrix, filtering out, using a random sample consensus algorithm, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  7. The method according to claim 4, characterized in that filtering out, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points, comprises:
    based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, determining the position and attitude of the shooting device when capturing the two image frames;
    based on the position and attitude of the shooting device when capturing the two image frames, filtering out, using a random sample consensus algorithm, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  8. The method according to claim 2, characterized in that identifying the moving object based on the optical flow vectors of the target feature points comprises:
    identifying the moving object based on the optical flow vectors of the target feature points, depth information of the target feature points, and visual information of the target feature points, wherein the visual information includes colour and/or light intensity.
  9. The method according to claim 8, characterized in that identifying the moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points comprises:
    clustering the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, wherein within the same optical flow vector group the direction deviation between optical flow vectors is smaller than a first preset threshold and the difference in length between optical flow vectors is smaller than a second preset threshold;
    identifying the moving object from the at least one optical flow vector group based on the depth information and the visual information of each of the target feature points.
  10. The method according to claim 9, characterized in that identifying the moving object from the at least one optical flow vector group based on the depth information and the visual information of each of the target feature points comprises:
    identifying the moving object from the at least one optical flow vector group, using a seed filling algorithm, based on the depth information of each of the target feature points and the visual information of each feature point.
  11. The method according to claim 1, characterized in that determining the velocity vector of the moving object comprises:
    determining the velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of image frames.
  12. The method according to claim 1, characterized in that determining, based on the velocity vector of the moving object, the region where the moving object and the movable platform may collide comprises:
    determining a movement trajectory of the movable platform based on the velocity vector of the moving object and the velocity vector of the movable platform;
    determining, based on the movement trajectory, the region where the moving object and the movable platform may collide.
  13. The method according to claim 12, characterized in that determining, based on the movement trajectory, the region where the moving object and the movable platform may collide comprises:
    projecting the movement trajectory onto the depth map, and determining on the depth map the region where the moving object and the movable platform may collide;
    determining, based on the depth map, the three-dimensional coordinates of the potential collision region in the geodetic coordinate system.
  14. The method according to claim 12, characterized in that the method further comprises:
    displaying the movement trajectory of the movable platform.
  15. The method according to claim 13, characterized in that the method further comprises:
    displaying the depth map, and marking the potential collision region on the depth map.
  16. The method according to any one of claims 1-15, characterized in that controlling the movable platform to execute the obstacle avoidance strategy in the potential collision region comprises:
    controlling the movable platform to move, in the potential collision region, in the direction opposite to the current direction of movement.
  17. The method according to any one of claims 1-15, characterized in that controlling the movable platform to execute the obstacle avoidance strategy in the potential collision region comprises:
    adjusting the movement trajectory of the movable platform so that the movable platform detours around the potential collision region.
  18. The method according to any one of claims 1-15, characterized in that controlling the movable platform to execute the obstacle avoidance strategy in the potential collision region comprises:
    controlling the movable platform to stop moving for a preset length of time, so as to avoid the moving object.
  19. The method according to any one of claims 1-18, characterized in that the shooting device includes at least one camera.
  20. An obstacle avoidance device, characterized in that the obstacle avoidance device is arranged on a movable platform, the obstacle avoidance device comprising a processor and a shooting device, the processor being communicatively connected to the shooting device;
    the shooting device is configured to capture a depth map;
    the processor is configured to: acquire the depth map captured by the shooting device, identify a moving object based on the depth map, determine a velocity vector of the moving object, and determine, based on the velocity vector of the moving object, a region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the potential collision region.
  21. The obstacle avoidance device according to claim 20, characterized in that the processor is configured to:
    acquire optical flow vectors of target feature points on the depth map, the target feature points excluding feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.
  22. The obstacle avoidance device according to claim 21, characterized in that the processor is configured to: acquire the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
  23. The obstacle avoidance device according to claim 22, characterized in that the processor is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the shooting device carried by the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two image frames; and, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, filter out the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  24. The obstacle avoidance device according to claim 23, characterized in that the processor is configured to: based on the relative positions of a feature point in the two image frames, respectively determine a first offset of the relative position of the feature point in the later frame with respect to its relative position in the earlier frame, and a second offset of the relative position of the feature point in the earlier frame with respect to its relative position in the later frame; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
  25. The obstacle avoidance device according to claim 23, characterized in that the processor is configured to: determine, based on a preset second prior condition, an essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, wherein the second prior condition is the conditional relationship between the three-dimensional coordinates of feature points in the geodetic coordinate system in two image frames and the essential matrix; and, based on the essential matrix, filter out, using a random sample consensus algorithm, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  26. The obstacle avoidance device according to claim 23, characterized in that the processor is configured to: determine, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, the position and attitude of the shooting device when capturing the two image frames; and, based on the position and attitude of the shooting device when capturing the two image frames, filter out, using a random sample consensus algorithm, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  27. The obstacle avoidance device according to claim 21, characterized in that the processor is configured to: identify the moving object based on the optical flow vectors of the target feature points, depth information of the target feature points, and visual information of the target feature points, wherein the visual information includes colour and/or light intensity.
  28. The obstacle avoidance device according to claim 27, characterized in that the processor is configured to: cluster the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, wherein within the same optical flow vector group the direction deviation between optical flow vectors is smaller than a first preset threshold and the difference in length between optical flow vectors is smaller than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and the visual information of each of the target feature points.
  29. The obstacle avoidance device according to claim 28, characterized in that the processor is configured to: identify the moving object from the at least one optical flow vector group, using a seed filling algorithm, based on the depth information of each of the target feature points and the visual information of each feature point.
  30. The obstacle avoidance device according to claim 20, characterized in that the processor is configured to: determine the velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of image frames.
  31. The obstacle avoidance device according to claim 20, characterized in that the processor is configured to: determine a movement trajectory of the movable platform based on the velocity vector of the moving object and the velocity vector of the movable platform; and determine, based on the movement trajectory, the region where the moving object and the movable platform may collide.
  32. The obstacle avoidance device according to claim 31, characterized in that the processor is configured to: project the movement trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates of the potential collision region in the geodetic coordinate system.
  33. The obstacle avoidance device according to any one of claims 20-32, characterized in that the processor is configured to: control the movable platform to move, in the potential collision region, in the direction opposite to the current direction of movement.
  34. The obstacle avoidance device according to any one of claims 20-32, characterized in that the processor is configured to: adjust the movement trajectory of the movable platform so that the movable platform detours around the potential collision region.
  35. The obstacle avoidance device according to any one of claims 20-32, characterized in that the processor is configured to: control the movable platform to stop moving for a preset length of time, so as to avoid the moving object.
  36. The obstacle avoidance device according to any one of claims 20-35, characterized in that the shooting device includes at least one camera.
  37. A movable platform, characterized by comprising:
    a body;
    a power system, mounted on the body and configured to provide power for the movable platform;
    and the obstacle avoidance device according to any one of claims 20-36.
  38. The movable platform according to claim 37, characterized in that the movable platform includes any one of the following: an unmanned aerial vehicle, a car, VR glasses, AR glasses.
  39. An obstacle avoidance device, characterized in that the obstacle avoidance device is arranged on a ground station, the obstacle avoidance device comprising a processor and a communication interface, the processor being communicatively connected to the communication interface;
    the communication interface is configured to: acquire a depth map captured by a shooting device carried by a movable platform;
    the processor is configured to: identify a moving object based on the depth map, determine a velocity vector of the moving object, and determine, based on the velocity vector of the moving object, a region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the potential collision region.
  40. The obstacle avoidance device according to claim 39, characterized in that the processor is configured to: acquire optical flow vectors of target feature points on the depth map, the target feature points excluding feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.
  41. The obstacle avoidance device according to claim 40, characterized in that the processor is configured to: acquire the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
  42. The obstacle avoidance device according to claim 41, characterized in that the processor is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the shooting device carried by the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two image frames; and, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, filter out the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  43. The obstacle avoidance device according to claim 42, characterized in that the processor is configured to: based on the relative positions of a feature point in the two image frames, respectively determine a first offset of the relative position of the feature point in the later frame with respect to its relative position in the earlier frame, and a second offset of the relative position of the feature point in the earlier frame with respect to its relative position in the later frame; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
  44. The obstacle avoidance device according to claim 42, characterized in that the processor is configured to: determine, based on a preset second prior condition, an essential matrix corresponding to the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, wherein the second prior condition is the conditional relationship between the three-dimensional coordinates of feature points in the geodetic coordinate system in two image frames and the essential matrix; and, based on the essential matrix, filter out, using a random sample consensus algorithm, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  45. The obstacle avoidance device according to claim 42, characterized in that the processor is configured to: determine, based on the three-dimensional coordinates of the feature points in the geodetic coordinate system in the two image frames, the position and attitude of the shooting device when capturing the two image frames; and, based on the position and attitude of the shooting device when capturing the two image frames, filter out, using a random sample consensus algorithm, the optical flow vectors of feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
  46. The obstacle avoidance device according to claim 40, characterized in that the processor is configured to: identify the moving object based on the optical flow vectors of the target feature points, depth information of the target feature points, and visual information of the target feature points, wherein the visual information includes colour and/or light intensity.
  47. The obstacle avoidance device according to claim 46, characterized in that the processor is configured to: cluster the acquired optical flow vectors of the target feature points to obtain at least one optical flow vector group, wherein within the same optical flow vector group the direction deviation between optical flow vectors is smaller than a first preset threshold and the difference in length between optical flow vectors is smaller than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and the visual information of each of the target feature points.
  48. The obstacle avoidance device according to claim 47, characterized in that the processor is configured to: identify the moving object from the at least one optical flow vector group, using a seed filling algorithm, based on the depth information of each of the target feature points and the visual information of each feature point.
  49. The obstacle avoidance device according to claim 39, characterized in that the processor is configured to: determine the velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the geodetic coordinate system in a preset number of image frames.
  50. The obstacle avoidance device according to claim 39, characterized in that the processor is configured to: determine a movement trajectory of the movable platform based on the velocity vector of the moving object and the velocity vector of the movable platform; and determine, based on the movement trajectory, the region where the moving object and the movable platform may collide.
  51. The obstacle avoidance device according to claim 50, characterized in that the processor is configured to: project the movement trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates of the potential collision region in the geodetic coordinate system.
  52. The obstacle avoidance device according to claim 50, characterized in that the obstacle avoidance device further comprises a display component communicatively connected to the processor;
    the display component is configured to: display the movement trajectory of the movable platform.
  53. The obstacle avoidance device according to claim 52, characterized in that the display component is configured to: display the depth map, and mark the potential collision region on the depth map.
  54. The obstacle avoidance device according to any one of claims 39-53, characterized in that the processor is configured to: control the movable platform to move, in the potential collision region, in the direction opposite to the current direction of movement.
  55. The obstacle avoidance device according to any one of claims 39-53, characterized in that the processor is configured to: adjust the movement trajectory of the movable platform so that the movable platform detours around the potential collision region.
  56. The obstacle avoidance device according to any one of claims 39-53, characterized in that the processor is configured to: control the movable platform to stop moving for a preset length of time, so as to avoid the moving object.
  57. The obstacle avoidance device according to any one of claims 39-56, characterized in that the shooting device includes at least one camera.
PCT/CN2017/120249 2017-12-29 2017-12-29 Obstacle avoidance method and device and movable platform WO2019127518A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780029125.9A 2017-12-29 2017-12-29 Obstacle avoidance method and device and movable platform
PCT/CN2017/120249 WO2019127518A1 (zh) 2017-12-29 2017-12-29 Obstacle avoidance method and device and movable platform
US16/910,890 US20210103299A1 (en) 2017-12-29 2020-06-24 Obstacle avoidance method and device and movable platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/120249 WO2019127518A1 (zh) 2017-12-29 2017-12-29 Obstacle avoidance method and device and movable platform

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/910,890 Continuation US20210103299A1 (en) 2017-12-29 2020-06-24 Obstacle avoidance method and device and movable platform

Publications (1)

Publication Number Publication Date
WO2019127518A1 true WO2019127518A1 (zh) 2019-07-04

Family

ID=64948918

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120249 WO2019127518A1 (zh) 2017-12-29 2017-12-29 Obstacle avoidance method and device and movable platform

Country Status (3)

Country Link
US (1) US20210103299A1 (en)
CN (1) CN109196556A (zh)
WO (1) WO2019127518A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10460511B2 (en) * 2016-09-23 2019-10-29 Blue Vision Labs UK Limited Method and system for creating a virtual 3D model
CN111247557A (zh) * 2019-04-23 2020-06-05 深圳市大疆创新科技有限公司 Method and system for moving target object detection, and movable platform
CN111656294A (zh) * 2019-05-31 2020-09-11 深圳市大疆创新科技有限公司 Control method for movable platform, control terminal, and movable platform
CN111338382B (zh) * 2020-04-15 2021-04-06 北京航空航天大学 UAV path planning method guided by safety situation
US11545039B2 (en) * 2020-07-28 2023-01-03 Ford Global Technologies, Llc Systems and methods for controlling an intersection of a route of an unmanned aerial vehicle
KR20220090597A (ko) * 2020-12-22 2022-06-30 한국전자기술연구원 Position tracking device and method using feature matching
CN113408353B (zh) * 2021-05-18 2023-04-07 杭州电子科技大学 RGB-D-based real-time obstacle avoidance system
CN113657164A (zh) * 2021-07-15 2021-11-16 美智纵横科技有限责任公司 Method and device for calibrating a target object, cleaning apparatus, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102707724A (zh) * 2012-06-05 2012-10-03 清华大学 Visual positioning and obstacle avoidance method and system for an unmanned aerial vehicle
CN105974938A (zh) * 2016-06-16 2016-09-28 零度智控(北京)智能科技有限公司 Obstacle avoidance method and device, carrier, and unmanned aerial vehicle
CN106096559A (zh) * 2016-06-16 2016-11-09 深圳零度智能机器人科技有限公司 Obstacle detection method and system, and moving object
CN106127788A (zh) * 2016-07-04 2016-11-16 触景无限科技(北京)有限公司 Visual obstacle avoidance method and device
CN106527468A (zh) * 2016-12-26 2017-03-22 德阳科蚁科技有限责任公司 UAV obstacle avoidance control method and system, and unmanned aerial vehicle
WO2017177533A1 (zh) * 2016-04-12 2017-10-19 深圳市龙云创新航空科技有限公司 Lidar-based micro UAV control method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9558584B1 (en) * 2013-07-29 2017-01-31 Google Inc. 3D position estimation of objects from a monocular camera using a set of known 3D points on an underlying surface
CN104881881B (zh) * 2014-02-27 2018-04-10 株式会社理光 Moving object representation method and device
CN105931275A (zh) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Stable motion tracking method and device based on mobile-end monocular camera and IMU fusion
CN106920259B (zh) * 2017-02-28 2019-12-06 武汉工程大学 Positioning method and system


Also Published As

Publication number Publication date
US20210103299A1 (en) 2021-04-08
CN109196556A (zh) 2019-01-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17936507; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17936507; Country of ref document: EP; Kind code of ref document: A1)