WO2019126930A1 - Distance measurement method and apparatus, and unmanned aerial vehicle - Google Patents

Distance measurement method and apparatus, and unmanned aerial vehicle

Info

Publication number
WO2019126930A1
WO2019126930A1 (application PCT/CN2017/118266)
Authority
WO
WIPO (PCT)
Prior art keywords
target
drone
distance
frames
determining
Prior art date
Application number
PCT/CN2017/118266
Other languages
English (en)
French (fr)
Inventor
张柯
臧波
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司
Priority to CN201780002607.5A (patent CN108140245B)
Priority to PCT/CN2017/118266 (publication WO2019126930A1)
Priority to EP17832901.7A (patent EP3531375B1)
Priority to US15/886,186 (patent US10621456B2)
Publication of WO2019126930A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C 39/00 Aircraft not otherwise provided for
    • B64C 39/02 Aircraft not otherwise provided for characterised by special use
    • B64C 39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/0094 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10 Simultaneous control of position or course in three dimensions
    • G05D 1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D 1/102 Simultaneous control of position or course in three dimensions specially adapted for aircraft specially adapted for vertical take-off of aircraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/182 Network patterns, e.g. roads or rivers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 2101/00 UAVs specially adapted for particular uses or applications
    • B64U 2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 2201/00 UAVs characterised by their flight controls
    • B64U 2201/10 UAVs characterised by their flight controls autonomous, i.e. by navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS]

Definitions

  • the present invention relates to the field of drone technology, and in particular, to a distance measuring method, a device, and a drone using the same.
  • the monocular stereo-vision obstacle-avoidance scheme is generally used for long-range obstacle avoidance: the position and attitude changes of the drone over a flight time interval and the corresponding image information are obtained, and the corrected and aligned images are subjected to Stereo Matching to obtain the depth of each point in the image.
  • that is, when a target is present within a certain distance ahead, the image changes as the drone flies; by stereo matching the information in two successive frames, the instantaneous position of the target relative to the drone can be obtained, enabling early warning and execution of obstacle-avoidance strategies.
  • to solve its technical problem, the present invention provides a ranging method, the method comprising:
  • Region of Interest (ROI)
  • the determining that the drone cannot determine the distance between the drone and the target by stereo matching includes:
  • the determining whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition includes:
  • S_cell is the size of the cell
  • the ratio of the size of the second region of interest to the size of the first region of interest is greater than 1.
  • the method further includes:
  • the determining one of the targets as the detection object includes:
  • the method further includes:
  • the target is a stationary target, and the distance between the drone and the target is determined according to the foregrounds respectively extracted from the two adjacent frames of images.
  • S is the distance the drone flies between capturing the two adjacent frames of images
  • f is the focal length of the camera of the drone
  • h1 is the width of the target on the image plane of the camera when the drone captures one of the frames
  • h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  • the determining the distance according to h1, h2, f and S includes:
  • the target is a dynamic target
  • the distance between the drone and the target is determined according to the foregrounds respectively extracted from the two adjacent frames of images.
  • f is the focal length of the camera of the drone
  • h1 is the width of the target on the image plane of the camera when the drone captures one of the frames
  • h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  • the determining the distance according to at least h1, h2, f and S includes:
  • the method further includes:
  • the calculating the distance S that the target moves while the drone captures the two adjacent frames of images includes:
  • the speed model is: v = k·t²/2 + b·t + c
  • t is the motion time of the target
  • k, b, and c are constants.
  • k, b and c are determined from the at least three frames of images of the target.
  • the method further includes:
  • the flight attitude of the drone is adjusted such that the center of the image falls on a straight line in which the flight direction of the drone is located.
  • the present invention provides a distance measuring device for solving the technical problem thereof, the device comprising:
  • a target acquisition module configured to acquire any two adjacent frames of images captured by the drone
  • Image processing module for:
  • Region of Interest (ROI)
  • the determining module is used to:
  • the determining module is configured to:
  • the determining module is specifically configured to:
  • S_cell is the size of the cell
  • the ratio of the size of the second region of interest to the size of the first region of interest is greater than one.
  • the apparatus further includes a determining module, wherein the determining module is configured to:
  • the determining module is used to:
  • the determining module is specifically configured to:
  • the determining module is further configured to determine that the target with the largest edge scale change in the foreground is the detection object.
  • the target is a stationary target
  • the determining module is configured to:
  • S is the distance the drone flies between capturing the two adjacent frames of images
  • f is the focal length of the camera of the drone
  • h1 is the width of the target on the image plane of the camera when the drone captures one of the frames
  • h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  • the determining module is specifically configured to:
  • the target is a dynamic target
  • the determining module is configured to:
  • f is the focal length of the imaging device of the drone
  • h1 is the width of the target on the image plane of the imaging device when the drone captures one of the frames
  • h2 is the width of the target on the image plane of the imaging device when the drone captures the other frame.
  • the determining module is specifically configured to:
  • the determining module is further configured to:
  • the determining module is further configured to:
  • the speed model is: v = k·t²/2 + b·t + c
  • t is the motion time of the target
  • k, b, and c are constants.
  • k, b and c are determined from the at least three frames of images of the target.
  • the apparatus further includes an adjustment module, wherein the adjustment module is configured to adjust the flight attitude of the drone such that the center of the image falls on the straight line along the flight direction of the drone.
  • the present invention provides a drone for solving the technical problem thereof, including:
  • a processor disposed within the housing or arm;
  • a memory communicatively coupled to the processor, the memory being disposed within the housing or arm;
  • the memory stores instructions executable by the processor, and when the processor executes the instructions, implements a ranging method as described above.
  • the present invention provides a computer-readable storage medium storing computer-executable instructions; when the computer-executable instructions are executed by a drone, the drone is caused to perform the ranging method described above.
  • the ranging method, device and drone provided by the embodiments of the present invention obtain the change in image scale by performing foreground-background segmentation on two adjacent frames of images and extracting edge features within an expanded region of interest. This change in image scale enables long-range obstacle avoidance under extreme conditions, and solves the problem that stereo matching is inaccurate or impossible in extreme cases such as no texture, low texture, or dense repetitive texture.
  • FIG. 1 is a schematic structural view of an embodiment of a drone according to the present invention.
  • FIG. 2 is a schematic diagram of one of the images taken by the drone shown in FIG. 1, wherein the image includes a first region of interest;
  • FIG. 3a is a schematic diagram showing the target in the image captured by the drone shown in FIG. 1 exhibiting no texture on a foggy day;
  • FIG. 3b is a schematic diagram showing the target in the image captured by the drone shown in FIG. 1 exhibiting low texture in a low brightness environment;
  • FIG. 3c is a schematic diagram showing the target in the image captured by the drone shown in FIG. 1 showing a dense repeating texture in a long distance;
  • FIG. 4 is a flow chart of an embodiment in which the drone shown in FIG. 1 determines that the relative distance between the drone and the target in the flight direction cannot be determined by stereo matching;
  • FIG. 5 is a flow chart of an embodiment of determining, in FIG. 4, whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition;
  • Figure 6 is a schematic structural view of cells and blocks mentioned in the flow chart shown in Figure 4;
  • FIG. 7 is a schematic diagram of the first region of interest in FIG. 2 being enlarged to form a second region of interest;
  • FIG. 8 is a flow chart of one embodiment of the unmanned aerial vehicle determining detection object shown in FIG. 1 when there are at least two targets in the current scene;
  • FIG. 9 is a geometric relationship diagram of the unmanned aerial vehicle shown in FIG. 1 capturing adjacent two frames of images when the target is a static target;
  • FIG. 10 is a geometric relationship diagram of the unmanned aerial vehicle shown in FIG. 1 capturing adjacent two frames of images when the target is a dynamic target;
  • FIG. 11 is a flow chart of an embodiment of a ranging method of the present invention;
  • FIG. 12 is a flow chart of another embodiment of a ranging method of the present invention;
  • FIG. 13 is a structural block diagram of a distance measuring device of the present invention.
  • the method, device and drone provided by the present invention can solve the problem that stereo matching is inaccurate or impossible in extreme cases where the target in front of the drone has no texture, low texture, or dense repetitive texture, which would otherwise prevent the drone from achieving long-range obstacle avoidance.
  • FIG. 1 is a schematic structural diagram of a drone 10 according to an embodiment of the present invention.
  • the drone 10 includes a housing 11, arms 12 connected to the housing 11, a power unit 13 disposed at one end of each arm 12, a gimbal 15 connected to the housing 11, an imaging device 14 mounted on the gimbal 15, and a processor 16 and a memory 17 disposed within the housing 11.
  • the number of the arms 12 is four, that is, the aircraft is a quadrotor. In other possible embodiments, the number of the arms 12 may also be 3, 6, 8, 10, and the like.
  • the drone 10 can also be other movable objects such as manned aircraft, model aircraft, unmanned airships, fixed-wing drones, and unmanned hot air balloons.
  • the power unit 13 includes a motor 132 disposed at one end of the arm 12 and a propeller 131 coupled to the rotating shaft of the motor 132.
  • the rotating shaft of the motor 132 rotates to drive the propeller 131 to rotate to provide lift to the drone 10.
  • the gimbal 15 serves to dampen or even eliminate the vibration transmitted from the power unit 13 to the imaging device 14, ensuring that the imaging device 14 can capture stable and clear images or video.
  • the imaging device 14 may be a high-definition camera, an action camera or the like for capturing images. In an embodiment of the invention, the imaging device 14 supports autonomous optical zoom.
  • the imaging device 14 may be mounted directly on the drone 10, or mounted on the drone 10 via the gimbal 15 as shown in this embodiment; the gimbal 15 allows the imaging device 14 to rotate about at least one axis relative to the drone 10.
  • the processor 16 may include a plurality of functional units, such as a flight control unit for controlling the flight attitude of the aircraft, a target recognition unit for identifying a target, a tracking unit for tracking a specific target, a navigation unit (for example, GPS (Global Positioning System) or BeiDou) for navigating the aircraft, and a data processing unit for processing environmental information acquired by related airborne devices (for example, the imaging device 14).
  • the imaging device 14 first acquires any two adjacent frames of images captured by the drone 10, and the processor 16 determines a first Region of Interest (ROI) in each of the two adjacent frames of images.
  • the region of interest is an area to be processed that is outlined in the processed image in the form of a box, circle, ellipse or irregular polygon.
  • the region of interest typically includes an image of at least a portion of the target. FIG. 2 shows an image 140 acquired by the imaging device 14.
  • the area within the box represents the first region of interest ROI, and the first region of interest ROI contains at least a partial image of the target (the hatched portion in FIG. 2).
  • Stereo Matching becomes unavailable when the drone 10 is in an extreme case such as the foggy no-texture target of FIG. 3a, the low-brightness low-texture target of FIG. 3b, or the distant densely repetitive-textured target of FIG. 3c.
  • stereo matching uses the pixel information of the images: by finding the same pixel points in two images, a disparity map describing the depth of each point is obtained from the corresponding image positions, and the relative distance between the target and the drone is then obtained from the corresponding position change.
  • the processor 16 first needs to determine that the drone 10 is in one of the extreme cases described above. As shown in FIGS. 4 to 6, in an embodiment of the invention, the processor 16 determines as follows whether the drone 10 is in such a case, i.e., a case where the distance between the drone 10 and the target in the flight direction of the drone 10 cannot be determined by stereo matching:
  • n×n cells 50 are selected around the cell 50 as one block, and the gray values g_1, g_2, …, g_{n×n} of the n×n cells 50 in the block are respectively calculated.
  • n is 4; in other possible embodiments, n may also be 8, 10, 12, and the like.
  • by selecting cells and blocks, the processor 16 can compare local gray-value differences within the first region of interest.
  • the step further includes:
  • S_cell is the size of the cell.
  • a preset threshold λ may be given according to experience or the actual application.
  • if the ratio in step S421 is less than λ, the local texture difference of the target is small, and the drone needs to enable the ranging method of the present invention.
  • if the ratio is greater than λ, the local texture difference of the target is large, and the existing stereo matching method can be used to obtain the relative distance between the drone and the target.
  • through these steps, the processor 16 can determine whether the local texture difference of the target is small enough, and thus whether the ranging method provided by the embodiment of the present invention needs to be enabled.
  • the processor 16 then needs to expand the first region of interest of each of the two adjacent frames of images to form a second region of interest.
  • the area within the box represents the first region of interest ROI, which contains at least a partial image of the target (the hatched portion in FIG. 2).
  • the second region of interest ROI+ is expanded by a certain range compared to the first region of interest ROI. This is because image matching within the ROI only requires partial image information of the foreground (i.e., the target);
  • in images captured from a long distance, however, the edge features of the foreground tend to be small.
  • to robustly obtain the edge features of the foreground during the approaching flight, the edge features must remain within the region of interest; the region is therefore expanded to a certain extent, denoted as the second region of interest ROI+.
  • the embodiment of the present invention also introduces an expansion factor σ, which represents the ratio of the size of the second region of interest ROI+ to the size of the ROI, where σ > 1; the specific value is determined by the detection range required for obstacle avoidance and by the computing power of the drone itself.
  • the processor 16 After determining the second region of interest of each frame image in the adjacent two frames of images, the processor 16 further needs to perform a segmentation extraction operation on the foreground in the second region of interest, and determine whether at least the extracted foreground exists in the foreground. Two goals. If so, it is necessary to determine one of the targets as the detection object, and then determine the distance between the drone 10 and the detection object based on the foreground extracted from the adjacent two frames of images.
  • the processor 16 may determine the detection object according to the following steps:
  • the target with the highest upper edge in the foreground is the detection object (see target C in FIG. 6); the drone 10 generally avoids obstacles by climbing, so selecting the target with the highest upper edge yields the safest obstacle-avoidance result.
  • otherwise, the target with the largest edge-scale change is determined as the detection object.
  • after the processor 16 determines the detection object, the relative distance between the target and the drone 10 needs to be further calculated.
  • the processor 16 can calculate the relative distance between the target and the drone 10 based on the geometric relationship between the adjacent two frames of images:
  • the processor 16 can give an early warning according to the distance and implement a corresponding obstacle-avoidance strategy.
  • the relative distance between the drone 10 and the target can be measured by the following method:
  • the present invention proposes a strategy of continuously acquiring at least three frames of images; in one embodiment of the present invention, a distance detection is performed once after every four frames.
  • the drone is kept stationary during the detection; the moving distance of the target in this phase is predicted from its speed over the preceding four frames, the tracking algorithm continues to run, the target sizes in the frames before and after the detection are stably obtained, and the relative distance between the drone and the target is then obtained from the geometric relationship.
  • with S known, the relative distance H between the drone and the target and the width E of the target can be obtained by the formulas below.
  • the moving distance S of the target during the detection phase is obtained by the following method:
  • t is the motion time of the target
  • k, b, and c are constants.
  • the above method enables a continuous solution of the relative distance between the target and the drone.
  • the embodiment of the invention obtains the change in image scale by performing foreground-background segmentation on two adjacent frames of images and extracting edge features within the expanded region of interest; this change in image scale enables long-range obstacle avoidance under extreme conditions.
  • the problem of inaccurate or impossible stereo matching in extreme cases such as no texture, low texture or dense repetitive texture is thereby solved.
  • the relative distance between a moving target and the drone can also be obtained in real time according to the ranging method of the present invention.
  • an embodiment of the present invention further provides a ranging method, where the method includes:
  • the drone should fly straight and level, but attitude changes caused by wind disturbance or instability of the control system are inevitable.
  • such attitude changes may cause the centers of some frames not to fall on the straight line along the flight direction of the drone.
  • since the present invention focuses on safety in the flight direction of the drone, the images must be corrected and aligned at this time so that the center of each image falls on the straight line along the flight direction of the drone, making the images satisfy the Epipolar Constraint.
  • the step further includes:
  • n×n cells 50 are selected around the cell 50 as one block, and the gray values g_1, g_2, …, g_{n×n} of the n×n cells 50 in the block are respectively calculated.
  • n is 4; in other possible embodiments, n may also be 8, 10, 12, and the like.
  • the processor 16 can compare the gray-value differences of a particular portion of the first region of interest.
  • step S42 further includes:
  • S_cell is the size of the cell.
  • a preset threshold λ may be given according to experience or the actual application.
  • if the ratio in step S421 is less than λ, the local texture difference of the target is small, and the drone needs to enable the ranging method of the present invention.
  • if the ratio is greater than λ, the local texture difference of the target is large, and the existing stereo matching method can be used to obtain the relative distance between the drone and the target.
  • the second region of interest ROI+ is expanded by a certain range compared to the first region of interest ROI; image matching within the ROI only requires partial image information of the foreground (i.e., the target), but in images captured from a long distance the edge features of the foreground tend to be small. To robustly obtain the edge features of the foreground during the approaching flight, the edge features must remain within the region of interest, so the region is expanded to a certain extent, denoted as the second region of interest ROI+.
  • the embodiment of the present invention also introduces an expansion factor σ, which represents the ratio of the size of the second region of interest ROI+ to the size of the ROI, where σ > 1; the specific value is determined by the detection range required for obstacle avoidance and by the computing power of the drone itself.
  • step S117: determine whether at least two targets exist in the foreground. If yes, proceed to step S118; if no, proceed to step S121; when there is only one target in the current scene, that target is directly determined as the detection object.
  • step S118: determine whether the scale changes of the at least two targets in the foreground are the same; if yes, proceed to step S119; if no, proceed to step S120.
  • the target with the highest upper edge in the foreground is the detection object (see target C in FIG. 6); the drone 10 generally avoids obstacles by climbing, so selecting the target with the highest upper edge yields the safest obstacle-avoidance result.
  • otherwise, the target with the largest edge-scale change is determined as the detection object.
  • the relative distance between the target and the drone can be calculated from the geometric relationship between the two adjacent frames of images:
  • after obtaining the relative distance between the target and the drone, the drone can give an early warning according to the distance and implement a corresponding obstacle-avoidance strategy.
  • the relative distance between the drone and the target can be measured by the following method:
  • the present invention proposes a strategy of continuously acquiring at least three frames of images; in one embodiment of the present invention, a distance detection is performed once after every four frames.
  • the drone is kept stationary during the detection; the moving distance of the target in this phase is predicted from its speed over the preceding four frames, the tracking algorithm continues to run, the target sizes in the frames before and after the detection are stably obtained, and the relative distance between the drone and the target is then obtained from the geometric relationship.
  • with S known, the relative distance H between the drone and the target and the width E of the target can be obtained by the formulas below.
  • the moving distance S of the target during the detection phase is obtained by the following method:
  • t is the motion time of the target
  • k, b, and c are constants.
  • the above method enables continuous solving of the relative distance between the target and the drone.
  • another embodiment of the present invention further provides a ranging method, where the method includes:
  • the present invention further provides a distance measuring device 130, the device 130 comprising:
  • an obtaining module 132, configured to acquire any two adjacent frames of images captured by the drone
  • the image processing module 134 is configured to:
  • Region of Interest (ROI)
  • the determining module 133 is configured to:
  • the determining module 133 is configured to:
  • the determining module 133 is specifically configured to:
  • S_cell is the size of the cell
  • the ratio of the size of the second region of interest to the size of the first region of interest is greater than one.
  • the device 130 further includes a determining module 135, and the determining module 135 is configured to:
  • the determining module 133 is configured to:
  • the determining module 135 is specifically configured to:
  • the determining module 135 is further configured to determine the target with the largest edge-scale change in the foreground as the detection object.
  • the target is a stationary target
  • the determining module 133 is configured to:
  • S is the distance the drone flies between capturing the two adjacent frames of images
  • f is the focal length of the camera of the drone
  • h1 is the width of the target on the image plane of the camera when the drone captures one of the frames
  • h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  • the determining module 133 is specifically configured to:
  • the target is a dynamic target
  • the determining module 133 is configured to:
  • f is the focal length of the imaging device of the drone
  • h1 is the width of the target on the image plane of the imaging device when the drone captures one of the frames
  • h2 is the width of the target on the image plane of the imaging device when the drone captures the other frame.
  • the determining module 133 is specifically configured to:
  • the determining module 133 is further configured to:
  • the determining module 133 is further configured to:
  • the speed model is:
  • t is the motion time of the target
  • k, b, and c are constants.
  • the k, b, c are determined by the image of the target of the at least three frames.
  • the apparatus 130 further includes an adjustment module 131, wherein the adjustment module 131 is configured to adjust the flight attitude of the drone such that the center of the image falls on the straight line along the flight direction of the drone.
  • the present invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor is caused to perform the method described in the embodiments shown in FIG. 11 or FIG. 12.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The present invention discloses a distance measurement method and apparatus, and an unmanned aerial vehicle using the method and apparatus. The distance measurement method, apparatus and unmanned aerial vehicle provided by the embodiments of the present invention obtain the change in image scale by performing foreground-background segmentation on two adjacent frames of images and extracting edge features within an expanded region of interest. This change in image scale enables long-range obstacle avoidance under extreme conditions, and solves the problem that stereo matching is inaccurate or impossible in extreme cases such as no texture, low texture, or dense repetitive texture.

Description

Distance measurement method and apparatus, and unmanned aerial vehicle

Technical Field

The present invention relates to the field of unmanned aerial vehicle (drone) technology, and in particular to a distance measurement method and apparatus, and a drone using the method or apparatus.

Background

At present, during the autonomous return flight of a drone, a monocular stereo-vision obstacle-avoidance scheme is generally used for long-range obstacle avoidance: the position and attitude changes of the drone over a flight time interval and the corresponding image information are obtained, and Stereo Matching is performed on the corrected and aligned images to obtain the depth of each point in the image. That is, when a target is present within a certain distance ahead, the image changes as the drone flies; by stereo matching the information of each image point across two successive frames, the instantaneous position of the target relative to the drone can be obtained, enabling early warning and execution of an obstacle-avoidance strategy.

It is worth noting that although stereo matching algorithms are quite mature, all of them, whether local matching algorithms such as Sum of Squared Differences (SSD), Sum of Absolute Differences (SAD) and Census, or global matching algorithms such as dynamic programming, belief propagation and simulated annealing, must use the pixel information of the images: the same pixel points are sought in the two images, a disparity map describing the depth of each point is derived from the corresponding image positions, and the actual distance of the target relative to the drone is then obtained from the corresponding position change. Under this matching principle, the difference between an image pixel and its neighboring pixels is a decisive factor for matching accuracy and even for whether matching can succeed at all. In some situations, such as the extreme cases of low or even no texture on the target caused by weather or brightness, or of dense repetitive texture, this long-range obstacle-avoidance method is prone to mismatches and becomes unusable.
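To make the matching principle above concrete, the following is a minimal sketch of a local SSD block matcher on a rectified image pair; the function name, window size and array-based interface are illustrative assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def ssd_disparity(left, right, y, x, win=5, max_d=64):
    """Toy SSD matcher: for pixel (y, x) of the left image, find the
    horizontal shift d that minimises the sum of squared differences
    over a small window. Rectified grayscale float arrays are assumed,
    and the window is assumed to fit inside both images."""
    r = win // 2
    patch = left[y - r:y + r + 1, x - r:x + r + 1]
    costs = []
    for d in range(min(max_d, x - r) + 1):
        cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
        costs.append(float(((patch - cand) ** 2).sum()))
    return int(np.argmin(costs))  # disparity; depth is f * baseline / disparity
```

On a texture-free or repetitive-texture target the cost curve is flat or has several equal minima, which is exactly the failure mode the texture check described later in this document is designed to detect.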
Summary of the Invention

In a first aspect, to solve its technical problem, the present invention provides a distance measurement method, the method comprising:

acquiring two adjacent frames of images captured by the drone;

determining a first Region of Interest (ROI) of each of the two adjacent frames of images;

determining that the drone cannot determine, by stereo matching, the distance between the drone and a target in the flight direction of the drone;

expanding the first region of interest of each of the two adjacent frames of images to form a second region of interest;

extracting the foreground in the second region of interest from each of the two adjacent frames of images; and determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images.

In an embodiment of the present invention, the determining that the drone cannot determine the distance between the drone and the target by stereo matching includes:

selecting n×n pixel units within the first region of interest as one cell, and calculating the sum g_0 of the gray values of the n×n pixel units;

selecting n×n cells around the cell as one block, and respectively calculating the gray values g_1, g_2, …, g_{n×n} of the n×n cells in the block;

determining whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition;

if so, determining that the drone cannot determine the distance between the drone and the target by stereo matching.

In an embodiment of the present invention, the determining whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition includes:

calculating the ratio of the sum of the differences between the gray values of all the cells in the block and g_0 to the cell size:

r = [(g_1 − g_0) + (g_2 − g_0) + … + (g_{n×n} − g_0)] / S_cell

where S_cell is the size of the cell;

determining whether the ratio is within a preset range;

if so, determining that the drone cannot determine the distance between the drone and the target by stereo matching.

In an embodiment of the present invention, the ratio of the size of the second region of interest to the size of the first region of interest is greater than 1.

In an embodiment of the present invention, the method further includes:

determining whether at least two targets exist in the foreground;

if so, determining one of the targets as the detection object;

then, the determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images includes:

determining the distance between the drone and the detection object according to the foregrounds respectively extracted from the two adjacent frames of images.

In an embodiment of the present invention, the determining one of the targets as the detection object includes:

determining whether the scale changes of the at least two targets in the foreground are the same;

if so, determining the target with the highest upper edge in the foreground as the detection object.

In an embodiment of the present invention, the method further includes:

if the scale changes of the at least two targets in the foreground are different, determining the target with the largest edge-scale change in the foreground as the detection object.

In an embodiment of the present invention, the target is a stationary target, and the determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images includes:

determining the distance according to at least h1, h2, f and S;

where S is the distance the drone flies between capturing the two adjacent frames of images, f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.

In an embodiment of the present invention, the determining the distance according to h1, h2, f and S includes:

calculating the distance H between the drone and the target according to the following formula:

H = h1·S / (h2 − h1)

In an embodiment of the present invention, the target is a dynamic target, and the determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images includes:

calculating the distance S that the target moves while the drone captures the two adjacent frames of images;

determining the distance according to at least h1, h2, f and S;

where f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.

In an embodiment of the present invention, the determining the distance according to at least h1, h2, f and S includes:

calculating the distance H between the drone and the target according to the following formula:

H = h1·S / (h2 − h1)

In an embodiment of the present invention, the method further includes:

calculating the width E of the target according to the following formula:

E = h1·h2·S / (f·(h2 − h1))

In an embodiment of the present invention, the calculating the distance S that the target moves while the drone captures the two adjacent frames of images includes:

acquiring at least three frames of images of the target captured by the drone;

obtaining, from the at least three frames of images of the target, a speed model describing the motion of the target;

calculating, according to the speed model, the distance S that the target moves while the drone captures the two adjacent frames of images.

In an embodiment of the present invention, the speed model is:

v = k·t²/2 + b·t + c

where t is the motion time of the target, and k, b and c are constants.

In an embodiment of the present invention, k, b and c are determined from the at least three frames of images of the target.

In an embodiment of the present invention, the method further includes:

adjusting the flight attitude of the drone so that the center of the image falls on the straight line along the flight direction of the drone.
In a second aspect, to solve its technical problem, the present invention provides a distance measurement device, the device comprising:

a target acquisition module, configured to acquire any two adjacent frames of images captured by the drone;

an image processing module, configured to:

determine the Region of Interest (ROI) of each of the any two adjacent frames of images;

determine, from the regions of interest, the expanded region of interest of each of the any two adjacent frames of images;

segment and extract the foreground in each expanded region of interest;

a determining module, configured to:

determine that the drone cannot determine, by stereo matching, the distance between the drone and the target in the flight direction of the drone; and

determine the distance between the drone and the target in the flight direction of the drone according to the foregrounds of the any two adjacent frames of images.

In an embodiment of the present invention, the determining module is configured to:

select n×n pixel units within the region of interest as one cell, and calculate the sum g_0 of the gray values of the n×n pixel units;

select n×n cells around the cell as one block, and respectively calculate the gray values g_1, g_2, …, g_{n×n} of the n×n cells in the block;

determine whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition;

if so, determine that the drone cannot determine the distance between the drone and the target by stereo matching.

In an embodiment of the present invention, the determining module is specifically configured to:

calculate the ratio of the sum of the differences between the gray values of all the cells in the block and g_0 to the cell size:

r = [(g_1 − g_0) + (g_2 − g_0) + … + (g_{n×n} − g_0)] / S_cell

where S_cell is the size of the cell;

determine whether the ratio is within a preset range;

if so, determine that the drone cannot determine the distance between the drone and the target by stereo matching.

In an embodiment of the present invention, the ratio of the size of the second region of interest to the size of the first region of interest is greater than 1.

In an embodiment of the present invention, the device further includes a judging module, the judging module being configured to:

determine whether at least two targets exist in the foreground;

if so, determine one of the targets as the detection object;

then, the determining module is configured to:

determine the distance between the drone and the detection object according to the foregrounds respectively extracted from the two adjacent frames of images.

In an embodiment of the present invention, the judging module is specifically configured to:

determine whether the scale changes of the at least two targets in the foreground are the same;

if so, determine the target with the highest upper edge in the foreground as the detection object.

In an embodiment of the present invention, if the scale changes of the targets in the foreground are not the same, the judging module is further configured to determine the target with the largest edge-scale change in the foreground as the detection object.

In an embodiment of the present invention, the target is a stationary target, and the determining module is configured to:

determine the distance according to at least h1, h2, f and S;

where S is the distance the drone flies between capturing the two adjacent frames of images, f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.

In an embodiment of the present invention, the determining module is specifically configured to:

calculate the distance H between the drone and the target in the flight direction of the drone:

H = h1·S / (h2 − h1)

In an embodiment of the present invention, the target is a dynamic target, and the determining module is configured to:

calculate the distance S that the target moves while the drone captures the two adjacent frames of images;

determine the distance according to at least h1, h2, f and S;

where f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.

In an embodiment of the present invention, the determining module is specifically configured to:

calculate the distance H between the drone and the target in the flight direction of the drone:

H = h1·S / (h2 − h1)

In an embodiment of the present invention, the determining module is further configured to:

calculate the width E of the target according to the following formula:

E = h1·h2·S / (f·(h2 − h1))

In an embodiment of the present invention, the determining module is further configured to:

acquire at least three frames of images of the target captured by the drone;

obtain, from the at least three frames of images of the target, a speed model describing the motion of the target;

calculate, according to the speed model, the distance S that the target moves while the drone captures the two adjacent frames of images.

In an embodiment of the present invention, the speed model is:

v = k·t²/2 + b·t + c

where t is the motion time of the target, and k, b and c are constants.

In an embodiment of the present invention, k, b and c are determined from the at least three frames of images of the target.

In an embodiment of the present invention, the device further includes an adjustment module, the adjustment module being configured to adjust the flight attitude of the drone so that the center of the image falls on the straight line along the flight direction of the drone.
In a third aspect, to solve its technical problem, the present invention further provides a drone, comprising:

a housing;

arms connected to the housing;

a processor disposed within the housing or the arms; and

a memory communicatively connected to the processor, the memory being disposed within the housing or the arms; wherein

the memory stores instructions executable by the processor, and when the processor executes the instructions, the distance measurement method described above is implemented.

In a fourth aspect, to solve its technical problem, the present invention further provides a computer-readable storage medium storing computer-executable instructions; when the computer-executable instructions are executed by a drone, the drone is caused to perform the distance measurement method described above.

The distance measurement method, device and drone provided by the embodiments of the present invention obtain the change in image scale by performing foreground-background segmentation on two adjacent frames of images and extracting edge features within an expanded region of interest. This change in image scale enables long-range obstacle avoidance under extreme conditions, and solves the problem that stereo matching is inaccurate or impossible in extreme cases such as no texture, low texture, or dense repetitive texture.
Brief Description of the Drawings

One or more embodiments are exemplarily described with reference to the corresponding figures; these exemplary descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the figures denote similar elements, and unless otherwise stated, the figures are not drawn to scale.

FIG. 1 is a schematic structural diagram of an embodiment of a drone of the present invention;

FIG. 2 is a schematic diagram of one image captured by the drone shown in FIG. 1, where the image contains a first region of interest;

FIG. 3a is a schematic diagram of a target exhibiting no texture on a foggy day in an image captured by the drone shown in FIG. 1;

FIG. 3b is a schematic diagram of a target exhibiting low texture in a low-brightness environment in an image captured by the drone shown in FIG. 1;

FIG. 3c is a schematic diagram of a target exhibiting dense repetitive texture at a long distance in an image captured by the drone shown in FIG. 1;

FIG. 4 is a flow chart of an embodiment in which the drone shown in FIG. 1 determines that the relative distance between the drone and a target in its flight direction cannot be determined by stereo matching;

FIG. 5 is a flow chart of an embodiment of determining, in the flow chart of FIG. 4, whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition;

FIG. 6 is a schematic structural diagram of the cells and blocks mentioned in the flow chart of FIG. 4;

FIG. 7 is a schematic diagram of the first region of interest in FIG. 2 expanded to form a second region of interest;

FIG. 8 is a flow chart of an embodiment in which the drone shown in FIG. 1 determines the detection object when there are at least two targets in the foreground;

FIG. 9 is a diagram of the geometric relationship formed when the drone shown in FIG. 1 captures two adjacent frames of images of a static target;

FIG. 10 is a diagram of the geometric relationship formed when the drone shown in FIG. 1 captures two adjacent frames of images of a dynamic target;

FIG. 11 is a flow chart of an embodiment of a distance measurement method of the present invention;

FIG. 12 is a flow chart of another embodiment of a distance measurement method of the present invention;

FIG. 13 is a structural block diagram of a distance measurement device of the present invention.
Detailed Description

To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

The distance measurement method and apparatus and the drone provided by the present invention can solve the problem that stereo matching is inaccurate or impossible in extreme cases where the target in front of the drone has no texture, low texture, or dense repetitive texture, which would otherwise prevent the drone from achieving long-range obstacle avoidance.

FIG. 1 is a schematic structural diagram of a drone 10 according to an embodiment of the present invention. The drone 10 includes a housing 11, arms 12 connected to the housing 11, a power unit 13 disposed at one end of each arm 12, a gimbal 15 connected to the housing 11, an imaging device 14 mounted on the gimbal 15, and a processor 16 and a memory 17 disposed within the housing 11.

In this embodiment, the number of arms 12 is four, i.e., the aircraft is a quadrotor; in other possible embodiments, the number of arms 12 may also be 3, 6, 8, 10, and the like. The drone 10 may also be another movable object, such as a manned aircraft, a model aircraft, an unmanned airship, a fixed-wing drone, or an unmanned hot-air balloon.

The power unit 13 includes a motor 132 disposed at one end of the arm 12 and a propeller 131 connected to the shaft of the motor 132. The shaft of the motor 132 rotates to drive the propeller 131, providing lift for the drone 10.

The gimbal 15 serves to dampen or even eliminate the vibration transmitted from the power unit 13 to the imaging device 14, ensuring that the imaging device 14 can capture stable and clear images or video.

The imaging device 14 may be a high-definition camera, an action camera or the like, for capturing images; in an embodiment of the present invention, the imaging device 14 supports autonomous optical zoom. The imaging device 14 may be mounted directly on the drone 10, or mounted on the drone 10 via the gimbal 15 as shown in this embodiment; the gimbal 15 allows the imaging device 14 to rotate about at least one axis relative to the drone 10.

The processor 16 may include a plurality of functional units, such as a flight control unit for controlling the flight attitude of the aircraft, a target recognition unit for identifying a target, a tracking unit for tracking a specific target, a navigation unit (for example, GPS (Global Positioning System) or BeiDou) for navigating the aircraft, and a data processing unit for processing environmental information acquired by related onboard devices (for example, the imaging device 14).

The imaging device 14 first acquires any two adjacent frames of images captured by the drone 10, and the processor 16 determines a first Region of Interest (ROI) in each of the two adjacent frames. A region of interest is an area to be processed that is outlined in the processed image in the form of a box, circle, ellipse or irregular polygon. The region of interest usually contains at least a partial image of the target. FIG. 2 shows one image 140 acquired by the imaging device 14; the area within the box represents the first region of interest ROI, which contains at least a partial image of the target (the hatched portion in FIG. 2).

When the drone 10 is in an extreme case, such as a target with no texture in fog (FIG. 3a), a target with low texture in low brightness (FIG. 3b), or a distant target with dense repetitive texture (FIG. 3c), stereo matching becomes unusable. Stereo matching uses the pixel information of the images: by finding the same pixel points in two images, a disparity map describing the depth of each point is obtained from the corresponding image positions, and the relative distance between the target and the drone is further obtained from the corresponding position change.
The processor 16 first needs to determine that the drone 10 is in one of the extreme cases described above. As shown in FIGS. 4 to 6, in an embodiment of the present invention, the processor 16 determines by the following method whether the drone 10 is in such an extreme case, i.e., a case where the distance between the drone 10 and the target in the flight direction of the drone 10 cannot be determined by stereo matching:

S40: selecting n×n pixel units 51 within the first region of interest as one cell 50, and calculating the sum g_0 of the gray values of the n×n pixel units 51;

S41: selecting n×n cells 50 around the cell 50 as one block, and respectively calculating the gray values g_1, g_2, …, g_{n×n} of the n×n cells 50 in the block.

In an embodiment of the present invention, n is 4; in other possible embodiments, n may also be 8, 10, 12, and the like. By selecting cells and blocks, the processor 16 can compare local gray-value differences within the first region of interest.

S42: determining whether the sum of the gray values of the n×n pixel units 51 and the gray values of the n×n cells 50 in the block satisfy a preset condition.

As shown in FIG. 5, in an embodiment of the present invention, this step further includes:

S421: calculating the ratio of the sum of the differences between the gray values of all the cells in the block and g_0 to the cell size:

r = [(g_1 − g_0) + (g_2 − g_0) + … + (g_{n×n} − g_0)] / S_cell        (1)

where S_cell is the size of the cell.

S422: determining whether the ratio is within a preset range.

For example, a preset threshold λ may be given according to experience or the actual application. When the ratio in step S421 is less than λ, the local texture difference of the target is small, and the drone needs to enable the distance measurement method of the present invention. Conversely, if the ratio is greater than λ, the local texture difference of the target is large, and the existing stereo matching method can be used to obtain the relative distance between the drone and the target.

S43: if the preset condition is satisfied, determining that the drone cannot determine the distance between the drone and the target by stereo matching.

S44: if not, determining that the drone can determine the distance between the drone and the target by stereo matching.

Through the above steps, the processor 16 can determine whether the local texture difference of the target is small enough, and thus whether the distance measurement method provided by the embodiment of the present invention needs to be enabled.
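To make the cell/block check concrete, here is a minimal sketch in Python, assuming an 8-bit grayscale ROI as a NumPy array and the block placed at the ROI center; the function name, the block placement, the use of absolute differences, and the threshold value are illustrative assumptions of this sketch rather than details fixed by the patent.

```python
import numpy as np

def is_low_texture(roi, n=4, lam=50.0):
    """Cell/block texture check sketched from steps S40-S422.

    A cell is n x n pixels; a block is n x n cells. g0 is the gray-value
    sum of the block's centre cell, g_1..g_{n*n} are the sums of all cells
    in the block, and the ratio r = sum(|g_i - g0|) / S_cell is compared
    with the empirical threshold lambda (placeholder value here).
    """
    bw = n * n                                   # block width in pixels
    h, w = roi.shape
    y0, x0 = (h - bw) // 2, (w - bw) // 2        # centre placement (assumed)
    block = roi[y0:y0 + bw, x0:x0 + bw].astype(np.float64)
    c = (n // 2) * n                             # top-left of the centre cell
    g0 = block[c:c + n, c:c + n].sum()
    cells = block.reshape(n, n, n, n).sum(axis=(1, 3))  # all n*n cell sums
    r = np.abs(cells - g0).sum() / (n * n)       # S_cell = n*n pixels
    return r < lam                               # small r => low texture
```

When is_low_texture returns True, stereo matching is skipped and the scale-change method described below is used instead.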
As shown in FIG. 7, after determining that the drone 10 cannot determine the distance between the drone and the target by stereo matching, the processor 16 needs to expand the first region of interest of each of the two adjacent frames of images to form a second region of interest. Taking the image 140 acquired by the imaging device 14 as an example again, the area within the box represents the first region of interest ROI, which contains at least a partial image of the target (the hatched portion in FIG. 2). As can be seen from the figure, the second region of interest ROI+ is expanded by a certain range compared with the first region of interest ROI. This is because image matching within the ROI only requires partial image information of the foreground (i.e., the target); in images captured from a long distance, however, the edge features of the foreground tend to be small. To robustly obtain the edge features of the foreground during the approaching flight, the edge features must first remain within the region of interest, so the region is expanded to a certain extent, denoted as the second region of interest ROI+. In addition, the embodiment of the present invention introduces an expansion factor σ, which represents the ratio of the size of the second region of interest ROI+ to the size of the ROI, where σ > 1; the specific value is determined by the detection range required for obstacle avoidance and by the computing power of the drone itself.
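As a sketch of the expansion step, the helper below grows a box-shaped ROI about its center by the factor σ and clamps it to the image. The (x, y, w, h) box format is an assumption, and σ is applied per side here; if the patent's "size" means area, σ should be replaced by its square root.

```python
def expand_roi(box, sigma, img_w, img_h):
    """Expand an ROI about its centre by the expansion factor sigma > 1."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0            # keep the same centre
    nw, nh = w * sigma, h * sigma                # grow each side by sigma
    nx, ny = max(0.0, cx - nw / 2.0), max(0.0, cy - nh / 2.0)
    nw, nh = min(nw, img_w - nx), min(nh, img_h - ny)  # clamp to the image
    return (nx, ny, nw, nh)
```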
After determining the second region of interest of each of the two adjacent frames of images, the processor 16 further performs a segmentation and extraction operation on the foreground in the second region of interest, and determines whether at least two targets exist in the extracted foreground. If so, one of the targets must be determined as the detection object, and the distance between the drone 10 and the detection object is then determined according to the foregrounds respectively extracted from the two adjacent frames of images.

As shown in FIG. 8, in an embodiment of the present invention, when at least two targets exist in the foreground, the processor 16 may determine the detection object according to the following steps:

S70: determining whether the scale changes of the at least two targets in the foreground are the same;

S71: if yes, determining the target with the highest upper edge in the foreground as the detection object;

S72: if no, determining the target with the largest edge-scale change in the foreground as the detection object.

The reason is that the farther a target is from the drone, the smaller the scale change of its corresponding edges; conversely, the closer a target is to the drone, the larger the scale change of its corresponding edges. If the scale changes of all targets are the same, the targets can be regarded as being at essentially the same distance from the drone; in this case, the target with the highest upper edge in the foreground is determined as the detection object (target C in FIG. 6). This is because the drone 10 generally avoids obstacles by climbing, so selecting the target with the highest upper edge yields the safest obstacle-avoidance result.

If the scale changes of the targets differ (meaning the targets are at different distances from the drone 10), the target with the largest edge-scale change (the target closest to the drone) is determined as the detection object.
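The selection logic of steps S70 to S72 can be sketched as follows; the per-target dictionary with "top" (upper-edge row, smaller means higher in the image) and "scale_change" (edge-scale ratio between the two frames) is an illustrative data layout of this sketch, not one prescribed by the patent.

```python
def pick_detection_object(targets, tol=1e-6):
    """Steps S70-S72: choose the detection object among foreground targets."""
    if len(targets) == 1:
        return targets[0]                         # single target in the scene
    changes = [t["scale_change"] for t in targets]
    if max(changes) - min(changes) < tol:         # S70/S71: equal scale changes
        return min(targets, key=lambda t: t["top"])        # highest upper edge
    return max(targets, key=lambda t: t["scale_change"])   # S72: nearest target
```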
After the processor 16 determines the detection object, the relative distance between the target and the drone 10 needs to be further calculated.

In an embodiment of the present invention, when the target is stationary, the processor 16 can calculate the relative distance between the target and the drone 10 from the geometric relationship between the two adjacent frames of images, as shown in FIG. 9. Assume that the actual width of the target 80 is E, that the widths of the target 80 on the image plane of the drone's imaging device in the two successive frames are h1 and h2, that the drone 10 moves a distance S while capturing the two adjacent frames (i.e., while moving from position 81 to position 81'), and that the focal length of the imaging device is f. The following geometric relations then hold:

h1 / f = E / (H + S)        (2)
h2 / f = E / H        (3)

In equations (2) and (3), E and H are unknowns; solving the equations gives:

H = h1·S / (h2 − h1)        (4)
E = h1·h2·S / (f·(h2 − h1))        (5)

After obtaining the relative distance between the target and the drone, the processor 16 can give an early warning according to the distance and implement a corresponding obstacle-avoidance strategy.
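As a sketch of formulas (4) and (5), with a made-up worked example; h1, h2 and f must share one unit (for example, millimetres on the sensor), and H and E then come out in the unit of S.

```python
def distance_static(h1, h2, f, S):
    """H = h1*S/(h2 - h1) and E = h1*h2*S/(f*(h2 - h1));
    h2 > h1 while the drone approaches the target."""
    H = h1 * S / (h2 - h1)
    E = h1 * h2 * S / (f * (h2 - h1))
    return H, E

# Made-up example: f = 4 mm, h1 = 0.20 mm, h2 = 0.25 mm, S = 10 m
# -> H = 0.20 * 10 / 0.05 = 40 m, E = 0.20 * 0.25 * 10 / (4 * 0.05) = 2.5 m
print(distance_static(0.20, 0.25, 4.0, 10.0))    # (40.0, 2.5)
```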
In an embodiment of the present invention, when the target is dynamic, i.e., in motion, the relative distance between the drone 10 and the target can be measured as follows.

For a moving target, to ensure continuous tracking, the target is usually required to stay at the image center with its scale unchanged. When the tracking algorithm is robust and accurate, and the real-time performance of the onboard hardware and the stability of the flight controller can maintain such a state, the movement distance of the target over a period of time can be obtained from the motion information of the drone. As shown in FIG. 10, the geometric relationship of FIG. 10 gives:

h1 / f = E / H1,  h2 / f = E / H2

where H1 and H2 are the relative distances at the two frames. Since the tracking keeps h1 = h2, the movement distance of the target in this time interval equals the flight distance of the drone.

If this state is maintained indefinitely, the relative distance between the target and the drone cannot be obtained. For this situation, the present invention therefore proposes the following strategy: at least three frames of images are acquired continuously (four frames in one embodiment of the present invention), after which one distance detection is performed. The drone is kept stationary during the detection; the movement distance of the target in this phase is predicted from its speed over the preceding four frames; the tracking algorithm keeps running, so the target sizes in the frames before and after the detection are stably obtained; and the relative distance between the target and the drone is then obtained from the geometric relationship:

h1 / f = E / (H + S),  h2 / f = E / H

With S known, the relative distance H between the drone and the target and the width E of the target can be obtained by the following formulas:

H = h1·S / (h2 − h1)
E = h1·h2·S / (f·(h2 − h1))
In an embodiment of the present invention, the movement distance S of the target during the detection phase is obtained as follows. To make the result more general, a variable-acceleration motion model is established with acceleration a = k·t + b, which gives the speed model:

v = k·t²/2 + b·t + c

where t is the motion time of the target, and k, b and c are constants.

Using the preceding four frames of images and their relative distance relationships, k, b and c can be solved; using the above speed model and the motion time, the movement distance S of the target is then computed.

The above method enables continuous solving of the relative distance between the target and the drone.
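One way to solve for k, b and c and then integrate the speed model is sketched below. The patent states only that the first four frames and their relative-distance relationships determine the constants; the displacement-integral formulation and the least-squares solver are assumptions of this sketch. During the tracking phase the target's frame-to-frame displacement equals the drone's own flight distance (since h1 = h2), so the displacements can be taken from the drone's odometry.

```python
import numpy as np

def fit_speed_model(times, displacements):
    """Fit v(t) = 0.5*k*t**2 + b*t + c from displacements between frames.

    times: frame timestamps t_0..t_m; displacements: target movement between
    consecutive frames. Each displacement is the integral of v(t):
    d = k*(t1**3 - t0**3)/6 + b*(t1**2 - t0**2)/2 + c*(t1 - t0),
    which is linear in (k, b, c), so three or more intervals suffice.
    """
    rows, rhs = [], []
    for (t0, t1), d in zip(zip(times[:-1], times[1:]), displacements):
        rows.append([(t1**3 - t0**3) / 6.0, (t1**2 - t0**2) / 2.0, t1 - t0])
        rhs.append(d)
    k, b, c = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return k, b, c

def moved_distance(k, b, c, t0, t1):
    """Distance S the target moves over [t0, t1] under the fitted model."""
    F = lambda t: k * t**3 / 6.0 + b * t**2 / 2.0 + c * t
    return F(t1) - F(t0)
```

The returned S feeds directly into the H and E formulas above.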
The embodiment of the present invention obtains the change in image scale by performing foreground-background segmentation on two adjacent frames of images and extracting edge features within the expanded region of interest. This change in image scale enables long-range obstacle avoidance under extreme conditions, and solves the problem that stereo matching is inaccurate or impossible in extreme cases such as no texture, low texture, or dense repetitive texture.

In another embodiment of the present invention, the relative distance between a moving target and the drone can also be obtained in real time according to the distance measurement method of the present invention.
As shown in FIG. 11, an embodiment of the present invention further provides a distance measurement method, the method including:

S111: adjusting the flight attitude of the drone so that the center of the image falls on the straight line along the flight direction of the drone.

During flight the drone should fly straight and level, but attitude changes caused by wind disturbance or instability of the control system are inevitable. Such attitude changes may cause the centers of some frames not to fall on the straight line along the flight direction of the drone. Since the present invention focuses on safety in the flight direction of the drone, the images must be corrected and aligned at this point so that the center of each image falls on the straight line along the flight direction, making the images satisfy the Epipolar Constraint.
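For the alignment itself, one standard way to realize it (an assumption of this sketch; the patent does not fix the mechanism) is to compensate the attitude error with the rotation-induced homography H = K·R·inv(K), where K is the camera intrinsic matrix and R rotates the current camera frame so that its optical axis points along the flight direction:

```python
import numpy as np

def alignment_homography(K, R_cam_to_flight):
    """Homography that warps the image as if the optical axis pointed
    along the flight direction, so successive frames satisfy the
    epipolar constraint for purely forward motion. K: 3x3 intrinsics;
    R_cam_to_flight: rotation from the camera frame to a frame whose
    z-axis is the flight direction (from the flight controller's attitude)."""
    return K @ R_cam_to_flight @ np.linalg.inv(K)
```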
S112: acquiring two adjacent frames of images captured by the drone;

S113: determining a first Region of Interest (ROI) of each of the two adjacent frames of images;

S114: determining that the drone cannot determine, by stereo matching, the distance between the drone and the target in the flight direction of the drone.

As shown in FIGS. 4 and 5, in an embodiment of the present invention, this step further includes:

S40: selecting n×n pixel units 51 within the first region of interest as one cell 50, and calculating the sum g_0 of the gray values of the n×n pixel units 51;

S41: selecting n×n cells 50 around the cell 50 as one block, and respectively calculating the gray values g_1, g_2, …, g_{n×n} of the n×n cells 50 in the block.

In an embodiment of the present invention, n is 4; in other possible embodiments, n may also be 8, 10, 12, and the like. By selecting cells and blocks, the processor 16 can compare local gray-value differences within the first region of interest.

S42: determining whether the sum of the gray values of the n×n pixel units 51 and the gray values of the n×n cells 50 in the block satisfy a preset condition.

As shown in FIG. 5, in an embodiment of the present invention, step S42 further includes:

S421: calculating the ratio of the sum of the differences between the gray values of all the cells in the block and g_0 to the cell size:

r = [(g_1 − g_0) + (g_2 − g_0) + … + (g_{n×n} − g_0)] / S_cell        (1)

where S_cell is the size of the cell.

S422: determining whether the ratio is within a preset range.

For example, a preset threshold λ may be given according to experience or the actual application. When the ratio in step S421 is less than λ, the local texture difference of the target is small, and the drone needs to enable the distance measurement method of the present invention. Conversely, if the ratio is greater than λ, the local texture difference of the target is large, and the existing stereo matching method can be used to obtain the relative distance between the drone and the target.

If the preset condition is satisfied, it is determined that the drone cannot determine the relative distance between the drone and the target by stereo matching, and the next processing step is performed:

S115: expanding the first region of interest of each of the two adjacent frames of images to form a second region of interest.

As can be seen from FIGS. 2 and 7, the second region of interest ROI+ is expanded by a certain range compared with the first region of interest ROI. Image matching within the ROI only requires partial image information of the foreground (i.e., the target), but in images captured from a long distance the edge features of the foreground tend to be small; to robustly obtain the edge features of the foreground during the approaching flight, the edge features must remain within the region of interest, so the region is expanded to a certain extent, denoted as the second region of interest ROI+. In addition, the embodiment of the present invention introduces an expansion factor σ, which represents the ratio of the size of the second region of interest ROI+ to the size of the ROI, where σ > 1; the specific value is determined by the detection range required for obstacle avoidance and by the computing power of the drone itself.

S116: extracting the foreground in the second region of interest from each of the two adjacent frames of images.

S117: determining whether at least two targets exist in the foreground. If yes, proceed to step S118; if no, proceed to step S121: when there is only one target in the current scene, that target is directly determined as the detection object.

S118: determining whether the scale changes of the at least two targets in the foreground are the same; if yes, proceed to step S119; if no, proceed to step S120.

S119: determining the target with the highest upper edge in the foreground as the detection object.

S120: determining the target with the largest edge-scale change in the foreground as the detection object.

The reason is that the farther a target is from the drone, the smaller the scale change of its corresponding edges; conversely, the closer a target is to the drone, the larger the scale change of its corresponding edges. If the scale changes of all targets are the same, the targets can be regarded as being at essentially the same distance from the drone; in this case, the target with the highest upper edge in the foreground is determined as the detection object (target C in FIG. 6), because the drone 10 generally avoids obstacles by climbing, and selecting the target with the highest upper edge yields the safest obstacle-avoidance result. If the scale changes of the targets differ (meaning the targets are at different distances from the drone 10), the target with the largest edge-scale change (the target closest to the drone) is determined as the detection object.

S121: determining the distance between the drone and the detection object according to the foregrounds respectively extracted from the two adjacent frames of images.
In an embodiment of the present invention, when the target is stationary, the relative distance between the target and the drone can be calculated from the geometric relationship between the two adjacent frames of images, as shown in FIG. 9. Assume that the actual width of the target 80 is E, that the widths of the target 80 on the image plane of the drone's imaging device in the two successive frames are h1 and h2, that the drone moves a distance S while capturing the two adjacent frames (i.e., while moving from position 81 to position 81'), and that the focal length of the imaging device is f. The following geometric relations then hold:

h1 / f = E / (H + S)        (2)
h2 / f = E / H        (3)

In equations (2) and (3), E and H are unknowns; solving the equations gives:

H = h1·S / (h2 − h1)        (4)
E = h1·h2·S / (f·(h2 − h1))        (5)

After obtaining the relative distance between the target and the drone, the drone can give an early warning according to the distance and implement a corresponding obstacle-avoidance strategy.

In an embodiment of the present invention, when the target is dynamic, i.e., in motion, the relative distance between the drone and the target can be measured as follows.

For a moving target, to ensure continuous tracking, the target is usually required to stay at the image center with its scale unchanged. When the tracking algorithm is robust and accurate, and the real-time performance of the onboard hardware and the stability of the flight controller can maintain such a state, the movement distance of the target over a period of time can be obtained from the motion information of the drone. As shown in FIG. 10, the geometric relationship gives:

h1 / f = E / H1,  h2 / f = E / H2

where H1 and H2 are the relative distances at the two frames. Since the tracking keeps h1 = h2, the movement distance of the target in this time interval equals the flight distance of the drone.

If this state is maintained indefinitely, the relative distance between the target and the drone cannot be obtained. For this situation, the present invention proposes the following strategy: at least three frames of images are acquired continuously (four frames in one embodiment of the present invention), after which one distance detection is performed. The drone is kept stationary during the detection; the movement distance of the target in this phase is predicted from its speed over the preceding four frames; the tracking algorithm keeps running, so the target sizes in the frames before and after the detection are stably obtained; and the relative distance between the target and the drone is then obtained from the geometric relationship:

h1 / f = E / (H + S),  h2 / f = E / H

With S known, the relative distance H between the drone and the target and the width E of the target can be obtained by the following formulas:

H = h1·S / (h2 − h1)
E = h1·h2·S / (f·(h2 − h1))

In an embodiment of the present invention, the movement distance S of the target during the detection phase is obtained as follows. To make the result more general, a variable-acceleration motion model is established with acceleration a = k·t + b, which gives the speed model:

v = k·t²/2 + b·t + c

where t is the motion time of the target, and k, b and c are constants.

Using the preceding four frames of images and their relative distance relationships, k, b and c can be solved; using the above speed model and the motion time, the movement distance S of the target is then computed.

The above method enables continuous solving of the relative distance between the target and the drone.

For details of each step of this method, reference may be made to the foregoing description, which is not repeated here.
As shown in FIG. 12, another embodiment of the present invention further provides a distance measurement method, the method including:

S122: acquiring two adjacent frames of images captured by the drone;

S123: determining a first Region of Interest (ROI) of each of the two adjacent frames of images;

S124: determining that the drone cannot determine, by stereo matching, the distance between the drone and the target in the flight direction of the drone;

S125: expanding the first region of interest of each of the two adjacent frames of images to form a second region of interest;

S126: extracting the foreground in the second region of interest from each of the two adjacent frames of images;

S127: determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images.

For details of each step of this method, reference may be made to the foregoing description, which is not repeated here.
As shown in FIG. 13, the present invention further provides a distance measurement device 130, the device 130 comprising:

an acquisition module 132, configured to acquire any two adjacent frames of images captured by the drone;

an image processing module 134, configured to:

determine the Region of Interest (ROI) of each of the any two adjacent frames of images;

determine, from the regions of interest, the expanded region of interest of each of the any two adjacent frames of images;

segment and extract the foreground in each expanded region of interest;

a determining module 133, configured to:

determine that the drone cannot determine, by stereo matching, the distance between the drone and the target in the flight direction of the drone; and

determine the distance between the drone and the target in the flight direction of the drone according to the foregrounds of the any two adjacent frames of images.

In an embodiment of the present invention, the determining module 133 is configured to:

select n×n pixel units within the region of interest as one cell, and calculate the sum g_0 of the gray values of the n×n pixel units;

select n×n cells around the cell as one block, and respectively calculate the gray values g_1, g_2, …, g_{n×n} of the n×n cells in the block;

determine whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition;

if so, determine that the drone cannot determine the distance between the drone and the target by stereo matching.

In an embodiment of the present invention, the determining module 133 is specifically configured to:

calculate the ratio of the sum of the differences between the gray values of all the cells in the block and g_0 to the cell size:

r = [(g_1 − g_0) + (g_2 − g_0) + … + (g_{n×n} − g_0)] / S_cell

where S_cell is the size of the cell;

determine whether the ratio is within a preset range;

if so, determine that the drone cannot determine the distance between the drone and the target by stereo matching.

In an embodiment of the present invention, the ratio of the size of the second region of interest to the size of the first region of interest is greater than 1.

In an embodiment of the present invention, the device 130 further includes a judging module 135, the judging module 135 being configured to:

determine whether at least two targets exist in the foreground;

if so, determine one of the targets as the detection object;

then, the determining module 133 is configured to:

determine the distance between the drone and the detection object according to the foregrounds respectively extracted from the two adjacent frames of images.

In an embodiment of the present invention, the judging module 135 is specifically configured to:

determine whether the scale changes of the at least two targets in the foreground are the same;

if so, determine the target with the highest upper edge in the foreground as the detection object.

In an embodiment of the present invention, if the scale changes of the targets in the foreground are not the same, the judging module 135 is further configured to determine the target with the largest edge-scale change in the foreground as the detection object.

In an embodiment of the present invention, the target is a stationary target, and the determining module 133 is configured to:

determine the distance according to at least h1, h2, f and S;

where S is the distance the drone flies between capturing the two adjacent frames of images, f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.

In an embodiment of the present invention, the determining module 133 is specifically configured to:

calculate the distance H between the drone and the target in the flight direction of the drone:

H = h1·S / (h2 − h1)

In an embodiment of the present invention, the target is a dynamic target, and the determining module 133 is configured to:

calculate the distance S that the target moves while the drone captures the two adjacent frames of images;

determine the distance according to at least h1, h2, f and S;

where f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.

In an embodiment of the present invention, the determining module 133 is specifically configured to:

calculate the distance H between the drone and the target in the flight direction of the drone:

H = h1·S / (h2 − h1)

In an embodiment of the present invention, the determining module 133 is further configured to:

calculate the width E of the target according to the following formula:

E = h1·h2·S / (f·(h2 − h1))

In an embodiment of the present invention, the determining module 133 is further configured to:

acquire at least three frames of images of the target captured by the drone;

obtain, from the at least three frames of images of the target, a speed model describing the motion of the target;

calculate, according to the speed model, the distance S that the target moves while the drone captures the two adjacent frames of images.

In an embodiment of the present invention, the speed model is:

v = k·t²/2 + b·t + c

where t is the motion time of the target, and k, b and c are constants.

In an embodiment of the present invention, k, b and c are determined from the at least three frames of images of the target.

In an embodiment of the present invention, the device 130 further includes an adjustment module 131, the adjustment module 131 being configured to adjust the flight attitude of the drone so that the center of the image falls on the straight line along the flight direction of the drone.

For details of each module of the device, reference may be made to the foregoing description, which is not repeated here.
The present invention further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor is caused to perform the method described in the embodiments shown in FIG. 11 or FIG. 12.

The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

From the description of the above embodiments, those of ordinary skill in the art can clearly understand that each embodiment can be implemented by software plus a general-purpose hardware platform, and certainly also by hardware. Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Under the concept of the present invention, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there exist many other variations of the different aspects of the present invention as described above, which are not provided in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (34)

  1. A distance measurement method, wherein the method comprises:
    acquiring two adjacent frames of images captured by a drone;
    determining a first Region of Interest (ROI) of each of the two adjacent frames of images;
    determining that the drone cannot determine, by stereo matching, the distance between the drone and a target in the flight direction of the drone;
    expanding the first region of interest of each of the two adjacent frames of images to form a second region of interest;
    extracting the foreground in the second region of interest from each of the two adjacent frames of images;
    determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images.
  2. The method according to claim 1, wherein the determining that the drone cannot determine the distance between the drone and the target by stereo matching comprises:
    selecting n×n pixel units within the first region of interest as one cell, and calculating the sum g_0 of the gray values of the n×n pixel units;
    selecting n×n cells around the cell as one block, and respectively calculating the gray values g_1, g_2, …, g_{n×n} of the n×n cells in the block;
    determining whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition;
    if so, determining that the drone cannot determine the distance between the drone and the target by stereo matching.
  3. The method according to claim 2, wherein the determining whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition comprises:
    calculating the ratio of the sum of the differences between the gray values of all the cells in the block and g_0 to the cell size:
    r = [(g_1 − g_0) + (g_2 − g_0) + … + (g_{n×n} − g_0)] / S_cell
    where S_cell is the size of the cell;
    determining whether the ratio is within a preset range;
    if so, determining that the drone cannot determine the distance between the drone and the target by stereo matching.
  4. The method according to any one of claims 1-3, wherein the ratio of the size of the second region of interest to the size of the first region of interest is greater than 1.
  5. The method according to any one of claims 1-4, wherein the method further comprises:
    determining whether at least two targets exist in the foreground;
    if so, determining one of the targets as the detection object;
    then, the determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images comprises:
    determining the distance between the drone and the detection object according to the foregrounds respectively extracted from the two adjacent frames of images.
  6. The method according to claim 5, wherein the determining one of the targets as the detection object comprises:
    determining whether the scale changes of the at least two targets in the foreground are the same;
    if so, determining the target with the highest upper edge in the foreground as the detection object.
  7. The method according to claim 6, wherein the method further comprises:
    if the scale changes of the at least two targets in the foreground are different, determining the target with the largest edge-scale change in the foreground as the detection object.
  8. The method according to any one of claims 1-7, wherein the target is a stationary target, and the determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images comprises:
    determining the distance according to at least h1, h2, f and S;
    where S is the distance the drone flies between capturing the two adjacent frames of images, f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  9. The method according to claim 8, wherein the determining the distance according to h1, h2, f and S comprises:
    calculating the distance H between the drone and the target according to the following formula:
    H = h1·S / (h2 − h1)
  10. The method according to any one of claims 1-7, wherein the target is a dynamic target, and the determining the distance between the drone and the target according to the foregrounds respectively extracted from the two adjacent frames of images comprises:
    calculating the distance S that the target moves while the drone captures the two adjacent frames of images;
    determining the distance according to at least h1, h2, f and S;
    where f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  11. The method according to claim 10, wherein the determining the distance according to at least h1, h2, f and S comprises:
    calculating the distance H between the drone and the target according to the following formula:
    H = h1·S / (h2 − h1)
  12. The method according to any one of claims 8-11, wherein the method further comprises:
    calculating the width E of the target according to the following formula:
    E = h1·h2·S / (f·(h2 − h1))
  13. The method according to claim 11, wherein the calculating the distance S that the target moves while the drone captures the two adjacent frames of images comprises:
    acquiring at least three frames of images of the target captured by the drone;
    obtaining, from the at least three frames of images of the target, a speed model describing the motion of the target;
    calculating, according to the speed model, the distance S that the target moves while the drone captures the two adjacent frames of images.
  14. The method according to claim 13, wherein the speed model is:
    v = k·t²/2 + b·t + c
    where t is the motion time of the target, and k, b and c are constants.
  15. The method according to claim 14, wherein k, b and c are determined from the at least three frames of images of the target.
  16. The method according to any one of claims 1-15, wherein the method further comprises:
    adjusting the flight attitude of the drone so that the center of the image falls on the straight line along the flight direction of the drone.
  17. A distance measurement device, wherein the device comprises:
    an acquisition module, configured to acquire any two adjacent frames of images captured by a drone;
    an image processing module, configured to:
    determine the Region of Interest (ROI) of each of the any two adjacent frames of images;
    determine, from the regions of interest, the expanded region of interest of each of the any two adjacent frames of images;
    segment and extract the foreground in each expanded region of interest;
    a determining module, configured to:
    determine that the drone cannot determine, by stereo matching, the distance between the drone and the target in the flight direction of the drone; and
    determine the distance between the drone and the target in the flight direction of the drone according to the foregrounds of the any two adjacent frames of images.
  18. The device according to claim 17, wherein the determining module is configured to:
    select n×n pixel units within the region of interest as one cell, and calculate the sum g_0 of the gray values of the n×n pixel units;
    select n×n cells around the cell as one block, and respectively calculate the gray values g_1, g_2, …, g_{n×n} of the n×n cells in the block;
    determine whether the sum of the gray values of the n×n pixel units and the gray values of the n×n cells in the block satisfy a preset condition;
    if so, determine that the drone cannot determine the distance between the drone and the target by stereo matching.
  19. The device according to claim 18, wherein the determining module is specifically configured to:
    calculate the ratio of the sum of the differences between the gray values of all the cells in the block and g_0 to the cell size:
    r = [(g_1 − g_0) + (g_2 − g_0) + … + (g_{n×n} − g_0)] / S_cell
    where S_cell is the size of the cell;
    determine whether the ratio is within a preset range;
    if so, determine that the drone cannot determine the distance between the drone and the target by stereo matching.
  20. The device according to any one of claims 17-19, wherein the ratio of the size of the second region of interest to the size of the first region of interest is greater than 1.
  21. The device according to any one of claims 17-20, wherein the device further comprises a judging module, the judging module being configured to:
    determine whether at least two targets exist in the foreground;
    if so, determine one of the targets as the detection object;
    then, the determining module is configured to:
    determine the distance between the drone and the detection object according to the foregrounds respectively extracted from the two adjacent frames of images.
  22. The device according to claim 21, wherein the judging module is specifically configured to:
    determine whether the scale changes of the at least two targets in the foreground are the same;
    if so, determine the target with the highest upper edge in the foreground as the detection object.
  23. The device according to claim 22, wherein, if the scale changes of the targets in the foreground are not the same, the judging module is further configured to determine the target with the largest edge-scale change in the foreground as the detection object.
  24. The device according to any one of claims 17-23, wherein the target is a stationary target, and the determining module is configured to:
    determine the distance according to at least h1, h2, f and S;
    where S is the distance the drone flies between capturing the two adjacent frames of images, f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  25. The device according to claim 24, wherein the determining module is specifically configured to:
    calculate the distance H between the drone and the target in the flight direction of the drone:
    H = h1·S / (h2 − h1)
  26. The device according to any one of claims 17-23, wherein the target is a dynamic target, and the determining module is configured to:
    calculate the distance S that the target moves while the drone captures the two adjacent frames of images;
    determine the distance according to at least h1, h2, f and S;
    where f is the focal length of the camera of the drone, h1 is the width of the target on the image plane of the camera when the drone captures one of the frames, and h2 is the width of the target on the image plane of the camera when the drone captures the other frame.
  27. The device according to claim 26, wherein the determining module is specifically configured to:
    calculate the distance H between the drone and the target in the flight direction of the drone:
    H = h1·S / (h2 − h1)
  28. The device according to any one of claims 24-27, wherein the determining module is further configured to:
    calculate the width E of the target according to the following formula:
    E = h1·h2·S / (f·(h2 − h1))
  29. The device according to claim 27, wherein the determining module is further configured to:
    acquire at least three frames of images of the target captured by the drone;
    obtain, from the at least three frames of images of the target, a speed model describing the motion of the target;
    calculate, according to the speed model, the distance S that the target moves while the drone captures the two adjacent frames of images.
  30. The device according to claim 29, wherein the speed model is:
    v = k·t²/2 + b·t + c
    where t is the motion time of the target, and k, b and c are constants.
  31. The device according to claim 30, wherein k, b and c are determined from the at least three frames of images of the target.
  32. The device according to any one of claims 17-31, wherein the device further comprises an adjustment module, the adjustment module being configured to adjust the flight attitude of the drone so that the center of the image falls on the straight line along the flight direction of the drone.
  33. A drone, comprising:
    a housing;
    arms connected to the housing;
    a processor disposed within the housing or the arms; and
    a memory communicatively connected to the processor, the memory being disposed within the housing or the arms; wherein
    the memory stores instructions executable by the processor, and when the processor executes the instructions, the method according to any one of claims 1-16 is implemented.
  34. A non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a drone, cause the drone to perform the method according to any one of claims 1-16.
PCT/CN2017/118266 2017-12-25 2017-12-25 Distance measurement method and apparatus, and unmanned aerial vehicle WO2019126930A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201780002607.5A 2017-12-25 2017-12-25 Distance measurement method and apparatus, and unmanned aerial vehicle
PCT/CN2017/118266 2017-12-25 2017-12-25 Distance measurement method and apparatus, and unmanned aerial vehicle
EP17832901.7A EP3531375B1 (en) 2017-12-25 2017-12-25 Method and apparatus for measuring distance, and unmanned aerial vehicle
US15/886,186 US10621456B2 (en) 2017-12-25 2018-02-01 Distance measurement method and apparatus, and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/118266 2017-12-25 2017-12-25 Distance measurement method and apparatus, and unmanned aerial vehicle

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/886,186 Continuation US10621456B2 (en) 2017-12-25 2018-02-01 Distance measurement method and apparatus, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2019126930A1 true WO2019126930A1 (zh) 2019-07-04

Family

ID=62400288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118266 WO2019126930A1 (zh) 2017-12-25 2017-12-25 Distance measurement method and apparatus, and unmanned aerial vehicle

Country Status (4)

Country Link
US (1) US10621456B2 (zh)
EP (1) EP3531375B1 (zh)
CN (1) CN108140245B (zh)
WO (1) WO2019126930A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020010620A1 * 2018-07-13 2020-01-16 深圳市大疆创新科技有限公司 Wave identification method and apparatus, computer-readable storage medium, and unmanned aerial vehicle
CN109684994A * 2018-12-21 2019-04-26 宁波如意股份有限公司 Vision-based forklift obstacle avoidance method and system
CN111736622B * 2019-03-25 2023-04-07 海鹰航空通用装备有限责任公司 UAV obstacle avoidance method and system combining binocular vision with an IMU
CN110187720B * 2019-06-03 2022-09-27 深圳铂石空间科技有限公司 UAV guidance method, apparatus, system, medium, and electronic device
CN110687929B * 2019-10-10 2022-08-12 辽宁科技大学 Aircraft three-dimensional target search system based on monocular vision and motor imagery
WO2021102994A1 * 2019-11-29 2021-06-03 深圳市大疆创新科技有限公司 Height determination method, aircraft, and computer-readable storage medium
CN112639881A * 2020-01-21 2021-04-09 深圳市大疆创新科技有限公司 Distance measurement method, movable platform, device, and storage medium
CN112698661B * 2021-03-22 2021-08-24 成都睿铂科技有限责任公司 Aerial survey data collection method, apparatus, and system for an aircraft, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015175201A1 (en) * 2014-05-15 2015-11-19 Intel Corporation Content adaptive background-foreground segmentation for video coding
CN105225241A * 2015-09-25 2016-01-06 广州极飞电子科技有限公司 Method for acquiring depth images by an unmanned aerial vehicle, and unmanned aerial vehicle
CN106529495A * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 Obstacle detection method and apparatus for an aircraft
CN106960454A * 2017-03-02 2017-07-18 武汉星巡智能科技有限公司 Depth-of-field obstacle avoidance method and device, and unmanned aerial vehicle
WO2017143589A1 (en) * 2016-02-26 2017-08-31 SZ DJI Technology Co., Ltd. Systems and methods for visual target tracking

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4519478B2 * 2004-02-16 2010-08-04 株式会社東芝 Target distance measuring device
JP2009270863A * 2008-05-01 2009-11-19 Toshiba Corp Bistatic radar device
US9878804B2 (en) * 2013-10-21 2018-01-30 Eric Olsen Systems and methods for producing temperature accurate thermal images
US9848112B2 (en) * 2014-07-01 2017-12-19 Brain Corporation Optical detection apparatus and methods
JP6121063B1 * 2014-11-04 2017-04-26 SZ DJI Technology Co., Ltd. Camera calibration method, device, and system
US9933264B2 (en) * 2015-04-06 2018-04-03 Hrl Laboratories, Llc System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
CN113093808A * 2015-05-23 2021-07-09 深圳市大疆创新科技有限公司 Sensor fusion using inertial sensors and image sensors
US9549125B1 (en) * 2015-09-01 2017-01-17 Amazon Technologies, Inc. Focus specification and focus stabilization
WO2017041303A1 (en) * 2015-09-11 2017-03-16 SZ DJI Technology Co., Ltd. Systems and methods for detecting and tracking movable objects
EP3353706A4 (en) * 2015-09-15 2019-05-08 SZ DJI Technology Co., Ltd. SYSTEM AND METHOD FOR MONITORING UNIFORM TARGET TRACKING
EP3368957B1 (en) * 2015-10-30 2022-02-09 SZ DJI Technology Co., Ltd. Systems and methods for uav path planning and control
CN105346706B * 2015-11-13 2018-09-04 深圳市道通智能航空技术有限公司 Flight device, and flight control system and method
CN105447853B * 2015-11-13 2018-07-13 深圳市道通智能航空技术有限公司 Flight device, and flight control system and method
WO2017096547A1 (en) * 2015-12-09 2017-06-15 SZ DJI Technology Co., Ltd. Systems and methods for uav flight control
CN105578034A * 2015-12-10 2016-05-11 深圳市道通智能航空技术有限公司 Control method, control apparatus, and system for tracking and shooting a target
CN105574894B * 2015-12-21 2018-10-16 天津远度科技有限公司 Method and system for screening feature-point tracking results of a moving object
US10665115B2 (en) * 2016-01-05 2020-05-26 California Institute Of Technology Controlling unmanned aerial vehicles to avoid obstacle collision
CN107851308A * 2016-03-01 2018-03-27 深圳市大疆创新科技有限公司 System and method for identifying a target object
KR20170136750A * 2016-06-02 2017-12-12 삼성전자주식회사 Electronic device and operation method thereof
US10074183B1 (en) * 2016-06-03 2018-09-11 Amazon Technologies, Inc. Image alignment correction for imaging processing during operation of an unmanned aerial vehicle
US10301041B2 (en) * 2016-06-09 2019-05-28 California Institute Of Technology Systems and methods for tracking moving objects
CN105974938B * 2016-06-16 2023-10-03 零度智控(北京)智能科技有限公司 Obstacle avoidance method and apparatus, carrier, and unmanned aerial vehicle
US9977434B2 (en) * 2016-06-23 2018-05-22 Qualcomm Incorporated Automatic tracking mode for controlling an unmanned aerial vehicle
US10033980B2 (en) * 2016-08-22 2018-07-24 Amazon Technologies, Inc. Determining stereo distance information using imaging devices integrated into propeller blades
US10196141B1 (en) * 2016-09-21 2019-02-05 Amazon Technologies, Inc. Detection of transparent elements using reflective disparities
CN107065895A * 2017-01-05 2017-08-18 南京航空航天大学 Altitude-holding technique for a plant-protection unmanned aerial vehicle
CN107507190B * 2017-07-12 2020-02-14 西北工业大学 Low-altitude moving-target detection method based on visible-light image sequences
CN107329490B * 2017-07-21 2020-10-09 歌尔科技有限公司 Unmanned aerial vehicle obstacle avoidance method and unmanned aerial vehicle
US10434451B2 (en) * 2017-07-26 2019-10-08 Nant Holdings Ip, Llc Apparatus and method of harvesting airborne moisture
US10402646B2 (en) * 2017-09-21 2019-09-03 Amazon Technologies, Inc. Object detection and avoidance for aerial vehicles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015175201A1 (en) * 2014-05-15 2015-11-19 Intel Corporation Content adaptive background-foreground segmentation for video coding
CN105225241A * 2015-09-25 2016-01-06 广州极飞电子科技有限公司 Method for acquiring depth images by an unmanned aerial vehicle, and unmanned aerial vehicle
WO2017143589A1 (en) * 2016-02-26 2017-08-31 SZ DJI Technology Co., Ltd. Systems and methods for visual target tracking
CN106529495A * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 Obstacle detection method and apparatus for an aircraft
CN106960454A * 2017-03-02 2017-07-18 武汉星巡智能科技有限公司 Depth-of-field obstacle avoidance method and device, and unmanned aerial vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3531375A4 *

Also Published As

Publication number Publication date
CN108140245A (zh) 2018-06-08
EP3531375A4 (en) 2019-08-28
EP3531375A1 (en) 2019-08-28
US10621456B2 (en) 2020-04-14
EP3531375B1 (en) 2021-08-18
US20190197335A1 (en) 2019-06-27
CN108140245B (zh) 2022-08-23

Similar Documents

Publication Publication Date Title
WO2019126930A1 (zh) Distance measurement method and apparatus, and unmanned aerial vehicle
US20230360230A1 (en) Methods and system for multi-traget tracking
US20210027641A1 (en) Systems and methods for vehicle guidance
CN111326023B (zh) UAV route early-warning method, apparatus, device, and storage medium
CN112567201B (zh) Distance measurement method and device
US20210141378A1 (en) Imaging method and device, and unmanned aerial vehicle
CN106529495B (zh) Obstacle detection method and apparatus for an aircraft
US10860039B2 (en) Obstacle avoidance method and apparatus and unmanned aerial vehicle
US20210133996A1 (en) Techniques for motion-based automatic image capture
US20190265734A1 (en) Method and system for image-based object detection and corresponding movement adjustment maneuvers
WO2020113423A1 (zh) Three-dimensional reconstruction method and system for a target scene, and unmanned aerial vehicle
JP5990453B2 (ja) Autonomous mobile robot
US11057604B2 (en) Image processing method and device
WO2021035731A1 (zh) Control method and apparatus for an unmanned aerial vehicle, and computer-readable storage medium
WO2021047502A1 (zh) Target state estimation method and apparatus, and unmanned aerial vehicle
US20210009270A1 (en) Methods and system for composing and capturing images
WO2023197841A1 (zh) Focusing method, camera device, unmanned aerial vehicle, and storage medium
US20210256732A1 (en) Image processing method and unmanned aerial vehicle
CN111433819A (zh) Three-dimensional reconstruction method and system for a target scene, and unmanned aerial vehicle
WO2021035746A1 (zh) Image processing method and apparatus, and movable platform
CN114096929A (zh) Information processing device, information processing method, and information processing program
WO2022141123A1 (zh) Movable platform and control method and apparatus therefor, terminal device, and storage medium
US20240104754A1 (en) Information processing system, method and program
JP2021154857A (ja) Piloting support device, piloting support method, and program
CN111324139A (zh) UAV landing method, apparatus, device, and storage medium

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2017832901

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017832901

Country of ref document: EP

Effective date: 20180202

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17832901

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE