WO2017050279A1 - Method and apparatus for acquiring a depth image of a drone, and drone - Google Patents

Method and apparatus for acquiring a depth image of a drone, and drone

Info

Publication number
WO2017050279A1
WO2017050279A1
Authority
WO
WIPO (PCT)
Prior art keywords
drone
pixel
coordinate system
image
camera
Prior art date
Application number
PCT/CN2016/099925
Other languages
English (en)
French (fr)
Other versions
WO2017050279A9 (zh)
Inventor
陈有生
Original Assignee
广州极飞科技有限公司
Priority date
Filing date
Publication date
Application filed by 广州极飞科技有限公司
Priority to AU2016327918A (AU2016327918B2)
Priority to US15/565,582 (US10198004B2)
Priority to ES16848160T (ES2798798T3)
Priority to KR1020177034364A (KR101886013B1)
Priority to EP16848160.4A (EP3264364B1)
Priority to JP2017566134A (JP6484729B2)
Publication of WO2017050279A1
Publication of WO2017050279A9


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • B64C39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0094 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G5/00 Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0047 Navigation or guidance aids for a single aircraft
    • G08G5/0069 Navigation or guidance aids for a single aircraft specially adapted for an unmanned aircraft
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 UAVs specially adapted for particular uses or applications
    • B64U2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation

Definitions

  • the present invention relates to the field of image processing technologies, and in particular to a method and device for acquiring a depth image of a drone, and to a drone.
  • the traditional imaging scheme converts a three-dimensional image model into a two-dimensional grayscale image, which loses the depth information of the image during the imaging process.
  • the depth information of the image is very important for subsequent applications (such as three-dimensional reconstruction and geographic mapping), so obtaining depth maps is of great significance both for theoretical research and for engineering practice.
  • the existing methods for obtaining a depth image are generally active acquisition methods, which actively emit energy such as laser, electromagnetic waves, or ultrasonic waves; the energy is reflected by obstacles and then received. Passive measurement methods are based on machine vision, such as binocular vision.
  • the method for the UAV to acquire the depth image is generally to actively emit an energy beam, then detect the returned energy, and finally calculate the depth map accordingly.
  • however, this method is susceptible to the surrounding environment, for example the effect of ambient light on the laser;
  • the method also requires that the object to be measured be able to reflect energy, and if most of the emitted energy is absorbed, the method fails;
  • the measurable range of the method is limited because the emitted energy attenuates in the atmosphere; if the distance is too far, the attenuation is severe and the depth information cannot be measured accurately.
  • binocular-vision-based methods require two cameras separated by a certain distance; the farther the distance to be measured, the larger the required spacing between the two cameras. For a small drone this adds load, and the limited space of the small drone limits the maximum distance between the two cameras.
  • the present invention aims to solve at least one of the technical problems in the related art described above to some extent.
  • the object of the present invention is to provide a method for acquiring a depth image of a drone, an apparatus for acquiring a depth image of a drone, and a drone, which can accurately acquire the depth image of the drone and have the advantages of wide applicability, low cost, and easy implementation.
  • an embodiment of the first aspect of the present invention provides a method for acquiring a depth image of a drone, comprising the steps of: reading an image sequence of a predetermined scene collected by an onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping region and the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio; acquiring position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and obtaining, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system; obtaining the actual flying speed of the drone in the world coordinate system; and determining the depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene.
  • consecutive images captured by the onboard camera of the drone are read, and the position change information of each pixel in the overlapping area of two consecutive frames is computed to obtain the pixel moving speed of each pixel in the camera coordinate system; the actual flying speed of the drone in the world coordinate system is measured with equipment such as the drone's onboard GPS; and finally the depth image of the drone is calculated from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height.
  • the method can accurately acquire the depth image, and the operation flow is simple and easy to implement.
  • the method is implemented with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
  • the method for acquiring the depth image of the drone according to the above embodiment of the present invention may further have the following additional technical features:
  • acquiring the position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image and obtaining, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system may further include: calculating the moving distance of each pixel of the overlapping region in the camera coordinate system; and differentiating the moving distance of each pixel of the overlapping region in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping region in the camera coordinate system.
  • the calculating of the moving distance of each pixel of the overlapping region in the camera coordinate system further includes: obtaining movement information of the same pixel point from its position information in the Nth frame image and its position information in the (N+1)th frame image, and obtaining the moving distance of that pixel point in the camera coordinate system from the movement information.
  • obtaining a depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene, further includes: establishing the relationship among the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone; obtaining the depth value of each pixel of the overlapping region from the relationship; and obtaining a depth image of each overlapping region from the depth values of the pixels of each overlapping region, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene.
  • the method further includes: determining whether the directions of the camera coordinate system and the world coordinate system are consistent; if the directions are inconsistent, the direction of the camera coordinate system is adjusted so that it coincides with the direction of the world coordinate system.
  • the on-board camera has a field of view that is lower than a predetermined angle, the predetermined angle being less than or equal to 60 degrees.
  • before the position change information of each pixel of the Nth frame image within the overlapping area in the (N+1)th frame image is acquired and the pixel moving speed of each pixel of the overlapping area in the camera coordinate system is obtained from it, the method further includes: correcting distortion of the images in the image sequence.
  • an embodiment of the second aspect of the present invention further provides an apparatus for acquiring a depth image of a drone, comprising: a reading module, configured to read an image sequence of a predetermined scene collected by an onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping region and the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio; a calculation module, configured to acquire position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and to obtain, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system; a measurement module, configured to acquire the actual flying speed of the drone in the world coordinate system; and an image generation module, configured to obtain a depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and to integrate the depth images of the overlapping regions to obtain the depth image of the predetermined scene.
  • in the apparatus for acquiring a depth image of a drone, the reading module reads consecutive images captured by the onboard camera; the calculation module computes the position change information of each pixel in the overlapping area of two consecutive frames to obtain the pixel moving speed of each pixel in the camera coordinate system; the measurement module obtains the actual flying speed of the drone in the world coordinate system, measured for example by the drone's onboard GPS; and finally the image generation module calculates the depth image of the drone from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height.
  • the apparatus for acquiring a depth image can therefore accurately acquire the depth image.
  • the apparatus is implemented with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
  • the apparatus for acquiring a depth image of a drone may further have the following additional technical features:
  • the calculation module is configured to: calculate the moving distance of each pixel of the overlapping area in the camera coordinate system; and differentiate the moving distance of each pixel of the overlapping area in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping area in the camera coordinate system.
  • the calculation module is configured to: obtain movement information of the same pixel point from its position information in the Nth frame image and its position information in the (N+1)th frame image, and obtain from it the moving distance of that pixel point in the camera coordinate system.
  • the image generation module is configured to establish the relationship among the pixel moving speed of each pixel of the overlapping area in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone; to obtain the depth value of each pixel of the overlapping area from the relationship; and to obtain a depth image of each overlapping area from the depth values of the pixels of each overlapping area, integrating the depth images of the overlapping areas to obtain the depth image of the predetermined scene.
  • the apparatus further includes: an adjustment module, configured to determine whether the directions of the camera coordinate system and the world coordinate system are consistent, and, when they are inconsistent, to adjust the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.
  • the on-board camera has a field of view that is lower than a predetermined angle, the predetermined angle being less than or equal to 60 degrees.
  • the onboard camera is also operative to correct distortion of an image in the sequence of images.
  • an embodiment of a third aspect of the present invention provides an unmanned aerial vehicle, comprising: an onboard camera for acquiring an image sequence of a predetermined scene; a speed measuring device for measuring or calculating the actual flight speed of the drone in the world coordinate system; a processor configured to perform the method for acquiring the depth image of the drone of the first aspect of the present invention; and a body for mounting the onboard camera, the speed measuring device, and the processor.
  • the drone further includes a self-stabilizing gimbal, and the onboard camera is mounted on the body through the self-stabilizing gimbal.
  • FIG. 1 is a flow chart of a method for acquiring a depth image of a drone according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a specific model of a drone acquiring a depth image according to an embodiment of the present invention
  • FIG. 3 is a block diagram showing the structure of an apparatus for acquiring a depth image of a drone according to an embodiment of the present invention.
  • FIG. 1 is a flow chart of a method for acquiring a depth image of a drone according to an embodiment of the present invention. As shown in Figure 1, the method includes the following steps:
  • Step S1: read an image sequence of a predetermined scene collected by the onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping area and the ratio of the area of the overlapping area to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio.
  • in other words, the onboard camera of the drone captures an image sequence of the object to be measured, and two consecutive frames are extracted from it, for example the Nth frame image and the (N+1)th frame image; there must be an overlapping area between the Nth frame image and the (N+1)th frame image.
  • to guarantee the accuracy of the subsequent optical flow computation, the ratio of the area of the overlapping area to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio. More specifically, in one embodiment of the present invention, the preset ratio is, for example, 60%, that is, the area of the overlapping area is more than 60% of the area of the Nth or (N+1)th frame image.
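  • As an illustrative aside that is not part of the original disclosure: the preset overlap ratio constrains how often frames must be captured. Assuming a downward-looking camera and the pinhole model used later in this document, the longest admissible frame interval can be estimated from the flying height, the flying speed, and the field of view. A minimal Python sketch (all names and numbers are hypothetical):

```python
import math

def max_frame_interval(height_m: float, speed_mps: float,
                       fov_deg: float, min_overlap: float = 0.6) -> float:
    """Longest time between frames (seconds) that still preserves the
    requested forward overlap for a downward-looking pinhole camera."""
    # Ground footprint of one image along the flight direction.
    footprint = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    # Overlap fraction = 1 - (distance flown between frames) / footprint.
    return (1.0 - min_overlap) * footprint / speed_mps

# Example: 100 m altitude, 5 m/s ground speed, 60-degree FOV, 60% overlap
# -> roughly 9.2 s between frames at most.
print(max_frame_interval(100.0, 5.0, 60.0))
```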
  • to guarantee the quality of the captured images and to eliminate the interference that vibration of the drone itself would introduce into the subsequent optical flow computation, the onboard camera is mounted on the drone through a self-stabilizing gimbal, as shown in Figure 2.
  • meanwhile, to reduce the effect of distortion in the captured images, the field of view of the onboard camera should not be too large; the selected field of view of the onboard camera is lower than a preset angle, more specifically, for example, 60 degrees, as shown in FIG. 2.
  • the value of the preset angle is not limited to this and may be chosen according to the actual scene (for example, the preset angle may also be less than 60 degrees); the description here is only illustrative.
  • if the distortion of the captured images is severe, the distortion of the images in the image sequence is corrected so that it is within a usable range before subsequent operations are performed.
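  • A minimal sketch of such a distortion correction using OpenCV (an assumed toolchain; the intrinsic matrix and distortion coefficients below are placeholders that would come from calibrating the actual onboard camera):

```python
import cv2
import numpy as np

# Placeholder intrinsics from a prior calibration: fx, fy, cx, cy in pixels.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
# Placeholder distortion coefficients (k1, k2, p1, p2, k3).
dist = np.array([-0.10, 0.02, 0.0, 0.0, 0.0])

frame = cv2.imread("frame_n.png")            # one frame of the sequence
undistorted = cv2.undistort(frame, K, dist)  # corrected frame for later steps
```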
  • Step S2: acquire the position change information of each pixel of the Nth frame image within the overlapping area in the (N+1)th frame image, and obtain, from the position change information, the pixel moving speed of each pixel of the overlapping area in the camera coordinate system.
  • the change of the position information (i.e., the position change information) of each pixel of the Nth frame image within the overlapping area in the (N+1)th frame image may be obtained with an optical flow method based on feature matching, and the pixel moving speed of each pixel of the overlapping area in the camera coordinate system is obtained from the change of the position information.
  • the step S2 may further include:
  • Step S21: calculate the moving distance of each pixel of the overlapping area in the camera coordinate system.
  • the moving distance of each pixel of the overlapping area in the camera coordinate system may be obtained with the optical flow method based on feature matching.
  • specifically, calculating the moving distance of each pixel of the overlapping area in the camera coordinate system may include: obtaining movement information of a pixel point from the position information of the same pixel point in the Nth frame image and its position information in the (N+1)th frame image, and obtaining the moving distance of the pixel point in the camera coordinate system from the movement information.
  • the calculation formula of the moving distance of each pixel of the overlapping area in the camera coordinate system is: (u_x, u_y) = (x2 - x1, y2 - y1), where (x1, y1) is the position information of the pixel in the Nth frame image, (x2, y2) is the position information of the pixel in the (N+1)th frame image, and (u_x, u_y) is the distance the pixel moved in the camera coordinate system.
  • Step S22: differentiate the moving distance of each pixel of the overlapping area in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping area in the camera coordinate system.
  • the optical flow method based on feature matching matches the position of each pixel of the Nth frame image in the (N+1)th frame image, thereby computing the moving distance of each pixel from the Nth frame image to the (N+1)th frame image, and the pixel moving speed of each pixel in the camera coordinate system is then obtained from this moving distance.
  • the optical flow method based on feature matching includes a dense algorithm and a sparse algorithm.
  • in the dense algorithm every pixel of the image participates in the computation, so the pixel moving speed of every pixel in the image is obtained; the sparse optical flow method selects a subset of pixels in the image that are easy to track and performs the optical flow computation on these selected pixels, obtaining the pixel moving speeds of these easily tracked pixels.
  • the feature-matching-based optical flow method used is, for example, a dense optical flow method. It should be noted that using the optical flow method based on feature matching to calculate the pixel moving speed of each pixel in the camera coordinate system is only one embodiment of the present invention and is not to be construed as limiting it; other methods that can calculate the pixel moving speed of each pixel in the camera coordinate system are also applicable to the present invention and also fall within its scope.
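  • As a hedged illustration of the dense variant (the patent does not prescribe a specific algorithm; Farneback dense flow is one common choice), the per-pixel displacement (u_x, u_y) divided by the frame interval approximates the differentiation of step S22:

```python
import cv2
import numpy as np

def pixel_speed(frame_n: np.ndarray, frame_n1: np.ndarray,
                dt: float) -> np.ndarray:
    """Per-pixel speed magnitude, in pixels per second, between two frames."""
    g0 = cv2.cvtColor(frame_n, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_n1, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (u_x, u_y): displacement of each pixel, in pixels.
    flow = cv2.calcOpticalFlowFarneback(
        g0, g1, None,
        0.5,   # pyr_scale: image scale between pyramid levels
        3,     # levels: number of pyramid levels
        15,    # winsize: averaging window size
        3,     # iterations per pyramid level
        5,     # poly_n: pixel neighbourhood for polynomial expansion
        1.2,   # poly_sigma: Gaussian sigma for the expansion
        0)     # flags
    # Finite-difference stand-in for the differentiation of step S22.
    return np.linalg.norm(flow, axis=2) / dt
```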
  • Step S3: obtain the actual flight speed of the drone in the world coordinate system.
  • the actual flight speed of the drone in the world coordinate system is measured or calculated by a speed measuring device such as GNSS positioning and speed measurement (for example, GPS or BeiDou), an airspeed tube, or a radar, and the measured or calculated flight speed of the drone in the world coordinate system is obtained.
  • Step S4: obtain the depth image of each overlapping area from the pixel moving speed of each pixel of each overlapping area in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and integrate the depth images of the overlapping areas to obtain the depth image of the predetermined scene.
  • the parameters of the onboard camera include, for example, the focal length of the onboard camera.
  • the onboard camera can be mounted on the self-stabilizing gimbal, so the angular velocity of the onboard camera can be regarded as always 0 (or close to 0) when photos are taken.
  • the step S4 further includes:
  • Step S41: establish the relationship among the pixel moving speed of each pixel of the overlapping area in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone.
  • specifically, the relationship is established according to the pinhole imaging principle, and the relationship can be: v / v_m = f / Z, where v_m is the actual flight speed of the drone in the world coordinate system, v is the pixel moving speed of each pixel of the overlapping area in the camera coordinate system, Z is the flying height of the drone, and f is the focal length of the onboard camera.
  • Step S42: transform the expression of the relationship described in step S41 above to obtain the depth value of each pixel of the overlapping area: Z_i = f · v_m / v_i, where Z_i is the depth value of the i-th pixel in the overlapping area, v_i is the pixel moving speed of the i-th pixel in the camera coordinate system, v_m is the actual flying speed of the drone in the world coordinate system, and f is the focal length of the onboard camera, which is a known constant.
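  • A minimal sketch of steps S41 and S42 under the stated assumptions (gimbal-stabilised camera, focal length expressed in pixels so that the depth comes out in the units of v_m; the helper below is hypothetical, not the patent's own code):

```python
import numpy as np

def depth_map(pixel_speed_px_s: np.ndarray, v_m: float, f_px: float,
              eps: float = 1e-6) -> np.ndarray:
    """Z_i = f * v_m / v_i for every pixel of the overlap region.

    pixel_speed_px_s : per-pixel speed v_i, pixels/second
    v_m              : drone speed in the world frame, metres/second
    f_px             : focal length in pixels (known camera constant)
    Returns depth in metres; eps guards against division by zero.
    """
    return f_px * v_m / np.maximum(pixel_speed_px_s, eps)

# Sanity check: f = 1000 px, v_m = 5 m/s, v_i = 50 px/s -> Z = 100 m.
print(depth_map(np.array([[50.0]]), v_m=5.0, f_px=1000.0))
```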
  • Step S43: obtain the depth image of each overlapping area from the depth values of the pixels of each overlapping area obtained in step S42 above, and integrate the depth images of the overlapping areas to obtain the depth image of the predetermined scene (the measured object).
  • the above process further comprises: determining whether the directions of the camera coordinate system and the world coordinate system are consistent, and, if they are inconsistent, adjusting the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.
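  • One way such an alignment could be realised (an implementation assumption; the patent leaves the mechanism open) is to rotate the world-frame velocity into the camera frame before applying the relation above. With a gimbal-stabilised, downward-looking camera only the heading differs between the two frames:

```python
import math
import numpy as np

def align_velocity(v_world_xy: np.ndarray, yaw_rad: float) -> np.ndarray:
    """Rotate a 2-D world-frame velocity into the camera frame.

    yaw_rad is the camera heading relative to the world axes; pitch and
    roll are assumed to be held near zero by the self-stabilising gimbal.
    """
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    R = np.array([[ c, s],
                  [-s, c]])  # rotation about the vertical axis
    return R @ v_world_xy

# Example: flying north at 5 m/s with the camera yawed 90 degrees.
print(align_velocity(np.array([0.0, 5.0]), math.pi / 2))  # -> [5. 0.]
```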
  • the present invention combines the pixel speed of each pixel in the camera coordinate system with the actual flying speed of the drone itself to calculate the image depth data, and then obtain the depth image. Therefore, any method that combines the image speed (the pixel moving speed of each pixel of the drone in the camera coordinate system) with the actual flying speed of the drone itself to obtain a depth image should be within the scope of the present invention.
  • in summary, consecutive images are captured by the onboard camera of the drone, and the position change information of each pixel in the overlapping area of two consecutive frames is computed to obtain the pixel moving speed of each pixel of the drone in the camera coordinate system; the actual flying speed of the drone in the world coordinate system is measured with equipment such as the drone's onboard GPS; and finally the depth image of the drone is calculated from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height.
  • the method can accurately acquire the depth image, and the operation flow is simple and easy to implement.
  • the method is implemented with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
  • a further embodiment of the present invention also provides an apparatus for acquiring a depth image of a drone.
  • the drone image acquisition apparatus 100 includes a reading module 110, a calculation module 120, a measurement module 130, and an image generation module 140.
  • the reading module 110 is configured to read an image sequence of a predetermined scene collected by the onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping area and the ratio of the area of the overlapping area to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio.
  • in other words, the onboard camera captures an image sequence of the measured object, and two consecutive frames are extracted from it, for example the Nth frame image and the (N+1)th frame image; there must be an overlapping area between the Nth frame image and the (N+1)th frame image.
  • to guarantee the accuracy of the subsequent optical flow computation, the ratio of the area of the overlapping area to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio, for example 60%, that is, the area of the overlapping area is more than 60% of the area of the Nth frame image or the (N+1)th frame image.
  • the onboard camera is mounted on the drone, for example, through the self-stabilizing gimbal.
  • meanwhile, to reduce the effect of distortion in the captured images, the field of view of the onboard camera should not be too large; the selected field of view of the onboard camera is lower than a preset angle, more specifically, for example, 60 degrees.
  • the value of the preset angle is not limited to this and may be chosen according to the actual scene; for example, the preset angle may also be less than 60 degrees. The description here is only illustrative.
  • if the distortion of the captured images is severe, the reading module 110 is further configured to correct the distortion of the images in the image sequence, so that the distortion is within a usable range before subsequent operations are performed.
  • the calculation module 120 is configured to acquire the position change information of each pixel of the Nth frame image within the overlapping area in the (N+1)th frame image, and to obtain, from the position change information, the pixel moving speed of each pixel of the overlapping area in the camera coordinate system. Specifically, for example, the calculation module 120 obtains, with an optical flow method based on feature matching, the change of the position information (i.e., the position change information) of each pixel of the Nth frame image within the overlapping area in the (N+1)th frame image, and obtains the pixel moving speed of each pixel of the overlapping area in the camera coordinate system from the change of the position information.
  • the calculation module 120 is configured to obtain the moving distance of each pixel of the overlapping area in the camera coordinate system with the optical flow method based on feature matching, which specifically includes: obtaining movement information of a pixel point from the position information of the same pixel point in the Nth frame image and its position information in the (N+1)th frame image, and obtaining the moving distance of the pixel point in the camera coordinate system from the movement information; then differentiating the moving distance of each pixel of the overlapping area in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping area in the camera coordinate system.
  • the calculation formula of the moving distance of each pixel of the overlapping area in the camera coordinate system is: (u_x, u_y) = (x2 - x1, y2 - y1), where (x1, y1) is the position information of the pixel in the Nth frame image, (x2, y2) is the position information of the pixel in the (N+1)th frame image, and (u_x, u_y) is the distance the pixel moved in the camera coordinate system.
  • the optical flow method based on feature matching computes, by matching the position of each pixel of the Nth frame image in the (N+1)th frame image, the moving distance of each pixel from the Nth frame image to the (N+1)th frame image, and the pixel moving speed of each pixel of the drone in the camera coordinate system is then obtained from this moving distance.
  • the optical flow method based on feature matching includes a dense algorithm and a sparse algorithm.
  • in the dense algorithm every pixel of the image participates in the computation, so the pixel moving speed of every pixel in the image is obtained; the sparse optical flow method selects a subset of pixels in the image that are easy to track and performs the optical flow computation on these selected pixels, obtaining the pixel moving speeds of these easily tracked pixels.
  • the feature-matching-based optical flow method used may be a dense optical flow method. It should be noted that using the optical flow method based on feature matching to calculate the pixel moving speed of each pixel in the camera coordinate system is only one embodiment of the present invention and is not to be construed as limiting it; other methods that can calculate the pixel moving speed of each pixel in the camera coordinate system are also applicable to the present invention and also fall within its scope.
  • the measurement module 130 is configured to acquire the actual flight speed of the drone in the world coordinate system.
  • in a specific implementation, the actual flight speed of the drone in the world coordinate system is obtained, for example, through GPS, BeiDou, an airspeed tube, or a radar.
  • the image generation module 140 is configured to obtain the depth image of each overlapping area from the pixel moving speed of each pixel of each overlapping area in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and to integrate the depth images of the overlapping areas to obtain the depth image of the predetermined scene.
  • the parameters of the onboard camera include, for example, the focal length of the onboard camera.
  • the image generation module 140 is configured to establish the relationship among the pixel moving speed of each pixel of the overlapping area in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone.
  • the image generation module 140 establishes this relationship, for example, according to the pinhole imaging principle, and the relationship can be: v / v_m = f / Z, where v_m is the actual flight speed of the drone in the world coordinate system, v is the pixel moving speed of each pixel of the overlapping area in the camera coordinate system, Z is the flying height of the drone, and f is the focal length of the onboard camera.
  • the depth value of each pixel of the overlapping area is then obtained from the above relationship: Z_i = f · v_m / v_i, where Z_i is the depth value of the i-th pixel in the overlapping area, v_i is the pixel moving speed of the i-th pixel in the camera coordinate system, v_m is the actual flying speed of the drone in the world coordinate system, and f is the focal length of the onboard camera, which is a known constant.
  • finally, the depth image of each overlapping area is obtained from the depth values of the pixels of each overlapping area, and the depth images of the overlapping areas are integrated to obtain the depth image of the predetermined scene (the measured object).
  • the drone image acquisition apparatus 100 further includes, for example, an adjustment module (not shown).
  • the adjustment module is configured to determine whether the directions of the camera coordinate system and the world coordinate system are consistent, and, when they are inconsistent, to adjust the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.
  • the present invention combines the pixel speed of each pixel in the camera coordinate system with the actual flying speed of the drone itself to calculate the image depth data, and then obtain the depth image. Therefore, any method that combines the image speed (the pixel moving speed of each pixel of the drone in the camera coordinate system) with the actual flying speed of the drone itself to obtain a depth image should be within the scope of the present invention.
  • in the apparatus for acquiring a depth image of a drone, the reading module reads consecutive images captured by the onboard camera of the drone; the position change information of each pixel in the overlapping area of two consecutive frames is computed to obtain the pixel moving speed of each pixel of the drone in the camera coordinate system; equipment such as the drone's onboard GPS is then used to measure the actual flying speed of the drone in the world coordinate system; and finally the depth image of the drone is calculated from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height, so the apparatus for acquiring a depth image of a drone can accurately acquire the depth image.
  • the apparatus is implemented with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
  • a further embodiment of the invention also proposes a drone.
  • the drone includes an onboard camera, a speed measuring device, a processor, and a body; the onboard camera and the speed measuring device are respectively connected to the processor, and the body is used for mounting the onboard camera, the speed measuring device, and the processor.
  • the onboard camera is configured to acquire an image sequence of a predetermined scene; the speed measuring device is configured to measure or calculate the actual flight speed of the drone in the world coordinate system. In a specific implementation, the speed measuring device may be a GNSS positioning and speed measurement device (such as GPS or BeiDou), an airspeed tube, a radar, or the like; the present invention is not limited to a particular speed measuring device, and any device that can measure or calculate the actual flying speed of the drone in the world coordinate system falls within the scope of the present invention.
  • the processor is configured to perform the above-described method of acquiring a drone depth image; in other words, the processor includes the apparatus for acquiring a drone depth image described in the above embodiments.
  • the drone further includes a self-stabilizing gimbal, and the onboard camera can be mounted on the body through the self-stabilizing gimbal.
  • the onboard camera, the speed measuring device, and the processor are all mounted on the body; consecutive images are captured by the onboard camera, and the processor reads the images captured by the onboard camera.
  • the position change information of each pixel in the overlapping area of two consecutive images is then computed to obtain the pixel moving speed of each pixel in the camera coordinate system, and a speed measuring device such as the drone's onboard GPS measures the actual flight speed of the drone in the world coordinate system.
  • finally, the depth image of the drone is calculated from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height, so the drone can accurately acquire the depth image.
  • the drone is realized with equipment already on existing drones, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
  • the disclosed technical contents may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules may be a division by logical function; in actual implementation there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A method and apparatus for acquiring a depth image of a drone, and a drone. The method comprises: reading an image sequence of a predetermined scene collected by an onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping region and the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio (S1); acquiring position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and obtaining, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system (S2); acquiring the actual flying speed of the drone in the world coordinate system (S3); and obtaining a depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene (S4). The method can accurately acquire depth images and has the advantages of wide applicability, low cost, and easy implementation.

Description

Method and apparatus for acquiring a depth image of a drone, and drone

Technical Field

The present invention relates to the field of image processing technologies, and in particular to a method and apparatus for acquiring a depth image of a drone, and to a drone.

Background

The traditional imaging scheme converts a three-dimensional image model into a two-dimensional grayscale image, and the depth information of the image is lost during imaging. However, the depth information of the image is very important for subsequent applications (such as three-dimensional reconstruction and geographic mapping), so obtaining depth maps is of great significance both for theoretical research and for engineering practice.

Existing methods for obtaining a depth image are generally active acquisition methods, which actively emit energy such as laser, electromagnetic waves, or ultrasonic waves; the energy is reflected by obstacles and then received. Passive measurement methods are based on machine vision, such as binocular vision.

At present, the method by which a drone acquires a depth image is generally to actively emit an energy beam, detect the returned energy, and calculate the depth map accordingly. However, this method is easily affected by the surrounding environment, for example the effect of ambient light on the laser. Moreover, it requires that the measured object be able to reflect energy; if most of the emitted energy is absorbed, the method fails. Finally, the measurable range of the method is limited, because the emitted energy attenuates in the atmosphere; if the distance is too far, the attenuation is severe and the depth information cannot be measured accurately. On the other hand, binocular-vision-based methods require two cameras with a certain distance between them; the farther the distance to be measured, the larger the required spacing between the two cameras. For a small drone this increases the load, and the limited space of a small drone limits the maximum distance between the two cameras.
Summary of the Invention

The present invention aims to solve, at least to some extent, at least one of the technical problems in the related art described above.

To this end, an object of the present invention is to propose a method for acquiring a depth image of a drone, an apparatus for acquiring a depth image of a drone, and a drone. The method can accurately acquire the depth image of the drone and has the advantages of wide applicability, low cost, and easy implementation.

To achieve the above object, an embodiment of the first aspect of the present invention proposes a method for acquiring a depth image of a drone, comprising the following steps: reading an image sequence of a predetermined scene collected by an onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping region and the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio; acquiring position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and obtaining, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system; acquiring the actual flying speed of the drone in the world coordinate system; and obtaining a depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene.

According to the method for acquiring a depth image of a drone of the embodiments of the present invention, consecutive images captured by the onboard camera of the drone are read; the pixel moving speed of each pixel in the camera coordinate system is obtained by computing the position change information of each pixel in the overlapping region of two consecutive frames; the actual flying speed of the drone in the world coordinate system is measured with equipment such as the drone's onboard GPS; and finally the depth image of the drone is calculated from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height. The method can accurately acquire the depth image, and the operation flow is simple and easy to implement. Meanwhile, there is no particular requirement on whether the measured object can reflect energy, the measurable distance is sufficiently far, there is no energy attenuation problem, and the applicable range is wide. In addition, the method is implemented entirely with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.

In addition, the method for acquiring a depth image of a drone according to the above embodiments of the present invention may further have the following additional technical features:
In some examples, acquiring the position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image and obtaining, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system may further include: calculating the moving distance of each pixel of the overlapping region in the camera coordinate system; and differentiating the moving distance of each pixel of the overlapping region in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping region in the camera coordinate system.

In some examples, calculating the moving distance of each pixel of the overlapping region in the camera coordinate system further includes: obtaining movement information of the same pixel point from its position information in the Nth frame image and its position information in the (N+1)th frame image, and obtaining the moving distance of that pixel point in the camera coordinate system from the movement information.

In some examples, obtaining a depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene, further includes: establishing the relationship among the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone;

obtaining the depth value of each pixel of the overlapping region from the relationship; and

obtaining a depth image of each overlapping region from the depth values of the pixels of each overlapping region, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene.

In some examples, the method further includes: determining whether the directions of the camera coordinate system and the world coordinate system are consistent; and if they are inconsistent, adjusting the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.

In some examples, the field of view of the onboard camera is lower than a preset angle, and the preset angle is less than or equal to 60 degrees.

In some examples, before acquiring the position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image and obtaining, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, the method further includes: correcting distortion of the images in the image sequence.
To achieve the above object, an embodiment of the second aspect of the present invention further provides an apparatus for acquiring a depth image of a drone, comprising: a reading module, configured to read an image sequence of a predetermined scene collected by an onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping region and the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio; a calculation module, configured to acquire position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and to obtain, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system; a measurement module, configured to acquire the actual flying speed of the drone in the world coordinate system; and an image generation module, configured to obtain a depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and to integrate the depth images of the overlapping regions to obtain the depth image of the predetermined scene.

According to the apparatus for acquiring a depth image of a drone of the embodiments of the present invention, the reading module reads consecutive images captured by the onboard camera; the calculation module computes the position change information of each pixel in the overlapping region of two consecutive frames to obtain the pixel moving speed of each pixel in the camera coordinate system; the measurement module obtains the actual flying speed of the drone in the world coordinate system, measured for example by the drone's onboard GPS; and finally the image generation module calculates the depth image of the drone from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height. The apparatus can therefore accurately acquire the depth image. Meanwhile, there is no particular requirement on whether the measured object can reflect energy, the measurable distance is sufficiently far, there is no energy attenuation problem, and the applicable range is wide. In addition, the apparatus is implemented entirely with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.

In addition, the apparatus for acquiring a depth image of a drone according to the above embodiments of the present invention may further have the following additional technical features:

In some examples, the calculation module is configured to: calculate the moving distance of each pixel of the overlapping region in the camera coordinate system; and differentiate the moving distance of each pixel of the overlapping region in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping region in the camera coordinate system.

In some examples, the calculation module is configured to: obtain movement information of the same pixel point from its position information in the Nth frame image and its position information in the (N+1)th frame image, and obtain from it the moving distance of that pixel point in the camera coordinate system.
In some examples, the image generation module is configured to establish the relationship among the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone;

to obtain the depth value of each pixel of the overlapping region from the relationship; and

to obtain a depth image of each overlapping region from the depth values of the pixels of each overlapping region, and to integrate the depth images of the overlapping regions to obtain the depth image of the predetermined scene. In some examples, the apparatus further includes: an adjustment module configured to determine whether the directions of the camera coordinate system and the world coordinate system are consistent, and, when they are inconsistent, to adjust the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.

In some examples, the field of view of the onboard camera is lower than a preset angle, and the preset angle is less than or equal to 60 degrees.

In some examples, the onboard camera is further used to correct distortion of the images in the image sequence.

To achieve the above object, an embodiment of the third aspect of the present invention proposes a drone, comprising: an onboard camera for collecting an image sequence of a predetermined scene; a speed measuring device for measuring or calculating the actual flying speed of the drone in the world coordinate system; a processor configured to perform the method for acquiring a depth image of a drone of the above first-aspect embodiment of the present invention; and a body for mounting the onboard camera, the speed measuring device, and the processor.

In some examples, the drone further includes a self-stabilizing gimbal, and the onboard camera is mounted on the body through the self-stabilizing gimbal.

Additional aspects and advantages of the present invention will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of the present invention.
Brief Description of the Drawings

The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:

FIG. 1 is a flow chart of a method for acquiring a depth image of a drone according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a specific model of a drone acquiring a depth image according to an embodiment of the present invention; and

FIG. 3 is a structural block diagram of an apparatus for acquiring a depth image of a drone according to an embodiment of the present invention.

Detailed Description

Embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended only to explain the present invention; they shall not be construed as limiting the present invention.

A method for acquiring a depth image of a drone and a drone according to embodiments of the present invention are described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for acquiring a depth image of a drone according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:

Step S1: read an image sequence of a predetermined scene collected by the onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping region and the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio. In other words, the onboard camera of the drone captures an image sequence of the measured object, and two consecutive frames are extracted from it, for example the Nth frame image and the (N+1)th frame image; there must be an overlapping region between the Nth frame image and the (N+1)th frame image. To guarantee the accuracy of the subsequent optical flow computation, the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio. More specifically, in one embodiment of the present invention, the preset ratio is, for example, 60%, that is, the area of the overlapping region is more than 60% of the area of the Nth frame image or the (N+1)th frame image.

Further, in one embodiment of the present invention, to guarantee the quality of the images captured by the onboard camera and to eliminate the interference that vibration of the drone itself would introduce into the subsequent optical flow computation, the onboard camera is mounted on the drone through a self-stabilizing gimbal, for example as shown in FIG. 2. Meanwhile, to reduce the effect of distortion in the captured images, the field of view of the onboard camera should not be too large. In one embodiment of the present invention, the selected field of view of the onboard camera is lower than a preset angle, more specifically, for example, 60 degrees, as shown in FIG. 2. Of course, the value of the preset angle is not limited to this and may be chosen according to the actual scene (for example, the preset angle may also be less than 60 degrees); the description here is only illustrative.

Further, in some examples, as described above, if the distortion of the images captured by the onboard camera is severe, the distortion of the images in the image sequence needs to be corrected so that it is within a usable range before subsequent operations are performed.
Step S2: acquire the position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and obtain, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system.

In some examples, the change of the position information (i.e., the position change information) of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image may be obtained, for example, with an optical flow method based on feature matching, and the pixel moving speed of each pixel of the overlapping region in the camera coordinate system is then obtained from the change of the position information.

In one embodiment of the present invention, step S2 may further include:

Step S21: calculate the moving distance of each pixel of the overlapping region in the camera coordinate system. Specifically, in some examples, the moving distance of each pixel of the overlapping region in the camera coordinate system may be obtained, for example, with an optical flow method based on feature matching.

In one embodiment of the present invention, calculating the moving distance of each pixel of the overlapping region in the camera coordinate system may specifically include: obtaining movement information of a pixel point from the position information of the same pixel point in the Nth frame image and its position information in the (N+1)th frame image, and obtaining the moving distance of the pixel point in the camera coordinate system from the movement information. As a specific example, the calculation formula of the moving distance of each pixel of the overlapping region in the camera coordinate system is:
(u_x, u_y) = (x2 - x1, y2 - y1),

where (x1, y1) is the position information of the pixel in the Nth frame image, (x2, y2) is the position information of the pixel in the (N+1)th frame image, and (u_x, u_y) is the moving distance of the pixel in the camera coordinate system.
Step S22: differentiate the moving distance of each pixel of the overlapping region in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping region in the camera coordinate system.

In other words, to explain with a specific example: the optical flow method based on feature matching matches the position of each pixel of the Nth frame image in the (N+1)th frame image, thereby computing the moving distance of each pixel from the Nth frame image to the (N+1)th frame image, and the pixel moving speed of each pixel in the camera coordinate system is then obtained from this moving distance. Optical flow methods based on feature matching include dense algorithms and sparse algorithms. In a dense algorithm every pixel of the image participates in the computation, so the pixel moving speed of every pixel in the image is obtained; a sparse optical flow method selects a subset of pixels in the image that are easy to track and performs the optical flow computation on these selected pixels, obtaining their pixel moving speeds. In one embodiment of the present invention, the optical flow method based on feature matching that is used is, for example, a dense optical flow method. It should be noted that using an optical flow method based on feature matching to calculate the pixel moving speed of each pixel in the camera coordinate system is only one implementation of the present invention and shall not be construed as limiting it; other methods that can calculate the pixel moving speed of each pixel in the camera coordinate system are also applicable to the present invention and also fall within its scope of protection.
Step S3: acquire the actual flying speed of the drone in the world coordinate system.

In a specific implementation, the actual flying speed of the drone in the world coordinate system is measured or calculated by a speed measuring device such as GNSS positioning and speed measurement (for example, GPS or BeiDou), an airspeed tube, or a radar, and the measured or calculated flying speed of the drone in the world coordinate system is obtained.

Step S4: obtain the depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and integrate the depth images of the overlapping regions to obtain the depth image of the predetermined scene. In one embodiment of the present invention, the parameters of the onboard camera include, for example, the focal length of the onboard camera.

Specifically, the onboard camera can be mounted on the self-stabilizing gimbal, so the angular velocity of the onboard camera can be regarded as always 0 while photos are taken. In the case where the angular velocity of the onboard camera is always 0 or close to 0 while photos are taken, step S4 further includes:
Step S41: establish the relationship among the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone. Specifically, for example, the relationship is established according to the pinhole imaging principle, and the relationship can be:

v / v_m = f / Z,

where v_m is the actual flying speed of the drone in the world coordinate system, v is the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, Z is the flying height of the drone, and f is the focal length of the onboard camera.
Step S42: transform the expression of the relationship described in step S41 above to obtain the depth value of each pixel of the overlapping region:

Z_i = f · v_m / v_i,

where Z_i is the depth value of the i-th pixel in the overlapping region, v_i is the pixel moving speed of the i-th pixel in the camera coordinate system, v_m is the actual flying speed of the drone in the world coordinate system, and f is the focal length of the onboard camera, which is a known constant.
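As a worked illustration of this formula (the numbers here are illustrative assumptions, not values from the original disclosure): with a focal length f = 800 pixels, an actual flying speed v_m = 6 m/s, and a pixel moving speed v_i = 48 pixels/s, the depth is Z_i = 800 × 6 / 48 = 100 m; a pixel moving twice as fast (96 pixels/s) would correspond to ground that is twice as close (50 m).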
Step S43: obtain the depth image of each overlapping region from the depth values of the pixels of each overlapping region obtained in step S42 above, and integrate the depth images of the overlapping regions to obtain the depth image of the predetermined scene (the measured object).

In one embodiment of the present invention, the above process further includes, for example: determining whether the directions of the camera coordinate system and the world coordinate system are consistent, and if they are inconsistent, adjusting the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.

In summary, the present invention combines the pixel speed of each pixel in the camera coordinate system with the actual flying speed of the drone itself to calculate the image depth data and thereby obtain the depth image. Therefore, any method that obtains a depth image by combining the image speed (the pixel moving speed of each pixel in the camera coordinate system) with the actual flying speed of the drone itself falls within the scope of protection of the present invention.

In summary, according to the method for acquiring a depth image of a drone of the embodiments of the present invention, consecutive images are captured by the onboard camera of the drone; the pixel moving speed of each pixel in the camera coordinate system is obtained by computing the position change information of each pixel in the overlapping region of two consecutive frames; the actual flying speed of the drone in the world coordinate system is measured with equipment such as the drone's onboard GPS; and finally the depth image of the drone is calculated from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height. The method can accurately acquire the depth image, and the operation flow is simple and easy to implement. Meanwhile, there is no particular requirement on whether the measured object can reflect energy, the measurable distance is sufficiently far, there is no energy attenuation problem, and the applicable range is wide. In addition, the method is implemented entirely with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
A further embodiment of the present invention also provides an apparatus for acquiring a depth image of a drone.

FIG. 3 is a structural block diagram of an apparatus for acquiring a depth image of a drone according to an embodiment of the present invention. As shown in FIG. 3, the apparatus 100 for acquiring a depth image of a drone includes a reading module 110, a calculation module 120, a measurement module 130, and an image generation module 140.

Specifically, the reading module 110 is configured to read an image sequence of a predetermined scene collected by the onboard camera of the drone, wherein the Nth frame image and the (N+1)th frame image in the image sequence have an overlapping region and the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio. In other words, the onboard camera captures an image sequence of the measured object, and two consecutive frames are extracted from it, for example the Nth frame image and the (N+1)th frame image; there must be an overlapping region between the Nth frame image and the (N+1)th frame image. To guarantee the accuracy of the subsequent optical flow computation, the ratio of the area of the overlapping region to the area of the Nth frame image or the (N+1)th frame image is higher than a preset ratio, for example 60%, that is, the area of the overlapping region is more than 60% of the area of the Nth frame image or the (N+1)th frame image.

Further, in one embodiment of the present invention, to guarantee the quality of the images captured by the onboard camera and to eliminate the interference that vibration of the drone itself would introduce into the subsequent optical flow computation, the onboard camera is mounted on the drone, for example, through a self-stabilizing gimbal. Meanwhile, to reduce the effect of distortion in the captured images, the field of view of the onboard camera should not be too large. In one embodiment of the present invention, the selected field of view of the onboard camera is lower than a preset angle, more specifically, for example, 60 degrees. Of course, the value of the preset angle is not limited to this and may be chosen according to the actual scene; for example, the preset angle may also be less than 60 degrees. The description here is only illustrative.

In one embodiment of the present invention, if the distortion of the images captured by the onboard camera is severe, the reading module 110 is further configured to correct the distortion of the images in the image sequence so that it is within a usable range before subsequent operations are performed.
The calculation module 120 is configured to acquire the position change information of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and to obtain, from the position change information, the pixel moving speed of each pixel of the overlapping region in the camera coordinate system. Specifically, for example, the calculation module 120 obtains, with an optical flow method based on feature matching, the change of the position information (i.e., the position change information) of each pixel of the Nth frame image within the overlapping region in the (N+1)th frame image, and obtains the pixel moving speed of each pixel of the overlapping region in the camera coordinate system from the change of the position information.

More specifically, in one embodiment of the present invention, the calculation module 120 is configured to obtain the moving distance of each pixel of the overlapping region in the camera coordinate system with the optical flow method based on feature matching, which specifically includes: obtaining movement information of a pixel point from the position information of the same pixel point in the Nth frame image and its position information in the (N+1)th frame image, and obtaining the moving distance of the pixel point in the camera coordinate system from the movement information; and then differentiating the moving distance of each pixel of the overlapping region in the camera coordinate system to obtain the pixel moving speed of each pixel of the overlapping region in the camera coordinate system.

As a specific example, the calculation formula of the moving distance of each pixel of the overlapping region in the camera coordinate system is:

(u_x, u_y) = (x2 - x1, y2 - y1),

where (x1, y1) is the position information of the pixel in the Nth frame image, (x2, y2) is the position information of the pixel in the (N+1)th frame image, and (u_x, u_y) is the moving distance of the pixel in the camera coordinate system.

The optical flow method based on feature matching matches the position of each pixel of the Nth frame image in the (N+1)th frame image, thereby computing the moving distance of each pixel from the Nth frame image to the (N+1)th frame image, and the pixel moving speed of each pixel in the camera coordinate system is then obtained from this moving distance. Optical flow methods based on feature matching include dense algorithms and sparse algorithms. In a dense algorithm every pixel of the image participates in the computation, so the pixel moving speed of every pixel in the image is obtained; a sparse optical flow method selects a subset of pixels in the image that are easy to track and performs the optical flow computation on these selected pixels, obtaining their pixel moving speeds. In one embodiment of the present invention, the optical flow method based on feature matching that is used may be a dense optical flow method. It should be noted that using an optical flow method based on feature matching to calculate the pixel moving speed of each pixel in the camera coordinate system is only one implementation of the present invention and shall not be construed as limiting it; other methods that can calculate the pixel moving speed of each pixel in the camera coordinate system are also applicable to the present invention and also fall within its scope of protection.
The measurement module 130 is configured to acquire the actual flying speed of the drone in the world coordinate system. In a specific implementation, the actual flying speed of the drone in the world coordinate system is obtained, for example, through GPS, BeiDou, an airspeed tube, or a radar.

The image generation module 140 is configured to obtain the depth image of each overlapping region from the pixel moving speed of each pixel of each overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the parameters of the onboard camera, and to integrate the depth images of the overlapping regions to obtain the depth image of the predetermined scene. In one embodiment of the present invention, the parameters of the onboard camera include, for example, the focal length of the onboard camera.

Specifically, since the onboard camera is mounted on the self-stabilizing gimbal, the angular velocity of the onboard camera can be regarded as always 0 while photos are taken. In one embodiment of the present invention, the image generation module 140 is configured to establish the relationship among the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height of the drone. Specifically, the image generation module 140 establishes this relationship, for example, according to the pinhole imaging principle, and the relationship can be:

v / v_m = f / Z,

where v_m is the actual flying speed of the drone in the world coordinate system, v is the pixel moving speed of each pixel of the overlapping region in the camera coordinate system, Z is the flying height of the drone, and f is the focal length of the onboard camera.
The depth value of each pixel of the overlapping region is then obtained from the above relationship:

Z_i = f · v_m / v_i,

where Z_i is the depth value of the i-th pixel in the overlapping region, v_i is the pixel moving speed of the i-th pixel in the camera coordinate system, v_m is the actual flying speed of the drone in the world coordinate system, and f is the focal length of the onboard camera, which is a known constant.

Finally, the depth image of each overlapping region is obtained from the depth values of the pixels of each overlapping region, and the depth images of the overlapping regions are integrated to obtain the depth image of the predetermined scene (the measured object).
In one embodiment of the present invention, the apparatus 100 for acquiring a depth image of a drone further includes, for example, an adjustment module (not shown in the figures). The adjustment module is configured to determine whether the directions of the camera coordinate system and the world coordinate system are consistent, and, when they are inconsistent, to adjust the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.

In summary, the present invention combines the pixel speed of each pixel in the camera coordinate system with the actual flying speed of the drone itself to calculate the image depth data and thereby obtain the depth image. Therefore, any method that obtains a depth image by combining the image speed (the pixel moving speed of each pixel in the camera coordinate system) with the actual flying speed of the drone itself falls within the scope of protection of the present invention.

According to the apparatus for acquiring a depth image of a drone of the embodiments of the present invention, the reading module reads consecutive images captured by the onboard camera of the drone; the pixel moving speed of each pixel in the camera coordinate system is obtained by computing the position change information of each pixel in the overlapping region of two consecutive frames; the actual flying speed of the drone in the world coordinate system is then measured with equipment such as the drone's onboard GPS; and finally the depth image of the drone is calculated from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height. The apparatus can therefore accurately acquire the depth image. Meanwhile, there is no particular requirement on whether the measured object can reflect energy, the measurable distance is sufficiently far, there is no energy attenuation problem, and the applicable range is wide. In addition, the apparatus is implemented entirely with equipment already on the drone, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
A further embodiment of the present invention also proposes a drone. The drone includes an onboard camera, a speed measuring device, a processor, and a body; the onboard camera and the speed measuring device are respectively connected to the processor, and the body is used for mounting the onboard camera, the speed measuring device, and the processor. The onboard camera is used to collect an image sequence of a predetermined scene; the speed measuring device is used to measure or calculate the actual flying speed of the drone in the world coordinate system. In a specific implementation, the speed measuring device may be a GNSS positioning and speed measurement device (for example, GPS or BeiDou), an airspeed tube, a radar, or the like; the present invention is not limited to a particular speed measuring device, and any device that can measure or calculate the actual flying speed of the drone in the world coordinate system falls within the scope of protection of the present invention.

The processor is configured to perform the method for acquiring a depth image of a drone described above; in other words, the processor includes the apparatus for acquiring a depth image of a drone described in the above embodiments.

In one embodiment of the present invention, the drone further includes a self-stabilizing gimbal, and the onboard camera can be mounted on the body through the self-stabilizing gimbal.

According to the drone of the embodiments of the present invention, the onboard camera, the speed measuring device, and the processor are all mounted on the body. Consecutive images are captured by the onboard camera; the processor reads the images captured by the onboard camera, computes the position change information of each pixel in the overlapping region of two consecutive frames to obtain the pixel moving speed of each pixel in the camera coordinate system, measures the actual flying speed of the drone in the world coordinate system with a speed measuring device such as the drone's onboard GPS, and finally calculates the depth image of the drone from the relationship among the pixel moving speed of each pixel in the camera coordinate system, the actual flying speed of the drone in the world coordinate system, and the flying height. The drone can therefore accurately acquire the depth image. Meanwhile, there is no particular requirement on whether the measured object can reflect energy, the measurable distance is sufficiently far, there is no energy attenuation problem, and the applicable range is wide. In addition, the drone is implemented entirely with equipment already on existing drones, without adding extra equipment, which reduces the load of the drone, lowers the measurement cost, and avoids failures of active measurement caused by problems such as energy attenuation or absorption at the surface of the measured object.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for a part not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into modules may be a division by logical function, and other divisions are possible in actual implementation; for instance, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or of other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage media include USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, optical discs, and other media capable of storing program code.
The above are merely preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art may make several improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present invention.

Claims (16)

  1. A method for acquiring a depth image with an unmanned aerial vehicle (UAV), characterized by comprising the following steps:
    reading an image sequence of a predetermined scene captured by an onboard camera of the UAV, wherein an Nth frame image and an (N+1)th frame image in the image sequence have an overlapping region, and the ratio of the area of the overlapping region to the area of the Nth or (N+1)th frame image is above a preset ratio;
    acquiring position-change information, within the (N+1)th frame image, of each pixel point of the Nth frame image in the overlapping region, and obtaining, from the position-change information, a pixel moving speed in a camera coordinate system of each pixel point of the UAV in the overlapping region;
    acquiring an actual flight speed of the UAV in a world coordinate system; and
    obtaining a depth image of each overlapping region from the pixel moving speed in the camera coordinate system of each pixel point of the UAV in that overlapping region, the actual flight speed of the UAV in the world coordinate system, and parameters of the onboard camera, and integrating the depth images of the overlapping regions to obtain a depth image of the predetermined scene.
  2. The method for acquiring a depth image with a UAV according to claim 1, characterized in that acquiring the position-change information, within the (N+1)th frame image, of each pixel point of the Nth frame image in the overlapping region, and obtaining, from the position-change information, the pixel moving speed in the camera coordinate system of each pixel point of the UAV in the overlapping region comprises:
    calculating a moving distance in the camera coordinate system of each pixel point of the UAV in the overlapping region;
    differentiating, with respect to time, the moving distance in the camera coordinate system of each pixel point of the UAV in the overlapping region, to obtain the pixel moving speed in the camera coordinate system of each pixel point of the UAV in the overlapping region.
  3. The method for acquiring a depth image with a UAV according to claim 2, characterized in that calculating the moving distance in the camera coordinate system of each pixel point of the UAV in the overlapping region comprises:
    obtaining movement information of a same pixel point from its position information in the Nth frame image and its position information in the (N+1)th frame image, and obtaining, from the movement information, the moving distance of the same pixel point in the camera coordinate system.
  4. The method for acquiring a depth image with a UAV according to claim 1, characterized in that obtaining the depth image of each overlapping region from the pixel moving speed in the camera coordinate system of each pixel point of the UAV in that overlapping region, the actual flight speed of the UAV in the world coordinate system, and the parameters of the onboard camera, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene comprises:
    establishing a relation between the pixel moving speed in the camera coordinate system of each pixel point of the UAV in the overlapping region, the actual flight speed of the UAV in the world coordinate system, and the flight height of the UAV;
    obtaining, from the relation, a depth value of each pixel point of the UAV in the overlapping region;
    obtaining the depth image of each overlapping region from the depth values of the pixel points of the UAV in that region, and integrating the depth images of the overlapping regions to obtain the depth image of the predetermined scene.
  5. The method for acquiring a depth image with a UAV according to claim 4, characterized by further comprising:
    judging whether the directions of the camera coordinate system and the world coordinate system coincide;
    if the directions of the camera coordinate system and the world coordinate system do not coincide, adjusting the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.
  6. The method for acquiring a depth image with a UAV according to any one of claims 1 to 5, characterized in that the field of view of the onboard camera is below a preset angle, the preset angle being smaller than or equal to 60 degrees.
  7. The method for acquiring a depth image with a UAV according to claim 1, characterized by further comprising, before acquiring the position-change information, within the (N+1)th frame image, of each pixel point of the Nth frame image in the overlapping region: correcting distortion of the images in the image sequence.
  8. An apparatus for acquiring a depth image with a UAV, characterized by comprising:
    a reading module configured to read an image sequence of a predetermined scene captured by an onboard camera of the UAV, wherein an Nth frame image and an (N+1)th frame image in the image sequence have an overlapping region, and the ratio of the area of the overlapping region to the area of the Nth or (N+1)th frame image is above a preset ratio;
    a computation module configured to acquire position-change information, within the (N+1)th frame image, of each pixel point of the Nth frame image in the overlapping region, and to obtain, from the position-change information, a pixel moving speed in a camera coordinate system of each pixel point of the UAV in the overlapping region;
    a measurement module configured to acquire an actual flight speed of the UAV in a world coordinate system; and
    an image generation module configured to obtain a depth image of each overlapping region from the pixel moving speed in the camera coordinate system of each pixel point of the UAV in that overlapping region, the actual flight speed of the UAV in the world coordinate system, and parameters of the onboard camera, and to integrate the depth images of the overlapping regions to obtain a depth image of the predetermined scene.
  9. The apparatus for acquiring a depth image with a UAV according to claim 8, characterized in that the computation module is configured to:
    calculate a moving distance in the camera coordinate system of each pixel point of the UAV in the overlapping region; and
    differentiate, with respect to time, the moving distance in the camera coordinate system of each pixel point of the UAV in the overlapping region, to obtain the pixel moving speed in the camera coordinate system of each pixel point of the UAV in the overlapping region.
  10. The apparatus for acquiring a depth image with a UAV according to claim 9, characterized in that the computation module is configured to:
    obtain movement information of a same pixel point from its position information in the Nth frame image and its position information in the (N+1)th frame image, and obtain, from the movement information, the moving distance of the same pixel point in the camera coordinate system.
  11. The apparatus for acquiring a depth image with a UAV according to claim 8, characterized in that the image generation module is configured to establish a relation between the pixel moving speed in the camera coordinate system of each pixel point of the UAV in the overlapping region, the actual flight speed of the UAV in the world coordinate system, and the flight height of the UAV;
    to obtain, from the relation, a depth value of each pixel point of the UAV in the overlapping region; and
    to obtain the depth image of each overlapping region from the depth values of the pixel points of the UAV in that region, and integrate the depth images of the overlapping regions to obtain the depth image of the predetermined scene.
  12. The apparatus for acquiring a depth image with a UAV according to claim 11, characterized by further comprising:
    an adjustment module configured to judge whether the directions of the camera coordinate system and the world coordinate system coincide and, when the directions of the camera coordinate system and the world coordinate system do not coincide, to adjust the direction of the camera coordinate system so that it coincides with the direction of the world coordinate system.
  13. The apparatus for acquiring a depth image with a UAV according to any one of claims 8 to 12, characterized in that the field of view of the onboard camera is below a preset angle, the preset angle being smaller than or equal to 60 degrees.
  14. The apparatus for acquiring a depth image with a UAV according to claim 13, characterized in that the onboard camera is further configured to correct distortion of the images in the image sequence.
  15. An unmanned aerial vehicle (UAV), characterized by comprising:
    an onboard camera configured to capture an image sequence of a predetermined scene;
    a speed measurement device configured to measure or compute an actual flight speed of the UAV in a world coordinate system;
    a processor configured to execute the method for acquiring a depth image with a UAV according to any one of claims 1 to 7; and
    a body configured to carry the onboard camera, the speed measurement device, and the processor.
  16. The UAV according to claim 15, characterized by further comprising a self-stabilizing gimbal, wherein the onboard camera is mounted on the body via the self-stabilizing gimbal.
PCT/CN2016/099925 2015-09-25 2016-09-23 无人机深度图像的获取方法、装置及无人机 WO2017050279A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
AU2016327918A AU2016327918B2 (en) 2015-09-25 2016-09-23 Unmanned aerial vehicle depth image acquisition method, device and unmanned aerial vehicle
US15/565,582 US10198004B2 (en) 2015-09-25 2016-09-23 Method and apparatus for obtaining range image with UAV, and UAV
ES16848160T ES2798798T3 (es) 2015-09-25 2016-09-23 Procedimiento y aparato para obtener imágenes de determinación de distancia con un UAV, y UAV
KR1020177034364A KR101886013B1 (ko) 2015-09-25 2016-09-23 무인기의 심도 이미지 획득 방법, 장치 및 무인기
EP16848160.4A EP3264364B1 (en) 2015-09-25 2016-09-23 Method and apparatus for obtaining range image with uav, and uav
JP2017566134A JP6484729B2 (ja) 2015-09-25 2016-09-23 無人航空機の奥行き画像の取得方法、取得装置及び無人航空機

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510628505.7 2015-09-25
CN201510628505.7A CN105225241B (zh) 2015-09-25 2015-09-25 Method for acquiring a depth image with an unmanned aerial vehicle, and unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
WO2017050279A1 true WO2017050279A1 (zh) 2017-03-30
WO2017050279A9 WO2017050279A9 (zh) 2018-01-25

Family ID=54994190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099925 WO2017050279A1 (zh) Method and apparatus for acquiring a depth image with an unmanned aerial vehicle, and unmanned aerial vehicle

Country Status (8)

Country Link
US (1) US10198004B2 (zh)
EP (1) EP3264364B1 (zh)
JP (1) JP6484729B2 (zh)
KR (1) KR101886013B1 (zh)
CN (1) CN105225241B (zh)
AU (1) AU2016327918B2 (zh)
ES (1) ES2798798T3 (zh)
WO (1) WO2017050279A1 (zh)

Also Published As

Publication number Publication date
ES2798798T3 (es) 2020-12-14
EP3264364B1 (en) 2020-04-22
US20180120847A1 (en) 2018-05-03
EP3264364A4 (en) 2018-06-20
JP2018527554A (ja) 2018-09-20
CN105225241B (zh) 2017-09-15
KR20170137934A (ko) 2017-12-13
AU2016327918B2 (en) 2019-01-17
EP3264364A1 (en) 2018-01-03
JP6484729B2 (ja) 2019-03-13
WO2017050279A9 (zh) 2018-01-25
CN105225241A (zh) 2016-01-06
KR101886013B1 (ko) 2018-08-06
US10198004B2 (en) 2019-02-05
AU2016327918A1 (en) 2017-10-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16848160; Country of ref document: EP; Kind code of ref document: A1)
REEP Request for entry into the european phase (Ref document number: 2016848160; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 15565582; Country of ref document: US)
ENP Entry into the national phase (Ref document number: 2016327918; Country of ref document: AU; Date of ref document: 20160923; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 20177034364; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2017566134; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)