WO2018086050A1 - Depth map generation method and unmanned aerial vehicle based on this method - Google Patents


Info

Publication number
WO2018086050A1
WO2018086050A1 (PCT/CN2016/105408)
Authority
WO
WIPO (PCT)
Prior art keywords
depth map
occlusion
images
drone
area
Prior art date
Application number
PCT/CN2016/105408
Other languages
French (fr)
Chinese (zh)
Inventor
周游
朱振宇
杜劼熹
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201680002264.8A (patent CN107077741A)
Priority to PCT/CN2016/105408 (WO2018086050A1)
Publication of WO2018086050A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images

Definitions

  • The warning device may also be a device that can receive user input, for example a touch screen or a combination of a screen and buttons. The warning device 12 can then alert the user with a pop-up window on the screen and prompt the user to confirm whether an occlusion is indeed present, so that the user can perform the confirmation directly.
  • The processor 5 can also determine from the generated depth map whether a large-area occlusion exists. For example, when the semi-global matching algorithm computes the depth map, ∑_p S(p, d) can be computed as described above; when ∑_p S(p, d) exceeds a threshold, a large-area occlusion is judged to have occurred.


Abstract

A method for generating a depth map capable of occlusion recovery, comprising: S1, detecting abnormal regions in two images, the two images being obtained by photographing the same scene from different angles at the same time; and S2, generating a depth map from the two images while masking the pixels of the two images that lie in the abnormal regions. An occlusion detection system, obstacle avoidance system, and unmanned aerial vehicle based on the depth map generation method can detect and handle a fixed obstruction in real time, thereby reducing its influence on obstacle detection.

Description

Depth map generation method and drone based on the same
Copyright statement
The disclosure of this patent document contains material that is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the reproduction of the patent document or the patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical field
The invention belongs to the field of computer vision, and specifically relates to a method for performing occlusion recovery when a binocular camera system generates a depth map for obstacle avoidance, and an occlusion detection system using the occlusion recovery method. The invention is applicable to obstacle avoidance on vehicles equipped with multiple cameras, such as driverless cars, autonomously flying drones, VR/AR glasses, and dual-camera mobile phones.
Background
Computer vision is a technique that relies on an imaging system, rather than visual organs, as its sensing input. The most common imaging system is the camera; two cameras together form a basic vision system known as stereo vision.
A binocular (stereo vision) camera system uses two cameras to capture two photos of the same scene at the same moment from different angles. From the differences between the two photos, together with the known relative position and orientation of the two cameras, triangulation yields the distance between the scene and the cameras. Displaying this distance relationship as an image produces a depth map. In other words, a binocular camera system obtains the depth information of a scene from the differences between two photos taken at the same moment from different angles.
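The triangulation step described above can be sketched in a few lines. For a rectified pinhole-stereo pair, depth follows Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity between the two photos; the function name and parameter values below are illustrative, not taken from the patent.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole-stereo triangulation: Z = f * B / d.

    Assumes a rectified image pair; disparity in pixels, baseline in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. 500 px focal length, 10 cm baseline, 10 px disparity -> 5 m
```

An occluded pixel breaks this computation because the disparity found by matching is meaningless, which is exactly the failure mode the rest of the document addresses.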
The differences between the two photos are normally caused by shooting from different angles, in which case the computed scene depth information is correct. However, the differences may also be caused by imaging differences between the two cameras themselves, or by one camera being occluded. In that case the computed depth information is wrong.
To eliminate the imaging difference between the two cameras caused by one camera being occluded, the influence of the occlusion must be removed, i.e. an occlusion recovery operation must be performed. When a binocular camera system is used for obstacle avoidance on an unmanned vehicle such as a drone, an error in the depth map computation causes false detection of obstacles and thus wrongly triggers braking, degrading the vehicle's operation and the user experience.
Summary of the invention
The invention aims to solve the problem that fixed obstructions appearing in the binocular images affect the depth map computation during obstacle avoidance of a UAV, causing obstacle avoidance errors.
To solve the above technical problem, the invention provides a depth map generation method capable of occlusion recovery, comprising: S1, detecting abnormal regions in two images, the two images being obtained by simultaneously photographing the same scene from different angles; and S2, generating a depth map from the two images while masking the pixels of the two images that lie in the abnormal regions.
The invention further provides a drone, comprising: an image acquisition device for acquiring two images that simultaneously photograph the same scene from different angles; and one or more processors for detecting abnormal regions in the two images and, while masking the abnormal regions, generating a depth map from the two images.
The invention further provides a processor for executing the above depth map generation method, and a computer-readable medium storing a program for executing the method.
From an engineering point of view, the invention proposes a method that detects occlusions in the depth map and removes their influence in real time. The method is simple, effective, and robust; it can detect and process occlusions in real time and reduces the false detection rate of obstacles.
Brief description of the drawings
FIG. 1A is a schematic diagram of an image captured by one camera of the front binocular camera of a drone in an embodiment of the invention;
FIG. 1B is a schematic diagram of abnormal region detection in an embodiment of the invention;
FIG. 2 is a flow chart of the steps of the depth map generation method of the invention;
FIG. 3 is a schematic diagram of the depth map of FIG. 1A generated by the depth map generation method of the invention;
FIG. 4 is a structural diagram of an embodiment of an occlusion detection system for a drone built on the depth map generation method of the invention;
FIG. 5 is a module architecture diagram of an obstacle avoidance system of a drone according to an embodiment of the invention;
FIG. 6 is a module architecture diagram of a drone according to another embodiment of the invention.
Detailed description
Since a depth map shows how far away the objects in a scene are, it can be used to detect whether there is an obstacle in the drone's direction of flight, and corresponding obstacle avoidance operations can then be performed. In practice, however, parts of the drone itself, such as its propellers and propeller guards, often occlude the cameras. When the depth map is computed, any such part appearing in the camera view tends to be treated as a very close obstacle, triggering erroneous maneuvers. The invention therefore proposes to eliminate the interference of such occlusions when computing the depth map, i.e. to perform occlusion recovery.
In general, the invention proposes a depth map generation method with occlusion recovery, and a drone that uses the method. The method comprises two basic steps: S1, detecting abnormal regions in the two images; and S2, generating a depth map while masking the pixels of the two images that lie in the abnormal regions. The two images here are taken by a binocular camera, i.e. they are obtained by simultaneously photographing the same scene from different angles.
In other words, one aspect of the invention is detecting the abnormal regions that might be mistaken for obstacles; the other is preventing those regions from being treated as obstacles. The basic principle is that, when the depth map is generated, the abnormal regions must not affect the computation of the depth map of the whole image. However, it should be understood that the invention is not limited to a particular way of computing the depth map: any scheme in which the detected obstruction regions do not participate in the depth map computation, or participate with reduced weight, falls within its scope.
According to the invention, a specific abnormal region detection scheme can be designed around the characteristics of the abnormal regions that arise in the actual application. The invention is not limited to a particular detection scheme, as long as the detection yields the abnormal regions of an image. For example, the detection may look at the texture of the occlusions that can occur, the regions in which they can occur, and so on.
Besides occlusion by the drone's own parts, scenes that cause image overexposure, such as the sky, strong light sources, or the sun, also affect depth map generation; such overexposed regions may be mistaken for regions containing obstacles. The invention therefore also proposes treating overexposed regions as abnormal regions when detecting abnormal regions in the two images, so as to eliminate their influence.
The invention thus improves existing depth map generation methods so that the influence of obstructions is minimized. With this approach, the drone with occlusion recovery of the invention can be designed.
Furthermore, the invention provides a drone with an occlusion recovery function. The drone includes an image acquisition device for acquiring two images that simultaneously photograph the same scene from different angles, and one or more processors for detecting abnormal regions in the two images and, while masking the abnormal regions, generating a depth map from the two images. The drone thus performs occlusion detection on the images acquired by the image acquisition device.
Specific embodiments are described below. Although the description uses a drone as an example, the method, system, and device of the invention are not limited to that field. Any application that needs to compute a depth map can reasonably use the invention, including devices and scenarios that do not require a person to operate directly on the device body, such as robotics and virtual reality.
FIG. 1A is a schematic diagram of an image captured by one camera of the front binocular camera of a drone in an embodiment of the invention. As shown in FIG. 1A, a long strip-shaped object A appears in the upper right of the image; it is the blade guard of the drone. The blade guard usually appears in a fixed region of the camera's field of view. Other parts of the drone, such as the propellers and the landing gear, may also appear in the image, usually at fixed positions. This embodiment takes the blade guard shown in FIG. 1 as an example.
The blade guard bears vertical stripes. Therefore, before the drone takes off, the fixed region in which the blade guard may appear is examined. As shown in FIG. 1B, the fixed region B is checked; in this embodiment, texture detection is performed for the known occluder to determine whether occlusion occurs. The region is first binarized into black and white, then one row of pixels is taken along the direction of arrow C and encoded with 0 for white and 1 for black, which should follow a pattern such as [000000001111 000000111 000011 001]: (1) black and white alternate; (2) in the left image the black and white bands shrink gradually from left to right, and in the right image the opposite; (3) the black/white alternation repeats 9 times here, and 8 or more repetitions can be counted as valid. If all three conditions are met, the blade guard is considered detected, and a further pop-up prompt can be shown on an interactive interface such as a mobile phone app asking the user to confirm whether the blade guard is really installed. These steps quickly confirm occlusion by the blade guard and trigger the corresponding occlusion recovery logic.
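The row-wise stripe check above can be sketched as follows. This is a minimal sketch under assumptions: the binarization threshold and region coordinates are illustrative, only the alternation count (condition 3, eight or more switches) is implemented, and the monotonic band-width condition (2) is omitted for brevity.

```python
import numpy as np

def detect_guard_stripes(row, black_threshold=128, min_alternations=8):
    """Check one pixel row of the fixed region B for the guard's stripe pattern.

    Binarize (1 = black, 0 = white), run-length encode along the row
    (the direction of arrow C), and require enough black/white alternations.
    """
    bits = (np.asarray(row) < black_threshold).astype(int)
    runs = []  # run-length encoding: list of [value, length]
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    alternations = len(runs) - 1  # each new run is one black/white switch
    return alternations >= min_alternations
```

In a full implementation the same check would be run on several rows of region B, and a positive result would trigger the confirmation pop-up described above.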
It should be understood that the texture detection method above is only an example. A specific detection scheme may be designed from existing methods according to the particular shape, pattern, and other specifics of the occluder, and all such schemes fall within the scope of the invention.
The invention further considers that a part appearing in the image may not be fixed relative to the image acquisition device (the binocular camera). For example, during flight or before take-off, the attitude of the gimbal may change at any time so that the camera mounted on it (distinct from the binocular camera) keeps shooting a given scene. The region, size, and shape in which the gimbal may appear in the images acquired by the binocular camera may therefore vary. In that case, whether an abnormal region exists in the image must be judged from the attitude of the gimbal, and the corresponding occlusion detection and depth map occlusion recovery performed. Specifically, the position of the gimbal in the image can be estimated from its real-time attitude angles (pitch, roll, yaw, etc.), and the occlusion recovery operation performed once the position has been estimated.
In addition, according to another embodiment of the invention, overexposed regions are treated as abnormal regions when detecting abnormal regions in the two images. Overexposed regions include strong-light regions produced by direct sunlight, lamps, and the like, as well as regions overexposed during switches between high-dynamic-range scenes. Since an overexposed region affects the depth map computation just like an occluder, such regions are handled as a special kind of abnormal region during detection. Overexposure can be detected from the brightness of each pixel; for example, regions whose brightness exceeds a threshold can be taken as overexposed. Other overexposure detection techniques may of course be used. In some embodiments, the exposure time of the image acquisition device is set very short (e.g. under 5 ms) with a small gain, yet many pixels still reach the saturation threshold (e.g. a large number of pixels at the maximum value 255); the scene can then be judged overexposed.
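The overexposure test in this paragraph can be sketched as below. The 5 ms exposure bound comes from the text; the saturation value and the fraction of saturated pixels used as a cut-off are illustrative assumptions.

```python
import numpy as np

def judge_overexposed(gray, exposure_ms, gain_is_small=True,
                      saturation=255, saturated_fraction=0.05):
    """Short exposure and small gain, yet many saturated pixels -> overexposed."""
    if exposure_ms >= 5.0 or not gain_is_small:
        return False  # the heuristic only applies to short, low-gain exposures
    img = np.asarray(gray)
    frac = np.count_nonzero(img >= saturation) / img.size
    return frac > saturated_fraction

def overexposure_mask(gray, saturation=255):
    """Per-pixel mask of saturated pixels, usable as an abnormal region."""
    return np.asarray(gray) >= saturation
```

The per-pixel mask can be fed into the same masking step as the fixed-part regions, so overexposure and mechanical occlusion share one recovery path.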
FIG. 3 is a schematic diagram of the depth map of FIG. 1A generated by the depth map generation method of the invention. As shown in FIG. 3, the generated depth map is unaffected by the occluder. FIG. 2 is a flow chart of the steps of the depth map generation method. As shown in FIG. 2, the method comprises two basic steps:
S1, detecting abnormal regions in two images obtained by simultaneously photographing the same scene from different angles;
S2, generating a depth map from the two images while masking the pixels of the two images that lie in the abnormal regions.
In the above embodiment of the invention, the abnormal regions in the two images are detected by the detection method described, and the depth map is then generated with the pixels of the abnormal regions masked. Different masking schemes may be used depending on the specific depth map generation method; the principle is that pixels in abnormal regions either do not participate in the depth map computation or participate in a way that does not significantly affect it.
As an example, the semi-global matching (SGM) algorithm may be used to generate the depth map D in step S2. The algorithm selects the depth value d at which the matching cost S is minimized, where S expresses how well the two images match. The matching cost S is computed as:
S(p, d) = ∑_r L_r(p, d)
where L_r(p, d) denotes the path cost of pixel p along path r, obtained by the iteration:
L_r(p, d) = C(p, d) + min( L_r(p - r, d), L_r(p - r, d - 1) + P_1, L_r(p - r, d + 1) + P_1, min_i L_r(p - r, i) + P_2 )
where C(p, d) denotes the matching cost of pixel p, P_1 and P_2 are constant penalty factors, and i is a natural number.
For all pixels of all abnormal regions detected in step S1, C(p, d) is set to a constant C_B that is greater than the maximum C(p, d) of pixels in all other regions.
With the formula above, L_r(p, d) at such a pixel then depends only on the latter (smoothness) part of the expression, so the depth at that point is inferred from the surrounding points; this removes the matching errors that the occlusion itself would otherwise cause.
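The masking scheme of this embodiment can be sketched as a one-dimensional cost aggregation along a single path direction. This is a sketch under assumptions: P1, P2, and the cost values are illustrative, only one left-to-right path is aggregated, and the full algorithm would sum L_r over several path directions to obtain S(p, d).

```python
import numpy as np

def aggregate_path(cost, abnormal, P1=10.0, P2=120.0, CB=None):
    """SGM-style cost aggregation along one direction (left to right, one row).

    cost     : (W, D) array of matching costs C(p, d) for one image row.
    abnormal : (W,) boolean mask of detected abnormal pixels.

    Masked pixels receive a constant C_B larger than every normal C(p, d),
    so their winning depth is driven by the smoothness terms, i.e. inferred
    from neighbouring pixels, as the text describes.
    """
    W, D = cost.shape
    if CB is None:
        CB = float(cost[~abnormal].max()) + 1.0 if (~abnormal).any() else 1.0
    C = cost.astype(float).copy()
    C[abnormal, :] = CB  # mask abnormal pixels with the flat constant C_B

    L = np.zeros((W, D))
    L[0] = C[0]
    for x in range(1, W):
        prev = L[x - 1]
        best_prev = prev.min()
        for d in range(D):
            candidates = [prev[d], best_prev + P2]
            if d > 0:
                candidates.append(prev[d - 1] + P1)
            if d < D - 1:
                candidates.append(prev[d + 1] + P1)
            L[x, d] = C[x, d] + min(candidates)
    return L
```

Because the masked pixel's data term is flat, the minimum over its aggregated cost is decided entirely by its neighbours' costs, which is the propagation effect the paragraph above describes.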
According to an embodiment of the invention, when step S1 detects an abnormal region, step S2 further includes a step of signalling the occlusion. For example, when the semi-global matching algorithm is used to compute the depth map, ∑_p S(p, d) can also be computed; when ∑_p S(p, d) exceeds a threshold, an occlusion is signalled. Because the method manually fixes the matching result of some pixels by setting C(p, d) to a large number, those pixels would introduce a large bias; therefore, when the fitness of the whole matching process is evaluated, the manually set pixels are excluded from the computation entirely. If the resulting ∑_p S(p, d) is nevertheless large (above a certain threshold), the current observation is considered poor: an object may be occluding a camera, or something else (e.g. a dirty lens) is causing poor matching between the two views. The drone can then prompt the user, for example by issuing a warning on the control terminal (a remote controller or smartphone) that something may be blocking the image acquisition device. On receiving the warning, the user can check whether an obstruction is present.
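The threshold test in this paragraph can be sketched as below, with the manually set pixels excluded from the sum as the text requires; the threshold value is application-tuned and the default below is only a placeholder.

```python
import numpy as np

def occlusion_suspected(S_min, abnormal_mask, threshold=1.0e6):
    """Sum the per-pixel minimum aggregated cost S(p, d) over pixels that
    were NOT manually masked (masked pixels carry the artificial constant
    C_B and would bias the sum), and compare against a tuned threshold.
    """
    S = np.asarray(S_min, dtype=float)
    keep = ~np.asarray(abnormal_mask, dtype=bool)
    return float(S[keep].sum()) > threshold
```

A positive result would trigger the warning path described above (a message on the remote controller or smartphone asking the user to check the lenses).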
FIG. 4 is a schematic structural diagram of an embodiment of an occlusion detection system for a drone built on the depth map generation method architecture of the present invention. As shown in FIG. 4, the detection system 1 includes a depth map generating device 10 and an occlusion detecting device 11. The depth map generating device 10 is configured to detect abnormal regions in two images captured simultaneously of the same scene from different angles and, with the pixels in the abnormal regions of the two images masked, to generate a depth map from the two images; the occlusion detecting device 11 is configured to detect occlusion from the depth map generated by the depth map generating device.
The depth map generating device 10 uses the depth map generation method of the present invention described above, which is not repeated here. The occlusion detecting device 11 determines from the generated depth map whether an obstruction is present. For example, when the Semi-global Matching algorithm is used to compute the depth map, ∑pS(p,d) can be computed as described above; when ∑pS(p,d) is greater than a threshold, occlusion is determined to have occurred.
As shown in FIG. 4, the occlusion detection system of the present invention may further include a warning device 12. When the occlusion detecting device 11 detects an obstruction, it sends a warning instruction to the warning device 12, and the warning device performs a warning action in accordance with that instruction when the occlusion detecting device determines that occlusion has occurred.
As a specific implementation on a drone, the warning device 12 may be any device mounted on the drone that can produce an acoustic, optical, or electrical signal, such as a loudspeaker or a warning light. Beyond this, the present invention preferably designs the warning device 12 as a device with a human-computer interaction function, or integrates it into such a device. For a drone, it may be provided on the drone's remote controller, or implemented by existing components of the remote controller, such as a screen or indicator lights.
As a more preferred embodiment, the warning device is a device that can receive user input. For example, the warning device may be a touch screen, or a combination of a screen and buttons. When the warning device 12 receives a warning instruction, it can alert the user by displaying a pop-up window on the screen and prompt the user to confirm whether occlusion is indeed present. The user can then perform the relevant confirmation action directly.
FIG. 5 is a module architecture diagram of an obstacle avoidance system of a drone according to an embodiment of the present invention. As shown in FIG. 5, the obstacle avoidance system includes an image acquisition device 2, an occlusion detection system 1, and an obstacle avoidance control device 3.
The image acquisition device 2 is configured to acquire two images captured simultaneously of the same scene from different angles. The image acquisition device is usually a binocular camera, which may be a visible-light camera, an infrared camera, or an integration of both. The image acquisition device 2 sends the acquired pictures, or the image frames of acquired video, to the occlusion detection system.
The occlusion detection system 1 can generate a depth map from the two images and perform occlusion detection; specific implementations have been described above and are not repeated here.
The obstacle avoidance control device 3 is configured to perform obstacle avoidance control for an unmanned vehicle such as a drone, and can automatically control the travel or flight of the unmanned vehicle. It typically has its own obstacle detection means and adjusts the travel or flight path of the unmanned vehicle according to the detected obstacles. In the present invention, in order to prevent an obstruction from being treated as an obstacle, the aforementioned occlusion detection system 1 is introduced, and the obstacle avoidance control device 3 uses the detection result of the occlusion detection system 1 as one basis for deciding whether to perform obstacle avoidance. Specifically, when the occlusion detecting device detects that occlusion is present, the obstacle avoidance control device 3 does not perform avoidance in response to the image abnormality produced by the obstruction; only when the occlusion detecting device detects no occlusion does the obstacle avoidance control device 3 perform obstacle avoidance in the conventional manner.
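The gating described in this paragraph reduces to a simple decision rule. The sketch below is illustrative; the function and argument names are hypothetical, not taken from the patent.

```python
def should_avoid(obstacle_in_depth_map: bool, occlusion_detected: bool) -> bool:
    """Act on an apparent obstacle in the depth map only when the
    occlusion detection system has not flagged a blocked lens."""
    return obstacle_in_depth_map and not occlusion_detected
```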
In practical applications, the occlusion detection system and the obstacle avoidance control device may be implemented by separate hardware or software, or by integrated hardware or software. For a drone, they may also be implemented together in hardware and/or software as part of the flight control system.
FIG. 6 is a module architecture diagram of a drone according to another embodiment of the present invention. As shown in FIG. 6, the drone includes an image acquisition device 2 and a processor 5. The image acquisition device 2 is configured to acquire two images captured simultaneously of the same scene from different angles; the processor 5 is configured to detect abnormal regions in the two images and, with the abnormal regions masked, to generate a depth map from the two images. The processor 5 executes the depth map generation method described above, which is not repeated here.
The processor 5 can also determine from the generated depth map whether a large-area occlusion is present. For example, when the Semi-global Matching algorithm is used to compute the depth map, ∑pS(p,d) can be computed as described above; when ∑pS(p,d) is greater than a threshold, a large-area occlusion is determined to have occurred.
As shown in FIG. 6, the drone of this embodiment may further include a display device 6, and the processor 5 may cause the display device 6 to display the warning information when the occlusion occurs. The display device may, for example, be a touch screen capable of receiving user input, so that the user can confirm via input whether a large-area occlusion is present.
It should be understood that although one processor is used in this embodiment, the steps performed by that one processor may also be performed separately by multiple processors.
The above methods and modules according to the embodiments of the present invention may be implemented by an electronic device with computing capability executing software containing computer instructions. The system may include a storage device to implement the various kinds of storage described above. The electronic device with computing capability may include, but is not limited to, a general-purpose processor, a digital signal processor, a dedicated processor, a reconfigurable processor, or any other device capable of executing computer instructions. Executing such instructions causes the electronic device to be configured to perform the above operations according to the present invention. The above methods and/or modules may be implemented in one electronic device or in different electronic devices.
Where the embodiments of the present invention use software, the software may be stored in the form of volatile memory or a non-volatile storage device (such as a ROM-like storage device), whether erasable or rewritable, or in the form of memory (such as RAM, a memory chip, a device, or an integrated circuit), or on an optically or magnetically readable medium (such as a CD, DVD, magnetic disk, or magnetic tape). It should be appreciated that the storage devices and storage media are embodiments of machine-readable storage adapted to store one or more programs comprising instructions that, when executed, implement the embodiments of the present invention. Moreover, these programs may be delivered electrically via any medium, such as a communication signal carried over a wired or wireless connection, and the embodiments suitably include such programs.
The specific embodiments above further describe in detail the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (24)

  1. A depth map generation method capable of occlusion recovery, comprising:
    S1. detecting abnormal regions in two images, the two images being obtained by simultaneously capturing the same scene from different angles;
    S2. with the pixels in the abnormal regions of the two images masked, generating a depth map from the two images.
  2. The depth map generation method according to claim 1, wherein detecting the abnormal regions in the two images in step S1 is performed by detecting the texture of an obstruction and/or a region where an obstruction may appear.
  3. The depth map generation method according to claim 2, wherein the region where the obstruction may appear is determined according to the attitude angle of the obstruction.
  4. The depth map generation method according to claim 1, wherein, when detecting the abnormal regions in the two images in step S1, an overexposed region is taken as an abnormal region.
  5. The depth map generation method according to claim 4, wherein an overexposed region is a region in which the luminance values of the pixels are greater than a certain threshold.
  6. The depth map generation method according to claim 1, wherein the depth map D in step S2 is generated using the Semi-global Matching algorithm, which computes the depth value d at which the matching cost S takes its minimum, S representing the degree of matching of the two images during the matching calculation.
  7. The depth map generation method according to claim 6, wherein
    the matching cost S is calculated by:
    S(p,d) = ∑r Lr(p,d)
    where Lr(p,d) denotes the path cost of pixel p along path r, the path cost Lr(p,d) being obtained by the iterative calculation:
    L′r(p,d) = C(p,d) + min(L′r(p−r,d), L′r(p−r,d−1)+P1, L′r(p−r,d+1)+P1, mini L′r(p−r,i)+P2)
    where C(p,d) denotes the matching degree of pixel p, P1 and P2 are penalty factors, both constants, and i is a natural number.
  8. The depth map generation method according to claim 7, wherein,
    in step S2, for all pixels in all abnormal regions detected in step 1, C(p,d) is set to a constant CB, the CB being greater than the maximum C(p,d) of the pixels in other regions.
  9. The depth map generation method according to claim 8, wherein step S2 further comprises a step of prompting that a large-area occlusion has occurred.
  10. The depth map generation method according to claim 9, wherein step S2 computes ∑pS(p,d), and when ∑pS(p,d) is greater than a threshold, a large-area occlusion is prompted.
  11. A drone, comprising:
    an image acquisition device configured to acquire two images captured simultaneously of the same scene from different angles; and
    one or more processors configured to:
    detect abnormal regions in the two images; and
    with the abnormal regions masked, generate a depth map from the two images.
  12. The drone according to claim 11, wherein the one or more processors are configured to detect the abnormal regions in the two images by detecting the texture of an obstruction and/or a region where an obstruction may appear.
  13. The drone according to claim 12, wherein the region where the obstruction may appear is determined according to the attitude angle of the obstruction.
  14. The drone according to claim 11, wherein, when the one or more processors detect the abnormal regions in the two images, an overexposed region is taken as an abnormal region.
  15. The drone according to claim 14, wherein an overexposed region is a region in which the luminance values of the pixels are greater than a certain threshold.
  16. The drone according to claim 11, wherein the one or more processors generate the depth map D using the Semi-global Matching algorithm, which computes the depth value d at which the matching cost S takes its minimum, S representing the degree of matching of the two images during the matching calculation.
  17. The drone according to claim 16, wherein
    the matching cost S is calculated by:
    S(p,d) = ∑r Lr(p,d)
    where Lr(p,d) denotes the path cost of pixel p along path r, the path cost Lr(p,d) being obtained by the iterative calculation:
    L′r(p,d) = C(p,d) + min(L′r(p−r,d), L′r(p−r,d−1)+P1, L′r(p−r,d+1)+P1, mini L′r(p−r,i)+P2)
    where C(p,d) denotes the matching degree of pixel p, P1 and P2 are penalty factors, both constants, and i is a natural number.
  18. The drone according to claim 17, wherein,
    for all pixels in all detected abnormal regions, C(p,d) is set to a constant CB, the CB being greater than the maximum C(p,d) of the pixels in other regions.
  19. The drone according to claim 18, wherein the one or more processors are configured to compute ∑pS(p,d) and, when ∑pS(p,d) is greater than a threshold, to determine that a large-area occlusion has occurred.
  20. The drone according to claim 19, wherein the one or more processors are further configured to issue a warning message when the occlusion occurs.
  21. The drone according to claim 20, further comprising a display device, the one or more processors being configured to cause the display device to display the warning message when the occlusion occurs.
  22. The drone according to claim 21, wherein the display device is capable of receiving user input, so that the user can confirm via input whether a large-area occlusion is present.
  23. A processor configured to perform the depth map generation method according to any one of claims 1 to 10.
  24. A computer-readable medium storing a program for performing the depth map generation method according to any one of claims 1 to 10.
PCT/CN2016/105408 2016-11-11 2016-11-11 Depth map generation method and unmanned aerial vehicle based on this method WO2018086050A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680002264.8A CN107077741A (en) 2016-11-11 2016-11-11 Depth drawing generating method and the unmanned plane based on this method
PCT/CN2016/105408 WO2018086050A1 (en) 2016-11-11 2016-11-11 Depth map generation method and unmanned aerial vehicle based on this method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/105408 WO2018086050A1 (en) 2016-11-11 2016-11-11 Depth map generation method and unmanned aerial vehicle based on this method

Publications (1)

Publication Number Publication Date
WO2018086050A1 true WO2018086050A1 (en) 2018-05-17

Family

ID=59623882

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105408 WO2018086050A1 (en) 2016-11-11 2016-11-11 Depth map generation method and unmanned aerial vehicle based on this method

Country Status (2)

Country Link
CN (1) CN107077741A (en)
WO (1) WO2018086050A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347186A (en) * 2019-07-17 2019-10-18 中国人民解放军国防科技大学 Ground moving target autonomous tracking system based on bionic binocular linkage
CN110865865A (en) * 2019-11-22 2020-03-06 科大讯飞股份有限公司 Popup window position determining method, device, equipment and storage medium
CN112215794A (en) * 2020-09-01 2021-01-12 北京中科慧眼科技有限公司 Method and device for detecting dirt of binocular ADAS camera
CN112561874A (en) * 2020-12-11 2021-03-26 杭州海康威视数字技术股份有限公司 Blocking object detection method and device and monitoring camera
CN113467502A (en) * 2021-07-24 2021-10-01 深圳市北斗云信息技术有限公司 Unmanned aerial vehicle driving examination system
CN113776503A (en) * 2018-12-29 2021-12-10 深圳市道通智能航空技术股份有限公司 Depth map processing method and device and unmanned aerial vehicle

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108132666B (en) * 2017-12-15 2019-04-05 珊口(上海)智能科技有限公司 Control method, system and the mobile robot being applicable in
CN110326028A (en) * 2018-02-08 2019-10-11 深圳市大疆创新科技有限公司 Method, apparatus, computer system and the movable equipment of image procossing
CN110770794A (en) * 2018-08-22 2020-02-07 深圳市大疆创新科技有限公司 Image depth estimation method and device, readable storage medium and electronic equipment
CN113994382A (en) * 2020-04-28 2022-01-28 深圳市大疆创新科技有限公司 Depth map generation method, electronic device, calculation processing device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222760A (en) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method
CN105717933A (en) * 2016-03-31 2016-06-29 深圳奥比中光科技有限公司 Unmanned aerial vehicle and unmanned aerial vehicle anti-collision method
CN105761265A (en) * 2016-02-23 2016-07-13 英华达(上海)科技有限公司 Method for providing obstacle avoidance based on image depth information and unmanned aerial vehicle
CN105787447A (en) * 2016-02-26 2016-07-20 深圳市道通智能航空技术有限公司 Method and system of unmanned plane omnibearing obstacle avoidance based on binocular vision
WO2016131847A1 (en) * 2015-02-20 2016-08-25 Prox Dynamics As Method for calculating the distance to a ground target from an aerial vehicle
CN106096559A (en) * 2016-06-16 2016-11-09 深圳零度智能机器人科技有限公司 Obstacle detection method and system and moving object

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609941A (en) * 2012-01-31 2012-07-25 北京航空航天大学 Three-dimensional registering method based on ToF (Time-of-Flight) depth camera
CN103440653A (en) * 2013-08-27 2013-12-11 北京航空航天大学 Binocular vision stereo matching method
CN104880187B (en) * 2015-06-09 2016-03-02 北京航空航天大学 A kind of method for estimating of the aircraft light stream pick-up unit based on twin camera
CN105974938B (en) * 2016-06-16 2023-10-03 零度智控(北京)智能科技有限公司 Obstacle avoidance method and device, carrier and unmanned aerial vehicle

Also Published As

Publication number Publication date
CN107077741A (en) 2017-08-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16921271

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16921271

Country of ref document: EP

Kind code of ref document: A1