WO2021104308A1 - Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera - Google Patents

Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera

Info

Publication number
WO2021104308A1
WO2021104308A1 (PCT/CN2020/131506)
Authority
WO
WIPO (PCT)
Prior art keywords
fisheye
depth map
camera
eye
panoramic
Prior art date
Application number
PCT/CN2020/131506
Other languages
English (en)
French (fr)
Inventor
谢亮
姜文杰
刘靖康
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司 filed Critical 影石创新科技股份有限公司
Publication of WO2021104308A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T 3/047 Fisheye or wide-angle transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images

Definitions

  • The invention belongs to the field of panoramic images, and particularly relates to a panoramic depth measurement method, a four-eye fisheye camera, and a binocular fisheye camera.
  • A panoramic camera generally uses fisheye lenses to capture 360° photos and achieve a panoramic effect.
  • The maximum angle of view of a fisheye image taken by a fisheye lens can reach 180 or even 270 degrees; determining the position of a target in the real environment from the pictures taken by fisheye lenses has therefore become an important application of panoramic cameras.
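For orientation (an editorial illustration, not part of the patent text): fisheye lenses with such fields of view are commonly described by an equidistant projection, r = f·θ, where θ is the angle between the incoming ray and the optical axis. The sketch below shows how a 3D direction would map to a fisheye pixel under that model; the focal length f and image center (cx, cy) are assumed values, not parameters from this publication.

```python
# Minimal sketch of the equidistant fisheye model r = f * theta.
# f, cx, cy are illustrative assumptions, not values from the patent.
import numpy as np

def project_equidistant(point_3d, f=320.0, cx=640.0, cy=640.0):
    """Project a 3D point (camera frame, z along the optical axis) to a pixel."""
    x, y, z = point_3d
    theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# A ray 30 degrees off-axis lands about 320 * pi/6 = 167.6 px from the center;
# theta may exceed 90 degrees, which is how a >180-degree view fits in the image.
u, v = project_equidistant((np.tan(np.radians(30.0)), 0.0, 1.0))
```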
  • The present invention proposes a panoramic depth measurement method, a four-eye fisheye camera, and a binocular fisheye camera, and aims to perform stereo matching on the images acquired by multiple fisheye lenses of a panoramic camera, form a panoramic depth image, and measure the depth of a target object.
  • The present invention proposes a panoramic depth measurement method using multiple fisheye lenses.
  • The method can calculate the 3D coordinates of the scene, and can also provide, in real time, a depth map of the target object for the moving panoramic camera or for its carrier, such as a drone, from which the position of the object is calculated to achieve obstacle avoidance.
  • The first aspect of the present invention provides a panoramic depth measurement method suitable for a four-eye fisheye camera, which includes the steps of: acquiring fisheye images taken by the fisheye lenses; performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and obtaining a panoramic depth map from the depth map.
  • Further, the four-eye fisheye camera has two of the fisheye lenses on each of two parallel surfaces. Performing stereo matching on the fisheye images and calculating the depth map of the overlapping area includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating a first depth map of a first overlapping area; and performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses on the same surface of the camera, and calculating a second depth map of a second overlapping area and a third depth map of a third overlapping area.
  • Obtaining the panoramic depth map from the depth map further includes: merging the first depth map, the second depth map, and the third depth map to obtain the panoramic depth map.
  • Further, performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera and calculating the first depth map of the first overlapping area includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area.
  • Acquiring the fisheye images taken by the fisheye lenses includes acquiring the pictures or video frames currently taken by each fisheye lens.
  • The stereo matching includes finding matching corresponding points in the different fisheye images.
  • The four-eye fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The overlapping area includes a 360-degree panoramic area.
  • The above method further includes the step of determining obstacles from the depth map.
  • A second aspect of the present invention provides a four-eye fisheye camera, including: an image acquisition module for acquiring fisheye images taken by the fisheye lenses; a stereo matching module for performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and a panoramic synthesis module for obtaining a panoramic depth map from the depth map.
  • Performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating the first depth map of the first overlapping area; and performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses on the same surface of the four-eye fisheye camera, and calculating the second depth map of the second overlapping area and the third depth map of the third overlapping area.
  • Obtaining the panoramic depth map from the depth map further includes: merging the first depth map, the second depth map, and the third depth map to obtain the panoramic depth map.
  • Performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera and calculating the first depth map of the first overlapping area further includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area.
  • Acquiring the fisheye images taken by the fisheye lenses includes acquiring the picture or video frame currently taken by each fisheye lens.
  • The stereo matching includes finding matching corresponding points in the different fisheye images.
  • The four-eye fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The overlapping area includes a 360-degree panoramic area.
  • The above four-eye fisheye camera further includes an obstacle detection module for determining obstacles from the depth map.
  • The third aspect of the present invention provides a panoramic depth measurement method suitable for a binocular fisheye camera, which includes the steps of: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and obtaining a panoramic depth map from the depth map.
  • Acquiring the fisheye images taken when the binocular fisheye camera is at different positions further includes: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at a first position, and acquiring the fisheye images taken when it is at a second position; the depth map is calculated from the displacement of the camera from the first position to the second position.
  • The overlapping area includes a 360-degree panoramic area.
  • The binocular fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The above method further includes the step of determining obstacles from the depth map.
  • The fourth aspect of the present invention provides a binocular fisheye camera, including: an image module for acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; a calculation module for performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and a depth module for obtaining a panoramic depth map from the depth map.
  • Acquiring the fisheye images taken when the binocular fisheye camera is at different positions further includes: acquiring the fisheye image taken by the fisheye lens when the binocular fisheye camera is at the first position, and the fisheye image taken when the camera is at the second position; the depth map is calculated from the displacement of the binocular fisheye camera from the first position to the second position.
  • The overlapping area includes a 360-degree panoramic area.
  • The binocular fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The above binocular fisheye camera further includes an obstacle avoidance module for determining obstacles from the depth map.
  • The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points.
  • The present invention can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
  • Fig. 1 is a flowchart of a panoramic depth measurement method provided by an embodiment of the present invention.
  • Fig. 2 is a schematic diagram of a four-eye fisheye camera provided by an embodiment of the present invention.
  • Fig. 3 is a schematic diagram of a four-eye fisheye camera provided by another embodiment of the present invention.
  • Fig. 4 is a schematic diagram of a binocular fisheye camera provided by an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of a movement state of a binocular fisheye camera provided by an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of a binocular fisheye camera provided by another embodiment of the present invention.
  • Referring to Fig. 1, an embodiment of the present invention discloses a panoramic depth measurement method suitable for a four-eye fisheye camera, comprising the steps of:
  • S101: acquiring fisheye images taken by the fisheye lenses;
  • S102: performing stereo matching on the fisheye images and calculating a depth map of the overlapping area;
  • S103: obtaining a panoramic depth map from the depth map.
  • Acquiring the fisheye images taken by the fisheye lenses includes acquiring the pictures or video frames currently taken by each fisheye lens.
  • In this embodiment, what is acquired is a picture taken by a fisheye lens.
  • Referring to Fig. 2, the above four-eye fisheye camera has two of the fisheye lenses on each of two parallel surfaces, four fisheye lenses in total, namely f1, f2, f3, and f4. In S102, performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating the first depth map of the first overlapping area.
  • Stereo matching is performed separately on the fisheye images taken by the two fisheye lenses f1 and f3, and by f2 and f4, on the same surface of the four-eye fisheye camera, and the second depth map S5 of the second overlapping area and the third depth map S6 of the third overlapping area are calculated respectively.
  • In S103, obtaining the panoramic depth map from the depth map further includes: merging the first depth maps S3 and S3', the second depth map S5, and the third depth map S6 to obtain the panoramic depth map.
  • The shooting angle of view of each of the four fisheye lenses is well over 180°, for example 240°; in other embodiments, the fisheye camera may have four or more fisheye lenses.
  • The step of performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera and calculating the first depth map of the first overlapping area specifically includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area. Specifically, the depth map of the annular viewing-angle overlap area S3 is calculated by stereo matching f1 and f2, and the depth map of the annular overlap area S3' by stereo matching f3 and f4; S3 and S3' together constitute the first depth map of the first overlapping area. It should be understood that the above overlapping areas are all three-dimensional spatial regions. A generic sketch of this pairwise step is given below.
  • In other embodiments, S102 may include acquiring the image of any one fisheye lens on one side of the four-eye fisheye camera and performing binocular stereo matching with the image of any one fisheye lens on the other side to obtain the overlapping viewing-angle region; as shown in Fig. 4, this region is the annular area S0, which together with the overlapping areas of the other two sides of the camera forms a region equal to or exceeding 360 degrees. A sketch of how such per-pair depth maps might be merged follows.
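How the per-pair depth maps might be combined is sketched below, again as a hedged illustration rather than the patent's specified procedure: each depth map is assumed to have been resampled onto one shared equirectangular (panoramic) grid, with NaN where a pair has no coverage, and overlapping estimates are merged by keeping the nearest surface, a conservative choice for obstacle avoidance.

```python
# Sketch: merge pairwise depth maps (e.g. S3, S3', S5, S6) into one panorama.
import numpy as np

def merge_panoramic_depth(depth_layers):
    """depth_layers: list of HxW arrays on a shared equirectangular grid."""
    stack = np.stack(depth_layers)        # (N, H, W)
    return np.nanmin(stack, axis=0)       # nearest depth wins; NaN = no coverage

# Usage: panorama = merge_panoramic_depth([s3, s3p, s5, s6])
```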
  • Stereo matching includes finding matching corresponding points in the different fisheye images, and may use matching methods such as dense or sparse optical flow, as in the non-authoritative sketch below.
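A hedged sketch of one such method: Farneback dense optical flow from OpenCV, chosen here purely as an illustration of finding corresponding points between two fisheye images; the input file names are assumptions.

```python
# Sketch: dense correspondences between two fisheye images via optical flow.
import cv2

img1 = cv2.imread("fisheye_a.png", cv2.IMREAD_GRAYSCALE)  # assumed inputs
img2 = cv2.imread("fisheye_b.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(
    img1, img2, None,
    pyr_scale=0.5, levels=4, winsize=21,
    iterations=3, poly_n=7, poly_sigma=1.5, flags=0)
# flow[y, x] = (dx, dy): pixel (x, y) in img1 corresponds to (x+dx, y+dy) in img2.
```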
  • In order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map; a minimal sketch follows below.
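A minimal editorial sketch of that idea, with an assumed 2-meter safety threshold (the patent does not specify one):

```python
# Sketch: flag anything closer than a safety distance as an obstacle.
import numpy as np

def find_obstacles(depth_m, max_range_m=2.0):
    """Return a boolean mask of pixels closer than max_range_m (threshold assumed)."""
    return np.isfinite(depth_m) & (depth_m > 0) & (depth_m < max_range_m)
```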
  • As an application scenario of this embodiment, the four-eye fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The drone may be either an unmanned aerial vehicle or an unmanned robot.
  • Applied to a drone, the four-eye fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
  • The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points.
  • The present invention can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
  • Referring to Fig. 3, another embodiment of the present invention discloses a four-eye fisheye camera 100.
  • The four-eye fisheye camera 100 has two of the fisheye lenses on each of two parallel surfaces, four fisheye lenses in total, namely f1, f2, f3, and f4. In the stereo matching module 12, performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating the first depth map of the first overlapping area.
  • Stereo matching is performed separately on the fisheye images taken by the two fisheye lenses f1 and f3, and by f2 and f4, on the same surface of the four-eye fisheye camera, and the second depth map S5 of the second overlapping area and the third depth map S6 of the third overlapping area are calculated respectively.
  • In the panoramic synthesis module 13, obtaining the panoramic depth map from the depth map further includes: merging the first depth maps S3 and S3', the second depth map S5, and the third depth map S6 to obtain the panoramic depth map.
  • In this embodiment, the step of performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera 100 and calculating the first depth map of the first overlapping area includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area. Specifically, the depth map of the overlapping area S3 is calculated by stereo matching f1 and f2, and the depth map of the overlapping area S3' by stereo matching f3 and f4; S3 and S3' together constitute the first depth map of the first overlapping area. It should be understood that the above overlapping areas are all three-dimensional spatial regions.
  • Stereo matching includes finding matching corresponding points in the different fisheye images.
  • In order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map.
  • As an application scenario of this embodiment, the four-eye fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The drone may be either an unmanned aerial vehicle or an unmanned robot.
  • Applied to a drone, the four-eye fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
  • The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points.
  • The present invention can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
  • Referring to Figs. 4 and 5, an embodiment of the present invention also discloses a panoramic depth measurement method suitable for a binocular fisheye camera, including the steps of: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and obtaining a panoramic depth map from the depth map.
  • Acquiring the fisheye images taken when the binocular fisheye camera is at different positions further includes: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at the first position t1, and acquiring the fisheye images taken when it is at the second position t2; the depth map is calculated from the displacement of the camera from t1 to t2.
  • The overlapping area includes a 360-degree panoramic area.
  • In Fig. 4 of this embodiment, the fisheye lenses f1 and f2 of the binocular fisheye camera face away from each other. Following the same principle as above, the depth maps of the overlapping areas S3 and S4 can be calculated, but regions S1 and S2 do not yet overlap, so their depth maps cannot be obtained.
  • As shown in Fig. 5, when the binocular fisheye camera undergoes a certain displacement, and this displacement can be measured, the images taken at the two positions t1 and t2 can be used together: the previously non-overlapping regions become covered by overlapping areas, these regions can be stereo-matched to obtain their depth maps, and a 360-degree depth map can thus be synthesized. Using the images taken at the two positions is equivalent to achieving the effect of a four-eye fisheye camera with a binocular one. A hedged numerical sketch of this two-position idea follows.
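As an editorial illustration: with a measured displacement between the shots at t1 and t2, a region seen in both images behaves like a stereo pair whose baseline is that displacement, so Z = f·|t|/d once the two views are rectified along the motion direction. The focal length and the example values below are assumptions, not parameters from the patent.

```python
# Sketch: motion stereo - depth from one lens imaging a region at t1 and t2.
import numpy as np

def motion_stereo_depth(disparity_px, displacement_m, f_px=350.0):
    """Depth from two shots of one lens, baseline = measured displacement."""
    d = np.asarray(disparity_px, dtype=np.float32)
    depth = np.full_like(d, np.nan)
    valid = d > 0
    depth[valid] = f_px * float(np.linalg.norm(displacement_m)) / d[valid]
    return depth

# e.g. a 0.5 m displacement and a 20 px disparity give 350 * 0.5 / 20 = 8.75 m.
```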
  • Stereo matching includes finding matching corresponding points in the different fisheye images.
  • In order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map.
  • As an application scenario of this embodiment, the binocular fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The drone may be either an unmanned aerial vehicle or an unmanned robot.
  • Applied to a drone, the binocular fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
  • The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points.
  • The present invention can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
  • Referring to Fig. 6, an embodiment of the present invention also discloses a binocular fisheye camera 200, including: an image module 21 for acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; a calculation module 22 for performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and a depth module 23 for obtaining a panoramic depth map from the depth map.
  • Acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera 200 is at different positions further includes: acquiring the fisheye image taken when the binocular fisheye camera is at the first position t1, and acquiring the fisheye image taken when it is at the second position t2.
  • The depth map is calculated from the displacement from the first position t1 to the second position t2.
  • The overlapping area includes a 360-degree panoramic area.
  • Referring to Fig. 4, the fisheye lenses f1 and f2 of the binocular fisheye camera 200 face away from each other.
  • The principle is the same as described above.
  • The depth maps of the overlapping areas S3 and S4 can be calculated, but regions S1 and S2 do not yet overlap, so their depth maps cannot be obtained.
  • As shown in Fig. 5, when the binocular fisheye camera undergoes a certain displacement, and this displacement can be measured, the images taken at the two positions t1 and t2 can be used together: the previously non-overlapping regions become covered by overlapping areas, these regions can be stereo-matched to obtain their depth maps, and a 360-degree depth map can thus be synthesized.
  • Stereo matching includes finding matching corresponding points in the different fisheye images.
  • In order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map.
  • As an application scenario of this embodiment, the binocular fisheye camera may serve as the body of a drone or as an external device attached to one.
  • The drone may be either an unmanned aerial vehicle or an unmanned robot.
  • Applied to a drone, the binocular fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
  • The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points.
  • The present invention can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
  • The disclosed device and method may be implemented in other ways.
  • The device embodiments described above are only illustrative.
  • The division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium.
  • The above software functional unit is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor execute some of the steps of the methods described in the various embodiments of the present invention.
  • The aforementioned storage media include various media that can store program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, and optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The present invention provides a panoramic depth measurement method, a four-eye fisheye camera, and a binocular fisheye camera. The method includes: acquiring fisheye images taken by fisheye lenses; performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and obtaining a panoramic depth map from the depth map. The invention aims to perform stereo matching on the images acquired by multiple fisheye lenses of a panoramic camera, form a panoramic depth image, and measure the depth of a target object; it can also provide, in real time, the depth map and thus the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.

Description

Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera
Technical Field
The present invention belongs to the field of panoramic images, and in particular relates to a panoramic depth measurement method, a four-eye fisheye camera, and a binocular fisheye camera.
Background Art
Most existing real-time obstacle avoidance techniques rely on a large number of sensors, such as ultrasonic sensors and lidar. These methods have drawbacks: the detection distance may be too short to avoid an obstacle in time, or the equipment may be too large and heavy to be mounted easily.
A panoramic camera generally uses fisheye lenses to capture 360° photos and achieve a panoramic effect. The maximum angle of view of a fisheye image taken by a fisheye lens can reach 180 or even 270 degrees; determining the position of a target in the real environment from the pictures taken by fisheye lenses has therefore become an important application of panoramic cameras.
Technical Problem
The present invention proposes a panoramic depth measurement method, a four-eye fisheye camera, and a binocular fisheye camera, aiming to perform stereo matching on the images acquired by multiple fisheye lenses of a panoramic camera, form a panoramic depth image, and measure the depth of a target object.
Using multiple fisheye lenses, the present invention proposes a panoramic depth measurement method that can calculate the 3D coordinates of the scene and can also provide, in real time, a depth map of the target object for the moving panoramic camera or for its carrier, such as a drone, from which the position of the object is calculated to achieve obstacle avoidance.
Technical Solution
The first aspect of the present invention provides a panoramic depth measurement method suitable for a four-eye fisheye camera, comprising the steps of: acquiring fisheye images taken by the fisheye lenses; performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and obtaining a panoramic depth map from the depth map.
Further, in the above method, the four-eye fisheye camera has two of the fisheye lenses on each of two parallel surfaces. Performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating a first depth map of a first overlapping area; and performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses on the same surface of the four-eye fisheye camera, and calculating a second depth map of a second overlapping area and a third depth map of a third overlapping area. Obtaining the panoramic depth map from the depth map further includes: merging the first depth map, the second depth map, and the third depth map to obtain the panoramic depth map.
Further, in the above method, performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera and calculating the first depth map of the first overlapping area further includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area.
Further, in the above method, acquiring the fisheye images taken by the fisheye lenses includes acquiring the pictures or video frames currently taken by each fisheye lens.
Further, in the above method, the stereo matching includes finding matching corresponding points in the different fisheye images.
Further, in the above method, the four-eye fisheye camera is the body of a drone or an external device of a drone.
Further, in the above method, the overlapping area includes a 360-degree panoramic area.
Further, the above method includes the step of determining obstacles from the depth map.
The second aspect of the present invention provides a four-eye fisheye camera, including: an image acquisition module for acquiring fisheye images taken by the fisheye lenses; a stereo matching module for performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and a panoramic synthesis module for obtaining a panoramic depth map from the depth map.
Further, in the above four-eye fisheye camera, two fisheye lenses are provided on each of two parallel surfaces. Performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating the first depth map of the first overlapping area; and performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses on the same surface of the four-eye fisheye camera, and calculating the second depth map of the second overlapping area and the third depth map of the third overlapping area. Obtaining the panoramic depth map from the depth map further includes: merging the first depth map, the second depth map, and the third depth map to obtain the panoramic depth map.
Further, in the above four-eye fisheye camera, performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera and calculating the first depth map of the first overlapping area further includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area.
Further, in the above four-eye fisheye camera, acquiring the fisheye images taken by the fisheye lenses includes acquiring the picture or video frame currently taken by each fisheye lens.
Further, in the above four-eye fisheye camera, the stereo matching includes finding matching corresponding points in the different fisheye images.
Further, the above four-eye fisheye camera is the body of a drone or an external device of a drone.
Further, in the above four-eye fisheye camera, the overlapping area includes a 360-degree panoramic area.
Further, the above four-eye fisheye camera includes an obstacle detection module for determining obstacles from the depth map.
The third aspect of the present invention provides a panoramic depth measurement method suitable for a binocular fisheye camera, comprising the steps of: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and obtaining a panoramic depth map from the depth map.
Further, in the above method, acquiring the fisheye images taken when the binocular fisheye camera is at different positions further includes: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at a first position, and acquiring the fisheye images taken when it is at a second position; the depth map is calculated from the displacement of the binocular fisheye camera from the first position to the second position. The overlapping area includes a 360-degree panoramic area.
Further, in the above method, the binocular fisheye camera is the body of a drone or an external device of a drone.
Further, the above method includes the step of determining obstacles from the depth map.
The fourth aspect of the present invention provides a binocular fisheye camera, including: an image module for acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; a calculation module for performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and a depth module for obtaining a panoramic depth map from the depth map.
Further, in the above binocular fisheye camera, acquiring the fisheye images taken when the camera is at different positions further includes: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at the first position, and acquiring the fisheye images taken when it is at the second position; the depth map is calculated from the displacement of the binocular fisheye camera from the first position to the second position. The overlapping area includes a 360-degree panoramic area.
Further, in the above binocular fisheye camera, the binocular fisheye camera is the body of a drone or an external device of a drone.
Further, the above binocular fisheye camera includes an obstacle avoidance module for determining obstacles from the depth map.
Beneficial Effects
By performing stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculating the depth of objects from the matched feature points, the present invention can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
Description of the Drawings
Fig. 1 is a flowchart of a panoramic depth measurement method provided by an embodiment of the present invention.
Fig. 2 is a schematic diagram of a four-eye fisheye camera provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of a four-eye fisheye camera provided by another embodiment of the present invention.
Fig. 4 is a schematic diagram of a binocular fisheye camera provided by an embodiment of the present invention.
Fig. 5 is a schematic diagram of a movement state of a binocular fisheye camera provided by an embodiment of the present invention.
Fig. 6 is a schematic diagram of a binocular fisheye camera provided by another embodiment of the present invention.
Embodiments of the Present Invention
In order to make the objectives, technical solutions, and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
In order to illustrate the technical solutions of the present invention, specific embodiments are described below.
Referring to Fig. 1, an embodiment of the present invention discloses a panoramic depth measurement method suitable for a four-eye fisheye camera, characterized by comprising the steps of:
S101: acquiring fisheye images taken by the fisheye lenses;
S102: performing stereo matching on the fisheye images and calculating a depth map of the overlapping area;
S103: obtaining a panoramic depth map from the depth map.
Acquiring the fisheye images taken by the fisheye lenses includes acquiring the pictures or video frames currently taken by each fisheye lens. In this embodiment, photos taken by the fisheye lenses are acquired.
Referring to Fig. 2, in this embodiment the above four-eye fisheye camera has two of the fisheye lenses on each of two parallel surfaces, four fisheye lenses in total, namely f1, f2, f3, and f4. In S102, performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating the first depth map of the first overlapping area. Stereo matching is performed separately on the fisheye images taken by the two fisheye lenses f1 and f3, and by f2 and f4, on the same surface of the four-eye fisheye camera, and the second depth map S5 of the second overlapping area and the third depth map S6 of the third overlapping area are calculated respectively. In S103, obtaining the panoramic depth map from the depth map further includes: merging the first depth maps S3 and S3', the second depth map S5, and the third depth map S6 to obtain the panoramic depth map.
The shooting angle of view of each of the four fisheye lenses is well over 180°, for example 240°; in other embodiments, the fisheye camera may have four or more fisheye lenses.
In this embodiment, the step of performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera and calculating the first depth map of the first overlapping area specifically includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area. Specifically, the depth map of the annular viewing-angle overlap area S3 is calculated by stereo matching f1 and f2, and the depth map of the annular overlap area S3' by stereo matching f3 and f4; S3 and S3' together constitute the first depth map of the first overlapping area. It should be understood that the above overlapping areas are all three-dimensional spatial regions.
In other embodiments, S102 may include acquiring the image of any one fisheye lens on one side of the four-eye fisheye camera and performing binocular stereo matching with the image of any one fisheye lens on the other side to obtain the overlapping viewing-angle region; as shown in Fig. 4, this region is the annular area S0, which together with the overlapping areas of the other two sides of the camera forms a region equal to or exceeding 360 degrees.
It should be understood that the above stereo matching includes finding matching corresponding points in the different fisheye images, and may use matching methods such as dense or sparse optical flow.
It should be understood that, in order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map.
As an application scenario of the four-eye fisheye camera in this embodiment, the camera may serve as the body of a drone or as an external device attached to one. The drone may be either an unmanned aerial vehicle or an unmanned robot. Applied to a drone, the four-eye fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points; it can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
Referring to Fig. 3, another embodiment of the present invention discloses a four-eye fisheye camera 100. The four-eye fisheye camera 100 has two of the fisheye lenses on each of two parallel surfaces, four fisheye lenses in total, namely f1, f2, f3, and f4. In the stereo matching module 12, performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further includes: performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera, and calculating the first depth map of the first overlapping area. Stereo matching is performed separately on the fisheye images taken by the two fisheye lenses f1 and f3, and by f2 and f4, on the same surface of the four-eye fisheye camera, and the second depth map S5 of the second overlapping area and the third depth map S6 of the third overlapping area are calculated respectively. In the panoramic synthesis module 13, obtaining the panoramic depth map from the depth map further includes: merging the first depth maps S3 and S3', the second depth map S5, and the third depth map S6 to obtain the panoramic depth map.
In this embodiment, the step of performing stereo matching on the fisheye images taken by the fisheye lenses on different surfaces of the four-eye fisheye camera 100 and calculating the first depth map of the first overlapping area specifically includes: performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses located at the same end of different surfaces, and calculating the first depth map of the first overlapping area. Specifically, the depth map of the overlapping area S3 is calculated by stereo matching f1 and f2, and the depth map of the overlapping area S3' by stereo matching f3 and f4; S3 and S3' together constitute the first depth map of the first overlapping area. It should be understood that the above overlapping areas are all three-dimensional spatial regions.
It should be understood that the above stereo matching includes finding matching corresponding points in the different fisheye images.
It should be understood that, in order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map.
As an application scenario of the four-eye fisheye camera in this embodiment, the camera may serve as the body of a drone or as an external device attached to one. The drone may be either an unmanned aerial vehicle or an unmanned robot. Applied to a drone, the four-eye fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points; it can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
Referring to Figs. 4 and 5, an embodiment of the present invention also discloses a panoramic depth measurement method suitable for a binocular fisheye camera, comprising the steps of: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and obtaining a panoramic depth map from the depth map.
In this embodiment, acquiring the fisheye images taken when the binocular fisheye camera is at different positions further includes: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at the first position t1, and acquiring the fisheye images taken when it is at the second position t2; the depth map is calculated from the displacement of the binocular fisheye camera from the first position t1 to the second position t2. The overlapping area includes a 360-degree panoramic area.
In Fig. 4 of this embodiment, the fisheye lenses f1 and f2 of the binocular fisheye camera face away from each other. Following the same principle as above, the depth maps of the overlapping areas S3 and S4 can be calculated, but regions S1 and S2 do not yet overlap, so their depth maps cannot be obtained. As shown in Fig. 5, when the binocular fisheye camera undergoes a certain displacement, and this displacement can be measured, the images taken at the two positions t1 and t2 can be used together: the previously non-overlapping regions become covered by overlapping areas, these regions can be stereo-matched to obtain their depth maps, and a 360-degree depth map can thus be synthesized. Using the images taken at the two positions is equivalent to achieving the effect of a four-eye fisheye camera with a binocular one.
It should be understood that the above stereo matching includes finding matching corresponding points in the different fisheye images.
It should be understood that, in order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map.
As an application scenario of the binocular fisheye camera in this embodiment, the camera may serve as the body of a drone or as an external device attached to one. The drone may be either an unmanned aerial vehicle or an unmanned robot. Applied to a drone, the binocular fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points; it can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
Referring to Fig. 6, an embodiment of the present invention also discloses a binocular fisheye camera 200, including: an image module 21 for acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at different positions; a calculation module 22 for performing stereo matching on the fisheye images and calculating a depth map of the overlapping area; and a depth module 23 for obtaining a panoramic depth map from the depth map.
In this embodiment, acquiring the fisheye images taken when the binocular fisheye camera 200 is at different positions further includes: acquiring the fisheye images taken by the fisheye lenses when the binocular fisheye camera is at the first position t1, and acquiring the fisheye images taken when it is at the second position t2; the depth map is calculated from the displacement of the binocular fisheye camera from the first position t1 to the second position t2. The overlapping area includes a 360-degree panoramic area.
Referring to Fig. 4, the fisheye lenses f1 and f2 of the binocular fisheye camera 200 face away from each other. Following the same principle as above, the depth maps of the overlapping areas S3 and S4 can be calculated, but regions S1 and S2 do not yet overlap, so their depth maps cannot be obtained. As shown in Fig. 5, when the binocular fisheye camera undergoes a certain displacement, and this displacement can be measured, the images taken at the two positions t1 and t2 can be used together: the previously non-overlapping regions become covered by overlapping areas, these regions can be stereo-matched to obtain their depth maps, and a 360-degree depth map can thus be synthesized. Using the images taken at the two positions is equivalent to achieving the effect of a four-eye fisheye camera with a binocular one.
It should be understood that the above stereo matching includes finding matching corresponding points in the different fisheye images.
It should be understood that, in order to obtain a 360-degree panoramic depth map, the overlapping areas correspondingly cover a 360-degree panoramic region. Since near and far objects can be distinguished within a region of the depth map, obstacles can be determined from the depth map.
As an application scenario of the binocular fisheye camera in this embodiment, the camera may serve as the body of a drone or as an external device attached to one. The drone may be either an unmanned aerial vehicle or an unmanned robot. Applied to a drone, the binocular fisheye camera of this embodiment can provide the drone with a depth map for perceiving the surrounding environment and can detect obstacles, thereby assisting the drone in avoiding obstacles or in path planning.
The present invention performs stereo matching on the images taken by the top-bottom and/or left-right fisheye lenses of a panoramic camera and calculates the depth of objects from the matched feature points; it can calculate the 3D coordinates of the scene, and can also provide, in real time, the position of the target object for the moving panoramic camera or for its carrier, such as a drone, thereby achieving obstacle avoidance.
In the several embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are only illustrative; the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor execute some of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage media include various media that can store program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, and optical disks.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or substitute some or all of the technical features with equivalents, and these modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (23)

  1. A panoramic depth measurement method, characterized by comprising the steps of:
    acquiring fisheye images taken by adjacent fisheye lenses;
    performing stereo matching on the fisheye images taken by the adjacent fisheye lenses, and calculating a depth map of the overlapping area;
    obtaining a panoramic depth map from the depth map.
  2. The method of claim 1, suitable for a four-eye fisheye camera, characterized in that
    the four-eye fisheye camera has two of the fisheye lenses on each of two parallel surfaces,
    performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further comprises:
    performing stereo matching on the fisheye images taken by the fisheye lenses at the same end of different surfaces of the four-eye fisheye camera, and calculating first depth maps of two first overlapping areas respectively;
    performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses on the same surface of the four-eye fisheye camera, and calculating a second depth map of a second overlapping area and a third depth map of a third overlapping area respectively;
    and obtaining the panoramic depth map from the depth map further comprises:
    merging the two first depth maps, the second depth map, and the third depth map to obtain the panoramic depth map.
  3. The method of claim 2, characterized in that acquiring the fisheye images taken by the fisheye lenses comprises acquiring the picture or video frame currently taken by each fisheye lens.
  4. The method of claim 2, characterized in that the stereo matching comprises finding matching corresponding points in the different fisheye images.
  5. The method of claim 2, characterized in that the four-eye fisheye camera is the body of a drone or an external device of a drone.
  6. The method of claim 1, characterized in that the overlapping area comprises a 360-degree panoramic area.
  7. An obstacle determination method, characterized in that a panoramic depth map is obtained according to the method of any one of claims 1 to 6, and obstacles are then determined from the depth map.
  8. A fisheye camera comprising a plurality of fisheye lenses, characterized by comprising:
    an image acquisition module for acquiring fisheye images taken by adjacent fisheye lenses;
    a stereo matching module for performing stereo matching on the fisheye images taken by the adjacent fisheye lenses and calculating a depth map of the overlapping area;
    a panoramic synthesis module for obtaining a panoramic depth map from the depth map.
  9. The fisheye camera of claim 8, characterized in that
    the fisheye camera is a four-eye fisheye camera;
    the four-eye fisheye camera has two fisheye lenses on each of two parallel surfaces,
    performing stereo matching on the fisheye images and calculating the depth map of the overlapping area further comprises:
    performing stereo matching on the fisheye images taken by the fisheye lenses at the same end of different surfaces of the four-eye fisheye camera, and calculating first depth maps of two first overlapping areas respectively;
    performing stereo matching, respectively, on the fisheye images taken by the two fisheye lenses on the same surface of the four-eye fisheye camera, and calculating a second depth map of a second overlapping area and a third depth map of a third overlapping area respectively;
    and obtaining the panoramic depth map from the depth map further comprises:
    merging the two first depth maps, the second depth map, and the third depth map to obtain the panoramic depth map.
  10. The four-eye fisheye camera of claim 9, characterized in that acquiring the fisheye images taken by the fisheye lenses comprises acquiring the picture or video frame currently taken by each fisheye lens.
  11. The four-eye fisheye camera of claim 9, characterized in that the stereo matching comprises finding matching corresponding points in the different fisheye images.
  12. The four-eye fisheye camera of claim 9, characterized in that the four-eye fisheye camera is the body of a drone or an external device of a drone.
  13. The fisheye camera of claim 8, characterized in that the overlapping area comprises a 360-degree panoramic area.
  14. The fisheye camera of claim 8, characterized by further comprising:
    an obstacle detection module for determining obstacles from the depth map.
  15. A panoramic depth measurement method suitable for a fisheye camera, characterized by comprising the steps of:
    acquiring the fisheye images taken by a fisheye lens of the fisheye camera at a first position and at a second position, and the displacement of the fisheye camera from the first position to the second position;
    performing stereo matching on the fisheye images obtained by the same fisheye lens at the different positions, and calculating a depth map of the overlapping area;
    obtaining a panoramic depth map from the depth map.
  16. The method of claim 15, characterized in that
    the overlapping area comprises a 360-degree panoramic area.
  17. The method of claim 15, characterized in that the fisheye camera is the body of a drone or an external device of a drone.
  18. An obstacle determination method, characterized in that a panoramic depth map is obtained according to the method of any one of claims 15 to 17, and obstacles are then determined from the depth map.
  19. A fisheye camera, characterized by comprising:
    an image module for acquiring the fisheye images taken by a fisheye lens of the fisheye camera at a first position and at a second position;
    a calculation module for performing stereo matching on the fisheye images obtained by the same fisheye lens at the different positions, and calculating a depth map of the overlapping area from the displacement of the fisheye camera from the first position to the second position;
    a depth module for obtaining a panoramic depth map from the depth map.
  20. The fisheye camera of claim 19, characterized in that
    the fisheye camera is a binocular fisheye camera;
    the two fisheye lenses of the binocular fisheye camera face away from each other.
  21. The fisheye camera of claim 19, characterized in that
    the overlapping area comprises a 360-degree panoramic area.
  22. The fisheye camera of claim 19, characterized in that the fisheye camera is the body of a drone or an external device of a drone.
  23. The fisheye camera of claim 19, characterized by further comprising:
    an obstacle avoidance module for determining obstacles from the depth map.
PCT/CN2020/131506 2019-11-25 2020-11-25 Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera WO2021104308A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911164667.4A CN112837207A (zh) 2019-11-25 2019-11-25 Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera
CN201911164667.4 2019-11-25

Publications (1)

Publication Number Publication Date
WO2021104308A1 (zh)

Family

ID=75922111

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/131506 WO2021104308A1 (zh) 2019-11-25 2020-11-25 Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera

Country Status (2)

Country Link
CN (1) CN112837207A (zh)
WO (1) WO2021104308A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023130465A1 (zh) * 2022-01-10 2023-07-13 深圳市大疆创新科技有限公司 Aircraft, image processing method and apparatus, and movable platform
CN116563186A (zh) * 2023-05-12 2023-08-08 中山大学 Real-time panoramic perception system and method based on a dedicated AI perception chip

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023041884A1 (fr) 2021-09-17 2023-03-23 Lerity Hemispherical optronic system for detecting and locating threats with real-time processing
FR3127353A1 (fr) 2021-09-17 2023-03-24 Lerity Hemispherical optronic system for detecting and locating threats with real-time processing
WO2024103366A1 (zh) * 2022-11-18 2024-05-23 影石创新科技股份有限公司 Panoramic unmanned aerial vehicle

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090201384A1 (en) * 2008-02-13 2009-08-13 Samsung Electronics Co., Ltd. Method and apparatus for matching color image and depth image
CN105787447A (zh) * 2016-02-26 2016-07-20 深圳市道通智能航空技术有限公司 Omnidirectional obstacle avoidance method and system for an unmanned aerial vehicle based on binocular vision
CN106931961A (zh) * 2017-03-20 2017-07-07 成都通甲优博科技有限责任公司 Automatic navigation method and device
CN107437273A (zh) * 2017-09-06 2017-12-05 深圳岚锋创视网络科技有限公司 Six-degree-of-freedom three-dimensional reconstruction method and system for virtual reality, and portable terminal
CN108230392A (zh) * 2018-01-23 2018-06-29 北京易智能科技有限公司 IMU-based false-alarm rejection method for visual obstacle detection
CN108322730A (zh) * 2018-03-09 2018-07-24 嘀拍信息科技南通有限公司 Panoramic depth camera system capable of capturing 360-degree scene structure
CN109360150A (zh) * 2018-09-27 2019-02-19 轻客小觅智能科技(北京)有限公司 Method and device for stitching panoramic depth maps based on a depth camera
CN210986289U (zh) * 2019-11-25 2020-07-10 影石创新科技股份有限公司 Four-eye fisheye camera and binocular fisheye camera

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000461B (zh) * 2006-12-14 2010-09-08 上海杰图软件技术有限公司 Method for generating a cubic panorama from fisheye images
CN108269234B (zh) * 2016-12-30 2021-11-19 成都美若梦景科技有限公司 Panoramic camera lens attitude estimation method and panoramic camera


Also Published As

Publication number Publication date
CN112837207A (zh) 2021-05-25

Similar Documents

Publication Publication Date Title
WO2021227359A1 (zh) Unmanned aerial vehicle projection method, apparatus, device, and storage medium
WO2021104308A1 (zh) Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera
US10085011B2 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
US11170561B1 (en) Techniques for determining a three-dimensional textured representation of a surface of an object from a set of images with varying formats
WO2019100933A1 (zh) Method, apparatus, and system for three-dimensional measurement
JP4825980B2 (ja) Fisheye camera calibration method
WO2018153374A1 (zh) Camera calibration
WO2018205623A1 (en) Method for displaying a virtual image, a virtual image display system and device, a non-transient computer-readable storage medium
US20190012804A1 (en) Methods and apparatuses for panoramic image processing
CN111028155B (zh) Parallax image stitching method based on multiple pairs of binocular cameras
WO2017020150A1 (zh) Image processing method, device, and camera
JP2017509986A (ja) Optical flow imaging system and method using ultrasonic depth detection
US20220067974A1 (en) Cloud-Based Camera Calibration
JP2007024647A (ja) Distance calculation device, distance calculation method, structure analysis device, and structure analysis method
TWI788739B (zh) 3D display device and 3D image display method
WO2018032841A1 (zh) Method, device, and system for rendering three-dimensional images
CN102831816B (zh) Device for providing a real-time scene map
US11403499B2 (en) Systems and methods for generating composite sets of data from different sensors
CN109495733B (zh) Three-dimensional image reconstruction method, device, and non-transitory computer-readable storage medium therefor
JP2010276433A (ja) Imaging device, image processing device, and distance measurement device
CN210986289U (zh) Four-eye fisheye camera and binocular fisheye camera
CN113436267B (zh) Visual-inertial calibration method, apparatus, computer device, and storage medium
JP2019525509A (ja) Horizontal-parallax stereo panorama capture method
Lin et al. Real-time low-cost omni-directional stereo vision via bi-polar spherical cameras
TWM594322U (zh) Camera configuration system for omnidirectional stereo vision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893883

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20893883

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20893883

Country of ref document: EP

Kind code of ref document: A1