WO2020114433A1 - Depth perception method and apparatus, and depth perception device - Google Patents

Depth perception method and apparatus, and depth perception device

Info

Publication number
WO2020114433A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
group
images
divided
correction
Prior art date
Application number
PCT/CN2019/123072
Other languages
English (en)
Chinese (zh)
Inventor
郑欣
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司
Publication of WO2020114433A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T3/047 Fisheye or wide-angle transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20228 Disparity calculation for image-based rendering

Definitions

  • Embodiments of the present invention relate to the field of computer vision technology, and in particular, to a depth perception method, apparatus, and depth perception equipment.
  • With the growing popularity of robots and drones, obstacle-sensing sensors have come into increasingly wide use.
  • Binocular sensors are widely used as obstacle-sensing sensors because of their low cost, wide range of application scenarios, long detection range, and high efficiency. Owing to the wide angle of view of fisheye lenses, binocular depth perception based on fisheye lenses has attracted increasing research attention.
  • When binocular fisheye lenses are used for obstacle depth perception, however, the severe deformation at the periphery of the fisheye image leaves the edges of the corrected image greatly distorted, causing inaccurate stereo matching.
  • In one existing approach, a Taylor-series model is used for calibration and depth measurement, and the spherical image is expressed as a rectangular image parameterized by latitude and longitude. This method can reduce the measurement error caused by distortion to a certain extent, but a large error remains at the edge of the image.
  • An object of the embodiments of the present invention is to provide a depth perception method, apparatus, and depth perception device with high edge-depth detection accuracy.
  • an embodiment of the present invention provides a depth perception method, which is used for a binocular system to perceive the depth of a target area.
  • the binocular system includes a first camera device and a second camera device, and the method includes:
  • in each group of divided images, the center line of the first divided image is parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image is at a preset angle with the center line;
  • the depth information of the region corresponding to each group of the divided images is obtained.
  • the performing image segmentation and correction on the first target image to obtain multiple first segmented images includes:
  • a piecewise algorithm is used to perform image segmentation and correction on the first target image to obtain the plurality of first segmented images.
  • the performing image segmentation and correction on the second target image to obtain multiple second segmented images respectively corresponding to the multiple first segmented images includes:
  • a piecewise algorithm is used to perform image segmentation and correction on the second target image to obtain the plurality of second segmented images.
  • the separately calibrating each group of divided images to obtain the calibration parameters corresponding to each group of the divided images includes:
  • the Zhang Zhengyou method or the Faugeras method is used to calibrate each group of segmented images to obtain the calibration parameters corresponding to each group of segmented images.
  • the performing binocular matching on each group of the divided images to obtain a disparity map corresponding to each group of the divided images includes: using the BM algorithm or the SGBM algorithm to perform binocular matching on each group of the divided images;
  • the obtaining depth information of the area corresponding to each group of the segmented images according to the disparity map and the calibration parameters includes: obtaining the three-dimensional coordinates of each point from the disparity map and the calibration parameters, where x_i, y_i, and z_i represent the three-dimensional coordinates of each point, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters.
  • the line connecting the first camera device and the second camera device is at a preset angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • When the line connecting the first camera device and the second camera device is parallel to the horizontal direction, the image segmentation and correction of the first target image includes performing image segmentation and correction on the first target image in a direction at a preset angle with the horizontal direction, and the image segmentation and correction of the second target image includes performing image segmentation and correction on the second target image in a direction at a preset angle with the horizontal direction, where the preset angle is not 90°.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • an embodiment of the present invention provides a depth perception apparatus, which is used for a binocular system to perceive the depth of a target area.
  • the binocular system includes a first camera device and a second camera device, and the apparatus includes:
  • an obtaining module, configured to obtain a first target image of the target area obtained by the first camera device and a second target image of the target area obtained by the second camera device;
  • a segmentation and correction module, configured to perform image segmentation and correction on the first target image to obtain multiple first divided images and on the second target image to obtain multiple second divided images, where each first divided image and the second divided image corresponding to it form a group of divided images; in each group of the divided images, the center line of the first divided image is parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image is at a preset angle with the center line;
  • a calibration module, configured to calibrate each group of divided images separately to obtain calibration parameters corresponding to each group of the divided images;
  • a binocular matching module, configured to perform binocular matching on each group of the divided images to obtain a disparity map corresponding to each group of the divided images; and
  • a depth information acquisition module, configured to acquire depth information of the region corresponding to each group of the divided images according to the disparity map and the calibration parameters.
  • the segmentation and correction module is specifically used to:
  • a piecewise algorithm is used to perform image segmentation and correction on the first target image to obtain the plurality of first segmented images.
  • the segmentation and correction module is specifically used to:
  • a piecewise algorithm is used to perform image segmentation and correction on the second target image to obtain the plurality of second segmented images.
  • the calibration module is specifically used to:
  • the Zhang Zhengyou method or the Faugeras method is used to calibrate each group of segmented images to obtain the calibration parameters corresponding to each group of segmented images.
  • the binocular matching module is specifically configured to use the BM algorithm or the SGBM algorithm to perform binocular matching on each group of the divided images;
  • the depth information acquisition module is specifically configured to obtain the three-dimensional coordinates of each point from the disparity map and the calibration parameters, where x_i, y_i, and z_i represent the three-dimensional coordinates of each point, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters.
  • the line connecting the first camera device and the second camera device is at a preset angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • When the line connecting the first camera device and the second camera device is parallel to the horizontal direction, the segmentation and correction module is specifically configured to perform image segmentation and correction on the first target image and the second target image in a direction at a preset angle with the horizontal direction, where the preset angle is not 90°.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • an embodiment of the present invention provides a depth sensing device, including:
  • a binocular system provided on the main body, the binocular system including a first camera device and a second camera device; and
  • a controller provided in the main body, the controller including:
  • at least one processor; and
  • a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to execute the method described above.
  • the line connecting the first camera device and the second camera device is at a preset angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • the depth sensing device is a drone.
  • The depth perception method, apparatus, and depth perception device of the embodiments of the present invention perform image segmentation and correction on the first target image of the target area captured by the first camera device to obtain multiple first divided images, and perform image segmentation and correction on the second target image of the target area to obtain multiple second divided images corresponding to the multiple first divided images; each first divided image and its corresponding second divided image form a group of divided images. Because the image is segmented before being corrected, a corrected image with relatively small image-quality loss can be obtained, thereby improving the accuracy of binocular stereo matching and of edge-depth detection. Moreover, the multiple groups of divided images constitute multiple binocular systems, which can perceive depth information in multiple directions.
  • FIG. 1 is a schematic structural diagram of an embodiment of a depth perception device according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an application scenario of a depth sensing device according to an embodiment of the present invention
  • FIG. 3a is a schematic diagram of the positions of the camera devices in an embodiment of the depth perception device of the present invention.
  • FIG. 3b is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 3c is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of an embodiment of the depth perception method of the present invention.
  • FIG. 6 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an embodiment of a depth perception device of the present invention.
  • FIG. 8 is a schematic diagram of a hardware structure of a controller in an embodiment of a depth perception device of the present invention.
  • the depth sensing method and apparatus provided by the embodiments of the present invention may be applied to the depth sensing device 100 shown in FIG. 1.
  • the depth sensing device 100 includes a main body (not shown in the figure), a binocular system 101 for sensing an image of a target area, and a controller 10.
  • the binocular system 101 includes a first camera 20 and a second camera 30, both of which are used to acquire a target image of a target area.
  • the controller 10 is used to process the target images acquired by the first camera 20 and the second camera 30 to acquire depth information.
  • The controller 10 performs image segmentation and correction on the first target image acquired by the first camera 20 and on the second target image acquired by the second camera 30 to obtain a plurality of first divided images and a plurality of second divided images respectively corresponding to the first divided images; each first divided image and the second divided image corresponding to it constitute a group of divided images.
  • the target image is divided into multiple small images, and after correcting the multiple small images, a corrected image with relatively small image quality loss can be obtained.
  • Stereo matching and depth calculation using images with a small loss of image quality improve the accuracy of binocular stereo matching and the accuracy of edge depth detection.
  • multiple sets of segmented images constitute multiple sets of binocular systems, which can perceive depth information in multiple directions.
  • the depth sensing device 100 can be used in various situations where depth sensing is required, such as three-dimensional reconstruction, obstacle detection, and path planning.
  • the depth sensing device 100 is used in drones, robots, and the like.
  • FIG. 2 shows an application scenario of the depth sensing device 100 as an unmanned aerial vehicle for obstacle detection and path planning.
  • the first camera 20 and the second camera 30 can be any suitable lens.
  • The depth perception method and apparatus of the embodiments of the present invention are especially suitable when the first camera 20 and the second camera 30 are wide-angle lenses such as fisheye lenses, or panoramic lenses such as fold-back or omnidirectional lenses. Because the edges of images captured by these lenses are severely deformed and the edge-depth detection accuracy is correspondingly low, the depth perception method and apparatus of the embodiments of the present invention, which obtain edge-corrected images with relatively small image-quality loss, greatly improve the edge-depth detection accuracy.
  • the first camera 20 and the second camera 30 can be arranged in any suitable manner.
  • the first camera 20 and the second camera 30 can be staggered.
  • As shown in FIG. 3a, the first camera 20 and the second camera 30 are arranged diagonally, with the first camera 20 at the upper diagonal point and the second camera 30 at the lower diagonal point.
  • The line connecting the first camera 20 and the second camera 30 is at a preset angle A with the horizontal direction, so that the first camera 20 and the second camera 30 do not block each other.
  • The images obtained by dividing and correcting the first target image and the second target image acquired by the first camera 20 and the second camera 30 are shown in FIG. 3b.
  • the first camera device 20 and the second camera device 30 may also be arranged in parallel horizontally, and the first camera device 20 and the second camera device 30 are located on the same horizontal line.
  • When performing image segmentation on the first target image and the second target image, image segmentation and correction may be performed in a direction at a preset angle A with the horizontal direction (for example, direction B in FIG. 4). It can be seen from FIG. 4 that the two divided images in each group of binocular systems are staggered from each other, and no dead-zone area appears.
  • the center line of the first segmented image (for example, line a or line b) is parallel to the center line of the second segmented image.
  • The first divided image and the second divided image need to be staggered from each other; that is, the line connecting the first divided image and the second divided image forms a preset angle A with the center line.
  • the preset included angle may be any suitable angle other than 90 degrees, for example, 45 degrees or 135 degrees.
  • FIG. 5 is a schematic flowchart of a depth sensing method according to an embodiment of the present invention. The method may be executed by the depth sensing device 100 in FIG. 1. As shown in FIG. 5, the method includes:
  • a first target image of the target area is obtained by the first camera 20, and a second target image of the target area is obtained by the second camera 30.
  • the first camera device 20 and the second camera device 30 can be any suitable lens, such as a wide-angle lens such as a fisheye lens, or a panoramic lens such as a fold-back or omnidirectional lens.
  • Each first divided image and the second divided image corresponding to it form a group of divided images; in each group of the divided images, the center line of the first divided image is parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image forms a preset angle with the center line.
  • the piecewise algorithm may be used to segment and correct the first target image and the second target image to obtain multiple first segmented images and multiple second segmented images.
  • The number of first divided images and second divided images may be any appropriate number, for example, 2, 3, 4, 5, or more. Because the middle area of the target image is less deformed while the surrounding area is more deformed, the middle area can be treated as a single region and the area around it divided into multiple peripheral regions in order to further improve edge-depth detection accuracy. The number of peripheral regions may be 4, 6, 8, or another value.
  • FIG. 6 shows a scenario where the first camera 20 and the second camera 30 are fisheye lenses and there are four peripheral areas.
  • In FIG. 6, the left image is the image before correction and the right image is the image after correction.
  • The four peripheral areas are an upper area, a lower area, a left area, and a right area (corresponding to the areas numbered 1, 5, 2, and 4 in FIG. 6, respectively).
  • In one embodiment, the upper area, the left area, the middle area, the right area, and the lower area represent the images in the front, left, top, right, and rear directions, respectively, and the binocular system thus formed can obtain depth information in five directions: front, left, top, right, and rear.
  • In another embodiment, the upper area, the left area, the middle area, the right area, and the lower area represent the images in the front-left, front-right, top, back-left, and back-right directions, respectively, and the binocular system thus formed can obtain depth information in five directions: front left, front right, top, back left, and back right.
  • depth perception in the five directions of front, left, top, right, and rear can be achieved, and the perception angle is 180 degrees.
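  • As a rough illustration of this segment-then-correct idea, the sketch below reprojects a single fisheye image into five pinhole sub-views (front, left, top, right, rear) using OpenCV's fisheye camera model. It is a minimal sketch under stated assumptions, not the patented implementation: the intrinsics K and D, the output size, and the virtual focal length out_f are hypothetical placeholders.

```python
# Minimal sketch: split one fisheye image into five corrected pinhole
# sub-views. K (3x3) and D (4 coefficients) are hypothetical fisheye
# intrinsics obtained from a prior calibration.
import cv2
import numpy as np

def directional_views(fisheye_img, K, D, out_size=(400, 400), out_f=200.0):
    w, h = out_size
    # Intrinsics of each virtual pinhole sub-camera.
    P = np.array([[out_f, 0.0, w / 2.0],
                  [0.0, out_f, h / 2.0],
                  [0.0, 0.0, 1.0]])
    # Rodrigues vectors tilting the virtual camera from the optical axis
    # ("top") toward the four peripheral viewing directions.
    tilts = {
        "top":   np.zeros(3),
        "front": np.array([-np.pi / 2, 0.0, 0.0]),
        "rear":  np.array([np.pi / 2, 0.0, 0.0]),
        "left":  np.array([0.0, -np.pi / 2, 0.0]),
        "right": np.array([0.0, np.pi / 2, 0.0]),
    }
    views = {}
    for name, rvec in tilts.items():
        R, _ = cv2.Rodrigues(rvec)
        # Build undistortion/rectification maps for this sub-view, then
        # resample the fisheye image through them.
        m1, m2 = cv2.fisheye.initUndistortRectifyMap(K, D, R, P, out_size,
                                                     cv2.CV_16SC2)
        views[name] = cv2.remap(fisheye_img, m1, m2,
                                interpolation=cv2.INTER_LINEAR)
    return views
```

Running this on the first target image and on the second target image would yield five corresponding pairs of divided images, one virtual binocular system per direction.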
  • two or more sets of depth sensing devices 100 may be installed on the drone or robot to achieve 360-degree depth sensing.
  • The embodiments of the present invention thus use fewer lenses and save space. They also reduce the calibration error caused by deformation of connecting components, reduce the complexity and running time of the calibration algorithm, and yield calibration parameters that are not prone to change, with the advantage of zero delay.
  • the Zhang Zhengyou method or the Faugeras method may be used to separately calibrate each group of segmented images to obtain the calibration parameters (including internal and external parameters) corresponding to each group of the segmented images.
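  • For illustration, the following is a minimal sketch of one way to calibrate a single group of divided images with OpenCV, whose chessboard-based routines implement Zhang Zhengyou's method; the board geometry, square size, and image lists are assumptions made for the example, not values from the patent.

```python
# Minimal sketch: calibrate one group of divided images (one virtual
# stereo pair). `board` is the number of inner chessboard corners and
# `square` the square size in meters; both are illustrative.
import cv2
import numpy as np

def calibrate_group(left_imgs, right_imgs, board=(9, 6), square=0.025):
    # 3D chessboard corner positions in the board's own coordinate frame.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for li, ri in zip(left_imgs, right_imgs):
        ok_l, corners_l = cv2.findChessboardCorners(li, board)
        ok_r, corners_r = cv2.findChessboardCorners(ri, board)
        if ok_l and ok_r:  # keep only views seen by both sub-cameras
            obj_pts.append(objp)
            left_pts.append(corners_l)
            right_pts.append(corners_r)

    size = left_imgs[0].shape[1::-1]  # (width, height)
    # Intrinsics of each sub-camera (Zhang's method) ...
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    # ... then the stereo extrinsics R, T of the pair (baseline = |T|).
    _, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, D1, K2, D2, R, T
```

In this sketch, the internal parameters cx, cy, fx, and fy used later come from K1/K2, and the external parameters from R and T.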
  • The BM (Block Matching) algorithm or the SGBM (Semi-Global Block Matching) algorithm can be used to perform binocular matching on each group of the divided images to obtain the disparity map corresponding to each group of the divided images.
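  • A minimal sketch of this matching step with OpenCV's SGBM implementation follows; the matcher parameters are illustrative defaults, not values from the patent.

```python
# Minimal sketch: dense binocular matching for one group of divided
# images using semi-global block matching.
import cv2

def disparity_map(left_gray, right_gray, num_disp=64, block=5):
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,   # search range; must be divisible by 16
        blockSize=block,
        P1=8 * block * block,      # penalty for small disparity jumps
        P2=32 * block * block,     # penalty for large disparity jumps
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # compute() returns 16-bit fixed-point disparities scaled by 16.
    return sgbm.compute(left_gray, right_gray).astype("float32") / 16.0
```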
  • For the images obtained after the first target image and the second target image are divided and corrected, refer to FIG. 3b.
  • The first target image is divided into five divided images numbered 1-5, and the second target image is divided into five divided images numbered 6-10.
  • Divided images 1 and 6 form the front binocular system, 2 and 7 form the left binocular system, 3 and 8 form the upper binocular system, 4 and 9 form the right binocular system, and 5 and 10 form the rear binocular system.
  • An existing stereo matching algorithm obtains the disparity of each feature point between the first target image and the second target image, and the disparities of these feature points constitute the disparity map corresponding to each group of divided images.
  • The following formulas, written here in the standard pinhole stereo form consistent with the variables defined below, can be used to obtain the three-dimensional coordinates of each point in the area corresponding to each group of the divided images: z_i = fx × baseline / disparity, x_i = (u_i − cx) × z_i / fx, y_i = (v_i − cy) × z_i / fy.
  • Here x_i, y_i, and z_i represent the three-dimensional coordinates of each point, (u_i, v_i) are its pixel coordinates, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters. According to the three-dimensional coordinates of each point in the area, the depth information of the area can be obtained.
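  • A minimal sketch of this triangulation step, assuming the standard pinhole relations written above (function and variable names are placeholders):

```python
# Minimal sketch: per-pixel 3D coordinates for one group of divided
# images, following z = fx * baseline / disparity and pinhole
# back-projection for x and y.
import numpy as np

def points_from_disparity(disparity, baseline, cx, cy, fx, fy):
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0              # non-positive disparity: no match
    z = np.full((h, w), np.inf, dtype=np.float32)
    z[valid] = fx * baseline / disparity[valid]
    x = (u - cx) * z / fx              # back-project through the pinhole
    y = (v - cy) * z / fy              # (invalid pixels stay at infinity)
    return np.stack([x, y, z], axis=-1)  # (h, w, 3) array of x_i, y_i, z_i
```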
  • In FIG. 3b, the baseline distance between area 1 and area 6 and the baseline distance between area 5 and area 10 are D; the baseline distance between area 2 and area 7 is D, and the baseline distance between area 4 and area 9 is W; the baseline distance between areas 3 and 8 can be W or D. Here W is the horizontal distance between the first camera 20 and the second camera 30, and D is the vertical distance between the first camera 20 and the second camera 30.
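  • A hypothetical helper reflecting the per-group baselines just described; the group names and the exact mapping are assumptions layered on the text above.

```python
# Hypothetical helper: baseline length used for each group of divided
# images, where W and D are the horizontal and vertical camera spacings.
def group_baseline(group, W, D):
    baselines = {
        "front": D,  # areas 1 and 6
        "rear":  D,  # areas 5 and 10
        "left":  D,  # areas 2 and 7
        "right": W,  # areas 4 and 9
        "top":   W,  # areas 3 and 8 (the text allows W or D here)
    }
    return baselines[group]
```

The returned length would feed the baseline argument of points_from_disparity above.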
  • In the depth perception method described above, the first target image of the target area captured by the first camera device is divided and corrected to obtain multiple first divided images.
  • The second target image of the target area captured by the second camera device is divided and corrected to obtain a plurality of second divided images respectively corresponding to the plurality of first divided images, and each first divided image and the second divided image corresponding to it constitute a group of divided images.
  • Because the image is segmented before being corrected, a corrected image with relatively small image-quality loss can be obtained, thereby improving the accuracy of binocular stereo matching and also the accuracy of edge-depth detection.
  • multiple sets of segmented images constitute multiple sets of binocular systems, which can perceive depth information in multiple directions.
  • an embodiment of the present invention further provides a depth sensing device, which is used in the depth sensing device 100 in FIG. 1, and the depth sensing device 700 includes:
  • An acquiring module 701, configured to acquire a first target image of the target area acquired by the first camera device and a second target image of the target area acquired by the second camera device.
  • A segmentation and correction module 702, configured to perform image segmentation and correction on the first target image and the second target image, where each first divided image and the second divided image corresponding to it form a group of divided images; in each group of the divided images, the center line of the first divided image is parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image is at a preset angle with the center line.
  • A calibration module 703, configured to calibrate each group of divided images separately to obtain calibration parameters corresponding to each group of the divided images.
  • A binocular matching module 704, configured to perform binocular matching on each group of the divided images to obtain a disparity map corresponding to each group of the divided images.
  • A depth information obtaining module 705, configured to obtain the depth information of the area corresponding to each group of the divided images according to the disparity map and the calibration parameters.
  • In the depth perception apparatus described above, the first target image of the target area captured by the first camera device is divided and corrected to obtain a plurality of first divided images.
  • The second target image of the target area captured by the second camera device is divided and corrected to obtain a plurality of second divided images respectively corresponding to the plurality of first divided images, and each first divided image and the second divided image corresponding to it constitute a group of divided images.
  • Because the image is segmented before being corrected, a corrected image with relatively small image-quality loss can be obtained, thereby improving the accuracy of binocular stereo matching and also the accuracy of edge-depth detection.
  • multiple sets of segmented images constitute multiple sets of binocular systems, which can perceive depth information in multiple directions.
  • The segmentation and correction module 702 is specifically configured to: use a piecewise algorithm to perform image segmentation and correction on the first target image to obtain the plurality of first divided images; and use a piecewise algorithm to perform image segmentation and correction on the second target image to obtain the plurality of second divided images.
  • The calibration module 703 is specifically configured to use the Zhang Zhengyou method or the Faugeras method to calibrate each group of segmented images to obtain the calibration parameters corresponding to each group of segmented images.
  • The binocular matching module 704 is specifically configured to use the BM algorithm or the SGBM algorithm to perform binocular matching on each group of the divided images.
  • The depth information acquisition module 705 is specifically configured to obtain the three-dimensional coordinates of each point from the disparity map and the calibration parameters, where x_i, y_i, and z_i represent the three-dimensional coordinates of each point, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters.
  • The line connecting the first camera device and the second camera device is at a preset angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • When the line connecting the first camera device and the second camera device is parallel to the horizontal direction, the segmentation and correction module 702 is specifically configured to perform image segmentation and correction on the first target image and the second target image in a direction at a preset angle with the horizontal direction, where the preset angle is not 90°.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • the above-mentioned device can execute the method provided by the embodiments of the present application, and has functional modules and beneficial effects corresponding to the execution method.
  • For technical details not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
  • FIG. 8 is a schematic diagram of the hardware structure of the controller 10 in the depth perception device 100 according to an embodiment of the present invention. As shown in FIG. 8, the controller 10 includes:
  • one or more processors 11 and a memory 12; one processor 11 is taken as an example in FIG. 8.
  • the processor 11 and the memory 12 may be connected through a bus or in other ways. In FIG. 8, the connection through a bus is used as an example.
  • The memory 12 is a non-volatile computer-readable storage medium and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the depth perception method in the embodiments of the present application (for example, the acquisition module 701, segmentation and correction module 702, calibration module 703, binocular matching module 704, and depth information acquisition module 705 shown in FIG. 7).
  • the processor 11 executes various functional applications and data processing of the depth-aware device 100 by running non-volatile software programs, instructions, and modules stored in the memory 12, that is, implements the depth-aware method of the foregoing method embodiments.
  • the memory 12 may include a storage program area and a storage data area, where the storage program area may store an operating system and application programs required by at least one function; the storage data area may store data created according to the use of a depth-aware device, and the like.
  • the memory 12 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 12 may optionally include memories remotely provided with respect to the processor 11, and these remote memories may be connected to a depth-aware device through a network. Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof.
  • The one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, execute the depth perception method in any of the above method embodiments, for example, performing steps 101 to 106 of the method in FIG. 5 described above and implementing the functions of modules 701-705 in FIG. 7.
  • An embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors (for example, one processor 11 in FIG. 8), may cause the one or more processors to execute the depth perception method in any of the above method embodiments, for example, performing method steps 101 to 106 in FIG. 5 described above and implementing the functions of modules 701-705 in FIG. 7.
  • When the depth perception device is a drone, the main body is the body of the drone; the first camera 20 and the second camera 30 of the depth perception device 100 may be provided on the drone, and the controller 10 may be a separate controller or may be implemented by the drone's flight-control chip.
  • The device embodiments described above are only illustrative: the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each embodiment can be implemented by means of software plus a general hardware platform, and of course, it can also be implemented by hardware.
  • The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be performed.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The present invention relates to a depth perception method and apparatus, and a depth perception device. The method comprises the steps of: obtaining a first target image of a target area by means of a first camera unit and a second target image of the target area by means of a second camera unit (101); performing image segmentation and correction on the first target image to obtain multiple first divided images (102); performing image segmentation and correction on the second target image to obtain multiple second divided images respectively corresponding to the multiple first divided images (103); performing binocular matching on each group of divided images to obtain a disparity map corresponding to each group of divided images (105); and obtaining, according to the disparity map and a calibration parameter, the depth information of the area corresponding to each group of divided images (106). The method improves the accuracy of edge-depth detection. Moreover, multiple groups of divided images constitute multiple groups of binocular systems, making it possible to perceive depth information in multiple directions.
PCT/CN2019/123072 2018-12-04 2019-12-04 Depth perception method and apparatus, and depth perception device WO2020114433A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811473003.1 2018-12-04
CN201811473003.1A CN109658451B (zh) 2018-12-04 Depth perception method and apparatus, and depth perception device

Publications (1)

Publication Number Publication Date
WO2020114433A1

Family

ID=66112775

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123072 WO2020114433A1 (fr) 2019-12-04 Depth perception method and apparatus, and depth perception device

Country Status (2)

Country Link
CN (1) CN109658451B (fr)
WO (1) WO2020114433A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658451B (zh) * 2018-12-04 2021-07-30 深圳市道通智能航空技术股份有限公司 Depth perception method and apparatus, and depth perception device
CN110580724B (zh) * 2019-08-28 2022-02-25 贝壳技术有限公司 Method, apparatus and storage medium for calibrating a binocular camera group
CN111986248B (zh) * 2020-08-18 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Multi-view visual perception method and apparatus, and autonomous driving vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008075271A2 (fr) * 2006-12-18 2008-06-26 Koninklijke Philips Electronics N.V. Calibration of a camera system
CN105277169A (zh) * 2015-09-25 2016-01-27 安霸半导体技术(上海)有限公司 Binocular ranging method based on image segmentation
CN105787447A (zh) * 2016-02-26 2016-07-20 深圳市道通智能航空技术有限公司 Omnidirectional obstacle avoidance method and system for an unmanned aerial vehicle based on binocular vision
CN109658451A (zh) * 2018-12-04 2019-04-19 深圳市道通智能航空技术有限公司 Depth perception method and apparatus, and depth perception device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6756993B2 (en) * 2001-01-17 2004-06-29 The University Of North Carolina At Chapel Hill Methods and apparatus for rendering images using 3D warping techniques
CN202818442U (zh) * 2012-05-25 2013-03-20 常州泰勒维克今创电子有限公司 All-digital panoramic camera
CN102750711B (zh) * 2012-06-04 2015-07-29 清华大学 Binocular video depth map computation method based on image segmentation and motion estimation
CN102721370A (zh) * 2012-06-18 2012-10-10 南昌航空大学 Real-time landslide monitoring method based on computer vision
CN103198473B (zh) * 2013-03-05 2016-02-24 腾讯科技(深圳)有限公司 Depth map generation method and apparatus
CN106709948A (zh) * 2016-12-21 2017-05-24 浙江大学 Fast binocular stereo matching method based on superpixel segmentation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAO-QING LIU: "The Image Process and Location Technology Research Based on Binocular Vision", MASTER THESIS, no. 6, 15 June 2018 (2018-06-15), pages 1 - 76, XP009521581, ISSN: 1674-0246 *

Also Published As

Publication number Publication date
CN109658451A (zh) 2019-04-19
CN109658451B (zh) 2021-07-30

Similar Documents

Publication Publication Date Title
US10586352B2 (en) Camera calibration
CN109920011B (zh) Extrinsic parameter calibration method, apparatus and device for a lidar and a binocular camera
CN108323190B (zh) Obstacle avoidance method and apparatus, and unmanned aerial vehicle
US10085011B2 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
EP3825954A1 (fr) Photographing method and device, and unmanned aerial vehicle
CN106960454B (zh) Depth-of-field obstacle avoidance method and device, and unmanned aerial vehicle
US20170127045A1 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
US11039121B2 (en) Calibration apparatus, chart for calibration, chart pattern generation apparatus, and calibration method
CN107329490B (zh) Unmanned aerial vehicle obstacle avoidance method and unmanned aerial vehicle
WO2018210078A1 (fr) Distance measurement method for an unmanned aerial vehicle, and unmanned aerial vehicle
WO2020114433A1 (fr) Depth perception method and apparatus, and depth perception device
WO2018120040A1 (fr) Obstacle detection method and device
JP2018179980A (ja) Camera calibration method, camera calibration program, and camera calibration device
CN106570899B (zh) Target object detection method and apparatus
WO2021035731A1 (fr) Control method and apparatus for an unmanned aerial vehicle, and computer-readable storage medium
CN108734738B (zh) Camera calibration method and apparatus
CN111383264B (zh) Positioning method, apparatus, terminal and computer storage medium
CN112837207B (zh) Panoramic depth measurement method, four-lens fisheye camera, and binocular fisheye camera
CN110458952B (zh) Three-dimensional reconstruction method and apparatus based on trinocular vision
WO2021195939A1 (fr) Calibration method for extrinsic parameters of a binocular photographing device, movable platform and system
CN112052788A (zh) Environment perception method and apparatus based on binocular vision, and unmanned aerial vehicle
CN112470192A (zh) Dual-camera calibration method, electronic device, and computer-readable storage medium
CN109325913A (zh) Unmanned aerial vehicle image stitching method and apparatus
CN109785225B (zh) Method and apparatus for image correction
EP4050553A1 (fr) Method and device for restoring an image obtained from an array camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19894082

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19894082

Country of ref document: EP

Kind code of ref document: A1