WO2018120040A1 - Procédé et dispositif de détection d'obstacle - Google Patents

Procédé et dispositif de détection d'obstacle Download PDF

Info

Publication number
WO2018120040A1
WO2018120040A1 (PCT/CN2016/113550)
Authority
WO
WIPO (PCT)
Prior art keywords
matching area
matching
target
area
obstacle
Prior art date
Application number
PCT/CN2016/113550
Other languages
English (en)
Chinese (zh)
Inventor
林义闽
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201680006896.1A priority Critical patent/CN107636679B/zh
Priority to PCT/CN2016/113550 priority patent/WO2018120040A1/fr
Publication of WO2018120040A1 publication Critical patent/WO2018120040A1/fr

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present application relates to the field of detection technologies, and in particular, to an obstacle detection method and apparatus.
  • Obstacle detection is one of the most important parts of technical fields such as driverless driving, assisted driving, drone navigation, blind guidance, and intelligent robots.
  • For example, an intelligent robot or unmanned vehicle navigating autonomously in an unknown environment must perceive its surroundings, and the system needs to provide information such as obstacles and roads in the environment.
  • The application of visual sensors to obstacle detection has received more and more attention; therefore, the current mainstream obstacle detection methods are based on visual sensors.
  • The binocular vision system is widely used in the fields of target detection, tracking, and obstacle recognition due to its low cost and its ability to acquire depth information of scenes or objects.
  • The existing obstacle detection method based on binocular vision is: a stereo vision system is composed of two cameras with a known positional relationship, a disparity map is obtained from the parallax of the same object imaged on the two cameras, and the disparity map is detected to determine where the obstacle is.
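For context, such a disparity map from a calibrated and rectified stereo pair can be computed with standard block matching. A minimal sketch using OpenCV's StereoBM (the file paths and parameter values are illustrative assumptions, not from the application):

```python
import cv2

# Rectified left/right views of the same scene (paths are illustrative).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: disparity search range and matching window size.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # int16, scaled by 16
```

Regions of the resulting map can then be inspected to locate obstacles, which is the step the embodiments below refine.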
  • Embodiments of the present application provide an obstacle detection method and apparatus for improving obstacle detection accuracy and detection efficiency.
  • an obstacle detection method comprising: respectively acquiring, from a first matching area of a left view of a predetermined scene and a first matching area of a right view, a target first matching area where an obstacle exists; matching a second matching area in the target first matching area of the left view with a second matching area in the target first matching area of the right view to obtain a first disparity map, where the size of the second matching area is smaller than the size of the first matching area; and determining location information of the obstacle in the first disparity map;
  • an obstacle detecting apparatus including:
  • An acquiring module configured to respectively acquire a target first matching area where an obstacle exists in a first matching area of a left view of the predetermined scene and a first matching area of the right view;
  • a matching module configured to match a second matching area in the target first matching area in the left view acquired by the acquiring module with a second matching area in the target first matching area in the right view, to obtain a first disparity map;
  • the second matching window is smaller than the first matching window;
  • a determining module configured to determine location information of the obstacle in the first disparity map obtained by the matching module.
  • an electronic device, comprising a processor configured to support the electronic device in performing the corresponding functions of the above method.
  • The electronic device can also include a memory, coupled with the processor, that stores the computer software code for the electronic device, including a program designed to perform the above aspects.
  • a computer storage medium for storing computer software instructions for use by the above obstacle detection device, comprising program code designed to perform the method of the first aspect.
  • a computer program product that can be directly loaded into the internal memory of a computer and contains software code that, after being loaded and executed by the computer, implements the method of the first aspect.
  • a robot comprising the electronic device of the third aspect.
  • The solution provided by the embodiments of the present application respectively acquires, from the low-resolution first matching area of the left view of the predetermined scene and the low-resolution first matching area of the right view, the target first matching area where an obstacle exists, and then matches the high-resolution second matching area in the target first matching area of the left view with the high-resolution second matching area in the target first matching area of the right view to obtain a first disparity map.
  • FIG. 1 is a flowchart of a method for detecting an obstacle according to an embodiment of the present application
  • FIG. 2a is a diagram of the correspondence between the parallax and the depth of a target observed by the left camera and the right camera of a binocular camera according to an embodiment of the present application;
  • Figure 2b is a top view of Figure 2a
  • FIG. 3 is a schematic flowchart of another obstacle detection method according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a first matching window that does not overlap each other in a left view and a right view according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a first matching window overlapping each other in a left view and a right view according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of parallax matching according to an embodiment of the present application.
  • FIG. 7 is another schematic diagram of parallax matching according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an obstacle detecting apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • In the fields mentioned above, the detection of obstacles is a very important part, and the current mainstream obstacle detection method is based on binocular cameras: the disparity map obtained by the binocular camera is detected.
  • the embodiment of the present application provides an obstacle detection method based on the above application scenario to obtain a high-precision disparity map.
  • The basic principle of the technical solution provided by the embodiments of the present application is: acquire the left view and right view collected by the binocular camera at the same moment; match the left view and the right view with low-resolution first matching windows and, based on the disparity map obtained after this matching, obtain the target first matching areas containing an obstacle from the first matching areas of the left view and the right view; then match the high-resolution second matching windows within the target first matching areas of the left view and the right view to obtain a first disparity map. The matching calculation can then be repeated with still higher-resolution matching windows, so that the contour of the obstacle in the resulting disparity map becomes more precise.
  • Binocular camera: a combined camera formed by two cameras with the same parameters placed at a certain distance from each other. In general, the left camera and the right camera of the binocular camera are set on the same horizontal line so that their optical axes are parallel; the binocular camera can thus simulate the parallax of the human eyes, in order to achieve stereoscopic imaging or to detect depth of field.
  • Disparity refers to the difference in direction produced by observing the same target from two points separated by a certain distance. The angle between the two lines of sight from the two points to the target is called the parallax angle of the two points, and the distance between the two points is called the baseline.
  • Disparity value refers to the difference between the two abscissas of the same pixel in the two images obtained when the left camera and the right camera of the binocular camera shoot the same target, that is, the disparity of that pixel point; correspondingly, the disparity values of all the pixels in the two images form a disparity map.
  • O_L is the position of the left camera, and O_R is the position of the right camera.
  • f is the focal length of the left and right camera lenses.
  • B is the baseline distance, equal to the distance between the projection centers of the left camera and the right camera.
  • z_c can generally be regarded as the depth of the feature point, and indicates the distance between the feature point and the plane of the left and right cameras.
  • The projections of the feature point P on the left and right cameras (the "left eye" and the "right eye") are P_L(x_L, y_L) and P_R(x_R, y_R), respectively. If the images of the left and right cameras lie in the same plane, the Y coordinate of P_L equals the Y coordinate of P_R, and the triangular geometric relationship gives z_c = f·B / (x_L − x_R) = f·B / d, where d = x_L − x_R is the disparity value.
  • the disparity value and the depth are inversely proportional.
  • the disparity value in the present application has a value range of [0, 255]. In general, 0 can be set to the nearest distance, and 255 is set to the farthest distance.
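As a worked illustration of the inverse disparity-to-depth relation above, here is a minimal Python sketch (NumPy only; the function name and numeric values are illustrative assumptions, not taken from the application):

```python
import numpy as np

def disparity_to_depth(disparity, f, B):
    """Convert a disparity map (in pixels) to depth via z_c = f * B / d.

    f is the focal length in pixels and B the baseline in the unit of
    the desired depth.  Zero disparity (unmatched or infinitely far)
    is mapped to +inf to avoid division by zero.
    """
    d = disparity.astype(np.float64)
    depth = np.full_like(d, np.inf)
    depth[d > 0] = f * B / d[d > 0]
    return depth

# Example: f = 700 px, B = 0.12 m, disparity 12 px -> depth 7.0 m
print(disparity_to_depth(np.array([[12.0]]), 700.0, 0.12))
```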
  • the binocular camera in the present application simulates the human eye to collect images
  • The left camera and the right camera of the binocular camera in the present application are disposed on the same horizontal line with parallel optical axes and a certain distance between them; therefore, the parallax in the present application mainly refers to horizontal parallax.
  • Camera calibration: in order to determine the relationship between the three-dimensional geometric position of a point on the surface of a space object and its corresponding point in the image, a geometric model of camera imaging must be established. The parameters of this geometric model are the camera parameters. Under most conditions these parameters must be obtained through experiments and calculation, and this process of solving the parameters is called camera calibration.
  • Camera calibration in this application generally refers to off-line calibration of the camera. Normally, since the optical axis of the binocular camera is located inside the camera, it is difficult to ensure that the optical axes are strictly parallel when the camera is assembled, and generally there is a certain deviation. Therefore, off-line calibration is usually performed on the successfully constructed binocular camera to obtain the camera's internal parameters (focal length, baseline length, image center, distortion parameters, etc.) and external parameters (rotation matrix R and translation matrix T).
  • The Zhang Zhengyou checkerboard calibration method can be used for off-line calibration of the binocular camera lenses.
  • The left camera may first be calibrated to obtain the internal and external parameters of the left camera; secondly, the right camera is calibrated to obtain the internal and external parameters of the right camera; finally, the binocular camera pair is calibrated to obtain the rotation and translation relationship between the left and right cameras.
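For illustration, this off-line procedure can be sketched with OpenCV, whose cv2.calibrateCamera implements Zhang's checkerboard method. This is a hedged sketch, not the application's implementation: load_checkerboard_pairs is a hypothetical helper, and the 9x6 pattern and the CALIB_FIX_INTRINSIC flag are assumptions:

```python
import cv2
import numpy as np

# Hypothetical helper: returns a list of (left, right) grayscale views
# of the checkerboard; not a real API.
image_pairs = load_checkerboard_pairs()

PATTERN = (9, 6)  # inner-corner count of the board (illustrative)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, left_pts, right_pts = [], [], []
for left_img, right_img in image_pairs:
    ok_l, corners_l = cv2.findChessboardCorners(left_img, PATTERN)
    ok_r, corners_r = cv2.findChessboardCorners(right_img, PATTERN)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

size = image_pairs[0][0].shape[::-1]  # (width, height)

# First each camera individually: intrinsics and distortion parameters.
_, K_l, D_l, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K_r, D_r, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)

# Then the pair: rotation R and translation T between the two cameras.
_, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K_l, D_l, K_r, D_r, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```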
  • P is a 3×4 projection matrix, which can be written in terms of the internal parameter matrix A together with the rotation and translation: P = A [R | t], where R is a 3×3 rotation matrix and t is a translation vector.
  • These two matrices (R and t) represent the external parameters of binocular vision, one for position and one for orientation, so that the position of each pixel on the image can be determined in the world coordinate system.
  • The A matrix represents the internal parameter matrix of the camera and can be expressed as follows:

        A = [ f_u   γ    u_0 ]
            [ 0     f_v  v_0 ]
            [ 0     0    1   ]

  • (u_0, v_0) is the coordinate of the center point of the image; f_u and f_v respectively represent the focal length expressed in horizontal and vertical pixel units, and γ represents the tilt factor.
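A minimal NumPy sketch of these two matrices: build A from (f_u, f_v, u_0, v_0, γ), form P = A [R | t], and project a world point; all numeric values are illustrative assumptions:

```python
import numpy as np

def intrinsic_matrix(fu, fv, u0, v0, gamma=0.0):
    """Internal parameter matrix A: focal lengths in pixel units,
    image centre (u0, v0) and tilt (skew) factor gamma."""
    return np.array([[fu, gamma, u0],
                     [0.0, fv, v0],
                     [0.0, 0.0, 1.0]])

def projection_matrix(A, R, t):
    """3x4 projection matrix P = A [R | t]."""
    return A @ np.hstack([R, t.reshape(3, 1)])

A = intrinsic_matrix(700.0, 700.0, 320.0, 240.0)
P = projection_matrix(A, np.eye(3), np.zeros(3))

# Project a homogeneous world point to pixel coordinates.
X = np.array([0.5, 0.2, 5.0, 1.0])
u, v, w = P @ X
print(u / w, v / w)  # pixel position of the point
```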
  • Image correction: because lens distortion causes distortion of the image captured by the lens, distortion correction and epipolar line correction can usually be performed on the binocular camera before the binocular camera acquires images.
  • The distortion mapping between the two image coordinate systems can be expressed by a binary (bivariate) polynomial:

        x′ = Σ_(i,j) a_ij · x^i · y^j,   y′ = Σ_(i,j) b_ij · x^i · y^j

  • n is the order of the polynomial, and the sums run over the powers i and j (with i + j ≤ n) of the pixel coordinates x and y in the image.
  • a_ij and b_ij are the coefficients.
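In practice, the distortion correction and epipolar correction described here are usually performed with OpenCV's rectification functions rather than by fitting the polynomial by hand. A sketch, assuming the calibration outputs K_l, D_l, K_r, D_r, R, T, and size from the calibration sketch above, and raw frames left_raw and right_raw:

```python
import cv2

# Rectification rotations (R1, R2), projections (P1, P2) and the
# disparity-to-depth matrix Q for the calibrated pair.
R1, R2, P1, P2, Q, roi_l, roi_r = cv2.stereoRectify(
    K_l, D_l, K_r, D_r, size, R, T)

# Per-pixel undistortion + rectification maps; after remapping,
# corresponding points lie on the same image row (epipolar lines).
map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, size, cv2.CV_32FC1)
left_rect = cv2.remap(left_raw, map_lx, map_ly, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_raw, map_rx, map_ry, cv2.INTER_LINEAR)
```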
  • the execution subject of the obstacle detecting method provided by the embodiment of the present application may be an obstacle detecting device based on a binocular camera or an electronic device that can be used to execute the above obstacle detecting method.
  • The obstacle detecting device based on the binocular camera may be a central processing unit (CPU) or a combination of a CPU and a memory in the electronic device, or may be another control unit or module in the terminal device.
  • The above-mentioned electronic device may be a personal computer (PC), a netbook, a personal digital assistant (PDA), a server, etc., that analyzes the left and right views collected by the binocular camera by using the method provided by the embodiments of the present application.
  • Alternatively, the above-mentioned electronic device may be a software client, software system, or software application installed on a PC, a server, etc., that processes the left and right views collected by the binocular camera by using the method provided by the embodiments of the present application.
  • The specific hardware implementation environment can take the form of a general-purpose computer, an ASIC, an FPGA, or a programmable extension platform such as Tensilica's Xtensa platform, etc.
  • The above electronic device can be integrated into driverless machines, blind navigators, unmanned vehicles, smart vehicles, smart phones, and other devices or equipment that need to detect obstacles.
  • an embodiment of the present application provides an obstacle detection method based on a binocular camera. As shown in FIG. 3, the method includes the following steps:
  • Before the binocular camera acquires views, that is, before performing obstacle detection in the scene, it is usually necessary to perform certain adjustments on the binocular camera in advance, for example, off-line calibration and image correction, so that the optical axis of the left camera is parallel to the optical axis of the right camera.
  • The baseline length between the optical axes of the left and right cameras is measured, and the focal length of the binocular camera is recorded; the baseline length and focal length are then left unchanged, thereby ensuring the synchronization of the images acquired by the binocular camera and avoiding unnecessary errors.
  • The matching areas of the two views may be divided according to the window size of the first matching area and the resolution of the binocular camera to obtain the first matching areas of the two views.
  • In this case, the left view and the right view described above are composed of a plurality of first matching areas of the same size that do not overlap each other. For example, suppose that the resolution W of the left view and the right view captured by the binocular camera is 600*600 pixels and the preset window size of the first matching area is 30×30; then, as shown in FIG. 4, the left view 21 and the right view 22 each contain 20 non-overlapping first matching areas in both the horizontal and vertical directions.
  • Alternatively, when step S101b is performed, the matching areas of the two views are divided according to the window size of the first matching area, the horizontal offset of two mutually overlapping matching areas, and the resolution of the binocular camera, to obtain the first matching areas of the two views, wherein the left view and the right view are composed of a plurality of first matching areas of the same size that overlap each other. For example, suppose that the resolution W of the left view and the right view collected by the binocular camera is 600*600 pixels; referring to the division of the first matching areas in the left view and the right view shown in FIG. 5, if a 30×30 area is selected in the left view, N 30×30 areas are also selected in the right view for matching.
  • the horizontal offset distance of the first area is L
  • the horizontal offset distance of the second area is 2L.
  • the first matching area is a low-resolution matching window.
  • If the window size of the first matching area is s and the view size is W, W/s is required to be an integer, thereby ensuring that all of the first matching areas in the two images are the same size for easy matching.
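A small NumPy sketch of this division, using the 600*600 view and 30×30 window of the example (the function name is illustrative; the assertion enforces the W/s-integer condition):

```python
import numpy as np

def first_matching_areas(view, s):
    """Split a view into non-overlapping s x s first matching areas."""
    h, w = view.shape
    assert h % s == 0 and w % s == 0, "W/s must be an integer"
    # (rows, cols, s, s): one s x s block per matching area.
    return view.reshape(h // s, s, w // s, s).swapaxes(1, 2)

view = np.zeros((600, 600), np.uint8)
blocks = first_matching_areas(view, 30)
print(blocks.shape)  # (20, 20, 30, 30): 20 x 20 areas, as in FIG. 4
```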
  • The present application matches the second matching area in the target first matching area of the left view with the second matching area in the target first matching area of the right view; that is, a matching cost calculation is performed on the second matching area in the target first matching area of the left view and the second matching area in the target first matching area of the right view, the disparity value corresponding to the same second matching area in the left view and the right view is calculated, and the first disparity map is obtained.
  • the size of the second matching area in the embodiment of the present application is smaller than the size of the first matching area.
  • The first disparity map described above may be the disparity map of the images corresponding to the second matching areas in the target first matching areas of the left view and of the right view; it may also be the disparity map formed by combining the disparity map of the image corresponding to the target first matching area in the left view and the image corresponding to the target first matching area in the right view with the disparity map of the images corresponding to the first matching areas other than the target first matching area in the left view and in the right view.
  • Further, an accurate region where the obstacle is located may be segmented according to the set obstacle threshold H, and the true orientation of the obstacle may be calculated according to the internal and external parameters of the binocular camera.
  • Specifically, the present application may perform contour detection on the areas of the first disparity map whose disparity value is smaller than the predetermined obstacle threshold to obtain the contour information of the obstacle, and then determine the location information of the obstacle according to the contour information of the obstacle and the disparity values of the corresponding areas.
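A hedged OpenCV sketch of this step: threshold the first disparity map at the obstacle threshold H (in this application's convention, disparity values below the threshold mark the obstacle regions) and extract contours and bounding boxes. This is one plausible implementation, not the application's exact procedure:

```python
import cv2
import numpy as np

def obstacle_contours(disparity, H):
    """Contours and bounding boxes of regions whose disparity value
    is below the obstacle threshold H."""
    mask = (disparity < H).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h)
    return contours, boxes
```

The disparity values inside each contour, together with the calibrated focal length and baseline, then give the distance and orientation of each obstacle.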
  • The solution provided by the embodiments of the present application obtains the target first matching area where the obstacle exists from the low-resolution first matching area of the left view of the predetermined scene and the low-resolution first matching area of the right view, respectively, and matches the high-resolution second matching area in the target first matching area of the left view with the high-resolution second matching area in the target first matching area of the right view to obtain the first disparity map. Because the size of the second matching area is smaller than the size of the first matching area, the fine matching is targeted at the regions of the left view and the right view where the obstacle is located, thereby reducing the amount of computation of the parallax calculation and improving the efficiency and accuracy of obstacle detection.
  • The present application determines the target first matching area where the obstacle exists from the first matching areas of the left view and of the right view; that is, when step S101 is performed, a low-resolution matching window may be used to match the left view and the right view, and the target first matching areas where the obstacle exists are acquired from the left view and the right view based on the disparity map obtained after the matching.
  • step S101 specifically includes the following steps:
  • S101a Obtain a left view and a right view collected by the binocular camera at the same time.
  • S101b Match the first matching area in the left view with the first matching area in the right view to obtain a second disparity map.
  • S101c Determine, according to the second disparity map, the target first matching area where the obstacle exists, respectively, from the first matching area in the left view and the first matching area in the right view.
  • The present application matches the first matching areas in the left view and the right view; that is, a matching cost calculation is performed on the first matching areas of the left view and the first matching areas of the right view, the disparity value corresponding to the same first matching area in the left view and the right view is calculated, and a second disparity map is obtained.
  • Specifically, the present application may determine, from the first matching areas in the left view and the right view, the first matching areas corresponding to the regions of the second disparity map whose disparity value is smaller than the predetermined obstacle threshold as the target first matching areas where the obstacle exists.
  • For example, the 20 first matching areas of the first row of the left view are numbered L(1,1), L(1,2), ..., L(1,20), and the first matching areas of the first row of the right view are numbered R(1,1), R(1,2), ..., R(1,20). L(1,1) is then matched in turn against R(1,1) to R(1,20), and the region R(1,j) with the smallest matching cost among the calculated matching costs is selected; the disparity value of the L(1,1) region is then obtained as D1(1,1) = j − 1, where j ∈ (1,20), which represents the disparity value of all the pixels in the window region of the first row and first column.
  • The existing binocular matching cost calculation methods include SAD, SSD, NCC, etc. With I_l and I_r the pixel gray values of the left and right views, the sums taken over all pixels (u, v) of the matching window, and d the candidate disparity, the specific formulas are as follows:
  • The matching cost calculation formula for SAD is: C_SAD(d) = Σ_(u,v) | I_l(u, v) − I_r(u − d, v) |
  • The matching cost calculation formula for SSD is: C_SSD(d) = Σ_(u,v) ( I_l(u, v) − I_r(u − d, v) )²
  • The matching cost calculation formula for NCC is: C_NCC(d) = Σ_(u,v) I_l(u, v) · I_r(u − d, v) / √( Σ_(u,v) I_l(u, v)² · Σ_(u,v) I_r(u − d, v)² )
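Minimal NumPy implementations of the three window costs, as a sketch rather than the application's exact formulation (a and b are same-sized matching windows; the best match minimises SAD/SSD but maximises NCC):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two matching windows."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).sum()

def ssd(a, b):
    """Sum of squared differences."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return (d * d).sum()

def ncc(a, b):
    """Normalised cross-correlation; values near 1 mean similar windows."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0
```

A coarse match as in the L(1,1) example above then amounts to evaluating one left window against each candidate right window of the same row and keeping the argmin of SAD/SSD (or the argmax of NCC).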
  • Suppose such a region contains an obstacle; referring to the obstacle block area T shown in FIG. 6, black indicates that the disparity value is 0, and the gray area indicates that the disparity value is 12.
  • the L(1, 15) window is further fine-matched.
  • k is a 5×5 window used for the further fine matching calculation, and there are six second matching areas in each of the horizontal and vertical directions. Taking the first row as an example, the matching windows on the left view are numbered TL(1,1), TL(1,2), ..., TL(1,6), and the matching windows on the right view are numbered TR(1,1), TR(1,2), ..., TR(1,6).
  • TL(1,1) is matched against TR(1,1) to TR(1,6), and the window with the smallest matching cost is selected; this cost must also be less than the specified matching threshold G, otherwise the point is considered one that cannot be matched (for example, when the left and right images cannot be matched due to occlusion). Assuming that the matching window satisfying the condition is numbered TR(1,i), where i ∈ (1,6), the disparity value of the TL(1,1) window is obtained accordingly.
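A self-contained Python sketch of this fine-matching step under the matching threshold G; the window-unit disparity bookkeeping (j − i) and all names are illustrative assumptions:

```python
import numpy as np

def fine_match(tl_windows, tr_windows, G):
    """Match each 5x5 second matching area TL(1,1)..TL(1,6) against
    TR(1,1)..TR(1,6); keep the lowest-cost window only if its cost is
    below the threshold G, otherwise mark the point unmatchable."""
    sad = lambda a, b: np.abs(a.astype(float) - b.astype(float)).sum()
    disparities = []
    for i, tl in enumerate(tl_windows):
        costs = [sad(tl, tr) for tr in tr_windows]
        j = int(np.argmin(costs))
        # None marks a point that cannot be matched (e.g. occlusion).
        disparities.append(j - i if costs[j] < G else None)
    return disparities
```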
  • After the disparity map D2 is obtained, the matching calculation may be repeated with higher-resolution matching areas (for example, 4×4, 3×3, 2×2), thereby obtaining a disparity map of higher precision from which the contour information of the obstacle can be determined with a clearer outline.
  • Specifically, after step S103, the present application may further match the subdivided obstacle region repeatedly using higher-resolution matching windows.
  • After step S102, the method further includes:
  • S102a: Determine, according to the first disparity map, the target second matching areas where the obstacle exists from the second matching areas in the left view and in the right view, respectively.
  • S102b: Match a third matching area in the target second matching area of the left view with a third matching area in the target second matching area of the right view to obtain a third disparity map, where the size of the third matching area in the present application is smaller than the size of the second matching area.
  • The number of repetitions and the size of the matching window used each time can be set, and the operations of steps S102a and S102b described above are performed repeatedly.
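The repeated refinement can be sketched as a loop over decreasing window sizes. block_match below is a hypothetical helper in the spirit of the matching sketches above (not an API from the application), so this is an outline of the control flow only:

```python
def coarse_to_fine(left, right, window_sizes=(30, 5, 4, 3, 2), H=12):
    """Repeat matching with progressively smaller (higher-resolution)
    windows, restricting each pass to the obstacle regions found by
    the previous one, as in steps S101/S102 and their repetition."""
    regions = None  # None: match the whole view on the first pass
    disparity = None
    for s in window_sizes:
        disparity = block_match(left, right, s, regions)  # hypothetical
        regions = disparity < H  # keep regions flagged as obstacles
    return disparity
```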
  • The solution provided by the embodiments of the present application has been introduced above mainly from the perspective of the obstacle detecting device and the terminal in which the device is applied. It can be understood that, in order to implement the above functions, the device includes corresponding hardware structures and/or software modules for performing the various functions.
  • The present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or in computer software driving hardware depends on the specific application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
  • The embodiments of the present application may divide the obstacle detecting device into functional modules according to the above method examples.
  • For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 8 is a schematic diagram of a possible structure of the obstacle detecting device involved in the above embodiments. The device 3 includes an obtaining module 31, a matching module 32, and a determining module 33.
  • the obtaining module 31 is configured to support the obstacle detecting device to perform step S101 in FIG. 3; the matching module 32 is used to support the obstacle detecting device to perform step S102 in FIG. 3; the determining module 33 is configured to support the device to perform step S103 in FIG.
  • the obtaining module 31 is specifically configured to support the device to perform the above steps S101a, S101c, and the matching module 32 is specifically configured to support the device to perform step S101b above.
  • the obtaining module 31 is specifically configured to support the device to perform step S102a above, and the matching module 32 is specifically configured to support the device to perform step S102b above.
  • In the case of dividing the functional modules as above, all the related content of the steps involved in the foregoing method embodiments can be referred to the functional descriptions of the corresponding functional modules, and details are not described herein again.
  • the above obtaining module 31, matching module 32 and determining module 33 may be processors.
  • The programs corresponding to the actions performed by the obstacle detecting device described above may be stored in software form in the memory of the device, so that the processor can invoke and execute the operations corresponding to each of the above modules.
  • FIG. 9 is a schematic diagram showing a possible structure of an electronic device involved in an embodiment of the present application.
  • the apparatus 4 includes a processor 41, a memory 42, a system bus 43, and a communication interface 44.
  • the memory 42 is used to store the computer execution code
  • the processor 41 is connected to the memory 42 through the system bus 43.
  • The processor 41 is configured to execute the computer execution code stored in the memory 42 to perform any of the obstacle detection methods provided by the embodiments of the present application; for example, the processor 41 is used to support the apparatus in performing all the steps in FIG. 3, and/or other processes of the techniques described herein. For the specific obstacle detection method, reference may be made to the related descriptions above and in the accompanying drawings, which are not repeated here.
  • the embodiment of the present application further provides a storage medium, which may include a memory 42.
  • the embodiment of the present application further provides a computer program product, which can be directly loaded into the memory 42 and contains software code, and the computer program can be loaded and executed by a computer to implement the obstacle detection method described above.
  • the processor 41 may be a processor or a collective name of a plurality of processing elements.
  • the processor 41 can be a central processing unit (CPU).
  • The processor 41 can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc., which may implement or execute the methods described in connection with the present disclosure.
  • the general purpose processor may be a microprocessor or the processor or any conventional processor or the like.
  • the processor 41 may also be a dedicated processor, which may include at least one of a baseband processing chip, a radio frequency processing chip, and the like.
  • The processor may also be a combination of computing functions, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like. Further, the dedicated processor may also include a chip having other specialized processing functions of the device.
  • the steps of the method described in connection with the present disclosure may be implemented in a hardware manner, or may be implemented by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in the terminal device.
  • the processor and the storage medium can also exist as discrete components in the terminal device.
  • System bus 43 may include a data bus, a power bus, a control bus, and a signal status bus. For the sake of clarity, the various buses are all illustrated as the system bus 43 in FIG. 9.
  • Communication interface 44 may specifically be a transceiver on the device.
  • the transceiver can be a wireless transceiver.
  • the wireless transceiver can be an antenna or the like of the device.
  • The processor 41 communicates with other devices through the communication interface 44; for example, if the device is a module or component of a terminal device, the device is used for data interaction with other modules in the terminal device.
  • The embodiment of the present application further provides a robot including the electronic device corresponding to FIG. 9.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an obstacle detection method and device, the method comprising: respectively acquiring, from a first matching area of a left view of a predetermined scene and a first matching area of a right view of the predetermined scene, target first matching areas where an obstacle exists (S101); matching a second matching area of the target first matching area of the left view with a second matching area of the target first matching area of the right view to obtain a first disparity map, the size of the second matching area being smaller than the size of the first matching area (S102); and determining location information of the obstacle in the first disparity map (S103). The method can be used to increase the accuracy and efficiency of obstacle detection.
PCT/CN2016/113550 2016-12-30 2016-12-30 Procédé et dispositif de détection d'obstacle WO2018120040A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680006896.1A CN107636679B (zh) 2016-12-30 2016-12-30 一种障碍物检测方法及装置
PCT/CN2016/113550 WO2018120040A1 (fr) 2016-12-30 2016-12-30 Procédé et dispositif de détection d'obstacle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113550 WO2018120040A1 (fr) 2016-12-30 2016-12-30 Procédé et dispositif de détection d'obstacle

Publications (1)

Publication Number Publication Date
WO2018120040A1 true WO2018120040A1 (fr) 2018-07-05

Family

ID=61112708

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113550 WO2018120040A1 (fr) 2016-12-30 2016-12-30 Procédé et dispositif de détection d'obstacle

Country Status (2)

Country Link
CN (1) CN107636679B (fr)
WO (1) WO2018120040A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443365A (zh) * 2020-03-27 2020-07-24 维沃移动通信有限公司 一种定位方法及电子设备
CN111899170A (zh) * 2020-07-08 2020-11-06 北京三快在线科技有限公司 障碍物检测方法、装置、无人机和存储介质
CN111986248A (zh) * 2020-08-18 2020-11-24 东软睿驰汽车技术(沈阳)有限公司 多目视觉感知方法、装置及自动驾驶汽车
CN112698421A (zh) * 2020-12-11 2021-04-23 北京百度网讯科技有限公司 障碍物检测的测评方法、装置、设备以及存储介质
CN113534737A (zh) * 2021-07-15 2021-10-22 中国人民解放军火箭军工程大学 一种基于多目视觉的ptz球机控制参数获取系统
CN113776503A (zh) * 2018-12-29 2021-12-10 深圳市道通智能航空技术股份有限公司 一种深度图处理方法、装置和无人机

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019183789A1 (fr) * 2018-03-27 2019-10-03 深圳市大疆创新科技有限公司 Procédé et appareil de commande de véhicule aérien sans pilote, et véhicule aérien sans pilote
CN110633600B (zh) * 2018-06-21 2023-04-25 海信集团有限公司 一种障碍物检测方法及装置
CN109035322A (zh) * 2018-07-17 2018-12-18 重庆大学 一种基于双目视觉的障碍物检测与识别方法
CN111898396A (zh) * 2019-05-06 2020-11-06 北京四维图新科技股份有限公司 障碍物检测方法和装置
CN112149458A (zh) * 2019-06-27 2020-12-29 商汤集团有限公司 障碍物检测方法、智能驾驶控制方法、装置、介质及设备
CN111191538B (zh) * 2019-12-20 2022-11-18 北京中科慧眼科技有限公司 基于双目相机的障碍物跟踪方法、装置、系统和存储介质
CN111583336B (zh) * 2020-04-22 2023-12-01 深圳市优必选科技股份有限公司 一种机器人及其巡检方法和装置
CN112016394A (zh) * 2020-07-21 2020-12-01 影石创新科技股份有限公司 障碍物信息获取方法、避障方法、移动装置及计算机可读存储介质
CN112489186B (zh) * 2020-10-28 2023-06-27 中汽数据(天津)有限公司 一种自动驾驶双目数据感知方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175222A (zh) * 2011-03-04 2011-09-07 南开大学 基于立体视觉的吊车避障系统
CN102313536A (zh) * 2011-07-21 2012-01-11 清华大学 基于机载双目视觉的障碍物感知方法
US20140184754A1 (en) * 2012-12-28 2014-07-03 Samsung Electronics Co., Ltd. Method of obtaining depth information and display apparatus
CN105222760A (zh) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 一种基于双目视觉的无人机自主障碍物检测系统及方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103868460B (zh) * 2014-03-13 2016-10-05 桂林电子科技大学 基于视差优化算法的双目立体视觉自动测量方法
CN104021388B (zh) * 2014-05-14 2017-08-22 西安理工大学 基于双目视觉的倒车障碍物自动检测及预警方法
CN105205458A (zh) * 2015-09-16 2015-12-30 北京邮电大学 人脸活体检测方法、装置及系统
CN105654493B (zh) * 2015-12-30 2018-11-02 哈尔滨工业大学 一种改进的光学仿射不变双目立体匹配代价与视差优化方法
CN105869167A (zh) * 2016-03-30 2016-08-17 天津大学 基于主被动融合的高分辨率深度图获取方法
CN105931231A (zh) * 2016-04-15 2016-09-07 浙江大学 一种基于全连接随机场联合能量最小化的立体匹配方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175222A (zh) * 2011-03-04 2011-09-07 南开大学 基于立体视觉的吊车避障系统
CN102313536A (zh) * 2011-07-21 2012-01-11 清华大学 基于机载双目视觉的障碍物感知方法
US20140184754A1 (en) * 2012-12-28 2014-07-03 Samsung Electronics Co., Ltd. Method of obtaining depth information and display apparatus
CN105222760A (zh) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 一种基于双目视觉的无人机自主障碍物检测系统及方法

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113776503A (zh) * 2018-12-29 2021-12-10 深圳市道通智能航空技术股份有限公司 一种深度图处理方法、装置和无人机
CN113776503B (zh) * 2018-12-29 2024-04-12 深圳市道通智能航空技术股份有限公司 一种深度图处理方法、装置和无人机
CN111443365A (zh) * 2020-03-27 2020-07-24 维沃移动通信有限公司 一种定位方法及电子设备
CN111443365B (zh) * 2020-03-27 2022-06-17 维沃移动通信有限公司 一种定位方法及电子设备
CN111899170A (zh) * 2020-07-08 2020-11-06 北京三快在线科技有限公司 障碍物检测方法、装置、无人机和存储介质
CN111986248A (zh) * 2020-08-18 2020-11-24 东软睿驰汽车技术(沈阳)有限公司 多目视觉感知方法、装置及自动驾驶汽车
CN111986248B (zh) * 2020-08-18 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 多目视觉感知方法、装置及自动驾驶汽车
CN112698421A (zh) * 2020-12-11 2021-04-23 北京百度网讯科技有限公司 障碍物检测的测评方法、装置、设备以及存储介质
CN113534737A (zh) * 2021-07-15 2021-10-22 中国人民解放军火箭军工程大学 一种基于多目视觉的ptz球机控制参数获取系统
CN113534737B (zh) * 2021-07-15 2022-07-19 中国人民解放军火箭军工程大学 一种基于多目视觉的ptz球机控制参数获取系统

Also Published As

Publication number Publication date
CN107636679B (zh) 2021-05-25
CN107636679A (zh) 2018-01-26

Similar Documents

Publication Publication Date Title
WO2018120040A1 (fr) Procédé et dispositif de détection d'obstacle
US10789719B2 (en) Method and apparatus for detection of false alarm obstacle
WO2021004548A1 (fr) Procédé de mesure intelligente de vitesse de véhicule basé sur un système de vision stéréoscopique binoculaire
CN110148185B (zh) 确定成像设备坐标系转换参数的方法、装置和电子设备
EP3627109B1 (fr) Procédé et appareil de positionnement visuel, dispositif électronique et système
CN107329490B (zh) 无人机避障方法及无人机
WO2021143286A1 (fr) Procédé et appareil de positionnement de véhicule, contrôleur, voiture intelligente et système
WO2018128667A1 (fr) Systèmes et procédés de détection de marqueur de voie
CN106569225B (zh) 一种基于测距传感器的无人车实时避障方法
CN110766760B (zh) 用于相机标定的方法、装置、设备和存储介质
WO2020154990A1 (fr) Procédé et dispositif de détection d'état de mouvement d'objet cible, et support de stockage
CN111295667B (zh) 图像立体匹配的方法和辅助驾驶装置
CN112700486B (zh) 对图像中路面车道线的深度进行估计的方法及装置
CN110231832B (zh) 用于无人机的避障方法和避障装置
WO2021195939A1 (fr) Procédé d'étalonnage pour paramètres externes d'un dispositif de photographie binoculaire, plateforme mobile et système
CN111738033B (zh) 基于平面分割的车辆行驶信息确定方法及装置、车载终端
WO2022048493A1 (fr) Procédé et appareil d'étalonnage de paramètres extrinsèques d'une caméra
CN115410167A (zh) 目标检测与语义分割方法、装置、设备及存储介质
CN115147809B (zh) 一种障碍物检测方法、装置、设备以及存储介质
CN114648639B (zh) 一种目标车辆的检测方法、系统及装置
CN116385994A (zh) 一种三维道路线提取方法及相关设备
US11138448B2 (en) Identifying a curb based on 3-D sensor data
EP4148375A1 (fr) Procédé et appareil de télémétrie
Ma et al. A road environment prediction system for intelligent vehicle
CN113763560B (zh) 点云数据的生成方法、系统、设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16925653

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25-10-2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16925653

Country of ref document: EP

Kind code of ref document: A1