CN108230403A - Obstacle detection method based on space segmentation - Google Patents

Obstacle detection method based on space segmentation

Info

Publication number
CN108230403A
CN108230403A (application CN201810063300.2A)
Authority
CN
China
Prior art keywords
carried out
critical areas
key area
obstacle detection
right view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810063300.2A
Other languages
Chinese (zh)
Inventor
徐枫
陈建武
肖谋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yi Intelligent Technology Co Ltd
Original Assignee
Beijing Yi Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yi Intelligent Technology Co Ltd filed Critical Beijing Yi Intelligent Technology Co Ltd
Priority to CN201810063300.2A
Publication of CN108230403A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 5/80
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an obstacle detection method based on space segmentation, comprising the steps of: performing offline calibration of a binocular vision system to obtain the intrinsic and extrinsic camera parameters; capturing left and right views of a scene with the binocular vision system; rectifying the captured left and right views based on the obtained intrinsic and extrinsic parameters of the binocular system; performing space segmentation on the left and right views to obtain a key region and a non-key region; computing disparity for the key region and the non-key region to obtain different depth values; segmenting the obtained depth values, then extracting the obstacles in the corresponding depth map by contour detection, and computing obstacle-related information from the intrinsic and extrinsic camera parameters. By rectifying the binocular system and partitioning the view regions sensibly, the method improves the quality and speed of image detection and allocates detection resources reasonably, solving the problem that existing obstacle detection methods are too slow to meet real-time requirements.

Description

Obstacle detection method based on space segmentation
Technical field
The present invention relates to the fields of computer vision and image processing, and more particularly to an obstacle detection method based on space segmentation.
Background technology
During autonomous navigation in unknown environments, intelligent robots and self-driving vehicles need to detect obstacles, road conditions, and other environmental information on their own.
At present, common obstacle detection methods fall into two broad classes: (1) methods based on radar principles, and (2) methods based on vision sensors. However, ultrasonic sensors produce very strong reflections and have poor directionality, so they perform badly in complex environments; 2D and 3D scanning laser radars essentially rely on a rotating mirror to emit the laser beam, and most include a drive mechanism for scanning, so their cost is too high and their installation is complicated.
In recent years, with the rapid development of computer image processing technology, vision sensors have been applied to obstacle detection more and more widely. Among them, the binocular vision system is widely used in fields such as object detection, tracking, and obstacle recognition because of its low cost and its ability to obtain the depth information of a scene or object. A binocular stereo vision system is composed of cameras whose relative positions are known; from the disparity with which the same spatial object is imaged in the two cameras it obtains the object's three-dimensional information, and it then judges from the height of an image point whether that point lies on the ground, thereby achieving obstacle detection.
The difficulty faced by obstacle detection methods based on binocular vision is the poor real-time performance of the disparity computation: for fast-moving robots or self-driving vehicles such methods are too slow to meet real-time requirements.
Summary of the invention
The object of the invention is to provide an obstacle detection method based on space segmentation that solves the problem that existing obstacle detection methods are too slow to meet real-time requirements.
The technical solution adopted by the present invention is as follows:
An obstacle detection method based on space segmentation includes the following steps:
S1: perform offline calibration of the binocular vision system to obtain the intrinsic and extrinsic camera parameters;
S2: capture left and right views of the scene with the binocular vision system;
S3: rectify the left and right views captured in S2 using the intrinsic and extrinsic parameters of the binocular system obtained in S1;
S4: perform space segmentation on the left and right views to obtain a key region and a non-key region;
S5: compute disparity for the key region and the non-key region to obtain different depth values;
S6: segment the depth values obtained in S5, then extract the obstacles in the corresponding depth map by contour detection, and compute obstacle-related information from the intrinsic and extrinsic camera parameters.
Further, step S4 proceeds as follows:
S401: divide the left and right views simultaneously into a valid region and an invalid region, denoted the P region and the N region respectively;
S402: divide the P-region view into a key region and a non-key region;
S403: label the key region 1 and the non-key region 0;
S404: divide the key region into equal-sized n*n blocks and the non-key region into equal-sized m*m blocks.
Further, step S5 specifically computes a depth value for each n*n block and each m*m block obtained in step S404, and assigns that depth value to every pixel of the corresponding block.
Further, n < m. (An illustrative sketch of the conventional full-frame disparity computation that these steps refine follows below.)
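For orientation only, the following minimal sketch (not part of the original disclosure) shows step S5 in its conventional, non-segmented form: a dense disparity map is computed over the whole rectified frame with OpenCV and converted to depth via Z = f*b/d. The invention replaces this uniform computation with the space-segmented, block-wise scheme of steps S4 and S5 detailed in the embodiments; the library choice, the parameter values, and the function name are assumptions of this sketch.

```python
import cv2
import numpy as np

def fullframe_depth(left_rect, right_rect, fx, baseline_m):
    """Reference (non-segmented) S5: dense SGBM disparity, then Z = f * b / d.

    left_rect / right_rect: rectified 8-bit grayscale views (steps S1-S3 assumed done).
    fx: focal length in pixels, baseline_m: baseline in metres (from the calibration).
    """
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                 P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
    disp = sgbm.compute(left_rect, right_rect).astype(np.float32) / 16.0  # SGBM output is fixed point x16
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = fx * baseline_m / disp[valid]
    return depth  # step S6 (thresholding plus contour detection) then operates on this map
```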
In conclusion by adopting the above-described technical solution, the beneficial effects of the invention are as follows:
1st, in the present invention, the depth information of scene or object can be obtained by binocular vision system, and to system lens distortion Processing is corrected with left and right view caused by rigging error, the Stereo matching picture point of follow-up disparity computation is made to shrink range from two Dimensional plane drops to one-dimensional plane, substantially reduces disparity computation amount, and then improves calculating speed and computational efficiency and its solid Match accuracy.
2nd, by the way that collected left and right view is divided into effective coverage and inactive area, and then effective coverage is carried out multiple Segmentation, and by being labeled differentiation to cut zone, reduced by the method for excluding inactive area and unified mark differentiation Disparity computation amount improves disparity computation efficiency.
3rd, by the way that the key area of segmentation and the non-key size divided again are set, for parallax requirement Higher key area carries out the Stereo matching in smaller piece region, obtains its denser disparity map, is required for parallax relatively low Non-critical areas, carry out the Stereo matching in relatively large region, obtain its sparse disparity map, and then improve image detection Quality and speed make detection resource reasonable distribution, it is slow to solve existing obstacle detection method speed, it is difficult to meet requirement of real-time The problem of.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 shows the fields of view of the left and right cameras in the present invention;
Fig. 3 shows the segmentation of the left and right views in the present invention.
Specific embodiments
To make the purpose, technical solution, and advantages of the present invention clearer, the invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
Embodiment 1
An obstacle detection method based on space segmentation includes the following steps:
S1: perform offline calibration of the binocular vision system to obtain the intrinsic and extrinsic camera parameters;
S2: capture left and right views of the scene with the binocular vision system;
S3: rectify the left and right views captured in S2 using the intrinsic and extrinsic parameters of the binocular system obtained in S1;
S4: perform space segmentation on the left and right views to obtain a key region and a non-key region;
S5: compute disparity for the key region and the non-key region to obtain different depth values;
S6: segment the depth values obtained in S5, then extract the obstacles in the corresponding depth map by contour detection, and compute obstacle-related information from the intrinsic and extrinsic camera parameters.
As shown in Fig. 1, to solve the problem that the left and right views of the binocular vision system are distorted by lens distortion and assembly errors, the cameras must first be calibrated to obtain their intrinsic and extrinsic parameters, which are then used for subsequent image rectification.
An image rectification module uses the obtained distortion and camera parameters to apply distortion correction and epipolar rectification to the binocular views. Without this rectification, the stereo matching of the subsequent disparity computation has to search a two-dimensional area of the right view to find the pixel that matches a given pixel of the left view; with it, the search range drops from a two-dimensional area to a one-dimensional line, which greatly reduces the amount of disparity computation and thus improves computation speed, computational efficiency, and stereo matching accuracy.
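As one concrete way to realize the offline calibration and rectification described above, the minimal sketch below uses OpenCV's chessboard-based calibration followed by stereo rectification; after cv2.remap, corresponding points lie on the same image row, which is exactly the reduction of the match search from a two-dimensional area to a one-dimensional line. The chessboard size, the square length, and the helper name are assumptions; the patent does not prescribe a particular calibration target or library.

```python
import cv2
import numpy as np

def calibrate_and_rectify(left_imgs, right_imgs, board=(9, 6), square=0.025):
    """S1 + S3 sketch: chessboard stereo calibration, then epipolar rectification maps.

    left_imgs / right_imgs: matching lists of 8-bit grayscale calibration images.
    board: inner-corner grid of the chessboard; square: square edge length in metres.
    """
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for l_img, r_img in zip(left_imgs, right_imgs):
        ok_l, c_l = cv2.findChessboardCorners(l_img, board)
        ok_r, c_r = cv2.findChessboardCorners(r_img, board)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(c_l)
            right_pts.append(c_r)

    size = left_imgs[0].shape[::-1]  # (width, height)
    # Intrinsic parameters (camera matrix, distortion) of each camera.
    _, M1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, M2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    # Extrinsic parameters (rotation R, translation T) between the two cameras.
    _, M1, d1, M2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, M1, d1, M2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    # Rectification: after remapping, epipolar lines coincide with image rows.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(M1, d1, M2, d2, size, R, T, alpha=0)
    map_l = cv2.initUndistortRectifyMap(M1, d1, R1, P1, size, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(M2, d2, R2, P2, size, cv2.CV_32FC1)
    return map_l, map_r, P1, P2, Q

# Online use (S2/S3): rect_l = cv2.remap(raw_l, *map_l, cv2.INTER_LINEAR), and likewise for the right view.
```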
Embodiment 2
On the basis of Embodiment 1, step S4 proceeds as follows:
S401: divide the left and right views simultaneously into a valid region and an invalid region, denoted the P region and the N region respectively;
S402: divide the P-region view into a key region and a non-key region;
S403: label the key region 1 and the non-key region 0;
S404: divide the key region into equal-sized n*n blocks and the non-key region into equal-sized m*m blocks.
To improve the efficiency and speed of the disparity computation, the left and right views are spatially segmented. First, each view is divided into two parts, a valid region and an invalid region, denoted P and N respectively. Second, the view within the P region is further divided into a key region and a non-key region: the key region is the travel corridor that an intelligent robot or self-driving vehicle cares about, while the non-key region covers roofs, sky, and other non-roadway areas as well as distant scenery in the field of view. The key region is then labeled 1 and the non-key region 0. Finally, the key region is subdivided into equal-sized n*n blocks, and the non-key region is subdivided into equal-sized m*m blocks.
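The segmentation S401 to S404 can be realized, for example, as below. The patent does not specify how the key travel corridor is chosen within the P region, so the simple bottom-band rule, the block sizes n = 8 and m = 32, and the function name split_space are illustrative assumptions; only the left view is handled, and the width of the N region is taken from the Ml = Mr relation derived with Fig. 2 below.

```python
import numpy as np

def split_space(img_h, img_w, n_cols_invalid, key_frac=0.4, n=8, m=32):
    """S401-S404 sketch for the left view: region labels plus block lists.

    n_cols_invalid: width of the invalid N region in columns (Ml, see Fig. 2).
    key_frac: fraction of the rows, counted from the bottom, treated as the key
              travel corridor; this selection rule is an assumption, not from the patent.
    Returns (labels, key_blocks, nonkey_blocks); labels is -1 for N, 1 for key, 0 for non-key.
    """
    labels = np.full((img_h, img_w), -1, dtype=np.int8)   # N region marked -1 (own convention)
    p_x0 = n_cols_invalid                                  # P region occupies columns [p_x0, img_w)
    key_y0 = int(img_h * (1.0 - key_frac))                 # key corridor: bottom rows of the P region
    labels[:key_y0, p_x0:] = 0                             # non-key part of P  (S403: label 0)
    labels[key_y0:, p_x0:] = 1                             # key part of P      (S403: label 1)

    def tile(y0, y1, x0, x1, size):
        # S404: cut the rectangle [y0, y1) x [x0, x1) into equal-sized size*size blocks.
        return [(y, x, size) for y in range(y0, y1 - size + 1, size)
                             for x in range(x0, x1 - size + 1, size)]

    key_blocks = tile(key_y0, img_h, p_x0, img_w, n)       # small n*n blocks (n < m)
    nonkey_blocks = tile(0, key_y0, p_x0, img_w, m)        # larger m*m blocks
    return labels, key_blocks, nonkey_blocks
```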
As shown in Fig. 2, the shaded area is the field of view of the left and right cameras, L is the attention distance, f is the focal length of the cameras, the segments Cl and Cr are the non-overlapping regions in the left and right views, i.e., the parts of the fields of view that are not shared, and b is the baseline length.
From the geometry shown in Fig. 2 (similar triangles at the attention distance L), the non-common fields of view of the two cameras are equal, Cl = Cr, so the non-overlapping regions of the left and right views have the same size. The number of image columns they occupy is likewise equal and is denoted Ml = Mr, with Ml = Mr = f*b/L. Accordingly, the first Ml columns counted from the left edge of the left image and the first Mr columns counted from the right edge of the right image are marked as the invalid region N, and the remaining part of each image is marked as the P region.
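Expressed in code, the width of the N region then follows directly from f, b, and the attention distance L; the helper name and the numeric values below are placeholders for illustration.

```python
def invalid_columns(fx_pixels, baseline_m, attention_dist_m):
    """Ml = Mr = f * b / L: columns with no counterpart in the other rectified view."""
    return int(round(fx_pixels * baseline_m / attention_dist_m))

# Assumed example: fx = 700 px, b = 0.12 m, L = 2.0 m  ->  42 columns.
# The leftmost 42 columns of the left view and the rightmost 42 columns of the
# right view then form the invalid region N; the rest of each view is the P region.
print(invalid_columns(700, 0.12, 2.0))  # 42
```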
Embodiment 3
On the basis of Embodiments 1 and 2, step S5 specifically computes a depth value for each n*n block and each m*m block obtained in step S404, and assigns that depth value to every pixel of the corresponding block.
As shown in Fig. 3, the shaded area represents the valid region of the left and right views. The valid region is divided into two parts, the key region p-1 and the non-key region p-0, matching the labels of step S403. The p-1 region is then split into small n*n blocks; assuming it spans i block rows and j block columns, each block is labeled p1(i, j). Similarly, the p-0 region is split into m*m blocks, each likewise labeled p0(a, b).
After the image has been divided into valid and invalid regions and into key and non-key regions, the next step is the disparity computation for the key region and the non-key region.
Finally, the disparity maps obtained for the key region and the non-key region are merged to obtain the disparity map of the current frame.
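A simple per-block realization of this computation is sketched below, assuming the block lists produced by the segmentation sketch above: for every block of the left view the best horizontal match is searched in the right view within a disparity range, a single depth value Z = f*b/d is derived, and that value is written into every pixel of the block. Calling the function with the key (n*n) blocks and then with the non-key (m*m) blocks, reusing the same output array, performs the merge described above. The normalized cross-correlation matching, the 128-pixel search range, and the function name are implementation assumptions; the patent does not fix a particular matching cost.

```python
import cv2
import numpy as np

def blockwise_depth(left_rect, right_rect, blocks, fx, baseline_m,
                    max_disp=128, depth_out=None):
    """S5 sketch: one depth value per (y, x, size) block, assigned to all of its pixels."""
    h, w = left_rect.shape
    if depth_out is None:
        depth_out = np.zeros((h, w), np.float32)

    for y, x, s in blocks:
        patch = left_rect[y:y + s, x:x + s]
        # Search band in the right view: same rows (rectified), columns shifted left by up to max_disp.
        x0 = max(0, x - max_disp)
        band = right_rect[y:y + s, x0:x + s]
        if band.shape[1] <= s:                      # no room to search for this block
            continue
        scores = cv2.matchTemplate(band, patch, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)       # best match position (x, y) inside the band
        disp = (x - x0) - best[0]                   # disparity = left column minus matched right column
        if disp > 0:
            depth_out[y:y + s, x:x + s] = fx * baseline_m / disp  # Z = f * b / d
    return depth_out
```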
Obstacles in the scene are then detected from the resulting disparity map.
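For this detection step, one possible realization of S6 is to keep only the depth range of interest, pass the resulting mask to contour detection, and derive each obstacle's distance and metric width from the pinhole model with the calibrated intrinsics. The depth limits, the minimum contour area, and the function name are assumptions of this sketch.

```python
import cv2
import numpy as np

def obstacles_from_depth(depth, fx, cx, near_m=0.3, far_m=4.0, min_area_px=150):
    """S6 sketch: threshold the merged depth map, find contours, report distance and width."""
    mask = ((depth > near_m) & (depth < far_m)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    results = []
    for c in contours:
        if cv2.contourArea(c) < min_area_px:        # drop small speckle regions
            continue
        x, y, w, h = cv2.boundingRect(c)
        roi = depth[y:y + h, x:x + w]
        z = float(np.median(roi[roi > 0]))          # obstacle distance in metres
        # Pinhole back-projection: lateral coordinate X = (u - cx) * Z / fx.
        width_m = ((x + w - cx) * z / fx) - ((x - cx) * z / fx)
        results.append({"bbox": (x, y, w, h), "distance_m": z, "width_m": width_m})
    return results
```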
Embodiment 4
On the basis of Embodiments 1, 2, and 3, n < m.
The key region, i.e., the travel corridor, has a high demand on disparity accuracy, so a denser disparity map is obtained for it. The non-key region has a lower demand on disparity, so the precision required of its disparity computation is lower and stereo matching can be performed on larger blocks, yielding a sparse disparity map. The key region is therefore subdivided into n*n blocks that are smaller than the m*m blocks of the non-key region. This improves the quality and speed of image detection, allocates detection resources reasonably, and solves the problem that existing obstacle detection methods are too slow to meet real-time requirements.
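As a rough, purely illustrative calculation of why n < m pays off (all numbers assumed, none from the disclosure): with one match per block, the work is proportional to the number of blocks, i.e. to area/(n*n) for the key region and area/(m*m) for the non-key region.

```python
# Assumed example: a 1280x720 valid region whose bottom 40% is the key travel corridor.
key_area = 1280 * int(720 * 0.4)        # pixels in the key region
nonkey_area = 1280 * 720 - key_area     # pixels in the non-key region
n, m = 8, 32                            # illustrative block sizes with n < m

blocks_key = key_area // (n * n)        # dense: many small blocks where accuracy matters
blocks_nonkey = nonkey_area // (m * m)  # sparse: few large blocks elsewhere
blocks_uniform = (key_area + nonkey_area) // (n * n)  # if everything used n*n blocks

print(blocks_key, blocks_nonkey, blocks_key + blocks_nonkey, blocks_uniform)
# 5760 + 540 = 6300 block matches versus 14400 with uniform n*n tiling.
```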
The foregoing merely describes preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (4)

1. An obstacle detection method based on space segmentation, characterized by including the following steps:
S1: performing offline calibration of a binocular vision system to obtain intrinsic and extrinsic camera parameters;
S2: capturing left and right views of a scene with the binocular vision system;
S3: rectifying the left and right views captured in S2 using the intrinsic and extrinsic parameters of the binocular system obtained in S1;
S4: performing space segmentation on the left and right views to obtain a key region and a non-key region;
S5: computing disparity for the key region and the non-key region to obtain different depth values;
S6: segmenting the depth values obtained in S5, extracting the obstacles in the corresponding depth map by contour detection, and computing obstacle-related information from the intrinsic and extrinsic camera parameters.
2. The obstacle detection method based on space segmentation according to claim 1, characterized in that step S4 proceeds as follows:
S401: dividing the left and right views simultaneously into a valid region and an invalid region, denoted the P region and the N region respectively;
S402: dividing the P-region view into a key region and a non-key region;
S403: labeling the key region 1 and the non-key region 0;
S404: dividing the key region into equal-sized n*n blocks and the non-key region into equal-sized m*m blocks.
3. The obstacle detection method based on space segmentation according to claim 1 or 2, characterized in that step S5 specifically computes a depth value for each n*n block and each m*m block obtained in step S404 and assigns that depth value to every pixel of the corresponding block.
4. The obstacle detection method based on space segmentation according to claim 2, characterized in that n < m.
CN201810063300.2A 2018-01-23 2018-01-23 Obstacle detection method based on space segmentation Pending CN108230403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810063300.2A CN108230403A (en) 2018-01-23 2018-01-23 Obstacle detection method based on space segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810063300.2A CN108230403A (en) 2018-01-23 2018-01-23 Obstacle detection method based on space segmentation

Publications (1)

Publication Number Publication Date
CN108230403A 2018-06-29

Family

ID=62668623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810063300.2A Pending CN108230403A (en) Obstacle detection method based on space segmentation

Country Status (1)

Country Link
CN (1) CN108230403A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150085082A1 (en) * 2013-09-26 2015-03-26 Sick Ag 3D Camera in Accordance with the Stereoscopic Principle and Method of Detecting Depth Maps
CN105868687A (en) * 2015-02-09 2016-08-17 丰田自动车株式会社 Traveling road surface detection apparatus and traveling road surface detection method
CN106681353A (en) * 2016-11-29 2017-05-17 南京航空航天大学 Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion
CN106650708A (en) * 2017-01-19 2017-05-10 南京航空航天大学 Visual detection method and system for automatic driving obstacles

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382591A (en) * 2018-12-27 2020-07-07 海信集团有限公司 Binocular camera ranging correction method and vehicle-mounted equipment
CN111382591B (en) * 2018-12-27 2023-09-29 海信集团有限公司 Binocular camera ranging correction method and vehicle-mounted equipment
CN111045029A (en) * 2019-12-18 2020-04-21 深圳奥比中光科技有限公司 Fused depth measuring device and measuring method
CN111045029B (en) * 2019-12-18 2022-06-28 奥比中光科技集团股份有限公司 Fused depth measuring device and measuring method
CN111260715A (en) * 2020-01-20 2020-06-09 深圳市普渡科技有限公司 Depth map processing method, small obstacle detection method and system
WO2021147545A1 (en) * 2020-01-20 2021-07-29 深圳市普渡科技有限公司 Depth image processing method, small obstacle detection method and system, robot, and medium
CN111260715B (en) * 2020-01-20 2023-09-08 深圳市普渡科技有限公司 Depth map processing method, small obstacle detection method and system
CN117315003A (en) * 2023-12-01 2023-12-29 常州微亿智造科技有限公司 Three-dimensional measurement method, system, equipment and medium based on binocular grating projection

Similar Documents

Publication Publication Date Title
WO2021223368A1 (en) Target detection method based on vision, laser radar, and millimeter-wave radar
CN106681353B (en) The unmanned plane barrier-avoiding method and system merged based on binocular vision with light stream
US10129521B2 (en) Depth sensing method and system for autonomous vehicles
US5937079A (en) Method for stereo image object detection
Nedevschi et al. High accuracy stereovision approach for obstacle detection on non-planar roads
CN108230403A (en) A kind of obstacle detection method based on space segmentation
Pantilie et al. Real-time obstacle detection in complex scenarios using dense stereo vision and optical flow
Salman et al. Distance measurement for self-driving cars using stereo camera
CN113196007B (en) Camera system applied to vehicle
Pantilie et al. Real-time obstacle detection using dense stereo vision and dense optical flow
US20180285661A1 (en) Image processing device, object recognizing device, device control system, image processing method, and computer-readable medium
KR101076406B1 (en) Apparatus and Method for Extracting Location and velocity of Obstacle
JP4052291B2 (en) Image processing apparatus for vehicle
CN108205315A (en) A kind of robot automatic navigation method based on binocular vision
CN110780287A (en) Distance measurement method and distance measurement system based on monocular camera
CN107220632B (en) Road surface image segmentation method based on normal characteristic
Raguraman et al. Intelligent drivable area detection system using camera and LiDAR sensor for autonomous vehicle
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readable recording medium storing traffic participants detecting and locating program
US11908148B2 (en) Information processing device to determine object distance
US11741625B2 (en) Systems and methods for thermal imaging
Chavan et al. Obstacle detection and avoidance for automated vehicle: A review
Ma et al. Disparity estimation based on fusion of vision and LiDAR
Unger et al. Efficient stereo matching for moving cameras and decalibrated rigs
CN110488320A (en) A method of vehicle distances are detected using stereoscopic vision
US20240144507A1 (en) Electronic device and control method

Legal Events

Code and description:
PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 20180629)