CN111127534A - Obstacle detection method - Google Patents
Obstacle detection method
- Publication number
- CN111127534A CN111127534A CN201911072142.8A CN201911072142A CN111127534A CN 111127534 A CN111127534 A CN 111127534A CN 201911072142 A CN201911072142 A CN 201911072142A CN 111127534 A CN111127534 A CN 111127534A
- Authority
- CN
- China
- Prior art keywords
- obstacle
- image
- barrier
- depth data
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an obstacle detection method comprising the following steps: A. eroding and dilating the difference image produced by the obstacle extraction method to remove noise and fill holes; B. extracting the image contours; C. calculating the contour areas; D. sorting and labeling the connected components; E. judging each connected component against the obstacle rule, and regarding a component that meets the requirement as an obstacle; F. marking the obstacle area with a rectangular frame. The obstacle detection method provides a decision basis for the mobile robot's obstacle avoidance behavior, completing on-line obstacle detection.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method for detecting obstacles.
Background
With the rapid development of artificial intelligence technology, intelligent robots such as floor-sweeping robots and transport robots are used ever more widely and have a very broad market prospect.
During autonomous navigation, a robot must identify surrounding obstacles and avoid them to prevent collisions. However, traditional obstacle identification methods still suffer from poor accuracy: they cannot judge obstacles that fit a given rule, extract and classify individual obstacles, or provide a decision basis for the mobile robot's obstacle avoidance behavior, and so cannot complete on-line obstacle detection.
Disclosure of Invention
The present invention is directed to a method for detecting an obstacle, so as to solve the problems mentioned in the background art.
To achieve the above purpose, the invention provides the following technical solution:
a method for detecting an obstacle, comprising the steps of:
A. eroding and dilating the difference image produced by the obstacle extraction method to remove noise and fill holes;
B. extracting the image contours;
C. calculating the contour areas;
D. sorting and labeling the connected components;
E. judging each connected component against the obstacle rule, and regarding a component that meets the requirement as an obstacle;
F. marking the obstacle area with a rectangular frame.
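Step F's rectangular frame can be sketched as follows. This is a minimal illustrative sketch, assuming a connected component is given as a list of (row, col) pixel coordinates and that an (x, y, width, height) tuple is the desired output; neither convention is specified in the text.

```python
# Sketch of step F: compute the axis-aligned bounding rectangle of one
# labeled obstacle component from its pixel coordinates.
# The (x, y, width, height) return convention is an assumption.

def bounding_rect(pixels):
    xs = [x for _, x in pixels]          # column coordinates
    ys = [y for y, _ in pixels]          # row coordinates
    return (min(xs), min(ys),
            max(xs) - min(xs) + 1,       # width in pixels
            max(ys) - min(ys) + 1)       # height in pixels

obstacle = [(2, 3), (2, 4), (3, 3)]      # (row, col) pixels of one component
rect = bounding_rect(obstacle)           # -> (3, 2, 2, 2)
```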
As a further scheme of the invention: step A adopts a background subtraction method to remove regions of no interest, so as to highlight the obstacle regions.
As a further scheme of the invention: step D sorts and labels the connected components using a bubble sort.
As a further scheme of the invention: the obstacle extraction method comprises the following specific processing steps. A 3D camera is mounted on the head of the robot, facing the ground, and the depth data map is cropped to a suitable resolution according to the obstacle avoidance range. If the ground is flat and free of obstacles, the difference between each pixel value of the depth data map and its surrounding pixel values is essentially constant; when an obstacle is present on the ground, the difference between the pixel values at the obstacle's edge points and some of the surrounding pixel values exceeds 0.2 m. The obstacle edges are therefore extracted from the characteristic that the depth values at the obstacle edge points change sharply within the overall depth data map. Each time obstacle detection is performed, the following filtering is applied to suppress the flat regions of no interest and highlight the obstacle region: D(x, y) = 1 if I(x, y) > T, and D(x, y) = 0 otherwise, where D(x, y) denotes the processed image, I(x, y) the image to be detected, and T a threshold. Connected-component analysis is then performed: components that satisfy a given rule are judged to be obstacles, and the individual obstacles are extracted and classified, providing a decision basis for the mobile robot's obstacle avoidance behavior and completing on-line obstacle detection.
As a further scheme of the invention: the obstacle extraction method computes the difference between the depth data of the obstacle edge points and the depth data of adjacent pixels, and the moving target can be detected by binarizing the difference image.
As a further scheme of the invention: connected components are divided into 4-connected and 8-connected. In 4-connectivity, a pixel is directly connected to its 4 neighbors above, below, left, and right; 8-connectivity additionally connects the 4 diagonal neighbors. The two modes represent image connectivity.
As a further scheme of the invention: the difference between the value of each pixel and the value of the surrounding pixels is determined by the accuracy of the 3D camera.
Compared with the prior art, the invention has the following beneficial effect: the obstacle detection method provides a decision basis for the mobile robot's obstacle avoidance behavior, completing on-line obstacle detection.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, Embodiment 1: in an embodiment of the present invention, an obstacle detection method includes the following steps:
A. eroding and dilating the difference image produced by the obstacle extraction method to remove noise and fill holes;
B. extracting the image contours;
C. calculating the contour areas;
D. sorting and labeling the connected components using a bubble sort;
E. judging each connected component against the obstacle rule, and regarding a component that meets the requirement as an obstacle;
F. marking the obstacle area with a rectangular frame.
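The morphological cleanup of step A can be sketched as follows. This is a minimal pure-Python illustration on a binary difference image; the cross-shaped 3×3 structuring element is an assumption, as the text does not specify the element shape.

```python
# Illustrative sketch of step A: erosion then dilation on a binary
# difference image, removing isolated noise pixels while keeping the
# obstacle blob. The 3x3 cross structuring element is an assumption.

def _neighbors(img, r, c):
    # The pixel itself plus its 4-connected neighbors (up, down, left, right).
    h, w = len(img), len(img[0])
    vals = [img[r][c]]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < h and 0 <= cc < w:
            vals.append(img[rr][cc])
    return vals

def erode(img):
    # A pixel survives only if it and all its in-bounds 4-neighbors are set.
    return [[1 if all(_neighbors(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def dilate(img):
    # A pixel is set if it or any of its 4-neighbors is set.
    return [[1 if any(_neighbors(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

noisy = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 0],   # isolated noise pixel
]
cleaned = dilate(erode(noisy))   # noise removed, blob core preserved
```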
The obstacle extraction method comprises the following specific processing steps. A 3D camera is mounted on the head of the robot, facing the ground, and the depth data map is cropped to a suitable resolution according to the obstacle avoidance range. If the ground is flat and free of obstacles, the difference between each pixel value of the depth data map and its surrounding pixel values is essentially constant; when an obstacle is present on the ground, the difference between the pixel values at the obstacle's edge points and some of the surrounding pixel values exceeds 0.2 m. The obstacle edges are therefore extracted from the characteristic that the depth values at the obstacle edge points change sharply within the overall depth data map. Each time obstacle detection is performed, the following filtering is applied to suppress the flat regions of no interest and highlight the obstacle region: D(x, y) = 1 if I(x, y) > T, and D(x, y) = 0 otherwise, where D(x, y) denotes the processed image, I(x, y) the image to be detected, and T a threshold. Connected-component analysis is then performed: components that satisfy a given rule are judged to be obstacles, and the individual obstacles are extracted and classified, providing a decision basis for the mobile robot's obstacle avoidance behavior and completing on-line obstacle detection.
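The threshold filtering described above can be sketched as a simple binarization. Treating I(x, y) as a precomputed depth-difference image and reusing the 0.2 m edge-point criterion as the threshold T are assumptions for illustration.

```python
# Sketch of the filtering step: binarize the depth-difference image
# I(x, y) with threshold T so that flat (low-difference) regions are
# suppressed and obstacle regions are highlighted.
# T = 0.2 m reuses the text's edge-point criterion; treating the input
# as a precomputed difference image is an assumption.

def binarize(diff_img, T=0.2):
    # D(x, y) = 1 if I(x, y) > T, else 0
    return [[1 if v > T else 0 for v in row] for row in diff_img]

diff = [
    [0.01, 0.02, 0.01],
    [0.01, 0.35, 0.30],   # depth differences > 0.2 m: obstacle edge
    [0.02, 0.01, 0.02],
]
mask = binarize(diff)
```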
The obstacle extraction method computes the difference between the depth data of the obstacle edge points and the depth data of adjacent pixels, and the moving target can be detected by binarizing the difference image. Connected components are divided into 4-connected and 8-connected: in 4-connectivity, a pixel is directly connected to its 4 neighbors above, below, left, and right, while 8-connectivity additionally connects the 4 diagonal neighbors; the two modes represent image connectivity. The difference between each pixel value and the surrounding pixel values is determined by the accuracy of the 3D camera.
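Steps D and E can be sketched as follows: connected components are labeled under either 4- or 8-connectivity, then their areas are bubble-sorted as the text describes. The flood-fill labeling and the descending sort order are illustrative choices, not mandated by the text.

```python
# Sketch of steps D-E: flood-fill labeling of connected components
# (4- or 8-connectivity) in a binary mask, then bubble-sorting the
# component areas. The descending order is an assumed convention.
from collections import deque

def label_components(mask, connectivity=4):
    offs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offs += [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # diagonal neighbors
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    areas = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not labels[r][c]:
                lab = len(areas) + 1
                q, area = deque([(r, c)]), 0
                labels[r][c] = lab
                while q:                       # breadth-first flood fill
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in offs:
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < h and 0 <= xx < w
                                and mask[yy][xx] and not labels[yy][xx]):
                            labels[yy][xx] = lab
                            q.append((yy, xx))
                areas.append(area)
    return labels, areas

def bubble_sort_desc(vals):
    # Bubble sort, as named in the text; largest component first.
    vals = list(vals)
    for i in range(len(vals)):
        for j in range(len(vals) - 1 - i):
            if vals[j] < vals[j + 1]:
                vals[j], vals[j + 1] = vals[j + 1], vals[j]
    return vals

mask = [
    [1, 1, 0, 0],
    [0, 0, 1, 0],   # (1, 2) touches (0, 1) only diagonally
    [0, 0, 1, 1],
]
_, areas4 = label_components(mask, connectivity=4)   # two separate blobs
_, areas8 = label_components(mask, connectivity=8)   # diagonals merge them
```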
Embodiment 2: on the basis of Embodiment 1, step A uses background subtraction to eliminate regions of no interest, so as to highlight the obstacle regions.
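The background subtraction of Embodiment 2 can be sketched as follows, under the assumptions that a reference depth map of the empty floor is available and that the 0.2 m edge criterion serves as the threshold; both are illustrative choices.

```python
# Sketch of Embodiment 2: subtract a reference depth map B of the empty
# floor from the current depth map and threshold the absolute difference.
# The reference-frame approach and T = 0.2 m are assumptions.

def background_subtract(current, background, T=0.2):
    h, w = len(current), len(current[0])
    return [[1 if abs(current[r][c] - background[r][c]) > T else 0
             for c in range(w)] for r in range(h)]

background = [[1.50, 1.50], [1.52, 1.51]]   # flat-floor depths (meters)
current    = [[1.50, 1.10], [1.52, 1.51]]   # one pixel 0.4 m above floor
fg = background_subtract(current, background)   # foreground (obstacle) mask
```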
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.
Claims (7)
1. A method for detecting an obstacle, comprising the steps of:
A. eroding and dilating the difference image produced by the obstacle extraction method to remove noise and fill holes;
B. extracting the image contours;
C. calculating the contour areas;
D. sorting and labeling the connected components;
E. judging each connected component against the obstacle rule, and regarding a component that meets the requirement as an obstacle;
F. marking the obstacle area with a rectangular frame.
2. The method according to claim 1, wherein step A adopts a background subtraction method to remove regions of no interest so as to highlight the obstacle region.
3. The method according to claim 1, wherein step D sorts and labels the connected components using a bubble sort.
4. The method according to claim 1, wherein the obstacle extraction method comprises the following specific steps. A 3D camera is mounted on the head of the robot, facing the ground, and the depth data map is cropped to a suitable resolution according to the obstacle avoidance range. If the ground is flat and free of obstacles, the difference between each pixel value of the depth data map and its surrounding pixel values is essentially constant; when an obstacle is present on the ground, the difference between the pixel values at the obstacle's edge points and some of the surrounding pixel values exceeds 0.2 m. The obstacle edges are therefore extracted from the characteristic that the depth values at the obstacle edge points change sharply within the overall depth data map. Each time obstacle detection is performed, the following filtering is applied to suppress the flat regions of no interest and highlight the obstacle region: D(x, y) = 1 if I(x, y) > T, and D(x, y) = 0 otherwise, where D(x, y) denotes the processed image, I(x, y) the image to be detected, and T a threshold. Connected-component analysis is then performed: components that satisfy a given rule are judged to be obstacles, and the individual obstacles are extracted and classified, providing a decision basis for the mobile robot's obstacle avoidance behavior and completing on-line obstacle detection.
5. The method according to claim 4, wherein the obstacle extraction method computes the difference between the depth data of the obstacle edge points and the depth data of adjacent pixels, and the moving target can be detected by binarizing the difference image.
6. The method according to claim 4, wherein connected components are divided into 4-connected and 8-connected: in 4-connectivity, a pixel is directly connected to its 4 neighbors above, below, left, and right, while 8-connectivity additionally connects the 4 diagonal neighbors; the two modes represent image connectivity.
7. The method according to claim 4, wherein the difference between each pixel value and the surrounding pixel values is determined by the accuracy of the 3D camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911072142.8A CN111127534A (en) | 2019-11-05 | 2019-11-05 | Obstacle detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911072142.8A CN111127534A (en) | 2019-11-05 | 2019-11-05 | Obstacle detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111127534A true CN111127534A (en) | 2020-05-08 |
Family
ID=70495546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911072142.8A Pending CN111127534A (en) | 2019-11-05 | 2019-11-05 | Obstacle detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111127534A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113887400A (en) * | 2021-09-29 | 2022-01-04 | 北京百度网讯科技有限公司 | Obstacle detection method, model training method and device and automatic driving vehicle |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5201011A (en) * | 1991-11-19 | 1993-04-06 | Xerox Corporation | Method and apparatus for image hand markup detection using morphological techniques |
CN104331910A (en) * | 2014-11-24 | 2015-02-04 | 沈阳建筑大学 | Track obstacle detection system based on machine vision |
CN109271944A (en) * | 2018-09-27 | 2019-01-25 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, electronic equipment, vehicle and storage medium |
CN109448045A (en) * | 2018-10-23 | 2019-03-08 | 南京华捷艾米软件科技有限公司 | Plane polygon object measuring method and machine readable storage medium based on SLAM |
Non-Patent Citations (1)
Title |
---|
LIU Yugang et al., "Preprocessing Method for Reversing Obstacle Detection Based on Binocular Vision Images", Journal of Chongqing Jiaotong University (Natural Science Edition) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yuan et al. | Robust lane detection for complicated road environment based on normal map | |
CN111487641B (en) | Method and device for detecting object by using laser radar, electronic equipment and storage medium | |
EP2811423B1 (en) | Method and apparatus for detecting target | |
Yenikaya et al. | Keeping the vehicle on the road: A survey on on-road lane detection systems | |
Gomez et al. | Traffic lights detection and state estimation using hidden markov models | |
CN104134209B (en) | A kind of feature extracting and matching method and system in vision guided navigation | |
EP2779025B1 (en) | Method and system for detecting road edge | |
CN115049700A (en) | Target detection method and device | |
CN106682641A (en) | Pedestrian identification method based on image with FHOG- LBPH feature | |
CN111007531A (en) | Road edge detection method based on laser point cloud data | |
CN104915642B (en) | Front vehicles distance measuring method and device | |
CN107480603A (en) | Figure and method for segmenting objects are synchronously built based on SLAM and depth camera | |
Li et al. | Road markings extraction based on threshold segmentation | |
Li et al. | A lane marking detection and tracking algorithm based on sub-regions | |
Seo et al. | Detection and tracking of boundary of unmarked roads | |
Qing et al. | A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation | |
Sucgang et al. | Road surface obstacle detection using vision and LIDAR for autonomous vehicle | |
CN111127534A (en) | Obstacle detection method | |
Jiang et al. | Mobile robot gas source localization via top-down visual attention mechanism and shape analysis | |
CN108388854A (en) | A kind of localization method based on improvement FAST-SURF algorithms | |
Raikar et al. | Automatic building detection from satellite images using internal gray variance and digital surface model | |
CN107563282A (en) | For unpiloted recognition methods, electronic equipment, storage medium and system | |
Vachmanus et al. | Road detection in snowy forest environment using rgb camera | |
Oniga et al. | Fast obstacle detection using U-disparity maps with stereo vision | |
KR101910256B1 (en) | Lane Detection Method and System for Camera-based Road Curvature Estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200508 ||