CN111127534A - Obstacle detection method

Obstacle detection method

Info

Publication number
CN111127534A
CN111127534A
Authority
CN
China
Prior art keywords: obstacle, image, barrier, depth data, area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911072142.8A
Other languages
Chinese (zh)
Inventor
庄永军 (Zhuang Yongjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
QIHAN TECHNOLOGY CO LTD
Original Assignee
QIHAN TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by QIHAN TECHNOLOGY CO LTD filed Critical QIHAN TECHNOLOGY CO LTD
Priority to CN201911072142.8A priority Critical patent/CN111127534A/en
Publication of CN111127534A publication Critical patent/CN111127534A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an obstacle detection method, which comprises the following steps: A. eroding and dilating the difference image produced by the obstacle extraction method, so as to remove noise and fill holes; B. extracting image contours; C. calculating the contour areas; D. sorting and labeling the connected components; E. evaluating each connected component against the obstacle rule and, if the requirement is met, regarding it as an obstacle; F. marking the obstacle region with a rectangular frame. The obstacle detection method can provide a decision basis for the mobile robot to adopt obstacle-avoidance behaviors, so as to complete the online detection of obstacles.

Description

Obstacle detection method
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method for detecting obstacles.
Background
With the rapid development of artificial intelligence technology, intelligent robots such as floor-sweeping robots and transport robots are being applied ever more widely and have a very broad market prospect.
During autonomous navigation, a robot must recognize surrounding obstacles and avoid them in order to prevent collisions. However, traditional obstacle recognition methods still suffer from poor recognition accuracy: they cannot judge obstacles that conform to a given rule, extract and classify individual obstacles, or provide a decision basis for the mobile robot to adopt obstacle-avoidance behaviors, and therefore cannot complete online obstacle detection.
Disclosure of Invention
The present invention aims to provide an obstacle detection method that solves the problems mentioned in the background art.
To achieve this aim, the invention provides the following technical solution:
A method for detecting an obstacle, comprising the following steps:
A. eroding and dilating the difference image produced by the obstacle extraction method, so as to remove noise and fill holes;
B. extracting image contours;
C. calculating the contour areas;
D. sorting and labeling the connected components;
E. evaluating each connected component against the obstacle rule; if the requirement is met, the component is regarded as an obstacle;
F. marking the obstacle region with a rectangular frame.
As a further scheme of the invention: step A uses a background subtraction method to remove regions that are not of interest, so as to highlight the obstacle regions.
As a further scheme of the invention: in step D, the connected components are sorted and labeled using a bubble sort.
As a further scheme of the invention, the obstacle extraction method comprises the following processing steps: a 3D camera is mounted on the head of the robot, facing the ground, and the depth data map is cropped to a suitable resolution according to the obstacle-avoidance range. If the ground is flat and free of obstacles, the difference between each pixel value of the resulting depth data map and its surrounding pixel values is essentially constant. When an obstacle is present on the ground, the difference between the pixel value at an edge point of the obstacle and some of the surrounding pixel values exceeds 0.2 m, so the obstacle edge is extracted from the characteristic that the depth value at an obstacle edge point changes sharply within the whole depth data map. Each time obstacle detection is performed, the following filtering is applied to suppress the flat regions that are not of interest and highlight the obstacle region:
D(x, y) = \begin{cases} 1, & I(x, y) > T \\ 0, & I(x, y) \le T \end{cases}
In the formula, D(x, y) denotes the processed image, I(x, y) denotes the image to be detected, and T is a threshold. Connected-component analysis is then carried out to judge as obstacles those connected components that conform to a certain rule, and the individual obstacles are extracted and classified, providing a decision basis for the mobile robot to adopt obstacle-avoidance behaviors so as to complete the online detection of obstacles.
As a further scheme of the invention: the obstacle extraction method computes difference values between the depth data of the obstacle edge points and the depth data of the adjacent pixels, and the moving target can be detected by binarizing the difference image.
As a further scheme of the invention: connectivity is divided into 4-connectivity and 8-connectivity. In 4-connectivity, a pixel is directly connected to its 4 neighbors above, below, to the left, and to the right; 8-connectivity additionally connects the pixel to the 4 diagonal neighbors. The two modes describe image connectivity.
As a further scheme of the invention: the difference between the value of each pixel and the value of the surrounding pixels is determined by the accuracy of the 3D camera.
Compared with the prior art, the invention has the following beneficial effect: the obstacle detection method can provide a decision basis for the mobile robot to adopt obstacle-avoidance behaviors, so as to complete the online detection of obstacles.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, Embodiment 1: in an embodiment of the present invention, a method for detecting an obstacle comprises the following steps:
A. eroding and dilating the difference image produced by the obstacle extraction method, so as to remove noise and fill holes;
B. extracting image contours;
C. calculating the contour areas;
D. sorting and labeling the connected components using a bubble sort;
E. evaluating each connected component against the obstacle rule; if the requirement is met, the component is regarded as an obstacle;
F. marking the obstacle region with a rectangular frame.
The obstacle extraction method comprises the following specific processing steps: a 3D camera is mounted on the head of the robot, facing the ground, and the depth data map is cropped to a suitable resolution according to the obstacle-avoidance range. If the ground is flat and free of obstacles, the difference between each pixel value of the resulting depth data map and its surrounding pixel values is essentially constant. When an obstacle is present on the ground, the difference between the pixel value at an edge point of the obstacle and some of the surrounding pixel values exceeds 0.2 m, so the obstacle edge is extracted from the characteristic that the depth value at an obstacle edge point changes sharply within the whole depth data map. Each time obstacle detection is performed, the following filtering is applied to suppress the flat regions that are not of interest and highlight the obstacle region:
D(x, y) = \begin{cases} 1, & I(x, y) > T \\ 0, & I(x, y) \le T \end{cases}
In the formula, D(x, y) denotes the processed image, I(x, y) denotes the image to be detected, and T is a threshold. Connected-component analysis is then carried out to judge as obstacles those connected components that conform to a certain rule, and the individual obstacles are extracted and classified, providing a decision basis for the mobile robot to adopt obstacle-avoidance behaviors so as to complete the online detection of obstacles.
The obstacle extraction method computes difference values between the depth data of the obstacle edge points and the depth data of the adjacent pixels, and the moving target can be detected by binarizing the difference image. Connectivity is divided into 4-connectivity and 8-connectivity: in 4-connectivity a pixel is directly connected to its 4 neighbors above, below, to the left, and to the right, while 8-connectivity additionally connects the pixel to the 4 diagonal neighbors; the two modes describe image connectivity. The difference between each pixel value and the surrounding pixel values is determined by the accuracy of the 3D camera.
Embodiment 2: on the basis of Embodiment 1, step A uses a background subtraction method to eliminate regions that are not of interest, so as to highlight the obstacle regions.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted merely for clarity; those skilled in the art should take the specification as a whole, and the technical solutions of the embodiments may be suitably combined to form other implementations understandable to those skilled in the art.

Claims (7)

1. A method for detecting an obstacle, comprising the steps of:
A. eroding and dilating the difference image produced by the obstacle extraction method, so as to remove noise and fill holes;
B. extracting image contours;
C. calculating the contour areas;
D. sorting and labeling the connected components;
E. evaluating each connected component against the obstacle rule; if the requirement is met, the component is regarded as an obstacle;
F. marking the obstacle region with a rectangular frame.
2. The method according to claim 1, wherein step A adopts a background subtraction method to remove regions not of interest, so as to highlight the obstacle region.
3. The method for detecting an obstacle according to claim 1, wherein step D employs a bubble sort to sort and label the connected components.
4. The method for detecting an obstacle according to claim 1, wherein the obstacle extraction method comprises the following specific steps: a 3D camera is mounted on the head of the robot, facing the ground, and the depth data map is cropped to a suitable resolution according to the obstacle-avoidance range; if the ground is flat and free of obstacles, the difference between each pixel value of the resulting depth data map and its surrounding pixel values is essentially constant; when an obstacle is present on the ground, the difference between the pixel value at an edge point of the obstacle and some of the surrounding pixel values exceeds 0.2 m, so the obstacle edge is extracted from the characteristic that the depth value at an obstacle edge point changes sharply within the whole depth data map; each time obstacle detection is performed, the following filtering is applied to suppress the flat regions that are not of interest and highlight the obstacle region:
D(x, y) = \begin{cases} 1, & I(x, y) > T \\ 0, & I(x, y) \le T \end{cases}
In the formula, D(x, y) denotes the processed image, I(x, y) denotes the image to be detected, and T is a threshold; connected-component analysis is then carried out to judge as obstacles those connected components that conform to a certain rule, and the individual obstacles are extracted and classified, providing a decision basis for the mobile robot to adopt obstacle-avoidance behaviors so as to complete the online detection of obstacles.
5. The method for detecting an obstacle according to claim 4, wherein the obstacle extraction method computes difference values from the obstacle edge point depth data and the adjacent pixel depth data, and the moving target can be detected by binarizing the difference image.
6. The method according to claim 4, wherein connectivity is divided into 4-connectivity and 8-connectivity: in 4-connectivity a pixel is directly connected to its 4 neighbors in the upper, lower, left, and right directions, while 8-connectivity additionally connects the pixel to the 4 diagonal neighbors; the two modes represent image connectivity.
7. The method according to claim 4, wherein the difference between each pixel value and the surrounding pixel values is determined by the accuracy of the 3D camera.
CN201911072142.8A 2019-11-05 2019-11-05 Obstacle detection method Pending CN111127534A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911072142.8A CN111127534A (en) 2019-11-05 2019-11-05 Obstacle detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911072142.8A CN111127534A (en) 2019-11-05 2019-11-05 Obstacle detection method

Publications (1)

Publication Number Publication Date
CN111127534A true CN111127534A (en) 2020-05-08

Family

ID=70495546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911072142.8A Pending CN111127534A (en) 2019-11-05 2019-11-05 Obstacle detection method

Country Status (1)

Country Link
CN (1) CN111127534A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5201011A (en) * 1991-11-19 1993-04-06 Xerox Corporation Method and apparatus for image hand markup detection using morphological techniques
CN104331910A (en) * 2014-11-24 2015-02-04 沈阳建筑大学 Track obstacle detection system based on machine vision
CN109271944A (en) * 2018-09-27 2019-01-25 百度在线网络技术(北京)有限公司 Obstacle detection method, device, electronic equipment, vehicle and storage medium
CN109448045A (en) * 2018-10-23 2019-03-08 南京华捷艾米软件科技有限公司 Plane polygon object measuring method and machine readable storage medium based on SLAM

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘昱岗 (Liu Yugang) et al.: "Preprocessing method for reversing obstacle detection based on binocular vision images" (基于双目视觉图像的倒车障碍物检测预处理方法), Journal of Chongqing Jiaotong University (Natural Science Edition) (《重庆交通大学学报(自然科学版)》) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887400A (en) * 2021-09-29 2022-01-04 北京百度网讯科技有限公司 Obstacle detection method, model training method and device and automatic driving vehicle

Similar Documents

Publication Publication Date Title
Yuan et al. Robust lane detection for complicated road environment based on normal map
CN111487641B (en) Method and device for detecting object by using laser radar, electronic equipment and storage medium
EP2811423B1 (en) Method and apparatus for detecting target
Yenikaya et al. Keeping the vehicle on the road: A survey on on-road lane detection systems
Gomez et al. Traffic lights detection and state estimation using hidden markov models
CN104134209B (en) A kind of feature extracting and matching method and system in vision guided navigation
EP2779025B1 (en) Method and system for detecting road edge
CN115049700A (en) Target detection method and device
CN106682641A (en) Pedestrian identification method based on image with FHOG- LBPH feature
CN111007531A (en) Road edge detection method based on laser point cloud data
CN104915642B (en) Front vehicles distance measuring method and device
CN107480603A (en) Figure and method for segmenting objects are synchronously built based on SLAM and depth camera
Li et al. Road markings extraction based on threshold segmentation
Li et al. A lane marking detection and tracking algorithm based on sub-regions
Seo et al. Detection and tracking of boundary of unmarked roads
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
Sucgang et al. Road surface obstacle detection using vision and LIDAR for autonomous vehicle
CN111127534A (en) Obstacle detection method
Jiang et al. Mobile robot gas source localization via top-down visual attention mechanism and shape analysis
CN108388854A (en) A kind of localization method based on improvement FAST-SURF algorithms
Raikar et al. Automatic building detection from satellite images using internal gray variance and digital surface model
CN107563282A (en) For unpiloted recognition methods, electronic equipment, storage medium and system
Vachmanus et al. Road detection in snowy forest environment using rgb camera
Oniga et al. Fast obstacle detection using U-disparity maps with stereo vision
KR101910256B1 (en) Lane Detection Method and System for Camera-based Road Curvature Estimation

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200508)