CN109344687B - Vision-based obstacle detection method and device and mobile device

Info

Publication number: CN109344687B
Application number: CN201810884385.0A
Authority: CN (China)
Prior art keywords: obstacle, information, mobile equipment, image, processing
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109344687A
Inventor: 郭睿
Current Assignee: Shenzhen Topband Co Ltd
Original Assignee: Shenzhen Topband Co Ltd
Application filed by Shenzhen Topband Co Ltd on 2018-08-06, priority to CN201810884385.0A (priority date 2018-08-06)
Publication of CN109344687A: 2019-02-15
Application granted and publication of CN109344687B: 2021-04-16

Classifications

    • G06V20/10 Scenes; Scene-specific elements; Terrestrial scenes
    • G06F18/22 Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06T7/62 Image analysis; Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T9/20 Image coding; Contour coding, e.g. using detection of edges
    • G06V10/25 Image preprocessing; Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention relates to a vision-based obstacle detection method, a vision-based obstacle detection device and a mobile device. The method is applied to a mobile device that comprises an image acquisition device, which is arranged on the mobile device and used for acquiring a real-time image in the traveling direction of the mobile device. The vision-based obstacle detection method includes the steps of: S1, acquiring the current real-time image in the traveling direction of the mobile device collected by the image acquisition device; S2, analyzing the current real-time image based on a visual positioning and mapping system to obtain obstacle information in the traveling direction of the mobile device, the obstacle information being the distance information between an obstacle and the mobile device in that direction; and S3, controlling the mobile device to avoid obstacles according to the obstacle information. The invention detects obstacles without contact, avoids contact-based detection, and improves the reliability and detection accuracy of the mobile device.

Description

Vision-based obstacle detection method and device and mobile device
Technical Field
The invention relates to the field of robots, and in particular to a vision-based obstacle detection method, a vision-based obstacle detection device, and a mobile device.
Background
In recent years, floor sweeping robots have become increasingly popular household appliances, and they inevitably encounter various obstacles during daily operation.
Traditional sweeper designs mainly sense the presence of obstacles by colliding with them; over long-term use this not only shortens the service life of the sweeper but also degrades the accuracy of path planning during operation.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the present invention provides a vision-based obstacle detection method, a vision-based obstacle detection apparatus, and a mobile device.
The technical scheme adopted by the invention for solving the technical problems is as follows: constructing a vision-based obstacle detection method, which is applied to mobile equipment, wherein the mobile equipment comprises an image acquisition device, and the image acquisition device is arranged on the mobile equipment and is used for acquiring a real-time image in the traveling direction of the mobile equipment;
the vision-based obstacle detection method includes the steps of:
s1, acquiring a current real-time image in the traveling direction of the mobile equipment, which is acquired by the image acquisition device;
s2, analyzing and processing the current real-time image based on a visual positioning and map building system to obtain obstacle information in the traveling direction of the mobile equipment; the obstacle information is distance information between an obstacle and the mobile equipment in the traveling direction of the mobile equipment;
and S3, controlling the mobile equipment to avoid obstacles according to the obstacle information.
Preferably, the step S2 includes:
s21, performing cutting preprocessing on the current real-time image based on a preset area to obtain an area of interest;
s22, extracting the region of interest by adopting a preset extraction algorithm to obtain an obstacle candidate region in the region of interest;
s23, determining candidate region information of the obstacle candidate region according to the obstacle candidate region; the candidate region information includes height, width, and area of the obstacle candidate region.
Preferably, the step S22 includes:
s221, carrying out image gray processing on the region of interest to obtain a gray image of the obstacle in the region of interest;
s222, processing the outline of the gray image of the obstacle to obtain the outline of the obstacle;
s223, performing morphological opening and closing operation processing on the outline of the obstacle to obtain a closed area of the outline of the obstacle;
s224, processing the closed area to obtain the minimum rectangular boundary of the closed area, wherein the minimum rectangular boundary is the obstacle candidate area;
the step S23 includes:
and S231, calculating candidate region information of the candidate region of the obstacle according to the candidate region of the obstacle.
Preferably, the step S2 further includes:
s24, judging whether the candidate area information meets a first preset condition, if so, executing a step S25, and if not, exiting the analysis processing;
and S25, analyzing and processing the outline of the obstacle to obtain the distance information between the obstacle and the mobile equipment.
Preferably, the step S25 includes:
s251, encoding the outline of the obstacle to obtain encoding information of the outline of the obstacle;
s252, based on the position information of the mobile equipment at the current moment and a plurality of moments after the current moment, which is provided by the visual positioning and mapping system, performing pixel-level dense reconstruction processing on the candidate area of the obstacle to obtain a depth-of-field image of the obstacle;
s253, carrying out coding processing on the contour of the depth image of the obstacle to obtain coding information of the contour of the depth image;
s254, calculating a similarity value between the coded information of the contour of the obstacle and the coded information of the contour of the depth image;
s255, judging whether the similarity value meets a second preset condition, if so, executing the step S256, and if not, exiting the analysis processing;
and S256, outputting the distance information between the obstacle and the mobile equipment.
Preferably, the similarity value satisfying the second preset condition is:
the similarity value is greater than a preset threshold value.
Preferably, the step S3 includes:
s41, analyzing and processing all the obstacle information to obtain the minimum distance value in the obstacle information;
s42, determining an obstacle corresponding to the minimum distance;
and S43, controlling the mobile equipment to avoid the obstacle according to the position information of the obstacle corresponding to the minimum distance value.
The invention also constructs a vision-based obstacle detection device which is applied to mobile equipment, wherein the mobile equipment comprises an image acquisition device, and the image acquisition device is arranged on the mobile equipment and is used for acquiring a real-time image in the advancing direction of the mobile equipment;
the vision-based obstacle detection apparatus includes:
the acquisition unit is used for acquiring the current real-time image in the traveling direction of the mobile equipment, which is acquired by the image acquisition device;
the analysis processing unit is used for analyzing and processing the current real-time image based on a visual positioning and map building system to obtain the obstacle information in the traveling direction of the mobile equipment; the obstacle information is distance information between an obstacle and the mobile equipment in the traveling direction of the mobile equipment;
and the control unit is used for controlling the mobile equipment to avoid the obstacle according to the obstacle information.
The invention further provides a mobile device comprising a processor which, when executing a computer program stored in a memory, implements the steps of the method described above.
The invention further provides a readable storage medium on which a computer program is stored; when executed by a processor, the computer program carries out the steps of the method described above.
The implementation of the vision-based obstacle detection method of the invention has the following beneficial effects: the invention can detect obstacles without contact, avoiding contact-based detection and improving the reliability and detection accuracy of the mobile device. Moreover, the vision-based obstacle detection method can share a single image acquisition device with the visual positioning and mapping system of the mobile device, requires no additional auxiliary devices, and minimizes the overall design cost of the mobile device.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a real-time scene diagram of a mobile device moving forward on the ground while detecting an obstacle using the vision-based obstacle detection method of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the vision-based obstacle detection method of the present invention;
FIG. 3 is a schematic flow chart of a second embodiment of the vision-based obstacle detection method of the present invention;
FIG. 4 is a schematic flow chart of the obstacle candidate extraction method of the present invention;
FIG. 5 is a schematic view of the configuration of the vision-based obstacle detecting apparatus of the present invention;
FIG. 6 is a logical block diagram of a mobile device.
Detailed Description
For a more clear understanding of the technical features, objects and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
In order to solve the problems of the prior art, the present invention constructs a vision-based obstacle detection method that can be applied to mobile devices, including but not limited to a sweeper.
Referring to fig. 1, fig. 1 is a real-time scene diagram of a mobile device moving forward on the ground when the obstacle detection is performed by using the vision-based obstacle detection method of the present invention.
The mobile device according to the embodiment of the present invention is described by taking a floor cleaning machine as an example.
As shown in fig. 1, a real-time view of a sweeper 102 moving forward over a floor 101 is shown. 103 is an image acquisition device installed in the traveling direction of the sweeper 102, which may be a monocular camera. 104 is an obstacle placed on the floor 101 that currently appears in the forward field of view of the sweeper 102, where the field of view of the sweeper 102 is determined by the FOV (field of view) of the monocular camera 103. The image captured by the monocular camera 103 at the current time forms an image 301. Further, in fig. 1, 302 is the region of interest (ROI) of the cropping pre-processing stage, 401 is an obstacle projection, 303 is an obstacle candidate region including the obstacle projection 401, 304 is a depth map of the obstacle candidate region, and 402 is the obstacle projection within 304.
As shown in fig. 2, the present invention provides a vision-based obstacle detection method, which can be applied to a mobile device, where the mobile device includes an image acquisition device, and the image acquisition device is disposed on the mobile device and is used for acquiring a real-time image in a traveling direction of the mobile device. The mobile device includes, but is not limited to, a sweeper (e.g., 102 of fig. 1), and the image capture device includes, but is not limited to, a monocular camera as shown in fig. 1.
First embodiment:
specifically, as shown in fig. 2, the vision-based obstacle detection method of this embodiment includes the steps of:
and step S1, acquiring the current real-time image in the traveling direction of the mobile equipment, which is acquired by the image acquisition device.
Taking a sweeper as an example, the traveling direction of the mobile device follows the movement track of the sweeper, and the monocular camera installed in the traveling direction of the sweeper can acquire images in that direction in real time.
The current real-time image in the moving direction of the mobile equipment is a real-time image acquired by the image acquisition device at the current moment in the moving process of the mobile equipment.
Step S2, analyzing and processing the current real-time image based on the visual positioning and map building system to obtain the obstacle information in the traveling direction of the mobile equipment; the obstacle information is distance information between the obstacle and the mobile device in the traveling direction of the mobile device.
The visual positioning and mapping system is a vSLAM (visual simultaneous localization and mapping) system built into the mobile device. The vSLAM system performs real-time localization and mapping for the mobile device and outputs the position information of the mobile device in real time while it moves. The position information of the mobile device includes, but is not limited to, its coordinate information and angle information.
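As a non-limiting illustration (not part of the patent disclosure), the position information reported by such a vSLAM system could be held in a small record like the one below; the field names and units are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DevicePose:
    """Hypothetical record of the real-time position information output by the vSLAM system."""
    timestamp: float  # time at which the frame was captured, in seconds
    x: float          # planar coordinate of the mobile device in the map frame, in meters
    y: float          # planar coordinate of the mobile device in the map frame, in meters
    yaw: float        # heading angle of the mobile device, in radians

# Example: two consecutive poses the detection pipeline could consume
poses = [DevicePose(0.0, 0.10, 0.00, 0.0), DevicePose(0.1, 0.13, 0.00, 0.0)]
```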
It should be noted here that the obtained obstacle information, that is, the distance information between obstacles and the mobile device in the traveling direction of the mobile device, may include multiple pieces of distance information. As shown in fig. 1, the current real-time image 301 acquired by the image acquisition apparatus at the current moment generally contains several obstacles (fig. 1 shows only one obstacle for illustration), so step S2 of the embodiment of the present invention can obtain the distance from the mobile device to each obstacle contained in the current real-time image 301 acquired at each moment.
And step S3, controlling the mobile equipment to avoid obstacles according to the obstacle information.
Specifically, after the distance information between the obstacle and the mobile device in the traveling direction of the mobile device is obtained in step S2, the moving route of the mobile device may be adjusted according to the distance information between each obstacle and the mobile device, so as to avoid collision between the mobile device and the obstacle.
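As a hedged sketch of how steps S1 to S3 might be wired together in software (the function names, the placeholder bodies and the 0.3 m avoidance threshold are assumptions for illustration, not part of the patent):

```python
import cv2

SAFE_DISTANCE_M = 0.30  # assumed avoidance threshold; the patent does not fix a value

def analyze_frame(frame):
    """Placeholder for step S2: would return a list of obstacle distances in meters."""
    return []  # the detailed pipeline is sketched in the second embodiment below

def avoid_obstacle():
    """Placeholder for step S3: would issue a steering command to the drive controller."""
    pass

def detection_loop(camera_index=0):
    """Minimal S1-S3 loop: grab a frame, estimate obstacle distances, react."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while cap.isOpened():
            ok, frame = cap.read()            # S1: current real-time image in the traveling direction
            if not ok:
                break
            distances = analyze_frame(frame)  # S2: obstacle distances from the vision pipeline
            if distances and min(distances) < SAFE_DISTANCE_M:
                avoid_obstacle()              # S3: adjust the moving route to avoid a collision
    finally:
        cap.release()
```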
By implementing the invention, the monocular camera of the vSLAM system is used to acquire images of obstacles in the traveling direction of the mobile device in real time, realizing non-contact obstacle sensing and providing real-time warning of the distance between an obstacle and the mobile device. No additional auxiliary devices such as pressure, microwave or infrared sensors are required, so obstacles can be judged accurately, the overall design cost of the mobile device is minimized, and the obstacle detection accuracy is improved.
Second embodiment:
as shown in fig. 3, the vision-based obstacle detection method of this embodiment includes the following steps based on the first embodiment:
and step S21, performing cutting preprocessing on the current real-time image based on the preset area to obtain the area of interest.
The preset area may be determined according to the condition of the mobile device (e.g., the width of the mobile device) and the distance from the obstacle.
Obtaining the region of interest by cropping the current real-time image greatly reduces the amount of image data to be processed, which increases the image processing speed and lowers the hardware requirements. For example, if the current real-time image has a resolution of 640 × 480 and the preset area of the mobile device is 100 × 100 (that is, the mobile device only needs to judge obstacles within a 100 × 100 window of the 640 × 480 image in the traveling direction to meet the obstacle avoidance requirement), then the region of interest is a 100 × 100 image.
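A minimal sketch of the cropping pre-processing described above, assuming the preset 100 × 100 window is centered horizontally and anchored to the lower rows of the frame; the placement is an assumption, since the patent only states that the preset area depends on the device width and the distance to the obstacle.

```python
import numpy as np

def crop_roi(frame: np.ndarray, roi_w: int = 100, roi_h: int = 100) -> np.ndarray:
    """Cut a preset-size region of interest out of the current real-time image.

    The ROI is assumed to sit at the horizontal center and near the bottom of
    the frame, where ground obstacles in the traveling direction would appear.
    """
    h, w = frame.shape[:2]
    x0 = (w - roi_w) // 2          # centered horizontally
    y0 = h - roi_h                 # anchored to the bottom rows (closest ground area)
    return frame[y0:y0 + roi_h, x0:x0 + roi_w]

# Example with a synthetic 640x480 image
frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_roi(frame)              # roi.shape == (100, 100, 3)
```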
And S22, extracting the region of interest by adopting a preset extraction algorithm to obtain an obstacle candidate region in the region of interest.
Optionally, the candidate region extraction on the region of interest using the preset extraction algorithm may be performed according to the following steps.
As shown in fig. 4, one embodiment of extracting the obstacle candidate region using the preset extraction algorithm is as follows:
and step S221, carrying out image gray scale processing on the region of interest to obtain a gray scale image of the obstacle in the region of interest.
Step S222, the contour of the gray image of the obstacle is processed to obtain the contour of the obstacle.
Step S223 is to perform morphological opening and closing operation processing on the contour of the obstacle to obtain a closed region of the contour of the obstacle.
And S224, processing the closed area to obtain the minimum rectangular boundary of the closed area, wherein the minimum rectangular boundary is an obstacle candidate area.
Step S23, determining candidate region information of the candidate region of the obstacle according to the candidate region of the obstacle; the candidate region information includes the height, width, and area of the obstacle candidate region.
Optionally, step S23 includes:
step S231 calculates candidate region information of the obstacle candidate region from the obstacle candidate region.
Specifically, the candidate region information of the obstacle candidate region is calculated by calculating the height (h), width (w), and area (a) of the obstacle candidate region.
Step S24, determining whether the candidate region information satisfies a first preset condition, if yes, performing step S25, and if no, exiting the analysis process.
Optionally, the first preset condition is that the height (h), width (w) and area (a) of the obstacle candidate region are all larger than their corresponding set values. Let the set values be h0, w0 and a0 respectively; the candidate region information h, w, a of the obstacle candidate region then satisfies the first preset condition when h > h0, w > w0 and a > a0.
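A sketch of steps S221 to S224, S231 and S24 using common OpenCV primitives; the edge detector standing in for the contour processing, the 5 × 5 structuring element and the set values h0, w0, a0 are all assumptions made for illustration, since the patent does not fix them.

```python
import cv2
import numpy as np

H0, W0, A0 = 10, 10, 200   # assumed set values for height, width and area (pixels)

def extract_obstacle_candidates(roi_bgr: np.ndarray):
    """S221-S224 plus S231/S24: grayscale, contour, open/close, minimum rectangle, filter."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)                        # S221: gray image of the ROI
    edges = cv2.Canny(gray, 50, 150)                                        # S222: one possible contour processing
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    opened = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)                # S223: opening removes speckle
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)              #        closing seals the outline
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)                              # S224: minimum rectangular boundary
        a = w * h                                                           # S231: height, width and area
        if h > H0 and w > W0 and a > A0:                                    # S24: first preset condition
            candidates.append({"rect": (x, y, w, h), "contour": contour})
    return candidates
```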
And step S25, analyzing the outline of the obstacle to obtain the distance information between the obstacle and the mobile equipment.
Preferably, step S25 includes:
and step S251, encoding the outline of the obstacle to obtain the encoding information of the outline of the obstacle.
Here, the outline of the obstacle is the outline of the obstacle in the obstacle candidate region.
Step S252, based on the position information of the mobile device provided by the visual positioning and mapping system at the current time and several times thereafter, performs dense reconstruction processing at a pixel level on the candidate area of the obstacle, and obtains a depth-of-field image of the obstacle.
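The patent does not fix a particular algorithm for the pixel-level dense reconstruction of step S252. As one hedged illustration, depths inside the candidate region can be triangulated from pixel correspondences between the current frame and a later frame using the two camera poses provided by the vSLAM system; the intrinsic matrix K, the poses and the matching step are assumed inputs here.

```python
import cv2
import numpy as np

def triangulate_candidate_depths(K, R1, t1, R2, t2, pts1, pts2):
    """Triangulate matched pixels (Nx2 arrays) from two posed views.

    (R, t) are world-to-camera rotations/translations supplied by the vSLAM
    system for the current moment and a later moment; K is the monocular
    camera's intrinsic matrix. Returns the depth of each point in the first
    camera's frame, from which a depth-of-field image of the obstacle
    candidate region could be assembled.
    """
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T.astype(np.float64), pts2.T.astype(np.float64))
    pts3d = pts4d[:3] / pts4d[3]                     # homogeneous -> Euclidean, 3xN world points
    depths = (R1 @ pts3d + t1.reshape(3, 1))[2]      # z coordinate in the first camera frame
    return depths
```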
Step S253, performs encoding processing on the contour of the depth image of the obstacle to obtain encoding information of the contour of the depth image.
Here, the contour of the depth image is the contour of the obstacle within the depth-of-field image.
Step S254 calculates a similarity value between the encoded information of the contour of the obstacle and the encoded information of the contour of the depth image.
And step S255, judging whether the similarity value meets a second preset condition, if so, executing step S256, and if not, exiting the analysis processing.
Optionally, the similarity value satisfying the second preset condition is: the similarity value is greater than a preset threshold value.
Let the encoding information of the contour of the obstacle be denoted C0, the encoding information of the contour of the depth image be denoted C1, and their similarity value be denoted S. After C0 and C1 are obtained, the similarity value S of C0 and C1 is calculated and compared with a preset threshold to judge whether S is greater than the preset threshold. The preset threshold may be set in advance.
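The patent does not specify the encoding scheme for the contours. As one illustrative possibility, log-scaled Hu moment invariants can serve as the encoding information C0 and C1, with a simple distance-based similarity S compared against an assumed threshold.

```python
import cv2
import numpy as np

SIMILARITY_THRESHOLD = 0.8   # assumed second preset condition; not fixed by the patent

def encode_contour(contour: np.ndarray) -> np.ndarray:
    """S251/S253: encode an outline as log-scaled Hu moment invariants."""
    moments = cv2.moments(contour)
    hu = cv2.HuMoments(moments).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def contour_similarity(c0: np.ndarray, c1: np.ndarray) -> float:
    """S254: map the L1 distance between the two encodings to a similarity in (0, 1]."""
    return 1.0 / (1.0 + float(np.sum(np.abs(c0 - c1))))

def passes_second_condition(obstacle_contour, depth_contour) -> bool:
    """S255: judge whether the similarity value exceeds the preset threshold."""
    c0 = encode_contour(obstacle_contour)
    c1 = encode_contour(depth_contour)
    return contour_similarity(c0, c1) > SIMILARITY_THRESHOLD
```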
And step S256, outputting distance information between the obstacle and the mobile equipment.
Here, the distance information between the obstacle and the mobile device is the depth-of-field mean of the obstacle within the depth-of-field image, i.e. the mean of the distances between each pixel of the obstacle and the mobile device.
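A short sketch of this distance output: the mean depth over the obstacle's pixels. The boolean-mask convention and the metric units are assumptions for illustration.

```python
import numpy as np

def obstacle_distance(depth_image: np.ndarray, obstacle_mask: np.ndarray) -> float:
    """S256: distance to the obstacle = mean depth over its pixels.

    depth_image holds per-pixel distances (in meters) for the candidate region;
    obstacle_mask is a boolean array marking the pixels that belong to the obstacle.
    """
    values = depth_image[obstacle_mask]
    return float(values.mean()) if values.size else float("inf")

# Example with synthetic data: a 100x100 depth map whose central patch is the obstacle
depth = np.full((100, 100), 2.0)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
depth[mask] = 0.5
print(obstacle_distance(depth, mask))   # 0.5
```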
Of course, it can be understood that when there are multiple obstacles in the region of interest, each obstacle can be detected based on the above method, so as to obtain distance information of each obstacle from the mobile device.
Further, when there are a plurality of obstacles in the region of interest, the vision-based obstacle detection method of the present invention further includes the steps of:
and step S41, analyzing all the obstacle information to acquire the minimum distance value in the obstacle information.
Step S42, the obstacle corresponding to the distance minimum is determined.
And step S43, controlling the mobile equipment to avoid the obstacle according to the position information of the obstacle corresponding to the minimum distance.
In the traveling direction of the mobile device, the obstacle corresponding to the minimum distance value is closest to the mobile device and is therefore the most urgent to handle. Accordingly, when there are multiple obstacles in the region of interest, the obstacle corresponding to the minimum distance value is selected and the traveling route of the mobile device is adjusted according to it, so as to avoid a collision between the mobile device and that obstacle.
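A minimal sketch of steps S41 to S43, assuming each detected obstacle is summarized by a distance value and a planar position; the record layout and the steering hook are illustrative assumptions, not part of the patent.

```python
from typing import List, NamedTuple, Tuple

class ObstacleInfo(NamedTuple):
    distance_m: float               # output of step S256 for this obstacle
    position: Tuple[float, float]   # assumed planar position of the obstacle in the map frame

def pick_most_urgent(obstacles: List[ObstacleInfo]) -> ObstacleInfo:
    """S41/S42: find the obstacle with the minimum distance value."""
    return min(obstacles, key=lambda o: o.distance_m)

def avoid(obstacles: List[ObstacleInfo]) -> None:
    """S43: adjust the route according to the closest obstacle's position."""
    if not obstacles:
        return
    nearest = pick_most_urgent(obstacles)
    # Hypothetical controller hook; the patent only requires that the route be adjusted.
    print(f"steering away from obstacle at {nearest.position}, {nearest.distance_m:.2f} m ahead")

avoid([ObstacleInfo(1.2, (0.5, 2.0)), ObstacleInfo(0.4, (-0.1, 0.6))])
```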
As shown in fig. 5, the present invention further provides a vision-based obstacle detection apparatus applied to a mobile device, wherein the mobile device comprises an image acquisition device, and the image acquisition device is disposed on the mobile device and is used for acquiring a real-time image in the traveling direction of the mobile device.
The vision-based obstacle detection apparatus includes:
the acquiring unit 10 is configured to acquire a current real-time image in the traveling direction of the mobile device acquired by the image acquiring apparatus.
The analysis processing unit 20 is configured to perform analysis processing on the current real-time image based on a visual positioning and map building system to obtain obstacle information in the traveling direction of the mobile device; the obstacle information is distance information between the obstacle and the mobile device in the traveling direction of the mobile device.
And the control unit 30 is used for controlling the mobile device to avoid the obstacle according to the obstacle information.
As shown in fig. 6, the invention further provides a mobile device comprising a processor which, when executing a computer program stored in a memory, implements the steps of the method described above.
The invention further provides a readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method described above.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes and modifications made within the scope of the claims of the present invention should be covered by the claims of the present invention.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (6)

1. The vision-based obstacle detection method is characterized by being applied to mobile equipment, wherein the mobile equipment comprises an image acquisition device, and the image acquisition device is arranged on the mobile equipment and is used for acquiring a real-time image in the traveling direction of the mobile equipment;
the vision-based obstacle detection method includes the steps of:
s1, acquiring a current real-time image in the traveling direction of the mobile equipment, which is acquired by the image acquisition device;
s2, analyzing and processing the current real-time image based on a visual positioning and map building system to obtain obstacle information in the traveling direction of the mobile equipment; the obstacle information is distance information between an obstacle and the mobile equipment in the traveling direction of the mobile equipment; the step S2 includes:
s21, performing cutting preprocessing on the current real-time image based on a preset area to obtain an area of interest;
s22, extracting the region of interest by adopting a preset extraction algorithm to obtain an obstacle candidate region in the region of interest; the step S22 includes:
s221, carrying out image gray processing on the region of interest to obtain a gray image of the obstacle in the region of interest;
s222, processing the outline of the gray image of the obstacle to obtain the outline of the obstacle;
s223, performing morphological opening and closing operation processing on the outline of the obstacle to obtain a closed area of the outline of the obstacle;
s224, processing the closed area to obtain the minimum rectangular boundary of the closed area, wherein the minimum rectangular boundary is the obstacle candidate area;
s23, determining candidate region information of the obstacle candidate region according to the obstacle candidate region; the candidate region information includes height, width, and area of the obstacle candidate region;
the step S23 includes:
s231, calculating candidate region information of the obstacle candidate region according to the obstacle candidate region;
the step S2 further includes:
s24, judging whether the candidate area information meets a first preset condition, if so, executing a step S25, and if not, exiting the analysis processing;
s25, analyzing and processing the outline of the obstacle to obtain the distance information between the obstacle and the mobile equipment;
the step S25 includes:
s251, encoding the outline of the obstacle to obtain encoding information of the outline of the obstacle;
s252, based on the position information of the mobile equipment at the current moment and a plurality of moments after the current moment, which is provided by the visual positioning and mapping system, performing pixel-level dense reconstruction processing on the candidate area of the obstacle to obtain a depth-of-field image of the obstacle;
s253, carrying out coding processing on the contour of the depth image of the obstacle to obtain coding information of the contour of the depth image;
s254, calculating a similarity value between the coded information of the contour of the obstacle and the coded information of the contour of the depth image;
s255, judging whether the similarity value meets a second preset condition, if so, executing the step S256, and if not, exiting the analysis processing;
s256, outputting distance information between the obstacle and the mobile equipment;
and S3, controlling the mobile equipment to avoid obstacles according to the obstacle information.
2. The vision-based obstacle detection method of claim 1, wherein the similarity value satisfies a second preset condition as follows:
the similarity value is greater than a preset threshold value.
3. The vision-based obstacle detection method of claim 1, further comprising:
s41, analyzing and processing all the obstacle information to obtain the minimum distance value in the obstacle information;
s42, determining an obstacle corresponding to the minimum distance;
and S43, controlling the mobile equipment to avoid the obstacle according to the position information of the obstacle corresponding to the minimum distance value.
4. The vision-based obstacle detection device is applied to mobile equipment, and the mobile equipment comprises an image acquisition device, wherein the image acquisition device is arranged on the mobile equipment and is used for acquiring a real-time image in the traveling direction of the mobile equipment;
the vision-based obstacle detection apparatus includes:
the acquisition unit is used for acquiring the current real-time image in the traveling direction of the mobile equipment, which is acquired by the image acquisition device;
the analysis processing unit is used for analyzing and processing the current real-time image based on a visual positioning and map building system to obtain the obstacle information in the traveling direction of the mobile equipment; the obstacle information is distance information between an obstacle and the mobile equipment in the traveling direction of the mobile equipment; the analysis processing unit is used for obtaining the information of the obstacle through processing of cutting, extracting, gray level, outline and morphological opening and closing of the image; the analysis processing unit is specifically configured to perform the following steps:
s21, performing cutting preprocessing on the current real-time image based on a preset area to obtain an area of interest;
s22, extracting the region of interest by adopting a preset extraction algorithm to obtain an obstacle candidate region in the region of interest; the step S22 includes:
s221, carrying out image gray processing on the region of interest to obtain a gray image of the obstacle in the region of interest;
s222, processing the outline of the gray image of the obstacle to obtain the outline of the obstacle;
s223, performing morphological opening and closing operation processing on the outline of the obstacle to obtain a closed area of the outline of the obstacle;
s224, processing the closed area to obtain the minimum rectangular boundary of the closed area, wherein the minimum rectangular boundary is the obstacle candidate area;
s23, determining candidate region information of the obstacle candidate region according to the obstacle candidate region; the candidate region information includes height, width, and area of the obstacle candidate region;
the step S23 includes:
s231, calculating candidate region information of the obstacle candidate region according to the obstacle candidate region;
the step S2 further includes:
s24, judging whether the candidate area information meets a first preset condition, if so, executing a step S25, and if not, exiting the analysis processing;
s25, analyzing and processing the outline of the obstacle to obtain the distance information between the obstacle and the mobile equipment;
the step S25 includes:
s251, encoding the outline of the obstacle to obtain encoding information of the outline of the obstacle;
s252, based on the position information of the mobile equipment at the current moment and a plurality of moments after the current moment, which is provided by the visual positioning and mapping system, performing pixel-level dense reconstruction processing on the candidate area of the obstacle to obtain a depth-of-field image of the obstacle;
s253, carrying out coding processing on the contour of the depth image of the obstacle to obtain coding information of the contour of the depth image;
s254, calculating a similarity value between the coded information of the contour of the obstacle and the coded information of the contour of the depth image;
s255, judging whether the similarity value meets a second preset condition, if so, executing the step S256, and if not, exiting the analysis processing;
s256, outputting distance information between the obstacle and the mobile equipment; the control unit is used for controlling the mobile equipment to avoid the obstacle according to the obstacle information;
the acquisition unit, the analysis processing unit and the control unit are mutually associated and matched to realize the detection of the obstacles.
5. A mobile device comprising a processor for implementing the steps of the method according to any one of claims 1-3 when executing a computer program stored in a memory.
6. A readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-3.
CN201810884385.0A 2018-08-06 2018-08-06 Vision-based obstacle detection method and device and mobile device Active CN109344687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810884385.0A 2018-08-06 2018-08-06 Vision-based obstacle detection method and device and mobile device (granted as CN109344687B)

Publications (2)

Publication Number Publication Date
CN109344687A CN109344687A (en) 2019-02-15
CN109344687B (en) 2021-04-16

Family

ID=65296621

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012040644A1 (en) * 2010-09-24 2012-03-29 Evolution Robotics, Inc. Systems and methods for vslam optimization
CN103411536A (en) * 2013-08-23 2013-11-27 西安应用光学研究所 Auxiliary driving obstacle detection method based on binocular stereoscopic vision
CN104182756A (en) * 2014-09-05 2014-12-03 大连理工大学 Method for detecting barriers in front of vehicles on basis of monocular vision
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN106855411A (en) * 2017-01-10 2017-06-16 深圳市极思维智能科技有限公司 A kind of robot and its method that map is built with depth camera and obstacle avoidance system
CN108230392A (en) * 2018-01-23 2018-06-29 北京易智能科技有限公司 A kind of dysopia analyte detection false-alarm elimination method based on IMU

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant