CN110147748B - Mobile robot obstacle identification method based on road edge detection - Google Patents
Mobile robot obstacle identification method based on road edge detection
- Publication number
- CN110147748B (application CN201910390236.3A)
- Authority
- CN
- China
- Prior art keywords
- road
- detection
- obstacle
- mobile robot
- real
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of artificial intelligence, and in particular to a mobile robot obstacle identification method based on road edge detection, comprising the following specific steps: S1: obtain the boundary lines; S2: detect the road edge; S3: detect targets; S4: obtain coordinate values; S5: calculate the road obstacle existence region; S6: judge against the road obstacle existence region. Traditional obstacle identification can only detect that an object lies in front of the mobile robot and cannot accurately identify overlapping objects. In contrast, this method finds the obstacle existence region Ω with a road edge detection algorithm and performs target detection on objects in the image with a deep learning network framework, so obstacles among the detected targets in the real-time image are identified more accurately, with a high degree of intelligence.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to a mobile robot obstacle identification method based on road edge detection.
Background
Mobile robot technology and industry have developed rapidly in recent years, and application scenarios have broadened from the military and industrial fields to agriculture, household use, services, and security. Within mobile robot research, obstacle detection is an important direction. When a mobile robot travels on a road, obstacles in its path can block its progress or collide with it, damaging the robot or the obstacle; the robot can reach its destination only by continually avoiding obstacles. It is therefore necessary to perform obstacle detection while the mobile robot is traveling.
Existing robot obstacle detection methods based on lidar adapt well to the environment, but lidar can only detect objects, not identify them, so the behavior and type of an object cannot be judged, and multi-line lidar is expensive. Ultrasonic and millimeter-wave radar sensors can likewise measure only the distance to an obstacle.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method for identifying obstacles of a mobile robot based on road edge detection.
A mobile robot obstacle identification method based on road edge detection comprises the following specific steps:
S1: boundary line acquisition: an image acquisition device mounted on the mobile robot acquires a real-time image, which includes the real-time image upper boundary line l_up and lower boundary line l_down;
S2: edge detection: obtain the road edge line l_road by an edge detection algorithm and determine its equation y_l = F_l(x);
S3: target detection: perform target detection on the real-time image with a deep learning network framework to obtain each detection target's bounding box and its characterization function f = (x, y, w, h, c);
S4: coordinate values: from the characterization function f, obtain the coordinates of the lower-left and lower-right corners of each detection target bounding box, (x_L, y_L) and (x_R, y_R) respectively;
S5: calculation of the road obstacle existence region: compute the region Ω enclosed by the road edge line l_road, the upper boundary line l_up, and the lower boundary line l_down;
S6: judgment against the road obstacle existence region: judge whether the corner coordinates (x_L, y_L) and (x_R, y_R) of each detection target bounding box lie in Ω, completing obstacle identification.
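The six steps above can be sketched as a single pipeline. The function names, the tuple layout, and the interval-based model of Ω below are illustrative assumptions for the sketch, not part of the patented method:

```python
from typing import List, Tuple

# A detection's characterization function f = (x, y, w, h, c):
# top-left corner (x, y), width w, height h, recognition rate c (units: px).
Detection = Tuple[float, float, float, float, float]

def lower_corners(f: Detection) -> Tuple[Tuple[float, float], Tuple[float, float]]:
    """S4: lower-left and lower-right corners of a bounding box.
    The image origin is the top-left pixel, so "lower" means larger y."""
    x, y, w, h, _c = f
    return (x, y + h), (x + w, y + h)

def in_region(p, intervals, b):
    """S5/S6 sketch: membership in Omega, modeled here as a union of
    x-intervals [q1, q2), [q3, q4), ... with 0 <= y <= b (an assumption;
    the patent derives Omega from the detected road edge line)."""
    x, y = p
    return 0 <= y <= b and any(lo <= x < hi for lo, hi in intervals)

def identify_obstacles(dets: List[Detection], intervals, b) -> List[bool]:
    """S6: a detection is an obstacle iff either lower corner lies in Omega."""
    out = []
    for f in dets:
        pl, pr = lower_corners(f)
        out.append(in_region(pl, intervals, b) or in_region(pr, intervals, b))
    return out

dets = [(100, 100, 50, 40, 0.9), (400, 300, 60, 80, 0.8)]
print(identify_obstacles(dets, intervals=[(300, 700)], b=480))  # → [False, True]
```

Only the first detection's lower corners fall outside the assumed x-interval, so it alone is judged not to be an obstacle.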
The road edge line l_road of step S2 satisfies the equation y_l = F_l(x_l), where F_l is a piecewise function:
Here y_l and x_l are in px, as are the components of the characterization function f; px denotes one pixel. In f = (x, y, w, h, c), x is the abscissa of the upper-left corner of the detection target bounding box, y is its ordinate, w is the box's lateral width, h is its vertical height, and c is its recognition rate.
The size of the real-time image in step S1 is a × b, where a and b are in px, a is the length (width) of the real-time image and b is its height; in the road edge equation, the coordinates range over 0 ≤ x ≤ a and 0 ≤ y ≤ b.
The edge detection algorithm in step S2 must detect the road edge completely, and its detection time must be shorter than the braking time of the mobile robot; likewise, the target detection time on the real-time image must be shorter than the braking time of the mobile robot.
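The braking-time constraint can be monitored at runtime. The deadline value and the wrapper below are illustrative assumptions (the patent only requires detection time to be shorter than braking time, without prescribing how to check it):

```python
import time

BRAKING_TIME_S = 0.5  # assumed braking time of the mobile robot (illustrative)

def run_with_deadline(step_fn, *args):
    """Run one detection step (edge detection or target detection) and
    report whether it met the braking-time deadline the method requires."""
    t0 = time.perf_counter()
    result = step_fn(*args)
    elapsed = time.perf_counter() - t0
    return result, elapsed, elapsed < BRAKING_TIME_S

# a trivial stand-in for an edge-detection step
result, elapsed, ok = run_with_deadline(lambda img: "edges", None)
print(ok)  # → True
```

A real system would reject or re-plan when `ok` is False rather than just report it.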
The characterization function f = (x, y, w, h, c) of each detection target bounding box in step S4 and the corners (x_L, y_L) and (x_R, y_R) of that box satisfy the following relationship:
the road obstacle existing region Ω in step 5 is a two-dimensional point set in the real-time image, and satisfies the following conditions:
Ω={(x,y)q 1 ≤x<q 2 ,q 3 ≤x<q 4 ,…,q n-1 ≤x<q n ,q n ≤a;0≤y≤b}。
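The point-set definition of Ω translates directly into a membership test. The input format (a sorted flat list of the q boundaries) is an assumption for the sketch:

```python
def make_omega(qs, a, b):
    """Build a membership test for Omega = {(x, y) | q1<=x<q2, ...,
    q_{n-1}<=x<q_n, q_n<=a; 0<=y<=b}. qs is the sorted list q1..qn of
    x-boundaries from the road-edge intersections (an assumed input
    format; the patent derives these from the road edge line)."""
    assert len(qs) % 2 == 0 and qs[-1] <= a
    intervals = list(zip(qs[0::2], qs[1::2]))  # [(q1,q2), (q3,q4), ...]
    def contains(x, y):
        return 0 <= y <= b and any(lo <= x < hi for lo, hi in intervals)
    return contains

omega = make_omega([100, 250, 400, 600], a=640, b=480)
print(omega(120, 50), omega(300, 50), omega(500, 500))  # → True False False
```

The half-open intervals [q_i, q_{i+1}) mirror the strict "< q" bounds in the set definition.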
In step S6, if (x_L, y_L) and (x_R, y_R) are both contained in the road obstacle existence region Ω, the target object selected by that detection target bounding box is judged to be an obstacle;
if (x_L, y_L) is contained in Ω but (x_R, y_R) is not, the target object selected by that detection target bounding box is likewise judged to be an obstacle;
if (x_R, y_R) is contained in Ω but (x_L, y_L) is not, the target object selected by that detection target bounding box is likewise judged to be an obstacle;
if neither (x_L, y_L) nor (x_R, y_R) is contained in Ω, the target object selected by that detection target bounding box is judged not to be an obstacle.
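The four cases of step S6 collapse to a single boolean rule, shown here as a sketch:

```python
def is_obstacle(left_in_omega: bool, right_in_omega: bool) -> bool:
    """The four cases of step S6 reduce to one disjunction: the target
    is an obstacle iff at least one of its lower corners lies in Omega."""
    return left_in_omega or right_in_omega

# Enumerate all four corner/region combinations, matching the text:
print([is_obstacle(l, r) for l in (True, False) for r in (True, False)])
# → [True, True, True, False]
```

Only the case where both corners fall outside Ω yields "not an obstacle", exactly as the four clauses state.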
The beneficial effects of the invention are as follows: traditional obstacle identification can only detect that an object lies in front of the mobile robot and cannot accurately identify overlapping objects; by finding the obstacle existence region Ω with a road edge detection algorithm and performing target detection on objects in the image with a deep learning network framework, obstacles among the detected targets in the real-time image are identified more accurately, with a high degree of intelligence.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a schematic diagram of a real-time image of the present invention;
FIG. 2 is a schematic diagram of a target detection bounding box according to the present invention;
FIG. 3 is a schematic diagram of real-time image target detection and road edge detection according to the present invention;
fig. 4 is a schematic distribution diagram of the road obstacle existing region Ω in the real-time image according to the present invention.
Detailed Description
To make the technical means, creative features, objectives, and effects of the invention easy to understand, the invention is further described below.
As shown in fig. 1 to 4, in a method for identifying obstacles of a mobile robot based on road edge detection, a camera is connected to a notebook computer mounted on the mobile robot; the camera collects a real-time image of the scene in front of the robot, and a trained deep learning neural network framework, run in a VS2015 environment, serves as the image detection software. The real-time image includes the upper boundary line l_up and lower boundary line l_down, and the coordinate origin O(0, 0) is at the top-left vertex of the real-time image. The specific steps are as follows:
S1: boundary line acquisition: the image acquisition device collects a real-time image of the scene in front of the moving mobile robot; the real-time image includes the upper boundary line l_up and lower boundary line l_down;
S2: edge detection: apply an edge detection algorithm to the real-time image of step S1 to detect the road edge and obtain the road edge line l_road, determining its equation y_l = F_l(x);
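The patent does not fix a particular edge detector for step S2. As a minimal illustration, a horizontal-gradient threshold (standing in for e.g. Canny) can expose the near-vertical intensity transition where road meets curb; everything below, including the threshold and the synthetic frame, is an assumption of the sketch:

```python
import numpy as np

def detect_vertical_edges(gray: np.ndarray, thresh: float = 50.0) -> np.ndarray:
    """A minimal stand-in for step S2's edge map: threshold the
    horizontal central-difference gradient. Returns a boolean edge
    mask of the same shape as the grayscale input."""
    gx = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:].astype(float) - gray[:, :-2].astype(float)
    return np.abs(gx) > thresh

# A synthetic frame: dark road (left) meeting a bright curb (right) at x = 32.
img = np.zeros((48, 64), dtype=np.uint8)
img[:, 32:] = 200
mask = detect_vertical_edges(img)
print(mask[:, 31:33].all())  # → True (the edge fires along the boundary columns)
```

A real implementation would follow the edge map with line or curve fitting to recover l_road and its piecewise equation F_l.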
S3: target detection: as shown in fig. 2, perform target detection on the real-time image of step S1 with the deep learning network framework, obtaining each detected target's bounding box and characterization function f = (x, y, w, h, c); in this embodiment, the values x, y, w, and h of each detection target's characterization function, reading the boxes from left to right, are:
S4: coordinate values: from the characterization function f of step S3, obtain the coordinates (x_L, y_L) and (x_R, y_R) of the lower-left and lower-right corners of each detection target bounding box; as shown in fig. 3, from left to right these are:
S5: calculation of the road obstacle existence region: compute the region Ω enclosed by the road edge line l_road of step S2, the upper boundary line l_up, and the lower boundary line l_down, as shown in fig. 4, where l_up and l_down satisfy the following equations:
The road obstacle existence region Ω in this embodiment satisfies the following equation:
S6: judgment against the road obstacle existence region: judge whether the corner coordinates (x_L, y_L) and (x_R, y_R) of each detection target bounding box of step S4 lie in the road obstacle existence region Ω, completing obstacle identification.
Judging whether the lower-left and lower-right corner coordinates (x_L, y_L) and (x_R, y_R) of each detection target bounding box are contained in the road obstacle existence region specifically comprises:
a: for detection target Tree No. 1, substitute the lower-left and lower-right corner coordinates (135, 450) and (230, 450) into the region equation of Ω in step S5; neither coordinate is in the road obstacle existence region, so Tree No. 1 is judged not to be an obstacle;
b: for detection target Tree No. 2, substitute the corner coordinates (360, 199) and (430, 199) into the region equation of Ω in step S5; neither coordinate is in the region, so Tree No. 2 is judged not to be an obstacle;
c: for detection target Person No. 3, substitute the corner coordinates (390, 480) and (470, 480) into the region equation of Ω in step S5; both coordinates are in the region, so Person No. 3 is judged to be an obstacle;
d: for detection target Bicycle No. 4, substitute the corner coordinates (512, 290) and (605, 290) into the region equation of Ω in step S5; both coordinates are in the region, so Bicycle No. 4 is judged to be an obstacle;
e: for detection target Bus No. 5, substitute the corner coordinates (590, 180) and (670, 180) into the region equation of Ω in step S5; both coordinates are in the region, so Bus No. 5 is judged to be an obstacle;
f: for detection target Person No. 6, substitute the corner coordinates (610, 435) and (685, 435) into the region equation of Ω in step S5; both coordinates are in the region, so Person No. 6 is judged to be an obstacle.
The road edge line l_road of step S2 satisfies the equation y_l = F_l(x_l), where F_l is a piecewise function:
Here y_l and x_l are in px, as are the components of the characterization function f; px denotes one pixel. In f = (x, y, w, h, c), x is the abscissa of the upper-left corner of the detection target bounding box, y is its ordinate, w is the box's lateral width, h is its vertical height, and c is its recognition rate.
There may be a plurality of road edge lines l_road, and a road edge line l_road may be either straight or curved.
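A piecewise road edge function F_l can be represented as boundaries plus per-segment functions; the concrete pieces in the patent are given as formulas not reproduced on this page, so the structure and sample pieces below are illustrative assumptions:

```python
import bisect

def make_piecewise(breaks, funcs):
    """A road edge line l_road as a piecewise function F_l: breaks are
    the x boundaries between pieces (sorted ascending), and funcs[i]
    applies on [breaks[i], breaks[i+1]). Inputs outside the range are
    clamped to the first/last piece."""
    def F_l(x):
        i = bisect.bisect_right(breaks, x) - 1
        return funcs[max(0, min(i, len(funcs) - 1))](x)
    return F_l

# One straight piece and one curved piece, as the text permits:
F = make_piecewise([0, 300],
                   [lambda x: 0.5 * x + 10,
                    lambda x: 160 + 0.001 * (x - 300) ** 2])
print(F(100), F(300))  # → 60.0 160.0
```

Multiple road edge lines would simply be multiple such functions, one per detected edge.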
The mobile robot in step S1 is provided with an image processing platform comprising a hardware part and a software part; the image acquisition device is a calibrated monocular camera, coordinate values in the image are in pixel (px) units, and the coordinate origin is at the top-left corner of the real-time image.
The size of the real-time image is a × b, where a and b are in px, a is the length (width) of the real-time image and b is its height; in the road edge equation, the coordinates range over 0 ≤ x ≤ a and 0 ≤ y ≤ b.
The edge detection algorithm must detect the road edge completely, and its detection time must be shorter than the braking time of the mobile robot; likewise, the target detection time on the real-time image must be shorter than the braking time of the mobile robot.
The characterization function f = (x, y, w, h, c) of each detection target bounding box in step S4 and the corners (x_L, y_L) and (x_R, y_R) of that box satisfy the following relationship:
the road obstacle existing region omega in the step 5 is a two-dimensional point set in the real-time image, and meets the following conditions:
Ω={(x,y)q 1 ≤x<q 2 ,q 3 ≤x<q 4 ,…,q n-1 ≤x<q n ,q n ≤a;0≤y≤b}。
the real-time image detected by the edge detection algorithm and the real-time image detected by the deep learning network framework are the same frame of image or the real-time image detected by the deep learning network framework and the real-time image detected by the edge detection algorithm.
In step S6, if (x_L, y_L) and (x_R, y_R) are both contained in the road obstacle existence region Ω, the target object selected by that detection target bounding box is judged to be an obstacle;
if (x_L, y_L) is contained in Ω but (x_R, y_R) is not, the target object selected by that detection target bounding box is likewise judged to be an obstacle;
if (x_R, y_R) is contained in Ω but (x_L, y_L) is not, the target object selected by that detection target bounding box is likewise judged to be an obstacle;
if neither (x_L, y_L) nor (x_R, y_R) is contained in Ω, the target object selected by that detection target bounding box is judged not to be an obstacle.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
Claims (10)
1. A mobile robot obstacle identification method based on road edge detection, characterized by comprising the following specific steps:
S1: boundary line acquisition: an image acquisition device mounted on the mobile robot acquires a real-time image, which includes the real-time image upper boundary line l_up and lower boundary line l_down;
S2: edge detection: obtaining the road edge line l_road by an edge detection algorithm and determining its equation y_l = F_l(x);
S3: target detection: performing target detection on the real-time image with a deep learning network framework to obtain each detection target's bounding box and its characterization function f = (x, y, w, h, c);
S4: coordinate values: obtaining, from the characterization function f, the coordinates of the lower-left and lower-right corners of each detection target bounding box, (x_L, y_L) and (x_R, y_R) respectively;
S5: calculation of the road obstacle existence region: computing the region Ω enclosed by the road edge line l_road, the upper boundary line l_up, and the lower boundary line l_down;
S6: judgment against the road obstacle existence region: judging whether the corner coordinates (x_L, y_L) and (x_R, y_R) of each detection target bounding box lie in Ω, completing obstacle identification.
2. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: the road edge line l_road of step S2 satisfies the equation y_l = F_l(x_l), where F_l is a piecewise function:
wherein y_l and x_l are in px, as are the components of the characterization function f; px denotes one pixel; x is the abscissa of the upper-left corner of the detection target bounding box, y is its ordinate, w is the box's lateral width, h is its vertical height, and c is its recognition rate.
3. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: the size of the real-time image in step S1 is a × b, where a and b are in px, a is the length (width) of the real-time image and b is its height; in the road edge equation, the coordinates range over 0 ≤ x ≤ a and 0 ≤ y ≤ b.
4. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: the edge detection algorithm in step S2 detects the road edge completely, its detection time is shorter than the braking time of the mobile robot, and the target detection time on the real-time image is shorter than the braking time of the mobile robot.
5. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: the characterization function f = (x, y, w, h, c) of each detection target bounding box in step S4 and the corners (x_L, y_L) and (x_R, y_R) of that box satisfy the following relationship:
6. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: the road obstacle existence region Ω in step S5 is a two-dimensional point set in the real-time image satisfying:
Ω = {(x, y) | q_1 ≤ x < q_2, q_3 ≤ x < q_4, …, q_{n-1} ≤ x < q_n, q_n ≤ a; 0 ≤ y ≤ b}.
7. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: in step S6, if (x_L, y_L) and (x_R, y_R) are both contained in the road obstacle existence region Ω, the target object selected by that detection target bounding box is judged to be an obstacle.
8. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: in step S6, if (x_L, y_L) is contained in the road obstacle existence region Ω but (x_R, y_R) is not, the target object selected by that detection target bounding box is likewise judged to be an obstacle.
9. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: in step S6, if (x_R, y_R) is contained in the road obstacle existence region Ω but (x_L, y_L) is not, the target object selected by that detection target bounding box is likewise judged to be an obstacle.
10. The method for identifying obstacles of a mobile robot based on road edge detection according to claim 1, characterized in that: in step S6, if neither (x_L, y_L) nor (x_R, y_R) is contained in the road obstacle existence region Ω, the target object selected by that detection target bounding box is judged not to be an obstacle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910390236.3A CN110147748B (en) | 2019-05-10 | 2019-05-10 | Mobile robot obstacle identification method based on road edge detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910390236.3A CN110147748B (en) | 2019-05-10 | 2019-05-10 | Mobile robot obstacle identification method based on road edge detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110147748A CN110147748A (en) | 2019-08-20 |
CN110147748B true CN110147748B (en) | 2022-09-30 |
Family
ID=67595111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910390236.3A Active CN110147748B (en) | 2019-05-10 | 2019-05-10 | Mobile robot obstacle identification method based on road edge detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110147748B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114291082A (en) * | 2019-10-09 | 2022-04-08 | 北京百度网讯科技有限公司 | Method and device for controlling a vehicle |
CN110705492A (en) * | 2019-10-10 | 2020-01-17 | 北京北特圣迪科技发展有限公司 | Stage mobile robot obstacle target detection method |
CN112307989B (en) * | 2020-11-03 | 2024-05-03 | 广州海格通信集团股份有限公司 | Road surface object identification method, device, computer equipment and storage medium |
CN112486172A (en) * | 2020-11-30 | 2021-03-12 | 深圳市普渡科技有限公司 | Road edge detection method and robot |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015024407A1 (en) * | 2013-08-19 | 2015-02-26 | 国家电网公司 | Binocular vision navigation system and method based on power robot |
CN109145756A (en) * | 2018-07-24 | 2019-01-04 | 湖南万为智能机器人技术有限公司 | Object detection method based on machine vision and deep learning |
- 2019-05-10: application CN201910390236.3A filed; patent CN110147748B granted (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015024407A1 (en) * | 2013-08-19 | 2015-02-26 | 国家电网公司 | Binocular vision navigation system and method based on power robot |
CN109145756A (en) * | 2018-07-24 | 2019-01-04 | 湖南万为智能机器人技术有限公司 | Object detection method based on machine vision and deep learning |
Non-Patent Citations (1)
Title |
---|
Application of laser point clouds in driverless path detection; Zhang Yongbo et al.; 《测绘通报》 (Bulletin of Surveying and Mapping); 2016-11-25 (No. 11); full text *
Also Published As
Publication number | Publication date |
---|---|
CN110147748A (en) | 2019-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110147748B (en) | Mobile robot obstacle identification method based on road edge detection | |
Guo et al. | Dense construction vehicle detection based on orientation-aware feature fusion convolutional neural network | |
CN107045629B (en) | Multi-lane line detection method | |
CA2950791C (en) | Binocular visual navigation system and method based on power robot | |
Yuan et al. | Robust lane detection for complicated road environment based on normal map | |
JP5822255B2 (en) | Object identification device and program | |
CN113156421A (en) | Obstacle detection method based on information fusion of millimeter wave radar and camera | |
CN113370977B (en) | Intelligent vehicle forward collision early warning method and system based on vision | |
CN109145756A (en) | Object detection method based on machine vision and deep learning | |
CN115049700A (en) | Target detection method and device | |
US9008364B2 (en) | Method for detecting a target in stereoscopic images by learning and statistical classification on the basis of a probability law | |
CN109001757A (en) | A kind of parking space intelligent detection method based on 2D laser radar | |
CN109828267A (en) | The Intelligent Mobile Robot detection of obstacles and distance measuring method of Case-based Reasoning segmentation and depth camera | |
CN110568861B (en) | Man-machine movement obstacle monitoring method, readable storage medium and unmanned machine | |
Premachandra et al. | Detection and tracking of moving objects at road intersections using a 360-degree camera for driver assistance and automated driving | |
Ji et al. | RGB-D SLAM using vanishing point and door plate information in corridor environment | |
KR101460313B1 (en) | Apparatus and method for robot localization using visual feature and geometric constraints | |
CN110674674A (en) | Rotary target detection method based on YOLO V3 | |
Ye et al. | Overhead ground wire detection by fusion global and local features and supervised learning method for a cable inspection robot | |
Ma et al. | Multiple lane detection algorithm based on optimised dense disparity map estimation | |
CN116503803A (en) | Obstacle detection method, obstacle detection device, electronic device and storage medium | |
CN113640826A (en) | Obstacle identification method and system based on 3D laser point cloud | |
CN109741306B (en) | Image processing method applied to dangerous chemical storehouse stacking | |
Qing et al. | A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation | |
CN109993107B (en) | Mobile robot obstacle visual detection method based on non-iterative K-means algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |