CN113269130A - Visual path searching method based on artificial neural network - Google Patents
Visual path searching method based on artificial neural network
- Publication number
- CN113269130A (application CN202110655676.4A)
- Authority
- CN
- China
- Prior art date
- Legal status: Granted (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Abstract
The invention discloses a visual path searching method based on an artificial neural network, in the technical field of road detection. The method allows an autonomous vehicle or robot to navigate a road safely and accurately while reducing the cost and complexity of solutions available in the prior art.
Description
Technical Field
The invention relates to the technical field of road detection, in particular to a visual path searching method based on an artificial neural network.
Background
Finding a navigation path for a vehicle is an important task for autonomous vehicles and Advanced Driver Assistance Systems (ADAS). A driver looks for travelable areas of the road that are not occupied by obstacles, including various unpredictable objects that appear on the road surface such as bicycles, cars, trash cans, humans, and pets, and identifies and avoids those obstacles.
Traditional obstacle identification acquires information with a combination of sensors, namely lidar, radar, and a camera. Among these, the camera is one of the most popular sensors, with radar used alongside it as an auxiliary sensor for detecting objects. Lidar is a comparatively new technology that has found widespread use in autonomous vehicles. In the conventional method, the road and the obstacles are detected separately: the camera detects the road by relying on markings painted on it. The conventional method has the following disadvantages:
a. it assumes that perfect markings exist on the road;
b. it cannot find the road under challenging conditions such as night or low light;
c. it cannot find the road when traffic is heavy and the road lines are not visible.
After the conventional method finds the road, the next step is to determine with the sensor combination whether there are obstacles on it. The sensors emit waves and detect the waves reflected from different objects: lidar, radar, and ultrasonic sensors use light waves, radio waves, and ultrasonic waves, respectively. However, such a sensor combination is expensive, using multiple sensors increases the power consumption and complexity of the system, and the sensor data must be fused together.
Therefore, to solve the above technical problems, the present application provides a visual path searching method based on an artificial neural network that uses a single camera as the sensor, performs road and obstacle detection in a single step via semantic segmentation, processes the segmented images using features the neural network has learned from different data sets, and finds a navigation path for the vehicle.
Disclosure of Invention
The invention aims to provide a visual path searching method based on an artificial neural network, which uses a single camera as a sensor to acquire information, completes the detection of roads and obstacles in one step by semantic segmentation, processes the segmented images based on the characteristics learned from different data sets by the neural network, and finds a navigation path of a vehicle.
The invention provides a visual path searching method based on an artificial neural network, which comprises the following steps:
establishing a semantic segmentation model and training it on the Berkeley DeepDrive data set;
customizing a data set and fine-tuning the trained semantic segmentation model;
inputting an image acquired by a camera into the semantic segmentation model and outputting a segmented image;
filtering the segmented image into two independent binary images, counting consecutive white pixels in the binary image row by row, and computing the travelable segments of each row;
judging the number of travelable segments in each row: if the number is 1, there is no obstacle and that segment is the travelable segment of the row; if the number is greater than 1, an obstacle is present, so the maximum travelable segment is computed and used as the travelable segment of the row;
and acquiring the midpoint coordinate of each travelable segment and judging the continuity of the midpoint coordinates and the length and width of the continuous midpoint coordinates; if the judgment conditions are met, the vehicle is instructed to travel; if not, an alternative route is searched and the travelable segments of the new route are judged iteratively, completing the search for the navigation path.
Further, the Berkeley DeepDrive data set comprises data set images and label images, each of which contains a non-drivable area, a directly travelable area, and an alternative travelable area, marked with different colors.
Further, the image is divided into rows according to its resolution, and the rows are counted.
Further, in each row, the travelable segment is calculated as follows:
I(j) = I(j+1), where I(j) = 255 → count = count + 1 (1)
Drivable_start = j, Drivable_end = j + count (2)
where j is the column index and I(j) is the value of the image pixel in column j; the start and end indices of each travelable segment are obtained by counting consecutive white pixels, so that a travelable segment is found for each row.
Further, a calculation formula for calculating the maximum travelable segment within the travel segment in which the obstacle exists is as follows:
Drivable_per_row = Maximum(Drivable_end - Drivable_start) (3).
further, the midpoint coordinate of the travelable segment is calculated as follows:
Midpoint = (Drivable_end - Drivable_start)/2 (4)
the coordinates, once calculated, are expressed as Drive(Midpoint, row number).
Further, the method for judging the continuity of the midpoint coordinates and the length and width of the continuous midpoint coordinates comprises the following steps:
if the continuous length of the midpoint coordinates exceeds a preset value, the vehicle is instructed to travel; if it is smaller than the preset value, an obstacle is present and an alternative route is searched;
and judging the length and width of the continuous midpoint coordinates: if they exceed the preset length and width values, the vehicle travels normally; otherwise, an alternative route is searched.
Further, the preset values of length and width form the length array Required_Length(L) = [150, 149, 148, ..., 52, 51], i.e. 100 values decreasing by one from 150 to 51.
Further, the semantic segmentation model is an efficient residual factorized deep convolutional network (ERFNet).
Compared with the prior art, the invention has the following remarkable advantages:
the invention provides a visual path searching method based on an artificial neural network, which uses a single camera as a sensor to acquire information, completes the detection of roads and obstacles in one step through semantic segmentation, processes the segmented images based on the characteristics learned from different data sets by the neural network, and finds a navigation path of a vehicle. The method can be used in autonomous vehicles or robots to safely and accurately navigate on roads, and simultaneously reduces the cost and complexity of available solutions in the prior art.
Drawings
FIG. 1 is a flowchart of a path searching method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a conventional method provided by an embodiment of the present invention;
FIG. 3 is an annotated image training diagram provided in an embodiment of the present invention;
FIG. 4 is a diagram of a possible driving area and an optional driving area obtained by segmenting an image according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the check for a minimum number (N) of consecutive drivable rows according to an embodiment of the present invention;
FIG. 6 is a diagram of a far road condition provided by an embodiment of the present invention;
FIG. 7 is a diagram illustrating a length requirement array according to an embodiment of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention are clearly and completely described below with reference to the drawings in the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
In the prior art there are basically two techniques for computer vision tasks, the traditional method and the artificial neural network based method, as shown in FIG. 2:
in the traditional approach, rules are defined as part of the algorithm, whereas in the artificial neural network based approach, the network can learn the rules itself if it is provided with enough different inputs and outputs. Artificial neural networks are very powerful because image processing defines strict rules that are not valid in all cases. The application also adopts an artificial neural network-based method, particularly a supervised learning method, and a large number of images and corresponding image labels are handed to the neural network in the training stage. A pattern of neural network learning images is employed.
The existing deep learning method for image processing roughly includes the following three types:
1. classification-identifying the class to which the image belongs. There is one label per image.
2. Object detection-combines classification with localization (localizing objects in the form of bounding boxes). It can detect multiple objects in an image.
3. Semantic segmentation-it provides a class value for each pixel of the image.
In critical situations, decisions require information at the pixel level, for example in autonomous vehicles and medical image analysis; in such cases semantic segmentation is used.
Referring to fig. 1, the present invention provides a visual path searching method based on an artificial neural network, comprising the following steps:
establishing a semantic segmentation model and training it on the Berkeley DeepDrive data set;
customizing a data set and fine-tuning the trained semantic segmentation model, wherein the customized data set comes from the actual environment of autonomous operation;
inputting an image acquired by a camera into the semantic segmentation model and outputting a segmented image;
filtering the segmented image into two independent binary images, counting consecutive white pixels in the binary image row by row, and computing the travelable segments of each row;
judging the number of travelable segments in each row: if the number is 1, there is no obstacle and that segment is the travelable segment of the row; if the number is greater than 1, an obstacle is present, so the maximum travelable segment is computed and used as the travelable segment of the row;
and acquiring the midpoint coordinate of each travelable segment and judging the continuity of the midpoint coordinates and the length and width of the continuous midpoint coordinates; if the judgment conditions are met, the vehicle is instructed to travel; if not, an alternative route is searched and the travelable segments of the new route are judged iteratively, completing the search for the navigation path.
The Berkeley DeepDrive data set comprises data set images and label images, each of which contains a non-drivable area, a directly drivable area, and an alternative drivable area, marked with different colors. The directly drivable map is used and processed to find the navigation path of the vehicle.
The semantic segmentation model is ERFNet (Efficient Residual Factorized ConvNet), a deep convolutional network.
The binary images in this application have a single channel, whereas the segmented image has three channels. Each binary pixel is either black (value 0) or white (value 255), which greatly reduces the computational load of further processing.
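The filtering step above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the function name and the exact RGB label colors (red for directly drivable, blue for alternative, following the color description given for the label images) are assumptions:

```python
import numpy as np

def split_drivable_maps(segmented):
    """Filter a 3-channel segmented image (H, W, 3) into two independent
    one-channel binary images: white (255) where the pixel matches the
    class color, black (0) elsewhere."""
    red = np.array([255, 0, 0])    # assumed label color: directly drivable
    blue = np.array([0, 0, 255])   # assumed label color: alternative drivable
    direct = np.where((segmented == red).all(axis=-1), 255, 0).astype(np.uint8)
    alternate = np.where((segmented == blue).all(axis=-1), 255, 0).astype(np.uint8)
    return direct, alternate
```

Keeping the two maps as single-channel uint8 arrays is what makes the later row-wise pixel counting cheap.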
Example 1
This application trains the model on the Berkeley DeepDrive data set (BDD100K). The data set consists of 80,000 publicly available images and their corresponding labels for detecting travelable areas under different road and environmental conditions.
As shown in fig. 3, the dataset image and the label image have the following three categories:
a) non-drivable zone
b) Direct travelable area
c) Spare drivable area
Red pixels correspond to the directly travelable road, blue pixels to the alternative travelable road, and black pixels to the non-travelable area.
Example 2
The camera resolution in this application is 480x640, where 480 is the height (number of rows) and 640 is the width (number of columns). Each row of the drivable map is processed: the image is divided into rows according to its resolution, and the consecutive white pixels of every row are found and counted; the number of rows in this embodiment is 480.
In each row, the travelable segment is calculated as follows:
I(j) = I(j+1), where I(j) = 255 → count = count + 1 (1)
Drivable_start = j, Drivable_end = j + count (2)
where j is the column index and I(j) is the value of the image pixel in column j; the start and end indices of each travelable segment are obtained by counting consecutive white pixels, so that a travelable segment is found for each row.
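As an illustration of equations (1) and (2), the row-wise run counting can be sketched in Python. This is a reconstruction under the description's conventions (white = 255), not the patent's own code, and the function name is an assumption:

```python
def travelable_segments(row):
    """Return (Drivable_start, Drivable_end) column indices of each run of
    consecutive white (255) pixels in one image row, per equations (1)-(2)."""
    segments = []
    j = 0
    n = len(row)
    while j < n:
        if row[j] == 255:
            count = 0
            # equation (1): while I(j) == I(j+1) == 255, count = count + 1
            while j + count + 1 < n and row[j + count + 1] == 255:
                count += 1
            # equation (2): Drivable_start = j, Drivable_end = j + count
            segments.append((j, j + count))
            j = j + count + 1
        else:
            j += 1
    return segments
```

For example, a row `[0, 255, 255, 255, 0, 255]` yields the segments `[(1, 3), (5, 5)]`.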
The calculation formula for calculating the maximum travelable segment within the travelable segment in which the obstacle exists is as follows:
Drivable_per_row = Maximum(Drivable_end - Drivable_start) (3).
the maximum drivable segment is used because, if there is an obstacle in the middle, it ensures the robot or vehicle steers within the largest available space. After this step there is exactly one drivable segment per row of the image. The midpoint coordinates of the travelable segment are calculated as follows:
Midpoint = (Drivable_end - Drivable_start)/2 (4)
The coordinates are denoted Drive(Midpoint, row number): each row yields one travelable point, with the midpoint of the row's segment as its x-coordinate and the row number as its y-coordinate.
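Equations (3) and (4) can be sketched together. This sketch is an assumption, not the patent's code: it reads equation (4)'s Midpoint as the geometric center column of the widest segment, computed as (start + end) // 2, and the function name is invented for illustration:

```python
def drive_point(segments, row_number):
    """Select the widest travelable segment of a row (equation (3)) and
    return its Drive(Midpoint, row number) coordinate.  The midpoint is
    taken as the segment's center column, (start + end) // 2."""
    if not segments:
        return None  # no drivable pixels in this row
    # equation (3): keep the segment maximizing Drivable_end - Drivable_start
    start, end = max(segments, key=lambda s: s[1] - s[0])
    return ((start + end) // 2, row_number)
```

With segments `[(10, 20), (100, 180)]` in row 5, the wider segment wins and the Drive coordinate is `(140, 5)`.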
Example 3
The method for judging the continuity of the center point coordinate and the length and the width of the continuous center point coordinate comprises the following steps:
if the continuous length of the midpoint coordinates exceeds a preset value, the vehicle is instructed to travel; if it is smaller than the preset value, an obstacle is present and an alternative route is searched. The preset value is the minimum number of consecutive travelable rows (N): the surface must contain at least N consecutive travelable rows for the vehicle to proceed, otherwise an obstacle lies near the vehicle. As shown in FIG. 5, N is 100, indicated by the horizontal straight line. The left side of FIG. 5 is non-drivable because the number of consecutive drivable rows is 80, which is less than the required number; the right side is drivable because the number of drivable rows is 120;
and judging the length and width of the continuous midpoint coordinates: if they exceed the preset length and width values the vehicle travels normally, otherwise an alternative route is searched. Each row has one drivable segment, the one with the largest number of drivable pixels, and the length of this segment must also be considered: it should be at least greater than the width of the vehicle so that the vehicle can travel safely. As seen by the camera, the road does not have a constant width; it appears smaller and smaller with distance, as shown in FIG. 6, which displays an image and its corresponding label. Since the width of the road converges as the distance from the camera increases, the required length (L) cannot be a single constant value; it should converge with distance as well. The required length is therefore an array of progressively decreasing values. Required_Length(L) is a length array whose size equals the required number of travelable rows (100 in our example) and has the following values.
The preset values for length and width form the length array Required_Length(L) = [150, 149, 148, ..., 52, 51], i.e. 100 values decreasing by one from 150 to 51, as shown in the schematic diagram of FIG. 7.
Each row of the image is compared with the required length (L). The road is considered drivable only if the length of each drivable segment is greater than or equal to its respective required length. If the above judgment conditions are satisfied, the robot or vehicle is instructed to travel along the midpoint coordinates of the drivable area. If either condition is not satisfied, a stop instruction is issued to the vehicle.
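The two drivability conditions, at least N = 100 consecutive drivable rows and each row at least as long as its Required_Length entry, might be checked as in the minimal sketch below. The function name is an assumption, and Required_Length is reconstructed as the 100 values decreasing from 150 to 51:

```python
# Assumed reconstruction of the Required_Length array:
# 100 values decreasing by one from 150 down to 51.
REQUIRED_LENGTH = list(range(150, 50, -1))

def is_drivable(segment_lengths, required_length=REQUIRED_LENGTH):
    """Check the two conditions: at least N = len(required_length)
    consecutive drivable rows, and each row's widest segment at least
    as long as the corresponding Required_Length entry."""
    n = len(required_length)
    if len(segment_lengths) < n:
        return False  # fewer than N consecutive drivable rows
    return all(seg >= req for seg, req in zip(segment_lengths, required_length))
```

A frame with 80 drivable rows fails the first condition (as on the left of FIG. 5), while 100 or more sufficiently wide rows pass both.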
If the directly drivable map is not drivable, the alternative drivable map is checked against conditions 1 and 2; if they are satisfied, the robot is instructed to change lanes. The alternative drivable road then becomes the directly drivable road, and the process repeats.
For coordinate-based driving, the camera is placed at the center of the vehicle. The image is 640 pixels wide, so its center lies at pixel column 320, and column 320 can therefore be assumed to be the center of the vehicle. Ideally, the vehicle center should coincide with the x-coordinates of the Drive points.
An offset is calculated between the vehicle center and the x-coordinates of the Drive points. To do this, the average x value of the first 25 rows of Drive coordinates is computed, and the offset is calculated:
Drive(x-axis, y-axis) = Drive(Midpoint, row number) (5)
Drive_avg = (Midpoint_1 + Midpoint_2 + ... + Midpoint_N)/N (6)
where N is 25 in our case (7)
Offset = Drive_avg - Center of Vehicle (320) (8)
a positive offset instructs the vehicle to move to the right, and a negative offset to the left.
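Equations (5) through (8), averaging the first 25 Drive midpoints and subtracting the vehicle-center column, can be sketched as follows. The function name and default arguments are assumptions for illustration:

```python
def steering_offset(drive_points, vehicle_center=320, n=25):
    """Average the x (Midpoint) values of the first n Drive coordinates
    (equation (6)) and subtract the vehicle-center column (equation (8)).
    A positive result means steer right, a negative result steer left."""
    xs = [x for x, _row in drive_points[:n]]
    drive_avg = sum(xs) / len(xs)
    return drive_avg - vehicle_center
```

For instance, if the first 25 Drive points all have x = 330, the offset is +10, a slight correction to the right.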
The above disclosure is only for a few specific embodiments of the present invention, however, the present invention is not limited to the above embodiments, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.
Claims (9)
1. A visual path searching method based on an artificial neural network is characterized by comprising the following steps:
establishing a semantic segmentation model and training it on the Berkeley DeepDrive data set;
customizing a data set and fine-tuning the trained semantic segmentation model;
inputting an image acquired by a camera into the semantic segmentation model and outputting a segmented image;
filtering the segmented image into two independent binary images, counting consecutive white pixels in the binary image row by row, and computing the travelable segments of each row;
judging the number of travelable segments in each row: if the number is 1, there is no obstacle and that segment is the travelable segment of the row; if the number is greater than 1, an obstacle is present, so the maximum travelable segment is computed and used as the travelable segment of the row;
and acquiring the midpoint coordinate of each travelable segment and judging the continuity of the midpoint coordinates and the length and width of the continuous midpoint coordinates; if the judgment conditions are met, the vehicle is instructed to travel; if not, an alternative route is searched and the travelable segments of the new route are judged iteratively, completing the search for the navigation path.
2. The artificial neural network based visual path searching method of claim 1, wherein the Berkeley DeepDrive data set comprises data set images and label images, each of which contains a non-drivable area, a directly drivable area, and an alternative drivable area, identified by different colors.
3. The artificial neural network based visual path searching method of claim 1, wherein the image is divided into rows according to its resolution and the rows are counted.
4. A visual path searching method based on artificial neural network as claimed in claim 1 or 3, wherein in each row, the travelable segment is calculated as follows:
I(j) = I(j+1), where I(j) = 255 → count = count + 1 (1)
Drivable_start = j, Drivable_end = j + count (2)
where j is the column index and I(j) is the value of the image pixel in column j; the start and end indices of each travelable segment are obtained by counting consecutive white pixels, so that a travelable segment is found for each row.
5. The artificial neural network-based visual path searching method according to claim 4, wherein the calculation formula for calculating the maximum travelable segment within the travelable segment in which the obstacle exists is as follows:
Drivable_per_row = Maximum(Drivable_end - Drivable_start) (3).
6. the artificial neural network-based visual path searching method of claim 5, wherein the midpoint coordinates of the travelable segment are calculated as follows:
Midpoint = (Drivable_end - Drivable_start)/2 (4)
the coordinates, once calculated, are expressed as Drive(Midpoint, row number).
7. The artificial neural network based visual path searching method of claim 1, wherein the method for judging the continuity of the midpoint coordinates and the length and width of the continuous midpoint coordinates comprises:
if the continuous length of the midpoint coordinates exceeds a preset value, the vehicle is instructed to travel; if it is smaller than the preset value, an obstacle is present and an alternative route is searched;
and judging the length and width of the continuous midpoint coordinates: if they exceed the preset length and width values, the vehicle travels normally; otherwise, an alternative route is searched.
8. The method of claim 7, wherein the preset values of length and width form the length array Required_Length(L) = [150, 149, 148, ..., 52, 51], i.e. 100 values decreasing by one from 150 to 51.
9. The artificial neural network based visual path searching method of claim 1, wherein the semantic segmentation model is an efficient residual factorized deep convolutional network (ERFNet).
Priority Applications (1)
- CN202110655676.4A (granted as CN113269130B), priority and filing date 2021-06-11: Visual path searching method based on artificial neural network
Publications (2)
- CN113269130A (application), published 2021-08-17
- CN113269130B (granted), published 2024-08-06
Family
- ID=77234910
- Application CN202110655676.4A, filed 2021-06-11, granted as CN113269130B (status: Active)
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
CN108985194A (en) * | 2018-06-29 | 2018-12-11 | 华南理工大学 | A kind of intelligent vehicle based on image, semantic segmentation can travel the recognition methods in region |
CN109543600A (en) * | 2018-11-21 | 2019-03-29 | 成都信息工程大学 | A kind of realization drivable region detection method and system and application |
CN110008848A (en) * | 2019-03-13 | 2019-07-12 | 华南理工大学 | A kind of travelable area recognizing method of the road based on binocular stereo vision |
US10565476B1 (en) * | 2018-09-04 | 2020-02-18 | StradVision, Inc. | Method and computing device for generating image data set for learning to be used for detection of obstruction in autonomous driving circumstances and learning method and learning device using the same |
US20200372648A1 (en) * | 2018-05-17 | 2020-11-26 | Tencent Technology (Shenzhen) Company Limited | Image processing method and device, computer apparatus, and storage medium |
CN112171668A (en) * | 2020-09-21 | 2021-01-05 | 河南颂达信息技术有限公司 | Rail-mounted robot anti-jamming detection method and device based on artificial intelligence |
CN112639821A (en) * | 2020-05-11 | 2021-04-09 | 华为技术有限公司 | Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system |
CN112902981A (en) * | 2021-01-26 | 2021-06-04 | 中国科学技术大学 | Robot navigation method and device |
Non-Patent Citations (1)
Title |
---|
张凯航 et al.: "SegNet-based semantic segmentation of drivable regions on unstructured roads", Journal of Chongqing University (重庆大学学报), no. 03, pages 79 - 87 *
Also Published As
Publication number | Publication date |
---|---|
CN113269130B (en) | 2024-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110765922B (en) | Binocular vision obstacle detection system for AGVs | |
Possatti et al. | Traffic light recognition using deep learning and prior maps for autonomous cars | |
CN106774313B (en) | Multisensor-based navigation method for outdoor obstacle-avoiding AGVs | |
CN113370977B (en) | Intelligent vehicle forward collision early warning method and system based on vision | |
EP3879455A1 (en) | Multi-sensor data fusion method and device | |
US8750567B2 (en) | Road structure detection and tracking | |
Levinson et al. | Traffic light mapping, localization, and state detection for autonomous vehicles | |
JP4703136B2 (en) | Line drawing processing equipment | |
JP2000090243A (en) | Periphery monitoring device and method therefor | |
CN110688954A (en) | Vehicle lane change detection method based on vector operation | |
CN112654998B (en) | Lane line detection method and device | |
CN112666573B (en) | Method for detecting retaining walls and barriers behind vehicles in mine unloading areas | |
CN111098850A (en) | Automatic parking auxiliary system and automatic parking method | |
CN104915642A (en) | Method and apparatus for measurement of distance to vehicle ahead | |
CN115223039A (en) | Robot semi-autonomous control method and system for complex environment | |
Muthalagu et al. | Object and Lane Detection Technique for Autonomous Car Using Machine Learning Approach | |
Sirbu et al. | Real-time line matching based speed bump detection algorithm | |
CN113110456A (en) | AGV trolley stable driving method and system based on artificial intelligence | |
Seo et al. | Use of a monocular camera to analyze a ground vehicle’s lateral movements for reliable autonomous city driving | |
CN113269130B (en) | Visual path searching method based on artificial neural network | |
CN114217641B (en) | Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment | |
CN116386003A (en) | Three-dimensional target detection method based on knowledge distillation | |
CN115755888A (en) | AGV obstacle detection system with multi-sensor data fusion and obstacle avoidance method | |
CN113658240B (en) | Main obstacle detection method and device and automatic driving system | |
CN113298044B (en) | Obstacle detection method, system, device and storage medium based on positioning compensation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||