CN113200052B - Intelligent road condition identification method for unmanned driving - Google Patents
- Publication number
- CN113200052B (application CN202110491435.0A)
- Authority
- CN
- China
- Prior art keywords
- lane
- road
- road image
- image
- result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0015—Planning or execution of driving tasks specially adapted for safety
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Automation & Control Theory (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
Abstract
The invention relates to an intelligent road condition identification method for unmanned driving, belonging to the technical field of unmanned driving. The method comprises the following steps: performing color separation on collected road image information to obtain color-separated road images; intelligently identifying the color-separated road images to obtain a road condition identification result; and controlling the unmanned vehicle based on the road condition identification result. The method can accurately identify rugged roads and distinguish the muddy portions of a muddy road, yielding a more accurate road environment identification result; on that basis it selects the automatic driving mode more appropriately, thereby achieving accurate control of the unmanned vehicle under complicated road conditions and improving both the identification accuracy for poor road conditions and the accuracy of unmanned control.
Description
Technical Field
The invention belongs to the technical field of unmanned driving, and particularly relates to an intelligent road condition identification method for unmanned driving.
Background
Since the advent of unmanned vehicles, perception of the surrounding environment has attracted the attention of many scholars and research institutions as an important research topic in the field of unmanned driving. It nevertheless remains a difficult problem, and detecting obstacles on the vehicle's driving road is an important component of environment perception technology.
In recent years, ITS (intelligent transportation system) research at home and abroad has proposed many algorithms and implementations for detecting obstacles in a vehicle's driving path, including image-based detection methods, which mainly comprise the following:
(1) Obstacle detection based on prior knowledge: the image is preprocessed and compared with prior knowledge to draw a conclusion. The drawbacks are low precision and a narrow range of application.
(2) Obstacle detection based on stereoscopic vision: this includes binocular and trinocular stereo vision. Because multiple cameras are needed, the cost is high, the algorithms are complex, and the computation is difficult.
Moreover, existing unmanned driving technology has no effective road identification and detection method for bad road conditions, especially uneven and muddy roads, so its driving control performance under such conditions is poor.
Disclosure of Invention
The main aim of the invention is to provide an intelligent road condition identification method for unmanned driving. The method separates the color channels of the acquired road image information to obtain a road image in each color space, and performs lane recognition, lane level division, and lane obstacle recognition based on the road images in the different color spaces. It thereby effectively controls unmanned driving on uneven and muddy roads and improves both the identification accuracy and the driving control accuracy for poor road conditions.
In order to achieve the aim, the invention discloses an intelligent road condition identification method for unmanned driving, which comprises the following steps:
step S1: carrying out color separation on the collected road image information to obtain a road image after color separation;
step S2: intelligently identifying the road image after color separation to obtain a road condition identification result;
step S3: and controlling the unmanned vehicle to run based on the road condition recognition result.
Further, the color-separated road image includes: a red channel road image, a green channel road image, and a blue channel road image.
Further, the color separation of the acquired road image information to obtain a color-separated road image includes:
collecting road image information in the driving process of the unmanned vehicle; carrying out pyramid downsampling processing on the road image information to obtain three layers of sample image characteristics corresponding to the road image information, wherein the three layers of sample image characteristics respectively correspond to three preset channels; the three-layer sample image features are sequenced according to the preset sequence of the three channels; and converting three image channel values corresponding to the three channels of the three-layer sample image characteristics to respectively obtain a red channel road image, a green channel road image and a blue channel road image.
Further, the intelligent identification of the road image after color separation to obtain a road condition identification result includes:
performing lane recognition based on the red channel road image to obtain a lane recognition result;
performing lane level division based on the green channel road image to obtain a lane level division result;
and carrying out lane obstacle recognition based on the blue channel road image to obtain a lane obstacle recognition result.
Further, the lane recognition based on the red channel road image to obtain a lane recognition result includes:
detecting road edge points of the red channel road image, and fitting a road boundary model by using the detected road edge points; adjusting a characteristic region of a red channel road image according to the road boundary model, extracting a brightness characteristic gray scale image in the adjusted characteristic region, detecting lane line pixel points of the brightness characteristic gray scale image, constructing a lane by using the lane line pixel points, completing lane recognition and obtaining a lane recognition result.
Further, the lane-level division based on the green channel road image to obtain a lane-level division result includes:
respectively extracting histogram features, variance curve features, and rotation- and symmetry-invariant features of the green channel road image; taking the histogram features and variance curve features as level features and the rotation- and symmetry-invariant features as texture features; inputting the level features and texture features into a terrain recognition classifier to obtain a terrain classification result; and obtaining a lane level division result based on the terrain classification result.
Further, the extracting the variance curve feature of the green channel road image includes:
the variance curve features P, P′, F, and F′ of the green channel road image are obtained using the following formulas, respectively:
where P represents the statistical distribution of the terrain-sample gray-level means of the variance curve feature; h_K represents the terrain-sample gray value at gray level K; r_K represents the terrain-sample gray mean at gray level K; size represents the number of samples of the terrain type; P′ represents the statistical distribution of the average gray occupancy rate of the terrain samples of the variance curve feature; F represents the gray-variance statistical distribution of the variance curve feature; b_K represents the terrain-sample gray variance at gray level K; F′ represents the normalized gray variance of the variance curve feature; col denotes the width of the green channel road image; and row denotes its height.
Further, the lane obstacle recognition based on the blue channel road image to obtain a lane obstacle recognition result includes:
establishing an actual coordinate system fixedly connected to the vehicle body by taking the vehicle body as an origin;
constructing a grid coordinate system for the blue channel road image, and converting the acquired grid-coordinate data into the actual coordinate system under the assumption that the road surface is level;
estimating the length, height, and width of regular and irregular obstacles, comprising: identifying, by computer, the grid coordinates of the points to be measured on the obstacle; comparing them with the grid coordinates of the same pixel points in an obstacle-free image; and obtaining an estimated value of the three-dimensional coordinates of each point to be measured from the geometric relation;
and obtaining a lane obstacle recognition result based on the estimated value of the three-dimensional coordinate of the point to be measured.
Further, the controlling the unmanned vehicle to run based on the road condition recognition result includes:
and controlling the unmanned vehicle to run based on the lane recognition result, the lane hierarchy division result and the lane obstacle recognition result.
Further, the controlling the unmanned vehicle to travel based on the lane recognition result, the lane hierarchy division result, and the lane obstacle recognition result includes:
step S3.1: obtaining an operation mode value L of the current unmanned vehicle by using the following formula based on a preset control model, the lane recognition result, the lane hierarchy division result and the lane obstacle recognition result:
L = L_S + λ·L_C + exp(1 + L_D)
where L_S is the lane level division result; L_C is the lane recognition result; L_D is the lane obstacle recognition result; and λ is an adjustment coefficient;
obtaining the running mode of the current unmanned vehicle based on the running mode value and the corresponding rule of the running mode of the vehicle;
step S3.2: and controlling the unmanned vehicle to run according to a preset corresponding vehicle automatic driving mode based on the running mode of the current unmanned vehicle.
The invention has the following beneficial effects:
1. Before processing the road image, the invention first separates the collected road image information into a red channel road image, a green channel road image, and a blue channel road image. Because the pixel and image characteristics differ across channels, applying different image processing to each channel image yields a more accurate road environment recognition result, effectively identifies uneven and muddy roads, and, by controlling the unmanned vehicle based on the recognition results, improves both the identification accuracy and the control accuracy for poor road conditions.
2. The invention performs lane recognition based on the red channel road image. With the green and blue pixels filtered out, the image contour is clearer and lane recognition and construction are easier, so the lane recognition result is more accurate than in the prior art.
3. The invention performs lane level division based on the green channel road image. With the red and blue pixels filtered out, the sense of image layering is more pronounced and lane level division is easier, so the lane level division result is more accurate than in the prior art.
4. The invention performs lane obstacle recognition based on the blue channel road image. With the red and green pixels filtered out, block regions in the image are more prominent and obstacle recognition is easier, so the lane obstacle recognition result is more accurate than in the prior art.
5. By combining the level division method for the green channel road image and the variance curve characteristic formula, the accuracy of road level division is further improved.
6. By controlling the unmanned vehicle based on the lane recognition result, the lane level division result, and the lane obstacle recognition result, the method can accurately identify rugged roads and distinguish the muddy portions of a muddy road, so the automatic driving mode can be selected more appropriately and the unmanned vehicle can be controlled under complicated road conditions.
Drawings
The drawings are provided only to illustrate particular embodiments and are not to be construed as limiting the invention; like reference numerals designate like parts throughout. The drawings in the following description show only some of the embodiments of the invention; other drawings may be obtained from them by those skilled in the art.
Fig. 1 is a schematic flow chart of a method for intelligently identifying a road condition of unmanned vehicles according to an embodiment of the present invention;
FIG. 2 is a graph comparing experimental data curves based on the method of the present invention and a prior art method, provided by an embodiment of the present invention.
Reference numerals:
1: experimental curve of this embodiment; 2: experimental curve of the prior art.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. It is to be understood that this description is made only by way of example and not as a limitation on the scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; or as a direct connection, an indirect connection through an intermediate medium, or communication between the interiors of two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The following describes the technical solution of the present invention in further detail with reference to the detailed description and the accompanying drawings.
Method embodiment
A complete image is never missing its red, green, or blue channel. Even if an image appears to contain no blue, this only means that the luminance of the blue light is 0; it does not mean the blue channel is absent. "Present with zero luminance" and "absent" are two different concepts.
If the red channel of an image is turned off, the image appears cyan; if the green channel is turned off, it appears magenta; if the blue channel is turned off, it appears yellow.
Viewed individually, each channel is displayed as a grayscale image. The brightness at each point of a channel's grayscale image corresponds to the luminance of that channel's color, and thus expresses the distribution of that colored light over the whole image. Since the image has three color channels, three grayscale images can be generated from one image.
The invention discloses an intelligent road condition identification method for unmanned driving. The method separates the color channels of the acquired road image information to obtain a road image in each color space, performs lane recognition, lane level division, and lane obstacle recognition based on the road images in the different color spaces, effectively identifies uneven and muddy roads, and controls the unmanned vehicle based on the identification results, improving both the identification accuracy and the driving control accuracy for poor road conditions.
A specific embodiment of the method comprises the following steps: performing color separation on the collected road image information to obtain color-separated road images; intelligently identifying the color-separated road images to obtain a road condition identification result; and controlling the unmanned vehicle based on the road condition identification result. Specifically, the method includes steps S1 to S3.
And S1, carrying out color separation on the collected road image information to obtain a road image after color separation.
Collecting road image information in the driving process of the unmanned vehicle; and performing RGB (red, green and blue) color channel separation on the collected road image information to obtain a road image after color separation, wherein the road image comprises road images in R (red), G (green) and B (blue) color spaces. The road image in the R color space is recorded as a red channel road image, the road image in the G color space is recorded as a green channel road image, and the road image in the B color space is recorded as a blue channel road image.
Preferably, the method for performing color separation on the acquired road image information to obtain a color-separated road image includes: carrying out pyramid downsampling processing on road image information to obtain three layers of sample image characteristics corresponding to the road image information, wherein the three layers of sample image characteristics respectively correspond to three channels; the three-layer sample image features are sequenced according to the preset sequence of the three channels; and converting three image channel values corresponding to the three channels of the three-layer sample image characteristics to respectively obtain a red channel road image, a green channel road image and a blue channel road image.
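The RGB separation above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming the captured frame is already an H × W × 3 array in RGB channel order (an assumption — camera pipelines often deliver BGR); the pyramid downsampling and channel re-ordering described in the patent are omitted:

```python
import numpy as np

def separate_channels(road_image: np.ndarray):
    """Split an H x W x 3 RGB road image into its red, green, and blue
    channel images; each returned array is a single-channel (grayscale)
    image, used by the lane, level, and obstacle steps respectively."""
    red = road_image[:, :, 0]
    green = road_image[:, :, 1]
    blue = road_image[:, :, 2]
    return red, green, blue
```

Each returned channel is a view into the original array, so the split costs no pixel copies.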
And S2, intelligently identifying the road image after color separation to obtain a road condition identification result.
The method for intelligently identifying the road image after color separation to obtain the road condition identification result comprises the following steps: performing lane recognition based on the red channel road image to obtain a lane recognition result; performing lane level division based on the green channel road image to obtain a lane level division result; and carrying out lane obstacle recognition based on the blue channel road image to obtain a lane obstacle recognition result.
Because the pixel characteristics and the image characteristics of the acquired road image information under each color channel are different, different image processing is respectively carried out on each channel image, and a more accurate road environment recognition result can be obtained.
Step S2.1: and carrying out lane recognition based on the red channel road image to obtain a lane recognition result.
The red channel road image filters out the green and blue pixels; the image contour is clearer, which facilitates lane identification and construction.
The lane recognition is carried out based on the red channel road image, and the lane recognition method comprises the following steps: detecting road edge points of the red channel road image, and fitting a road boundary model by using the detected road edge points; adjusting a characteristic region of a red channel road image according to the road boundary model, extracting a brightness characteristic gray scale image in the adjusted characteristic region, detecting lane line pixel points of the brightness characteristic gray scale image, constructing a lane by using the lane line pixel points, completing lane recognition and obtaining a lane recognition result.
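The edge-point detection and boundary fitting above can be sketched as follows. The patent does not name an edge detector or a boundary model, so a first-order horizontal gradient and a straight-line fit x = a·y + b stand in for them here as illustrative assumptions:

```python
import numpy as np

def fit_road_boundary(red_channel: np.ndarray, threshold: float = 50.0):
    """Hypothetical sketch: detect road-edge points in the red-channel
    image via a horizontal intensity gradient, then fit a linear
    road-boundary model x = a*y + b to the detected points."""
    # Crude edge response: absolute horizontal intensity gradient
    grad = np.abs(np.diff(red_channel.astype(float), axis=1))
    ys, xs = np.nonzero(grad > threshold)  # candidate road-edge points
    if xs.size < 2:
        return None                        # not enough edge evidence
    # Least-squares linear fit of column position against row position
    a, b = np.polyfit(ys, xs, 1)
    return a, b
```

A real system would follow this with the feature-region adjustment and lane-line pixel detection described above; a curved boundary would need a higher-degree fit.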
Step S2.2: and carrying out lane level division based on the green channel road image to obtain a lane level division result.
The invention performs lane level division on the acquired lane image information, so that uneven roads such as muddy roads can be identified and the muddy portions of a muddy road can be well distinguished, because the muddy portions lie at a different road level from the other portions. In this way the unmanned vehicle can be controlled under complicated road conditions.
The green channel road image filters out the red and blue pixels; the sense of image layering is more pronounced, which facilitates lane level division.
The lane level division based on the green channel road image comprises: respectively extracting histogram features, variance curve features, and rotation- and symmetry-invariant features of the green channel road image; taking the histogram features and variance curve features as level features and the rotation- and symmetry-invariant features as texture features; inputting the level features and texture features into a terrain recognition classifier to obtain a terrain classification result; and obtaining a lane level division result based on the terrain classification result, completing the lane level division.
Preferably, the green channel road image is processed using the following formulas to obtain its variance curve features P, P′, F, and F′:
where P represents the statistical distribution of the terrain-sample gray-level means of the variance curve feature; h_K represents the terrain-sample gray value at gray level K; r_K represents the terrain-sample gray mean at gray level K; size represents the number of samples of the terrain type; P′ represents the statistical distribution of the average gray occupancy rate of the terrain samples of the variance curve feature; F represents the gray-variance statistical distribution of the variance curve feature; b_K represents the terrain-sample gray variance at gray level K; F′ represents the normalized gray variance of the variance curve feature; col denotes the width of the green channel road image; and row denotes its height.
Processing the green channel road image with these formulas yields a more accurate variance characteristic curve, further improving the accuracy of road level division.
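The per-gray-level statistics behind the variance curve features can be sketched as below. The patent's formulas for P, P′, F, and F′ appear only as figures in the source, so the exact normalisations here are assumptions; only the glossary (h_K, r_K, b_K, size, col, row) is taken from the text:

```python
import numpy as np

def variance_curve_features(green_channel: np.ndarray, levels: int = 16):
    """Hedged reconstruction: quantise the green channel into `levels`
    gray levels K; collect per-level pixel count h_K, mean r_K, and
    variance b_K; normalise by the image size (col * row) to form the
    four distributions. The normalisations are assumptions."""
    row, col = green_channel.shape
    size = row * col
    g = green_channel.astype(float)
    # Quantise pixel values in [0, 256) into gray levels 0..levels-1
    bins = np.clip((g * levels / 256.0).astype(int), 0, levels - 1)
    h = np.zeros(levels)  # h_K: pixel count at gray level K
    r = np.zeros(levels)  # r_K: mean gray value at gray level K
    b = np.zeros(levels)  # b_K: gray variance at gray level K
    for k in range(levels):
        vals = g[bins == k]
        h[k] = vals.size
        if vals.size:
            r[k] = vals.mean()
            b[k] = vals.var()
    p = r / max(r.sum(), 1.0)  # distribution of per-level gray means
    p_occ = h / size           # per-level occupancy-rate distribution
    f = b                      # gray-variance curve
    f_norm = b / size          # variance normalised by image size
    return p, p_occ, f, f_norm
```

The four returned curves would then join the histogram and texture features as inputs to the terrain recognition classifier.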
Step S2.3: and carrying out lane obstacle recognition based on the blue channel road image to obtain a lane obstacle recognition result.
The blue channel road image filters out the red and green pixels; block regions in the image are more prominent, which facilitates obstacle identification.
The method for recognizing lane obstacles based on the blue channel road image comprises the following steps:
establishing an actual coordinate system fixedly connected to the vehicle body by taking the vehicle body as an origin;
constructing a grid coordinate system for the blue channel road image, and converting the acquired grid-coordinate data into the actual coordinate system under the assumption that the road surface is level;
estimating the length, height, and width of regular and irregular obstacles, comprising: identifying, by computer, the grid coordinates of the points to be measured on the obstacle; comparing them with the grid coordinates of the same pixel points in an obstacle-free image; and obtaining an estimated value of the three-dimensional coordinates of each point to be measured from the geometric relation;
and obtaining a lane obstacle recognition result based on the estimated value of the three-dimensional coordinate of the point to be measured.
And S3, controlling the unmanned vehicle to run based on the road condition identification result.
The method for controlling the unmanned vehicle to run based on the road condition recognition result comprises the following steps: and controlling the unmanned vehicle to run based on the lane recognition result, the lane hierarchy division result and the lane obstacle recognition result.
Specifically, two steps of S3.1 and S3.2 are included.
Step S3.1: judging the current running mode of the unmanned vehicle based on a preset control model and the obtained lane recognition result, lane hierarchy division result and lane obstacle recognition result;
preferably, the operating mode value L is obtained using the following formula:
L = L_S + λ·L_C + exp(1 + L_D)
where L_S is the lane level division result; L_C is the lane recognition result; L_D is the lane obstacle recognition result; and λ is an adjustment coefficient chosen so that the calculation result falls within a set range; the value of λ depends on that range.
And obtaining the running mode of the current unmanned vehicle based on the running mode value and the corresponding rule of the running mode of the vehicle.
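The mode-value computation and the mode lookup can be sketched as below. The numeric encoding of the three recognition results, the λ = 0.5 default, and the thresholds are illustrative assumptions; the patent only states that λ scales L into a preset range and that a corresponding rule maps L to a driving mode:

```python
import math

def operation_mode_value(l_s: float, l_c: float, l_d: float,
                         lam: float = 0.5) -> float:
    """Compute L = L_S + lam * L_C + exp(1 + L_D) from the lane-level,
    lane-recognition, and lane-obstacle results (encodings assumed)."""
    return l_s + lam * l_c + math.exp(1 + l_d)

def select_driving_mode(l_value: float,
                        thresholds=(5.0, 10.0)) -> str:
    """Map the mode value to a driving mode via hypothetical
    thresholds standing in for the patent's 'corresponding rule'."""
    if l_value < thresholds[0]:
        return "normal"
    if l_value < thresholds[1]:
        return "cautious"
    return "rough-road"
```

Note the exp(1 + L_D) term makes the obstacle result dominate quickly, which matches the emphasis on safe behaviour around obstacles.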
Step S3.2: and controlling the unmanned vehicle to run according to a preset corresponding vehicle automatic driving mode based on the running mode of the current unmanned vehicle.
Effects of the embodiment
FIG. 2 is a graph comparing experimental data curves for the method of the present invention and a prior art method. Various road conditions to be tested were identified using the prior art and the technology of the invention respectively, and the identification results of more than 200 thousand road condition tests were counted. In the figure, the ordinate is the error occurrence rate, i.e., the ratio of the number of experiments in which the road condition was judged incorrectly to the cumulative number of experiments so far, and the abscissa is the cumulative number of experiments.
As shown in FIG. 2, as the cumulative number of experiments increases, the error occurrence rate of the conventional method (experimental curve 2) rises significantly, while that of the method of the present invention (experimental curve 1) changes much more slowly and remains far lower than the prior art; its average error occurrence rate over the 200 thousand experiments is about 5%, a clear improvement.
Finally, it should be noted that the above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. In practical applications, the above steps may be re-divided or combined as required to complete all or part of the functions described above. The names of the steps involved in the embodiments of the present invention are only for distinguishing the respective steps, and are not to be construed as unduly limiting the present invention.
Those skilled in the art will appreciate that the method steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both, and that the corresponding programs in the method steps can be located in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
Claims (7)
1. An intelligent road condition identification method for unmanned driving is characterized by comprising the following steps:
step S1: carrying out color separation on the collected road image information to obtain a road image after color separation;
step S2: intelligently identifying the road image after color separation to obtain a road condition identification result;
step S3: controlling the unmanned vehicle to run based on the road condition identification result;
the color-separated road image comprises: a red channel road image, a green channel road image and a blue channel road image;
the color separation of the collected road image information to obtain the road image after the color separation comprises the following steps:
collecting road image information in the driving process of the unmanned vehicle; carrying out pyramid downsampling processing on the road image information to obtain three layers of sample image characteristics corresponding to the road image information, wherein the three layers of sample image characteristics respectively correspond to three preset channels; the three-layer sample image features are sequenced according to the preset sequence of the three channels; converting three image channel values corresponding to the three channels of the three-layer sample image characteristics to respectively obtain a red channel road image, a green channel road image and a blue channel road image;
the road image after the color separation is intelligently identified to obtain a road condition identification result, and the method comprises the following steps:
performing lane recognition based on the red channel road image to obtain a lane recognition result;
performing lane level division based on the green channel road image to obtain a lane level division result;
and carrying out lane obstacle recognition based on the blue channel road image to obtain a lane obstacle recognition result.
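The claim describes the pyramid-downsampling channel separation only at a high level. As an illustration under assumptions (plain 2x2 block averaging standing in for one pyramid level, and a fixed R-G-B channel order), a minimal NumPy sketch might look like:

```python
import numpy as np

def pyramid_down(image):
    """One pyramid down-sampling level: 2x2 block average that halves
    each spatial dimension (a simplified stand-in for the patent's
    unspecified pyramid operator)."""
    h, w = (image.shape[0] // 2) * 2, (image.shape[1] // 2) * 2
    image = image[:h, :w].astype(np.float32)
    return (image[0::2, 0::2] + image[1::2, 0::2] +
            image[0::2, 1::2] + image[1::2, 1::2]) / 4.0

def separate_channels(road_image):
    """Split an H x W x 3 road image into the red, green and blue
    channel road images, in that preset channel order."""
    sample = pyramid_down(road_image)  # three-layer sample image features
    return sample[..., 0], sample[..., 1], sample[..., 2]
```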
2. The intelligent road condition identification method for unmanned driving as claimed in claim 1, wherein the performing lane recognition based on the red channel road image to obtain a lane recognition result comprises:
detecting road edge points of the red channel road image, and fitting a road boundary model by using the detected road edge points; adjusting a characteristic region of the red channel road image according to the road boundary model; extracting a luminance-feature gray-scale image in the adjusted characteristic region; detecting lane line pixel points of the luminance-feature gray-scale image; and constructing a lane from the lane line pixel points, thereby completing lane recognition and obtaining the lane recognition result.
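The claim leaves the boundary model and the pixel-detection criterion open. A minimal sketch, assuming a straight-line boundary model x = a·y + b fitted by least squares and a simple luminance threshold for lane-line pixels (both assumptions, not the patent's stated method):

```python
import numpy as np

def fit_road_boundary(edge_points):
    """Fit a straight-line road boundary x = a*y + b to detected road
    edge points (x, y) by least squares."""
    pts = np.asarray(edge_points, dtype=float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return a, b

def detect_lane_pixels(gray_region, threshold=200):
    """Treat bright pixels of the luminance-feature gray-scale region
    as candidate lane-line pixels; returns (x, y) coordinates."""
    ys, xs = np.nonzero(np.asarray(gray_region) >= threshold)
    return list(zip(xs.tolist(), ys.tolist()))
```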
3. The intelligent road condition identification method for unmanned driving as claimed in claim 2, wherein the performing lane hierarchy division based on the green channel road image to obtain a lane hierarchy division result comprises:
respectively extracting a histogram feature, a variance curve feature, and rotation- and symmetry-invariant features of the green channel road image; taking the histogram feature and the variance curve feature as hierarchy features and the rotation- and symmetry-invariant features as texture features; inputting these features into a terrain recognition classifier to obtain a terrain classification result; and obtaining the lane hierarchy division result based on the terrain classification result.
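The classifier itself is unspecified in the claim. As a purely illustrative sketch, the histogram feature can be a normalized gray-level histogram, and a nearest-reference lookup can stand in for the terrain recognition classifier (both the feature binning and the classifier choice are assumptions):

```python
import numpy as np

def histogram_feature(green_channel, bins=16):
    """Normalized gray-level histogram of the green-channel image
    (one of the claim's hierarchy features)."""
    hist, _ = np.histogram(green_channel, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def classify_terrain(feature, references):
    """Nearest-reference classifier: `references` maps a terrain label
    to a reference feature vector. A hypothetical stand-in for the
    patent's terrain recognition classifier."""
    return min(references,
               key=lambda label: np.linalg.norm(feature - references[label]))
```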
4. The intelligent road condition identification method for unmanned driving as claimed in claim 3, wherein the extracting the variance curve feature of the green channel road image comprises:
the variance curve characteristics P, P ', F, and F' of the green channel road image are obtained using the following formulas, respectively:
wherein P represents the statistical distribution of the terrain sample gray-level mean of the variance curve characteristic; h_K represents the terrain sample gray value at gray level K; r_K represents the terrain sample gray mean value at gray level K; size represents the number of samples of the terrain type; P' represents the statistical distribution of the average gray-level occupancy rate of the terrain sample of the variance curve characteristic; F represents the gray variance statistical distribution of the variance curve characteristic; b_K represents the gray variance of the terrain sample at gray level K; F' represents the normalized gray variance of the variance curve characteristic; col represents the width of the green channel road image; and row represents the height of the green channel road image.
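The formulas themselves do not survive in this text and are not reconstructed here. The symbol definitions, however, describe per-gray-level statistics (a gray mean r_K and a gray variance b_K at each level K), and a sketch of that kind of per-level statistic, purely illustrative and not the patent's P, P', F, F' formulas, could be:

```python
import numpy as np

def per_level_stats(green_channel, levels=16):
    """For each gray level K, the mean gray value and gray variance of
    the pixels falling into that level -- the kind of per-level
    statistic the claim's variance curve features are built from."""
    img = np.asarray(green_channel, dtype=float)
    edges = np.linspace(0, 256, levels + 1)
    means, variances = [], []
    for k in range(levels):
        vals = img[(img >= edges[k]) & (img < edges[k + 1])]
        means.append(vals.mean() if vals.size else 0.0)
        variances.append(vals.var() if vals.size else 0.0)
    return np.array(means), np.array(variances)
```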
5. The intelligent road condition identification method for unmanned driving as claimed in claim 4, wherein the performing lane obstacle recognition based on the blue channel road image to obtain a lane obstacle recognition result comprises:
establishing an actual coordinate system fixedly connected to the vehicle body by taking the vehicle body as an origin;
constructing a grid coordinate system of the blue channel road image, and converting the acquired data of the grid coordinate systems into an actual coordinate system when the road surface is level;
the length, height and width estimation is carried out on the regular obstacles and the irregular obstacles, and comprises the following steps: identifying the grid coordinates of the points to be measured on the barrier through a computer; comparing the grid coordinates of the same pixel points on the image without the obstacles, and obtaining an estimated value of the three-dimensional coordinates of the point to be measured by using a geometric relation;
and obtaining a lane obstacle recognition result based on the estimated value of the three-dimensional coordinate of the point to be measured.
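The geometric relation in claim 5 is not given explicitly. Assuming a level road surface, a body-fixed origin cell and a known metric grid cell size (all assumed calibration parameters, not stated in the patent), the grid-to-actual conversion and the pixel-shift comparison against the obstacle-free reference image might be sketched as:

```python
def grid_to_actual(grid_col, grid_row, cell_size, origin):
    """Convert grid coordinates of the blue-channel road image to the
    body-fixed actual coordinate system, assuming a level road surface.
    `cell_size` (metres per cell) and `origin` (the vehicle-body cell)
    are assumed calibration parameters."""
    origin_col, origin_row = origin
    return ((grid_col - origin_col) * cell_size,
            (grid_row - origin_row) * cell_size)

def coordinate_shift(grid_with, grid_without, cell_size):
    """Metric displacement of the same pixel point between the image
    with the obstacle and the obstacle-free reference image; the
    patent's geometric relation would turn such shifts into the 3-D
    coordinate estimate of the point to be measured."""
    return ((grid_with[0] - grid_without[0]) * cell_size,
            (grid_with[1] - grid_without[1]) * cell_size)
```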
6. The intelligent road condition identification method for unmanned driving as claimed in claim 5, wherein the controlling of the unmanned vehicle based on the road condition identification result comprises:
and controlling the unmanned vehicle to run based on the lane recognition result, the lane hierarchy division result and the lane obstacle recognition result.
7. The intelligent road condition identification method for unmanned driving as claimed in claim 6, wherein the controlling of the unmanned vehicle to run based on the lane recognition result, the lane hierarchy division result and the lane obstacle recognition result comprises:
step S3.1: obtaining an operation mode value L of the current unmanned vehicle by using the following formula based on a preset control model, the lane recognition result, the lane hierarchy division result and the lane obstacle recognition result:
L = L_S + λL_C + exp(1 + L_D)
wherein L_S is the lane hierarchy division result; L_C is the lane recognition result; L_D is the lane obstacle recognition result; and λ is an adjustment coefficient;
obtaining the running mode of the current unmanned vehicle based on the running mode value and the correspondence rule between running mode values and vehicle running modes;
step S3.2: and controlling the unmanned vehicle to run according to a preset corresponding vehicle automatic driving mode based on the running mode of the current unmanned vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110491435.0A CN113200052B (en) | 2021-05-06 | 2021-05-06 | Intelligent road condition identification method for unmanned driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113200052A CN113200052A (en) | 2021-08-03 |
CN113200052B true CN113200052B (en) | 2021-11-16 |
Family
ID=77030123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110491435.0A Active CN113200052B (en) | 2021-05-06 | 2021-05-06 | Intelligent road condition identification method for unmanned driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113200052B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114228727B (en) * | 2021-12-21 | 2023-12-19 | 湖北文理学院 | Vehicle driving safety assessment method, device, equipment and storage medium |
CN114291094B (en) * | 2021-12-28 | 2024-05-17 | 清华大学苏州汽车研究院(相城) | Road surface condition sensing response system and method based on automatic driving |
CN114261408B (en) * | 2022-01-10 | 2024-05-03 | 武汉路特斯汽车有限公司 | Automatic driving method and system capable of identifying road conditions and vehicle |
CN115489549A (en) * | 2022-09-27 | 2022-12-20 | 上汽通用五菱汽车股份有限公司 | Control method, device, equipment and storage medium for automatic driving vehicle |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106682586A (en) * | 2016-12-03 | 2017-05-17 | 北京联合大学 | Method for real-time lane line detection based on vision under complex lighting conditions |
CN111797766A (en) * | 2020-07-06 | 2020-10-20 | 三一专用汽车有限责任公司 | Identification method, identification device, computer-readable storage medium, and vehicle |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2075170B1 (en) * | 2007-12-28 | 2011-02-16 | Magneti Marelli S.p.A. | A driving assistance system for a vehicle travelling along a roadway that lacks lane demarcation lines |
JP5083658B2 (en) * | 2008-03-26 | 2012-11-28 | 本田技研工業株式会社 | Vehicle lane recognition device, vehicle, and vehicle lane recognition program |
DE102013101639A1 (en) * | 2013-02-19 | 2014-09-04 | Continental Teves Ag & Co. Ohg | Method and device for determining a road condition |
CN107578418B (en) * | 2017-09-08 | 2020-05-19 | 华中科技大学 | Indoor scene contour detection method fusing color and depth information |
CN107831762A (en) * | 2017-10-18 | 2018-03-23 | 江苏卡威汽车工业集团股份有限公司 | The path planning system and method for a kind of new-energy automobile |
US20190286921A1 (en) * | 2018-03-14 | 2019-09-19 | Uber Technologies, Inc. | Structured Prediction Crosswalk Generation |
KR102715606B1 (en) * | 2019-06-11 | 2024-10-11 | 주식회사 에이치엘클레무브 | Advanced Driver Assistance System, Vehicle having the same and method for controlling the vehicle |
US20200408533A1 (en) * | 2019-06-28 | 2020-12-31 | DeepMap Inc. | Deep learning-based detection of ground features using a high definition map |
CN110765890B (en) * | 2019-09-30 | 2022-09-02 | 河海大学常州校区 | Lane and lane mark detection method based on capsule network deep learning architecture |
CN111898540B (en) * | 2020-07-30 | 2024-07-09 | 平安科技(深圳)有限公司 | Lane line detection method, lane line detection device, computer equipment and computer readable storage medium |
CN111891129A (en) * | 2020-08-17 | 2020-11-06 | 湖南汽车工程职业学院 | Intelligent driving system of electric automobile |
CN112351154B (en) * | 2020-10-28 | 2022-11-15 | 湖南汽车工程职业学院 | Unmanned vehicle road condition identification system |
CN112634611B (en) * | 2020-12-15 | 2022-06-21 | 阿波罗智联(北京)科技有限公司 | Method, device, equipment and storage medium for identifying road conditions |
Also Published As
Publication number | Publication date |
---|---|
CN113200052A (en) | 2021-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113200052B (en) | Intelligent road condition identification method for unmanned driving | |
CN107330376B (en) | Lane line identification method and system | |
CN109460709B (en) | RTG visual barrier detection method based on RGB and D information fusion | |
CN109977812B (en) | Vehicle-mounted video target detection method based on deep learning | |
US8421859B2 (en) | Clear path detection using a hierachical approach | |
US8634593B2 (en) | Pixel-based texture-less clear path detection | |
US8452053B2 (en) | Pixel-based texture-rich clear path detection | |
US10467482B2 (en) | Method and arrangement for assessing the roadway surface being driven on by a vehicle | |
US20140314279A1 (en) | Clear path detection using an example-based approach | |
CN111738314A (en) | Deep learning method of multi-modal image visibility detection model based on shallow fusion | |
CN109670515A (en) | Method and system for detecting building change in unmanned aerial vehicle image | |
CN117094914B (en) | Smart city road monitoring system based on computer vision | |
CN108090459B (en) | Traffic sign detection and identification method suitable for vehicle-mounted vision system | |
CN103116757B (en) | A kind of three-dimensional information restores the road extracted and spills thing recognition methods | |
CN106326822A (en) | Method and device for detecting lane line | |
CN104766071A (en) | Rapid traffic light detection algorithm applied to pilotless automobile | |
CN108052904A (en) | The acquisition methods and device of lane line | |
EP3979196A1 (en) | Image processing method and apparatus for target detection | |
CN111753749A (en) | Lane line detection method based on feature matching | |
CN106407951A (en) | Monocular vision-based nighttime front vehicle detection method | |
CN111652033A (en) | Lane line detection method based on OpenCV | |
CN110733416B (en) | Lane departure early warning method based on inverse perspective transformation | |
CN108460348A (en) | Road target detection method based on threedimensional model | |
CN109800693B (en) | Night vehicle detection method based on color channel mixing characteristics | |
CN107977608B (en) | Method for extracting road area of highway video image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||