CN116129380A - Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium


Info

Publication number
CN116129380A
Authority
CN
China
Prior art keywords
lane
vehicle
feature map
pixel
obstacle
Legal status
Pending
Application number
CN202310001900.7A
Other languages
Chinese (zh)
Inventor
张忠旭
林仲涛
严旭
杨东方
邱利宏
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
2023-01-03
Filing date
2023-01-03
Publication date
2023-05-16
Application filed by Chongqing Changan Automobile Co Ltd
Priority to CN202310001900.7A
Publication of CN116129380A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention relates to the technical field of vehicle automatic driving, and provides a method and a system for determining the driving lane of an automatic driving vehicle, as well as a vehicle and a storage medium. The system comprises a whole vehicle controller and a front-view camera, and the method comprises the following steps: acquiring road condition images through the front-view camera; inputting the road image obtained by visual perception into a PSPNet semantic segmentation neural network, outputting the semantics of each pixel in the image, and determining whether each pixel belongs to the left lane, the right lane, the current lane, or no lane; while performing semantic segmentation, detecting the pixel coordinates of obstacles in the road image with a target detection neural network; and determining the lane to which each obstacle belongs from the obstacle's pixel coordinates and the pixel-level result of the lane semantic segmentation. By acquiring semantic information through a semantic segmentation technique based on the PSPNet deep neural network, the invention extracts lane information more accurately and with greater robustness to interference, so that the lanes currently occupied by the ego vehicle and by obstacles can be determined.

Description

Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium
Technical Field
The invention relates to the technical field of automatic driving of vehicles, in particular to a method and a system for judging a driving lane of an automatic driving vehicle, a vehicle and a storage medium.
Background
With the vigorous development of automatic driving technology, the ability of automatic driving vehicles to perceive their environment has improved rapidly. The vehicle's perception and modeling of the environment are the inputs and decision basis of the downstream prediction, decision and planning modules, and have a large influence on driving safety and comfort. For obstacles that may interact with the ego vehicle (vehicles, pedestrians, riders), determining the lane in which each obstacle is currently driving is an important component of environmental perception, on which the downstream prediction and decision modules can base their input modeling. Conversely, inaccurate perception of the lane in which an obstacle is located can lead to missed or spurious braking, which harms both driving experience and safety.
In the prior art, the patent application with publication number CN108647572A discloses a lane departure warning method based on the Hough transform. The method first segments a region of interest from the image; the region of interest is then converted to grayscale; edge detection is performed on the grayscale image with an improved unidirectional gradient operator to obtain an edge image; the image is binarized with the Otsu algorithm; a candidate straight-line set is obtained by the Hough transform and divided into left and right lane-line sets according to slope, and the two sets are screened by vanishing-point constraints and a random sample consensus (RANSAC) algorithm to obtain the optimal left and right lane-line parameters; two Kalman filters are constructed to track the optimal left and right lane lines respectively; finally, the relative lateral offset and its trend are estimated from the slopes of the left and right lane lines to identify lane departure and issue an early warning.
This technical scheme has the advantages of high computational efficiency and real-time detection. However, Hough line detection carries no semantic information and may mistake straight lines other than lane lines for lane lines, introducing detection errors. In addition, it lacks robustness to non-straight lane lines, and threshold-based binary segmentation is highly sensitive to the choice of threshold.
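For concreteness, a minimal OpenCV sketch of such a Hough-transform pipeline (the prior art discussed above, not the method of this application) might look as follows; the ROI bounds, the Sobel stand-in for the "improved unidirectional gradient operator", and the Hough parameters are illustrative assumptions, and the vanishing-point/RANSAC screening and Kalman tracking stages are omitted.

```python
import cv2
import numpy as np

def hough_lane_candidates(bgr_image):
    """Rough sketch of the prior-art pipeline: ROI crop -> grayscale ->
    gradient-based edge image -> Otsu binarization -> probabilistic Hough
    transform -> split candidate lines into left/right sets by slope sign."""
    h, w = bgr_image.shape[:2]
    roi = bgr_image[h // 2:, :]                      # assume lanes lie in the lower half
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    grad_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # stand-in for the improved
    edges = cv2.convertScaleAbs(grad_x)                   # unidirectional gradient operator
    _, binary = cv2.threshold(edges, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu binarization
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    left, right = [], []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        if x1 == x2:
            continue                                 # ignore vertical segments
        slope = (y2 - y1) / (x2 - x1)
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right                               # candidate left/right lane-line sets
```

As the critique above notes, every line this sketch returns is treated identically: nothing distinguishes a lane marking from a guardrail edge, which is exactly the lack of semantic information the present application addresses.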
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a method, a system, a vehicle and a storage medium for determining the driving lane of an automatic driving vehicle, which acquire semantic information through a semantic segmentation technique based on the PSPNet deep neural network and extract lane information from that semantic information more accurately and with greater robustness to interference, thereby determining the lanes currently occupied by the ego vehicle and by obstacles.
In order to achieve the technical purpose, the technical scheme adopted by the application is as follows:
In a first aspect, an embodiment of the present application provides a method for determining the driving lane of an automatic driving vehicle, applied to an automatic driving system, where the automatic driving system comprises a whole vehicle controller and a front-view camera communicatively connected to the whole vehicle controller. The method comprises the following steps:
S1, road image acquisition: acquiring road condition images through the front-view camera;
S2, semantic segmentation: inputting the road image obtained by visual perception of the front-view camera into a PSPNet semantic segmentation neural network, outputting the semantics of each pixel in the image, and determining whether each pixel belongs to the left lane, the right lane, the current lane, or no lane;
S3, obstacle detection: while performing the semantic segmentation, detecting the pixel coordinates of obstacles in the road image using a target detection neural network;
S4, lane determination post-processing: determining the lane to which each obstacle belongs from the obstacle's pixel coordinates in the road image and the pixel-level result of the lane semantic segmentation.
Further, in step S2, after the road image obtained by visual perception is input into the PSPNet semantic segmentation neural network, semantic segmentation is performed, comprising the following steps:
S201, using ResNet-101 as the backbone network, where the feature map extracted by ResNet-101 downsamples the road condition image acquired by the front-view camera by a factor of 8;
In this technical scheme, semantic segmentation differs from image classification or target detection, which focus on global information: because the image must be segmented at the pixel level, fine-grained features are required, while attending only to local information would cause class misjudgment.
S202, the PSPNet fuses multi-scale features using a PPM (Pyramid Pooling Module); the PPM module comprises 4 branches which divide the feature map extracted by the backbone network into 1x1, 2x2, 3x3 and 6x6 regions on the spatial scale, an average pooling operation is performed over each region to obtain features at different scales, and each branch adjusts the channel count to 1/4 of the PPM module's input channel count through a 1x1 convolution;
S203, up-sampling each feature map after its 1x1 convolution to the size of the PPM input feature map using bilinear interpolation, and then concatenating the feature maps output by the 4 branches along the channel dimension to obtain the output feature map of the PPM module;
S204, performing a convolution operation on the feature map output by the PPM module to obtain a feature map with k channels, where k is the number of semantic classes to be segmented, specifically 4 categories: the left lane of the ego vehicle, the lane in which the ego vehicle is located, the right lane of the ego vehicle, and the non-interest region;
S205, applying Softmax at each spatial position of the final k-channel feature map to obtain pixel-wise classification probabilities, and taking the cross entropy with the one-hot vector of the true pixel label to obtain the loss function of the whole network, thereby obtaining the category to which each pixel belongs.
Further, in step S205, the loss function of the whole network, obtained by taking the cross entropy with the one-hot vector of the true pixel label, is:
$$L = -\frac{1}{H W}\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{c=1}^{C} g_{i,j,c}\,\log f_{i,j,c}$$
where H and W are the height and width of the output feature map, C = 4 is the number of channels of the output feature map, f is the predicted probability, and g is the true classification.
Further, the category to which each pixel belongs, obtained in step S205, includes the left lane of the ego vehicle, the lane in which the ego vehicle is located, the right lane of the ego vehicle, and the non-interest region.
Further, when semantic segmentation is performed, the standard ResNet-101 is first modified; the modification method comprises the following steps:
S2011, introducing dilated (hole) convolution into the ResNet convolution operations, where the dilated convolution adds a dilation rate parameter to the standard convolution representing the spacing between the elements of the convolution kernel;
S2012, introducing an SE module (Squeeze-and-Excitation module) into the residual blocks of the ResNet, where the SE module squeezes the output feature map of the residual block to a size of 1x1 through a Squeeze operation, obtaining per-channel scale information through global average pooling;
S2013, performing an Excitation operation, in which a fully-connected nonlinear activation followed by a sigmoid function yields a channel weight vector whose elements are channel-scale activation values between 0 and 1, and multiplying this vector with the input feature map to obtain the output of the SE module.
In this technical scheme, introducing dilated convolution provides a larger receptive field for the output feature map, and the SE operation strengthens the channel-dependency information of the feature map; modifying the standard ResNet-101 in this way improves the feature extraction capability of the backbone network.
Further, in step S2012, denoting the feature map input to the Squeeze operation as u, the Squeeze operation can be expressed as:
$$z_c = \frac{1}{H W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i, j)$$
where H is the height of the output feature map, W is the width of the output feature map, and u is the feature map input to the Squeeze operation.
Further, in step S4, the lane in which an obstacle is located is determined as follows: contour detection is performed with OpenCV on the pixels of the left lane of the ego vehicle, the lane in which the ego vehicle is located, and the right lane of the ego vehicle obtained by semantic segmentation, and the circumscribing polygons are then obtained; the center of the obstacle detection frame is taken as the obstacle's judgment point, and the lane to which each obstacle belongs is obtained by computing the circumscribing polygon in which the judgment point falls, completing the determination.
In a second aspect, the invention also discloses an automatic driving system, comprising a whole vehicle controller and a front-view camera communicatively connected to the whole vehicle controller, where the whole vehicle controller uses the above method for determining the driving lane of an automatic driving vehicle.
In a third aspect, the invention also discloses a vehicle, comprising a vehicle body and the above automatic driving system mounted on the vehicle body.
In a fourth aspect, the invention also discloses a computer-readable storage medium in which a computer program is stored; when run on a computer, the program causes the computer to perform the above method.
The invention, adopting the above technical scheme, has the following advantages:
1. The invention acquires semantic information through a semantic segmentation technique based on the PSPNet deep neural network, so the semantic information extracted for the lane positions is more definite, and lane information can be extracted from it more accurately and with greater robustness to interference; the determination of the lanes currently occupied by the ego vehicle and by obstacles is therefore more accurate, enabling safer, real-time automatic driving.
2. In this technical scheme, introducing dilated convolution provides a larger receptive field for the output feature map, and the SE operation strengthens the channel-dependency information of the feature map; by modifying the standard ResNet-101 in this way, the invention improves the feature extraction capability of the backbone network.
Drawings
The present application may be further illustrated by the non-limiting examples given in the accompanying drawings. It is to be understood that the following drawings illustrate only certain embodiments of the present application and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other relevant drawings from them without inventive effort.
Fig. 1 is a flowchart of an embodiment of a method for determining a driving lane of an automatic driving vehicle according to the present invention.
Fig. 2 is a logic flow diagram of the method for determining the driving lane of an automatic driving vehicle according to the present invention.
Fig. 3 is a circuit control diagram of the autopilot system of the present invention.
1. whole vehicle controller; 2. front-view camera.
Detailed Description
The present application will be described in detail below with reference to the drawings and specific embodiments. It should be noted that in the drawings and the description, similar or identical parts use the same reference numerals, and implementations not shown or described are in a form known to those of ordinary skill in the art. In the description of the present application, the terms "first", "second", and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Example 1
This example is a method for determining the driving lane of an automatic driving vehicle, applied to an automatic driving system that comprises a whole vehicle controller and a front-view camera communicatively connected to the whole vehicle controller. The method comprises the following steps:
S1, road image acquisition: acquiring road condition images through the front-view camera;
S2, semantic segmentation: inputting the road image obtained by visual perception of the front-view camera into a PSPNet semantic segmentation neural network, outputting the semantics of each pixel in the image, and determining whether each pixel belongs to the left lane, the right lane, the current lane, or no lane. After the road image obtained by visual perception is input into the PSPNet semantic segmentation neural network, semantic segmentation is performed, comprising the following steps:
S201, using ResNet-101 as the backbone network; to improve the feature extraction capability of the backbone network, the standard ResNet-101 is modified, and the feature map extracted by ResNet-101 downsamples the road condition image acquired by the front-view camera by a factor of 8. The method for modifying the standard ResNet-101 in this example comprises the following steps:
S2011, introducing dilated (hole) convolution into the ResNet convolution operations, where the dilated convolution adds a dilation rate parameter to the standard convolution representing the spacing between the elements of the convolution kernel; introducing dilated convolution provides the output feature map with a larger receptive field;
S2012, to strengthen the channel-dependency information of the feature map, introducing an SE module (Squeeze-and-Excitation module) into the residual blocks of the ResNet; the SE module squeezes the output feature map of the residual block to a size of 1x1 through a Squeeze operation, obtaining per-channel scale information through global average pooling. Denoting the feature map input to the Squeeze operation as u, the Squeeze operation can be expressed as:
$$z_c = \frac{1}{H W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i, j)$$
where H is the height of the output feature map, W is the width of the output feature map, and u is the feature map input to the Squeeze operation.
S2013, performing an expicity operation, connecting a sigmoid function through full-connection nonlinear activation to obtain a channel emphasis vector with each element being a channel scale activation value between 0 and 1, and multiplying the channel emphasis vector with an input feature map to obtain the output of the SE module;
In this technical scheme, semantic segmentation differs from image classification or target detection, which focus on global information: because the image must be segmented at the pixel level, fine-grained features are required, while attending only to local information would cause class misjudgment.
S202, the PSPNet fuses multi-scale features using a PPM (Pyramid Pooling Module); the PPM module comprises 4 branches which divide the feature map extracted by the backbone network into 1x1, 2x2, 3x3 and 6x6 regions on the spatial scale, an average pooling operation is performed over each region to obtain features at different scales, and each branch adjusts the channel count to 1/4 of the PPM module's input channel count through a 1x1 convolution;
S203, up-sampling each feature map after its 1x1 convolution to the size of the PPM input feature map using bilinear interpolation, and then concatenating the feature maps output by the 4 branches along the channel dimension to obtain the output feature map of the PPM module.
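A minimal PyTorch sketch of the PPM module of steps S202 and S203 follows; the input channel count and normalization choices are illustrative assumptions, and, following the text above, only the four branch outputs are concatenated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPM(nn.Module):
    """Pyramid Pooling Module: 4 branches with adaptive average pooling over
    1x1, 2x2, 3x3 and 6x6 grids, each followed by a 1x1 convolution that
    reduces the channels to in_ch // 4."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // 4, 1, bias=False),
                          nn.BatchNorm2d(in_ch // 4), nn.ReLU(inplace=True))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        outs = [F.interpolate(branch(x), size=(h, w),
                              mode='bilinear', align_corners=False)
                for branch in self.branches]   # upsample each branch to the input size
        return torch.cat(outs, dim=1)          # concatenate along the channel dimension
```

Because each of the four branches carries 1/4 of the input channels, the concatenated output restores the original channel count, which the subsequent convolution of S204 maps to the k = 4 class channels.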
S204, performing a convolution operation on the feature map output by the PPM module to obtain a feature map with k channels, where k is the number of semantic classes to be segmented, specifically 4 categories: the left lane of the ego vehicle, the lane in which the ego vehicle is located, the right lane of the ego vehicle, and the non-interest region;
S205, applying Softmax at each spatial position of the final k-channel feature map to obtain pixel-wise classification probabilities, and taking the cross entropy with the one-hot vector of the true pixel label to obtain the loss function of the whole network, thereby obtaining the category to which each pixel belongs: the left lane of the ego vehicle, the lane in which the ego vehicle is located, the right lane of the ego vehicle, or the non-interest region. In this example, the loss function of the whole network obtained by taking the cross entropy with the one-hot vector of the true pixel label is:
$$L = -\frac{1}{H W}\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{c=1}^{C} g_{i,j,c}\,\log f_{i,j,c}$$
where H and W are the height and width of the output feature map, C = 4 is the number of channels of the output feature map, f is the predicted probability, and g is the true classification.
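Under the definitions above, a sketch of the pixel-wise Softmax and cross-entropy loss for the k = 4 class channels might look like the following; the NCHW tensor layout and the integer label encoding are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def segmentation_loss(logits, labels):
    """logits: (N, C=4, H, W) feature map from the final convolution;
    labels: (N, H, W) integer class indices (the one-hot g in the formula).
    Returns L = -(1 / HW) * sum_ij sum_c g_ijc * log f_ijc, averaged over the batch."""
    log_f = F.log_softmax(logits, dim=1)   # pixel-wise log class probabilities f
    g = F.one_hot(labels, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    return -(g * log_f).sum(dim=1).mean()  # cross entropy, averaged over all pixels

# At inference time, the category of each pixel is the argmax over the C channels:
# pred = logits.argmax(dim=1)   # (N, H, W) map over {left, ego, right, non-interest}
```

This formulation is numerically equivalent to `F.cross_entropy(logits, labels)`; it is written out explicitly here to mirror the one-hot form of the formula above.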
S3, obstacle detection: while the semantic segmentation is performed, using a YOLOv4 target detection neural network to detect the pixel coordinates of the detection frames of obstacles (vehicles, pedestrians, riders) in the front-view image;
S4, lane determination post-processing: the lane to which an obstacle belongs is determined from the obstacle's pixel coordinates and the pixel-level result of the lane semantic segmentation, specifically: contour detection is performed with OpenCV on the pixels of the left lane of the ego vehicle, the lane in which the ego vehicle is located, and the right lane of the ego vehicle obtained by semantic segmentation, and the circumscribing polygons are then obtained; the center of the obstacle detection frame is taken as the obstacle's judgment point, and the lane to which each obstacle belongs is obtained by computing the circumscribing polygon in which the judgment point falls, completing the determination.
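A sketch of this post-processing with OpenCV is given below; the class-index convention, the use of a convex hull as the circumscribing polygon, and the detector output format (one (x1, y1, x2, y2) box per obstacle) are illustrative assumptions.

```python
import cv2
import numpy as np

LANE_CLASSES = {1: 'left_lane', 2: 'ego_lane', 3: 'right_lane'}   # assumed class indices

def assign_obstacles_to_lanes(seg_map, boxes):
    """seg_map: (H, W) per-pixel class map from the segmentation network.
    boxes: list of (x1, y1, x2, y2) obstacle detection frames from the detector.
    Returns a lane label (or None) for each obstacle."""
    polygons = {}
    for cls, name in LANE_CLASSES.items():
        mask = (seg_map == cls).astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            largest = max(contours, key=cv2.contourArea)   # main region of this lane
            polygons[name] = cv2.convexHull(largest)       # circumscribing polygon

    results = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0          # center of the detection frame
        lane = None
        for name, poly in polygons.items():
            # pointPolygonTest >= 0 means the judgment point is inside or on the polygon
            if cv2.pointPolygonTest(poly, (cx, cy), False) >= 0:
                lane = name
                break
        results.append(lane)
    return results
```

Reducing each lane region to a single polygon makes the per-obstacle test a constant-time point-in-polygon query, which keeps this post-processing step cheap enough for real-time use.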
Example 2
This embodiment is an automatic driving system comprising a whole vehicle controller and a front-view camera communicatively connected to the whole vehicle controller, where the whole vehicle controller uses the method for determining the driving lane of an automatic driving vehicle of Example 1. The automatic driving system acquires semantic information through a semantic segmentation technique based on the PSPNet deep neural network, so the semantic information extracted for the lane positions is more definite and lane information can be extracted from it more accurately and with greater robustness to interference; the determination of the lanes currently occupied by the ego vehicle and by obstacles is therefore more accurate, enabling safer, real-time automatic driving.
Example 3
This embodiment is a vehicle comprising a vehicle body and the above-described automatic driving system mounted on the vehicle body.
Example 4
This embodiment is a computer-readable storage medium in which a computer program is stored; when run on a computer, the program causes the computer to perform the above-described method. From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented in hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) and comprising several instructions for causing a computer device (which may be a personal computer, a brake device, or a network device, etc.) to perform the methods described in the various implementation scenarios of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, system, and method may also be implemented in other ways. The apparatus, system, and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s). It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions. In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, each module may exist alone, or two or more modules may be integrated into a single part.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit its scope; various modifications and variations may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in its protection scope.

Claims (10)

1. A method for determining the driving lane of an automatic driving vehicle, characterized in that: the method is applied to an automatic driving system, the automatic driving system comprises a whole vehicle controller and a front-view camera, the front-view camera is communicatively connected to the whole vehicle controller, and the method comprises the following steps:
S1, road image acquisition: acquiring road condition images through the front-view camera;
S2, semantic segmentation: inputting the road image obtained by visual perception of the front-view camera into a PSPNet semantic segmentation neural network, outputting the semantics of each pixel in the image, and determining whether each pixel belongs to the left lane, the right lane, the current lane, or no lane;
S3, obstacle detection: while performing the semantic segmentation, detecting the pixel coordinates of obstacles in the road image using a target detection neural network;
S4, lane determination post-processing: determining the lane to which each obstacle belongs from the obstacle's pixel coordinates in the road image and the pixel-level result of the lane semantic segmentation.
2. The method for determining the driving lane of an automatic driving vehicle according to claim 1, characterized in that: in step S2, after the road image obtained by visual perception is input into the PSPNet semantic segmentation neural network, the semantic segmentation is performed, comprising the following steps:
S201, using ResNet-101 as the backbone network, where the feature map extracted by ResNet-101 downsamples the road condition image acquired by the front-view camera by a factor of 8;
S202, the PSPNet fuses multi-scale features using a PPM module; the PPM module comprises 4 branches which divide the feature map extracted by the backbone network into 1x1, 2x2, 3x3 and 6x6 regions on the spatial scale, an average pooling operation is performed over each region to obtain features at different scales, and each branch adjusts the channel count to 1/4 of the PPM module's input channel count through a 1x1 convolution;
S203, up-sampling each feature map after its 1x1 convolution to the size of the PPM input feature map using bilinear interpolation, and then concatenating the feature maps output by the 4 branches along the channel dimension to obtain the output feature map of the PPM module;
S204, performing a convolution operation on the feature map output by the PPM module to obtain a feature map with k channels, where k is the number of semantic classes to be segmented, specifically 4 categories: the left lane of the ego vehicle, the lane in which the ego vehicle is located, the right lane of the ego vehicle, and the non-interest region;
S205, applying Softmax at each spatial position of the final k-channel feature map to obtain pixel-wise classification probabilities, and taking the cross entropy with the one-hot vector of the true pixel label to obtain the loss function of the whole network, thereby obtaining the category to which each pixel belongs.
3. The method for determining the driving lane of an automatic driving vehicle according to claim 2, characterized in that: the loss function of the whole network, obtained by taking the cross entropy with the one-hot vector of the true pixel label, is:
$$L = -\frac{1}{H W}\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{c=1}^{C} g_{i,j,c}\,\log f_{i,j,c}$$
where H and W are the height and width of the output feature map, C = 4 is the number of channels of the output feature map, f is the predicted probability, and g is the true classification.
4. The method for determining the driving lane of an automatic driving vehicle according to claim 3, characterized in that: the category to which each pixel belongs, obtained in step S205, includes the left lane of the ego vehicle, the lane in which the ego vehicle is located, the right lane of the ego vehicle, and the non-interest region.
5. The method for determining the driving lane of an automatic driving vehicle according to claim 4, characterized in that: when the semantic segmentation is performed, the standard ResNet-101 is first modified; the modification method comprises the following steps:
S2011, introducing dilated (hole) convolution into the ResNet convolution operations, where the dilated convolution adds a dilation rate parameter to the standard convolution representing the spacing between the elements of the convolution kernel;
S2012, introducing an SE module into the residual blocks of the ResNet, squeezing the output feature map of the residual block to a size of 1x1 through a Squeeze operation, and obtaining per-channel scale information through global average pooling;
S2013, performing an Excitation operation, in which a fully-connected nonlinear activation followed by a sigmoid function yields a channel weight vector whose elements are channel-scale activation values between 0 and 1, and multiplying this vector with the input feature map to obtain the output of the SE module.
6. The method for determining the driving lane of an automatic driving vehicle according to claim 5, characterized in that: in step S2012, denoting the feature map input to the Squeeze operation as u, the Squeeze operation can be expressed as:
$$z_c = \frac{1}{H W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i, j)$$
where H is the height of the output feature map, W is the width of the output feature map, and u is the feature map input to the Squeeze operation.
7. The method for determining the driving lane of an automatic driving vehicle according to claim 6, characterized in that: in step S4, the lane in which an obstacle is located is determined as follows: contour detection is performed with OpenCV on the pixels of the left lane of the ego vehicle, the lane in which the ego vehicle is located, and the right lane of the ego vehicle obtained by semantic segmentation, and the circumscribing polygons are then obtained; the center of the obstacle detection frame is taken as the obstacle's judgment point, and the lane to which each obstacle belongs is obtained by computing the circumscribing polygon in which the judgment point falls, completing the determination.
8. An automatic driving system, characterized in that: the automatic driving system comprises a whole vehicle controller and a front-view camera, the front-view camera is communicatively connected to the whole vehicle controller, and the whole vehicle controller uses the method for determining the driving lane of an automatic driving vehicle according to claim 1.
9. A vehicle, characterized in that: the vehicle comprises a vehicle body and the automatic driving system of claim 8, the automatic driving system being mounted on the vehicle body.
10. A computer-readable storage medium, characterized in that: the computer-readable storage medium stores a computer program which, when run on a computer, causes the computer to perform the method of claim 1.
CN202310001900.7A 2023-01-03 2023-01-03 Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium Pending CN116129380A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310001900.7A CN116129380A (en) 2023-01-03 2023-01-03 Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310001900.7A CN116129380A (en) 2023-01-03 2023-01-03 Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN116129380A 2023-05-16

Family

ID=86293939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310001900.7A Pending CN116129380A (en) 2023-01-03 2023-01-03 Method and system for judging driving lane of automatic driving vehicle, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN116129380A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934847A (en) * 2023-09-15 2023-10-24 蓝思系统集成有限公司 Discharging method, discharging device, electronic equipment and storage medium
CN116934847B (en) * 2023-09-15 2024-01-05 蓝思系统集成有限公司 Discharging method, discharging device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination