CN112319468A - Driverless lane keeping method for maintaining road shoulder distance

Driverless lane keeping method for maintaining road shoulder distance

Info

Publication number
CN112319468A
CN112319468A (application CN202011259803.0A)
Authority
CN
China
Prior art keywords
vehicle
road
resolution
lane
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011259803.0A
Other languages
Chinese (zh)
Other versions
CN112319468B (en)
Inventor
杨扬 (Yang Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Boonray Intelligent Technology Co Ltd
Original Assignee
Shanghai Boonray Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Boonray Intelligent Technology Co Ltd filed Critical Shanghai Boonray Intelligent Technology Co Ltd
Priority to CN202011259803.0A priority Critical patent/CN112319468B/en
Publication of CN112319468A publication Critical patent/CN112319468A/en
Application granted granted Critical
Publication of CN112319468B publication Critical patent/CN112319468B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10 Path keeping
    • B60W30/12 Lane keeping
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/06 Road conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the technical field of unmanned driving and specifically relates to a driverless lane keeping method for maintaining a road shoulder distance, comprising the following steps. Step 1: collect data information, road image information, and vehicle body information during vehicle travel; the data information includes physical parameters and road parameters of the traveling vehicle; the physical parameters include at least the speed, acceleration, and angular velocity of the vehicle; the road parameters include at least the road width and the steering angle. The method uses the data information, road image information, and vehicle body information collected while the driverless vehicle travels to control its steering angle and recognize the road, thereby achieving automatic lane-keeping control of the driverless vehicle; in addition, during road recognition, adversarial training based on Nash equilibrium ensures both the accuracy and the efficiency of road recognition.

Description

Driverless lane keeping method for maintaining road shoulder distance
Technical Field
The invention belongs to the technical field of unmanned driving, and particularly relates to an unmanned lane keeping method for maintaining a road shoulder distance.
Background
A driverless automobile is an intelligent vehicle that senses the road environment through an on-board sensing system, automatically plans a driving route, and controls itself to reach a preset destination.
Its on-board sensors perceive the surroundings of the vehicle, and the vehicle's steering and speed are controlled according to the sensed road, vehicle position, and obstacle information, so that it can travel safely and reliably on the road.
Such a system integrates technologies including automatic control, systems architecture, artificial intelligence, and visual computing. It is a product of the advanced development of computer science, pattern recognition, and intelligent control; it is an important measure of a nation's research strength and industrial level; and it has broad application prospects in national defense and the national economy.
Traditional lane keeping in unmanned driving builds a lane model from hand-crafted knowledge: during actual driving, lane markings are extracted from captured road images, the lane offset is computed against the lane model, a segmented PID (Proportional-Integral-Derivative) controller computes the steering-wheel angle compensation needed to correct the offset distance, and the vehicle's lane offset is then corrected. However, because the lane model is built from hand-crafted knowledge, the traditional method performs poorly when lane markings are unclear, when curves have large curvature, and on congested road sections.
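As a concrete illustration of the segmented PID correction described above, the following is a minimal sketch in Python; the gains, segment thresholds, and names are illustrative assumptions rather than values from this patent or the prior art it describes.

class SegmentedPID:
    """Segmented PID: a different gain set applies in each band of the
    lane-offset error, as in the segmented PID correction described above."""

    def __init__(self, segments):
        # segments: (offset_threshold_m, (kp, ki, kd)) pairs, ascending;
        # the first band whose threshold covers |error| supplies the gains.
        self.segments = segments
        self.integral = 0.0
        self.prev_error = 0.0

    def _gains(self, error):
        for threshold, gains in self.segments:
            if abs(error) <= threshold:
                return gains
        return self.segments[-1][1]

    def update(self, lane_offset_m, dt):
        kp, ki, kd = self._gains(lane_offset_m)
        self.integral += lane_offset_m * dt
        derivative = (lane_offset_m - self.prev_error) / dt
        self.prev_error = lane_offset_m
        # Steering-wheel angle compensation (degrees) correcting the offset.
        return kp * lane_offset_m + ki * self.integral + kd * derivative

pid = SegmentedPID([(0.2, (2.0, 0.1, 0.5)),   # small offsets: gentle gains
                    (1.0, (4.0, 0.2, 0.8))])  # large offsets: stronger gains
print(pid.update(lane_offset_m=0.3, dt=0.05))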
Disclosure of Invention
The main object of the invention is to provide a driverless lane keeping method for maintaining a road shoulder distance, which uses the data information, road image information, and vehicle body information collected while the driverless vehicle travels to control the vehicle's steering angle and recognize the road, thereby achieving automatic lane-keeping control of the driverless vehicle; in addition, during road recognition, adversarial training based on Nash equilibrium ensures the accuracy and efficiency of road recognition.
To achieve this object, the technical solution of the invention is realized as follows:
an unmanned lane keeping method for maintaining a road shoulder distance, the method comprising the steps of:
Step 1: collect data information, road image information, and vehicle body information during vehicle travel; the data information includes physical parameters and road parameters of the traveling vehicle; the physical parameters include at least the speed, acceleration, and angular velocity of the vehicle; the road parameters include at least the road width and the steering angle; the vehicle body information includes at least the length, width, and height of the vehicle;
Step 2: identify lane lines in real time based on the road parameters, and control the vehicle to keep traveling in the corresponding lane based on the physical parameters and the lane line recognition result.
Further, step 2 specifically includes:
Step 2.1: lane line recognition. Input training data and construct a generation network, a discrimination network, and a detection algorithm for lane line recognition. Input the road image information into the generation network to produce a high-resolution picture, and input that picture into the discrimination network for an accuracy judgment; capture the resolution distribution of the high-resolution picture from the judgment result, and train the generation network and the discrimination network adversarially on this data distribution until Nash equilibrium is reached, yielding an optimized generation network. The high-resolution pictures produced by the optimized generation network are input to the detection algorithm, which recognizes the lane lines based on the road parameters and the generated pictures.
Step 2.2: control the vehicle to keep traveling in the corresponding lane. The collected vehicle body information is passed to a preset real-vehicle model for processing to obtain the steering angle corresponding to that body information; the real-vehicle model is built by deep-neural-network learning and represents the correspondence between vehicle body information and steering angle. The vehicle is then controlled to keep traveling in the corresponding lane according to this steering angle, based on the obtained physical parameters, road parameters, and lane line recognition results.
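The adversarial training of the generation network and the discrimination network in step 2.1 can be pictured with the following minimal sketch (Python/PyTorch). The toy network architectures, optimizer settings, and picture sizes are assumptions for illustration only; training alternates the two losses until neither network improves against the other, which approximates the Nash equilibrium referred to above.

import torch
import torch.nn as nn

# Toy generator: turns a low-resolution road picture into a 2x-upscaled one.
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
# Toy discriminator: scores whether a high-resolution picture looks real.
discriminator = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(low_res, high_res):
    # Discriminator step: push real pictures toward 1, generated ones toward 0.
    fake = generator(low_res).detach()
    d_loss = (bce(discriminator(high_res), torch.ones(high_res.size(0), 1))
              + bce(discriminator(fake), torch.zeros(fake.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator step: try to make the discriminator score fakes as real.
    fake = generator(low_res)
    g_loss = bce(discriminator(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# One alternating step on random stand-in data (4 RGB pictures, 32x32 -> 64x64).
print(train_step(torch.rand(4, 3, 32, 32), torch.rand(4, 3, 64, 64)))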
Further, in step 2.1, inputting the road image information into the generation network to generate a high-resolution picture comprises the following steps: generate a convolution template for edge detection, wherein the convolution template is a 3 × 3 template [equation image: the template coefficients W_{ij}], and convolve the template with the original image:

C_{mn} = \sum_{i=-1}^{1} \sum_{j=-1}^{1} W_{ij} P_{m+i,n+j}

where C_{mn} is the generated intermediate image, P_{m+i,n+j} is the road image information, and W_{ij} is the convolution template. The intermediate image C_{mn} produced by the convolution operation is then subjected to inertial mean processing to remove noise:

K_{mn} = \frac{1}{9} \sum_{i=-1}^{1} \sum_{j=-1}^{1} C_{m+i,n+j}

where K is the final generated high-resolution picture, m is the length of the intermediate image, n is the width of the intermediate image, and i and j are ordinal numbers ranging from -1 to 1.
Further, in step 2.1, capturing the resolution distribution of the high-resolution picture from the judgment result and adversarially training the generation network and the discrimination network on the data distribution until Nash equilibrium is reached comprises the following steps. Define the resolution change rate as:

AR = \frac{A(n+1) - A(n)}{A(n) + eps}

where AR denotes the resolution change rate between the road image information and the high-resolution picture, A(n) denotes the resolution of the road image information, A(n+1) denotes the resolution of the high-resolution picture, and eps is a set minimum value. The resolution change rate of the first gradient lies in the range 0.1-0.4, that of the second gradient is close to 0, and that of the third gradient is close to 1. The expression defining Nash equilibrium is:

[equation images: equilibrium condition in terms of A_k and P_k]

where A_k is the opposition coefficient, an arithmetic sequence with common difference 0.1 and first term 0.3, and P_k is the countermeasure function, a set linear function.
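A small sketch of the resolution change rate and the gradient bands just defined; the "close to" thresholds and the length of the A_k sequence are illustrative assumptions:

def resolution_change_rate(a_n, a_n1, eps=1e-8):
    # AR = (A(n+1) - A(n)) / (A(n) + eps); eps guards against division by zero.
    return (a_n1 - a_n) / (a_n + eps)

def gradient_band(ar, near=0.05):
    # Bands from the text: first gradient 0.1-0.4, second near 0, third near 1.
    if 0.1 <= ar <= 0.4:
        return "first"
    if abs(ar) < near:
        return "second"
    if abs(ar - 1.0) < near:
        return "third"
    return "unclassified"

# Opposition coefficients A_k: arithmetic sequence, first term 0.3, step 0.1.
A = [0.3 + 0.1 * k for k in range(5)]   # 0.3, 0.4, 0.5, 0.6, 0.7

print(gradient_band(resolution_change_rate(100.0, 130.0)))   # AR = 0.3 -> first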
Further, the generation network and the discrimination network are trained as neural networks, and the training data comprises paired blurred lane-line pictures and sharp lane-line pictures, i.e., paired low-resolution and high-resolution pictures.
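A minimal sketch of such a paired training set (Python/PyTorch); here the blurred member of each pair is synthesised by downscaling the sharp picture, which is an assumption, since the patent only states that the pictures are paired:

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset

class PairedLanePictures(Dataset):
    # Paired blurred (low-resolution) / sharp (high-resolution) lane-line
    # pictures; each blurred picture is synthesised by downscaling its
    # sharp counterpart.
    def __init__(self, sharp_pictures):
        self.sharp = sharp_pictures   # list of (3, H, W) tensors

    def __len__(self):
        return len(self.sharp)

    def __getitem__(self, idx):
        hi = self.sharp[idx]
        lo = F.interpolate(hi.unsqueeze(0), scale_factor=0.5,
                           mode="bilinear", align_corners=False).squeeze(0)
        return lo, hi

data = PairedLanePictures([torch.rand(3, 64, 64) for _ in range(8)])
lo, hi = next(iter(DataLoader(data, batch_size=4)))
print(lo.shape, hi.shape)   # [4, 3, 32, 32] and [4, 3, 64, 64]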
A driverless lane keeping apparatus for maintaining a road shoulder distance, the apparatus comprising: an acquisition device configured to collect data information, road image information, and vehicle body information during vehicle travel, the data information including physical parameters and road parameters of the traveling vehicle, the physical parameters including at least the speed, acceleration, and angular velocity of the vehicle, the road parameters including at least the road width and the steering angle, and the vehicle body information including at least the length, width, and height of the vehicle; and a control device configured to recognize lane lines in real time based on the road parameters and to control the vehicle to keep traveling in the corresponding lane based on the physical parameters and the lane line recognition result.
Further, the control device includes: a lane line recognition device configured to input training data and construct a generation network, a discrimination network, and a detection algorithm for lane line recognition; to input the road image information into the generation network to produce a high-resolution picture and input that picture into the discrimination network for an accuracy judgment; to capture the resolution distribution of the high-resolution picture from the judgment result and train the generation network and the discrimination network adversarially on this data distribution until Nash equilibrium is reached, yielding an optimized generation network; and to input the high-resolution pictures produced by the optimized generation network into the detection algorithm, which recognizes the lane lines based on the road parameters and the generated pictures; and a lane control device configured to pass the collected vehicle body information to a preset real-vehicle model for processing to obtain the steering angle corresponding to that body information, the real-vehicle model being built by deep-neural-network learning to represent the correspondence between vehicle body information and steering angle, and to control the vehicle to keep traveling in the corresponding lane according to this steering angle, based on the obtained physical parameters, road parameters, and lane line recognition results.
Further, the lane line recognition device generates the high-resolution picture from the road image information as follows: generate a convolution template for edge detection, wherein the convolution template is a 3 × 3 template [equation image: the template coefficients W_{ij}], and convolve the template with the original image:

C_{mn} = \sum_{i=-1}^{1} \sum_{j=-1}^{1} W_{ij} P_{m+i,n+j}

where C_{mn} is the generated intermediate image, P_{m+i,n+j} is the road image information, and W_{ij} is the convolution template. The intermediate image C_{mn} produced by the convolution operation is then subjected to inertial mean processing to remove noise:

K_{mn} = \frac{1}{9} \sum_{i=-1}^{1} \sum_{j=-1}^{1} C_{m+i,n+j}

where K is the final generated high-resolution picture, m is the length of the intermediate image, n is the width of the intermediate image, and i and j are ordinal numbers ranging from -1 to 1.
Further, the lane line recognition device captures the resolution distribution of the high-resolution picture from the judgment result and adversarially trains the generation network and the discrimination network on the data distribution until Nash equilibrium is reached as follows. Define the resolution change rate as:

AR = \frac{A(n+1) - A(n)}{A(n) + eps}

where AR denotes the resolution change rate between the road image information and the high-resolution picture, A(n) denotes the resolution of the road image information, A(n+1) denotes the resolution of the high-resolution picture, and eps is a set minimum value. The resolution change rate of the first gradient lies in the range 0.1-0.4, that of the second gradient is close to 0, and that of the third gradient is close to 1. The expression defining Nash equilibrium is:

[equation images: equilibrium condition in terms of A_k and P_k]

where A_k is the opposition coefficient, an arithmetic sequence with common difference 0.1 and first term 0.3, and P_k is the countermeasure function, a set linear function.
Further, the generation network and the discrimination network are trained as neural networks, and the training data comprises paired blurred lane-line pictures and sharp lane-line pictures, i.e., paired low-resolution and high-resolution pictures.
The driverless lane keeping method for maintaining the road shoulder distance has the following beneficial effects. It uses the data information, road image information, and vehicle body information collected while the driverless vehicle travels to control the vehicle's steering angle and recognize the road, thereby achieving automatic lane-keeping control of the driverless vehicle; in addition, during road recognition, adversarial training based on Nash equilibrium ensures both the accuracy and the efficiency of road recognition. This is mainly realized through the following:
1. Complete collection of data during vehicle travel: data information, road image information, and vehicle body information are collected during travel; the data information includes physical parameters and road parameters of the traveling vehicle; the physical parameters include at least the speed, acceleration, and angular velocity of the vehicle; the road parameters include at least the road width and the steering angle. The driving of the driverless vehicle can be fully simulated and controlled through the physical parameters and road parameters.
2. Lane line recognition: a generation network, a discrimination network, and a detection algorithm for lane line recognition are constructed; the road image information is input into the generation network to produce a high-resolution picture, which is input into the discrimination network for an accuracy judgment; the resolution distribution of the high-resolution picture is captured from the judgment result, and the generation network and the discrimination network are trained adversarially on the data distribution until Nash equilibrium is reached, yielding an optimized generation network.
3. Generation of the high-resolution picture from the road image information is realized by means of the edge-detection convolution template; this markedly improves picture-generation efficiency, and although the resulting resolution is lower than that of conventional approaches, it fully meets the requirements of application scenarios such as roads.
Drawings
Fig. 1 is a schematic flow chart of a method of an unmanned lane keeping method for maintaining a road-shoulder distance according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an apparatus of an unmanned lane keeping apparatus for maintaining a road-shoulder distance according to an embodiment of the present invention;
Fig. 3 compares how the recognition accuracy of the driverless lane keeping method according to the embodiment of the present invention changes with the number of experiments against the prior art.
A: experimental curve of the invention; B: experimental curve of the prior art.
Detailed Description
The technical solution of the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings:
example 1
As shown in Fig. 1, the driverless lane keeping method for maintaining a road shoulder distance performs the following steps:
Step 1: collect data information, road image information, and vehicle body information during vehicle travel; the data information includes physical parameters and road parameters of the traveling vehicle; the physical parameters include at least the speed, acceleration, and angular velocity of the vehicle; the road parameters include at least the road width and the steering angle; the vehicle body information includes at least the length, width, and height of the vehicle;
Step 2: identify lane lines in real time based on the road parameters, and control the vehicle to keep traveling in the corresponding lane based on the physical parameters and the lane line recognition result.
By adopting the above technical solution, the data information, road image information, and vehicle body information collected while the driverless vehicle travels are used to control its steering angle and recognize the road, thereby achieving automatic lane-keeping control of the driverless vehicle; in addition, during road recognition, adversarial training based on Nash equilibrium ensures both the accuracy and the efficiency of road recognition. This is mainly realized through the following:
1. Complete collection of data during vehicle travel: data information, road image information, and vehicle body information are collected during travel; the data information includes physical parameters and road parameters of the traveling vehicle; the physical parameters include at least the speed, acceleration, and angular velocity of the vehicle; the road parameters include at least the road width and the steering angle. The driving of the driverless vehicle can be fully simulated and controlled through the physical parameters and road parameters.
2. Lane line recognition: a generation network, a discrimination network, and a detection algorithm for lane line recognition are constructed; the road image information is input into the generation network to produce a high-resolution picture, which is input into the discrimination network for an accuracy judgment; the resolution distribution of the high-resolution picture is captured from the judgment result, and the generation network and the discrimination network are trained adversarially on the data distribution until Nash equilibrium is reached, yielding an optimized generation network.
3. Generation of the high-resolution picture from the road image information is realized by means of the edge-detection convolution template; this markedly improves picture-generation efficiency, and although the resulting resolution is lower than that of conventional approaches, it fully meets the requirements of application scenarios such as roads.
Example 2
On the basis of embodiment 1, step 2 specifically includes:
Step 2.1: lane line recognition. Input training data and construct a generation network, a discrimination network, and a detection algorithm for lane line recognition. Input the road image information into the generation network to produce a high-resolution picture, and input that picture into the discrimination network for an accuracy judgment; capture the resolution distribution of the high-resolution picture from the judgment result, and train the generation network and the discrimination network adversarially on this data distribution until Nash equilibrium is reached, yielding an optimized generation network. The high-resolution pictures produced by the optimized generation network are input to the detection algorithm, which recognizes the lane lines based on the road parameters and the generated pictures.
Step 2.2: control the vehicle to keep traveling in the corresponding lane. The collected vehicle body information is passed to a preset real-vehicle model for processing to obtain the steering angle corresponding to that body information; the real-vehicle model is built by deep-neural-network learning and represents the correspondence between vehicle body information and steering angle. The vehicle is then controlled to keep traveling in the corresponding lane according to this steering angle, based on the obtained physical parameters, road parameters, and lane line recognition results.
Specifically, the deep neural network of the invention is composed of multiple neural-network layers and is trained with supervised learning.
In supervised learning, the difficulty with earlier multilayer neural networks is that they are prone to falling into local extrema. If the training samples sufficiently cover future samples, the learned multilayer weights predict new test samples well; many tasks, however, cannot obtain enough labeled samples, in which case simple models such as linear regression or decision trees often give better results than multilayer neural networks.
In unsupervised learning, there was previously no effective method for constructing multilayer networks. The top layer of a multilayer neural network is a high-level representation of the bottom-layer features: for example, if the bottom layer is pixels, nodes one layer up may represent horizontal lines and triangles, while a top-layer node may represent a face. A successful algorithm should let the generated top-level features represent as many bottom-level examples as possible. Training all layers simultaneously has excessive time complexity, while training one layer at a time lets the bias propagate layer by layer; this runs into the opposite of the supervised-learning problem above and under-fits severely.
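A minimal sketch of the "real vehicle model" of step 2.2: a small multilayer network trained with supervised learning to map vehicle body information to a steering angle. The layer sizes, the choice of inputs, and the single logged training pair are illustrative assumptions:

import torch
import torch.nn as nn

# A small multilayer network standing in for the "real vehicle model":
# it maps vehicle body information to a steering angle.
real_vehicle_model = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),              # predicted steering angle (degrees)
)

opt = torch.optim.SGD(real_vehicle_model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

body_info = torch.tensor([[4.8, 1.9, 1.5]])   # length, width, height (m)
target_angle = torch.tensor([[2.5]])          # logged steering angle (deg)

for _ in range(200):                          # supervised training loop
    loss = loss_fn(real_vehicle_model(body_info), target_angle)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(real_vehicle_model(body_info).item())   # approaches 2.5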
Example 3
On the basis of embodiment 2, in step 2.1, inputting the road image information into the generation network to generate a high-resolution picture comprises the following steps: generate a convolution template for edge detection, wherein the convolution template is a 3 × 3 template [equation image: the template coefficients W_{ij}], and convolve the template with the original image:

C_{mn} = \sum_{i=-1}^{1} \sum_{j=-1}^{1} W_{ij} P_{m+i,n+j}

where C_{mn} is the generated intermediate image, P_{m+i,n+j} is the road image information, and W_{ij} is the convolution template. The intermediate image C_{mn} produced by the convolution operation is then subjected to inertial mean processing to remove noise:

K_{mn} = \frac{1}{9} \sum_{i=-1}^{1} \sum_{j=-1}^{1} C_{m+i,n+j}

where K is the final generated high-resolution picture, m is the length of the intermediate image, n is the width of the intermediate image, and i and j are ordinal numbers ranging from -1 to 1.
Specifically, the LKAS (Lane Keeping Assist System) control method of the related art calculates the distance and angle between the lane and the vehicle from lane information obtained by an imaging device mounted on the vehicle, calculates the lane departure speed from the traveling direction and vehicle speed obtained from the vehicle's Controller Area Network (CAN) data, and issues a lane departure warning or performs steering control according to whether the vehicle is departing from the lane.
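A minimal sketch of that related-art computation: the lane departure speed as the lateral component of the CAN-reported vehicle speed, and a time-to-line-crossing test for the warning. The one-second threshold and the function names are assumptions:

import math

def lane_departure_speed(vehicle_speed_mps, heading_rel_lane_rad):
    # Lateral speed toward the lane boundary: the velocity component
    # perpendicular to the lane, from CAN speed and heading-to-lane angle.
    return vehicle_speed_mps * math.sin(heading_rel_lane_rad)

def departure_warning(lateral_dist_m, vehicle_speed_mps,
                      heading_rel_lane_rad, time_threshold_s=1.0):
    v_lat = lane_departure_speed(vehicle_speed_mps, heading_rel_lane_rad)
    if v_lat <= 0:          # moving parallel to or away from this boundary
        return False
    # Warn when the time to cross the lane line drops below the threshold.
    return lateral_dist_m / v_lat < time_threshold_s

print(departure_warning(0.4, 20.0, math.radians(2.0)))   # ~0.57 s -> True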
Example 4
On the basis of embodiment 3, in step 2.1, capturing the resolution distribution of the high-resolution picture from the judgment result and adversarially training the generation network and the discrimination network on the data distribution until Nash equilibrium is reached comprises the following steps. Define the resolution change rate as:

AR = \frac{A(n+1) - A(n)}{A(n) + eps}

where AR denotes the resolution change rate between the road image information and the high-resolution picture, A(n) denotes the resolution of the road image information, A(n+1) denotes the resolution of the high-resolution picture, and eps is a set minimum value. The resolution change rate of the first gradient lies in the range 0.1-0.4, that of the second gradient is close to 0, and that of the third gradient is close to 1. The expression defining Nash equilibrium is:

[equation images: equilibrium condition in terms of A_k and P_k]

where A_k is the opposition coefficient, an arithmetic sequence with common difference 0.1 and first term 0.3, and P_k is the countermeasure function, a set linear function.
Specifically, in the conventional control method the control amount is calculated only from the deviation angle, i.e., the angle between the host vehicle and the lane, so the controller reacts sharply as soon as even a slight deviation angle arises, that is, as soon as the lane departure speed exceeds a threshold. Furthermore, when the control amount is calculated from an empirically based steering-torque map, the stability of the control performance against disturbances such as crosswind and road-surface gradient cannot be ensured.
Example 5
On the basis of embodiment 4, the generation network and the discrimination network are trained as neural networks, and the training data comprises paired blurred lane-line pictures and sharp lane-line pictures, i.e., paired low-resolution and high-resolution pictures.
Example 6
Apparatus for implementing the method, the apparatus comprising: an acquisition device configured to collect data information, road image information, and vehicle body information during vehicle travel, the data information including physical parameters and road parameters of the traveling vehicle, the physical parameters including at least the speed, acceleration, and angular velocity of the vehicle, the road parameters including at least the road width and the steering angle, and the vehicle body information including at least the length, width, and height of the vehicle; and a control device configured to recognize lane lines in real time based on the road parameters and to control the vehicle to keep traveling in the corresponding lane based on the physical parameters and the lane line recognition result.
Specifically, the invention uses the data information, road image information, and vehicle body information collected while the driverless vehicle travels to control the vehicle's steering angle and recognize the road, thereby achieving automatic lane-keeping control of the driverless vehicle; in addition, during road recognition, adversarial training based on Nash equilibrium ensures the accuracy and efficiency of road recognition.
Example 7
On the basis of embodiment 6, the control device includes: a lane line recognition device configured to input training data and construct a generation network, a discrimination network, and a detection algorithm for lane line recognition; to input the road image information into the generation network to produce a high-resolution picture and input that picture into the discrimination network for an accuracy judgment; to capture the resolution distribution of the high-resolution picture from the judgment result and train the generation network and the discrimination network adversarially on this data distribution until Nash equilibrium is reached, yielding an optimized generation network; and to input the high-resolution pictures produced by the optimized generation network into the detection algorithm, which recognizes the lane lines based on the road parameters and the generated pictures; and a lane control device configured to pass the collected vehicle body information to a preset real-vehicle model for processing to obtain the steering angle corresponding to that body information, the real-vehicle model being built by deep-neural-network learning to represent the correspondence between vehicle body information and steering angle, and to control the vehicle to keep traveling in the corresponding lane according to this steering angle, based on the obtained physical parameters, road parameters, and lane line recognition results.
Specifically, the method collects data information, road image information, and vehicle body information during vehicle travel; the data information includes physical parameters and road parameters of the traveling vehicle; the physical parameters include at least the speed, acceleration, and angular velocity of the vehicle; the road parameters include at least the road width and the steering angle. The driving of the driverless vehicle can be fully simulated and controlled through the physical parameters and road parameters.
Example 8
On the basis of embodiment 7, the lane line recognition device generates the high-resolution picture from the road image information as follows: generate a convolution template for edge detection, wherein the convolution template is a 3 × 3 template [equation image: the template coefficients W_{ij}], and convolve the template with the original image:

C_{mn} = \sum_{i=-1}^{1} \sum_{j=-1}^{1} W_{ij} P_{m+i,n+j}

where C_{mn} is the generated intermediate image, P_{m+i,n+j} is the road image information, and W_{ij} is the convolution template. The intermediate image C_{mn} produced by the convolution operation is then subjected to inertial mean processing to remove noise:

K_{mn} = \frac{1}{9} \sum_{i=-1}^{1} \sum_{j=-1}^{1} C_{m+i,n+j}

where K is the final generated high-resolution picture, m is the length of the intermediate image, n is the width of the intermediate image, and i and j are ordinal numbers ranging from -1 to 1.
Specifically, for lane line recognition, a generation network, a discrimination network, and a detection algorithm are constructed; the road image information is input into the generation network to produce a high-resolution picture, which is input into the discrimination network for an accuracy judgment; the resolution distribution of the high-resolution picture is captured from the judgment result, and the generation network and the discrimination network are trained adversarially on the data distribution until Nash equilibrium is reached, yielding an optimized generation network.
Example 9
On the basis of embodiment 8, the lane line recognition device captures the resolution distribution of the high-resolution picture from the judgment result and adversarially trains the generation network and the discrimination network on the data distribution until Nash equilibrium is reached as follows. Define the resolution change rate as:

AR = \frac{A(n+1) - A(n)}{A(n) + eps}

where AR denotes the resolution change rate between the road image information and the high-resolution picture, A(n) denotes the resolution of the road image information, A(n+1) denotes the resolution of the high-resolution picture, and eps is a set minimum value. The resolution change rate of the first gradient lies in the range 0.1-0.4, that of the second gradient is close to 0, and that of the third gradient is close to 1. The expression defining Nash equilibrium is:

[equation images: equilibrium condition in terms of A_k and P_k]

where A_k is the opposition coefficient, an arithmetic sequence with common difference 0.1 and first term 0.3, and P_k is the countermeasure function, a set linear function.
Example 10
On the basis of embodiment 9, the generation network and the discrimination network are trained as neural networks, and the training data comprises paired blurred lane-line pictures and sharp lane-line pictures, i.e., paired low-resolution and high-resolution pictures.
Specifically, generation of the high-resolution picture from the road image information is realized by means of the edge-detection convolution template; this markedly improves picture-generation efficiency, and although the resolution is lower than that of conventional approaches, it fully meets the requirements of application scenarios such as roads.
The above is only an embodiment of the present invention, but the scope of the present invention is not limited thereto; any structural change made according to the present invention without departing from its gist shall be considered to fall within the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments and will not be repeated here.
It should be noted that the system provided in the foregoing embodiment is only illustrated by the division into the functional modules described; in practical applications, the functions may be assigned to different functional modules as needed, that is, the modules or steps in the embodiment of the present invention may be further decomposed or combined. For example, the modules of the foregoing embodiment may be combined into one module or further split into multiple sub-modules to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps and are not to be construed as unduly limiting the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments and will not be repeated here.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules and method steps may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (5)

1. An unmanned lane keeping method for maintaining a road-shoulder spacing, the method comprising the steps of:
step 1: collecting data information in the vehicle running process, road image information in the vehicle running process and vehicle body information in the vehicle running process; the data information includes: physical parameters in the driving process of the vehicle and road parameters in the driving process of the vehicle; the physical parameters include: speed, acceleration, and angular velocity at which the vehicle is traveling; the road parameters include: the width and steering angle of the road; the vehicle body information includes: length, width and height of the vehicle;
step 2: identifying lane lines in real time based on the road parameters; and controlling the vehicle to keep running in the corresponding lane based on the physical parameters and the lane line recognition result.
2. The method according to claim 1, wherein step 2 specifically comprises: step 2.1: performing lane line recognition, comprising: inputting training data and constructing a generation network, a discrimination network, and a detection algorithm for lane line recognition; inputting the road image information into the generation network to produce a high-resolution picture; inputting that picture into the discrimination network for an accuracy judgment; capturing the resolution distribution of the high-resolution picture from the judgment result; training the generation network and the discrimination network adversarially on the data distribution until Nash equilibrium is reached, to obtain an optimized generation network; and inputting the high-resolution pictures produced by the optimized generation network into the detection algorithm, which recognizes the lane lines based on the road parameters and the generated pictures; and step 2.2: controlling the vehicle to keep traveling in the corresponding lane: transmitting the obtained vehicle body information to a preset real-vehicle model for processing to obtain the steering angle corresponding to the vehicle body information, wherein the real-vehicle model is built by deep-neural-network learning and represents the correspondence between vehicle body information and steering angle; and controlling the vehicle to keep traveling in the corresponding lane according to the steering angle, based on the obtained physical parameters, road parameters, and lane line recognition results.
3. A method according to claim 2, characterized in that in step 2.1 the method for generating the high-resolution picture by inputting the road image information into the generation network comprises the following steps: generating a convolution template for edge detection, wherein the convolution template is a 3 × 3 template [equation image: the template coefficients W_{ij}]; carrying out a convolution operation of the convolution template with the original image:

C_{mn} = \sum_{i=-1}^{1} \sum_{j=-1}^{1} W_{ij} P_{m+i,n+j}

wherein C_{mn} is the generated intermediate image, P_{m+i,n+j} is the road image information, W_{ij} is the convolution template, and i and j are ordinal numbers ranging from -1 to 1; and performing inertial mean processing on the intermediate image C_{mn} generated by the convolution operation to remove noise:

K_{mn} = \frac{1}{9} \sum_{i=-1}^{1} \sum_{j=-1}^{1} C_{m+i,n+j}

wherein K is the final generated high-resolution picture, m is the length of the intermediate image, and n is the width of the intermediate image.
4. A method according to claim 3, characterized in that in step 2.1 the method for capturing the resolution distribution of the high-resolution picture from the judgment result and adversarially training the generation network and the discrimination network on the data distribution until Nash equilibrium is reached comprises the following steps: defining the resolution change rate as

AR = \frac{A(n+1) - A(n)}{A(n) + eps}

wherein AR represents the resolution change rate between the road image information and the high-resolution picture, A(n) represents the resolution of the road image information, A(n+1) represents the resolution of the high-resolution picture, and eps is a set minimum value; the resolution change rate of the first gradient lies in the range 0.1-0.4, that of the second gradient is close to 0, and that of the third gradient is close to 1; and defining the expression for Nash equilibrium:

[equation image: equilibrium condition in terms of A_k and P_k]

wherein A_k is the opposition coefficient, an arithmetic sequence with common difference 0.1 and first term 0.3, and P_k is the countermeasure function, a set linear function.
5. The method of claim 4, wherein the generation network and the discrimination network are trained as neural networks, and the training data comprises paired blurred lane-line pictures and sharp lane-line pictures, i.e., paired low-resolution and high-resolution pictures.
CN202011259803.0A 2020-11-12 2020-11-12 Driverless lane keeping method for maintaining road shoulder distance Active CN112319468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011259803.0A CN112319468B (en) 2020-11-12 2020-11-12 Driverless lane keeping method for maintaining road shoulder distance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011259803.0A CN112319468B (en) 2020-11-12 2020-11-12 Driverless lane keeping method for maintaining road shoulder distance

Publications (2)

Publication Number Publication Date
CN112319468A true CN112319468A (en) 2021-02-05
CN112319468B CN112319468B (en) 2021-07-20

Family

ID=74318984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011259803.0A Active CN112319468B (en) 2020-11-12 2020-11-12 Driverless lane keeping method for maintaining road shoulder distance

Country Status (1)

Country Link
CN (1) CN112319468B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113110489A (en) * 2021-04-30 2021-07-13 清华大学 Trajectory planning method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103287429A (en) * 2013-06-18 2013-09-11 安科智慧城市技术(中国)有限公司 Lane-keeping system and lane-keeping control method
CN207809374U (en) * 2018-01-30 2018-09-04 上海融聂电子科技有限公司 A kind of Lane Keeping System based on camera and brake-by-wire and steering
CN109606346A (en) * 2018-12-29 2019-04-12 武汉超控科技有限公司 A kind of Lane Keeping System of automatic Pilot
CN109760677A (en) * 2019-03-13 2019-05-17 广州小鹏汽车科技有限公司 A kind of lane keeps householder method and system
CN109886200A (en) * 2019-02-22 2019-06-14 南京邮电大学 A kind of unmanned lane line detection method based on production confrontation network
CN110654384A (en) * 2019-11-04 2020-01-07 湖南大学 Lane keeping control algorithm and system based on deep reinforcement learning
CN110789517A (en) * 2019-11-26 2020-02-14 安徽江淮汽车集团股份有限公司 Automatic driving lateral control method, device, equipment and storage medium
CN111209770A (en) * 2018-11-21 2020-05-29 北京三星通信技术研究有限公司 Lane line identification method and device
US20200223451A1 (en) * 2017-10-30 2020-07-16 Mobileye Vision Technologies Ltd. Navigation based on sensed looking direction of a pedestrian
DE102019203247A1 (en) * 2019-03-11 2020-09-17 Zf Friedrichshafen Ag Vision-based steering assistance system for land vehicles

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103287429A (en) * 2013-06-18 2013-09-11 安科智慧城市技术(中国)有限公司 Lane-keeping system and lane-keeping control method
US20200223451A1 (en) * 2017-10-30 2020-07-16 Mobileye Vision Technologies Ltd. Navigation based on sensed looking direction of a pedestrian
CN207809374U (en) * 2018-01-30 2018-09-04 上海融聂电子科技有限公司 A kind of Lane Keeping System based on camera and brake-by-wire and steering
CN111209770A (en) * 2018-11-21 2020-05-29 北京三星通信技术研究有限公司 Lane line identification method and device
CN109606346A (en) * 2018-12-29 2019-04-12 武汉超控科技有限公司 A kind of Lane Keeping System of automatic Pilot
CN109886200A (en) * 2019-02-22 2019-06-14 南京邮电大学 A kind of unmanned lane line detection method based on production confrontation network
DE102019203247A1 (en) * 2019-03-11 2020-09-17 Zf Friedrichshafen Ag Vision-based steering assistance system for land vehicles
CN109760677A (en) * 2019-03-13 2019-05-17 广州小鹏汽车科技有限公司 A kind of lane keeps householder method and system
CN110654384A (en) * 2019-11-04 2020-01-07 湖南大学 Lane keeping control algorithm and system based on deep reinforcement learning
CN110789517A (en) * 2019-11-26 2020-02-14 安徽江淮汽车集团股份有限公司 Automatic driving lateral control method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王坤峰 (Wang Kunfeng) et al., "生成式对抗网络GAN的研究进展与展望" [Generative adversarial networks: the state of the art and beyond], 《自动化学报》 (Acta Automatica Sinica) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113110489A (en) * 2021-04-30 2021-07-13 清华大学 Trajectory planning method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112319468B (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN114384920B (en) Dynamic obstacle avoidance method based on real-time construction of local grid map
Li et al. Springrobot: A prototype autonomous vehicle and its algorithms for lane detection
EP2591443B1 (en) Method for assisting vehicle guidance over terrain
US20110026770A1 (en) Person Following Using Histograms of Oriented Gradients
CN106896353A (en) A kind of unmanned vehicle crossing detection method based on three-dimensional laser radar
CN104318258A (en) Time domain fuzzy and kalman filter-based lane detection method
CN109466552B (en) Intelligent driving lane keeping method and system
CN108877267A (en) A kind of intersection detection method based on vehicle-mounted monocular camera
Zou et al. Real-time full-stack traffic scene perception for autonomous driving with roadside cameras
CN112298194A (en) Lane changing control method and device for vehicle
CN106887012A (en) A kind of quick self-adapted multiscale target tracking based on circular matrix
CN111680713A (en) Unmanned aerial vehicle ground target tracking and approaching method based on visual detection
CN110568861A (en) Man-machine movement obstacle monitoring method, readable storage medium and unmanned machine
CN115861968A (en) Dynamic obstacle removing method based on real-time point cloud data
CN104778699A (en) Adaptive object feature tracking method
EP2405383A1 (en) Assisting with guiding a vehicle over terrain
CN112319468B (en) Driverless lane keeping method for maintaining road shoulder distance
CN112428939B (en) Driveway keeping induction assembly device for maintaining road shoulder distance
Masmoudi et al. Autonomous car-following approach based on real-time video frames processing
CN114620059B (en) Automatic driving method, system thereof and computer readable storage medium
CN113129336A (en) End-to-end multi-vehicle tracking method, system and computer readable medium
Zhao et al. Improving Autonomous Vehicle Visual Perception by Fusing Human Gaze and Machine Vision
Guo et al. Optimal path planning in field based on traversability prediction for mobile robot
CN114842660B (en) Unmanned lane track prediction method and device and electronic equipment
CN115373383A (en) Autonomous obstacle avoidance method and device for garbage recovery unmanned boat and related equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method of maintaining shoulder spacing for unmanned driving lanes

Effective date of registration: 20230802

Granted publication date: 20210720

Pledgee: Industrial Bank Co.,Ltd. Shanghai Huangpu Sub branch

Pledgor: SHANGHAI BOONRAY INTELLIGENT TECHNOLOGY Co.,Ltd.

Registration number: Y2023310000429

PE01 Entry into force of the registration of the contract for pledge of patent right