CN114018275A - Driving control method and system for vehicle at intersection and computer readable storage medium - Google Patents

Driving control method and system for vehicle at intersection and computer readable storage medium

Info

Publication number
CN114018275A
CN114018275A CN202010683223.8A
Authority
CN
China
Prior art keywords
vehicle
path
transverse
historical
travelable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010683223.8A
Other languages
Chinese (zh)
Inventor
关倩仪
刘文如
王玉龙
王航
张剑锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd filed Critical Guangzhou Automobile Group Co Ltd
Priority to CN202010683223.8A priority Critical patent/CN114018275A/en
Publication of CN114018275A publication Critical patent/CN114018275A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3453 Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3492 Special cost functions, i.e. other than distance or default speed limit of road segments employing speed data or traffic data, e.g. real-time or historical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention provides a method, a system and a computer readable storage medium for controlling the running of a vehicle at a road junction. The method comprises the following steps: acquiring an image of the intersection in front of a vehicle; inputting the front intersection image into a trained neural network model for processing to obtain the travelable paths of the vehicle at the intersection and the transverse corners of the vehicle at track points on those paths; receiving a path guide signal; determining a final travelable path from the travelable paths according to the path guide signal and acquiring the transverse corners at the track points on the final travelable path; and generating a control command according to the final travelable path and the transverse corners at its track points, and sending the control command to a vehicle execution mechanism so as to control the execution mechanism to execute it. The method provides a new scheme for advanced automatic driving in complex environments, at a cost greatly reduced compared with path planning and trajectory prediction based on a high-precision map.

Description

Driving control method and system for vehicle at intersection and computer readable storage medium
Technical Field
The invention relates to the technical field of vehicle driving control, in particular to a method and a system for controlling the running of a vehicle at a road junction and a computer readable storage medium.
Background
At present, track prediction methods predict the future track points of a vehicle from its collected historical track points by curve fitting and probability models. Because such methods predict mainly from historical track points without taking the actual intersection conditions into account, they cannot handle the practical situation of multiple possible driving tracks at a bifurcation intersection.
Disclosure of Invention
The invention provides a method and a system for controlling the running of a vehicle at an intersection, and a computer readable storage medium, aiming to overcome the defect of the prior art that, because the actual intersection conditions are not taken into account during vehicle navigation, multiple possible running tracks at a bifurcation intersection cannot be handled.
In order to solve the technical problem, the following technical scheme is adopted:
in a first aspect, the present invention provides a method for controlling travel of a vehicle at an intersection, including:
acquiring an image of a crossing in front of a vehicle;
inputting the front intersection image into a trained neural network model for processing to obtain a travelable path of the vehicle at the intersection and a transverse corner variation vector of the vehicle at a track point on the travelable path;
receiving a path directing signal;
determining a final driving path from the drivable paths according to the path guiding signals, and acquiring transverse corner variation corresponding to a plurality of track points on the final drivable path from the transverse corner variation vector;
and generating a control command according to the final driving path and the transverse corner variation corresponding to the plurality of track points on the final driving path, and sending the control command to a vehicle executing mechanism so as to control the executing mechanism to execute the control command.
In a specific embodiment, training the neural network model specifically includes:
determining a training set and a deep learning network for training the neural network model;
and training and optimizing the deep learning network by using the training set to obtain the trained neural network model.
In a specific embodiment, the determining a training set for training the neural network model specifically includes:
acquiring the longitude and latitude coordinates of a plurality of historical track points of a historical driving path when a vehicle passes through an intersection, the transverse corners at the plurality of historical track points, and a plurality of frames of historical intersection images collected at the plurality of historical track points, wherein the historical intersection images, the longitude and latitude coordinates of the historical track points and the transverse corners of the vehicle at those points all adopt uniform time labels;
determining a functional relation between the advancing distance of the vehicle and the variation of the transverse corner of the vehicle in the time interval of the advancing distance according to longitude and latitude coordinates and the variation of the transverse corner at a plurality of track points of a historical driving path;
respectively calculating and obtaining transverse corner variation corresponding to a plurality of predicted advancing distance values according to the functional relation, and forming a label output sub-vector according to the transverse corner variation;
and obtaining a label output vector corresponding to the historical intersection image according to the label output sub-vector, wherein the historical intersection image and the corresponding label output vector form the training set.
In a specific embodiment, the determining, according to the longitude and latitude coordinates at the track point and the corresponding lateral corner, a functional relationship between a forward distance of the vehicle and a lateral corner variation of the vehicle within a time of advancing the forward distance specifically includes:
determining an initial track point and a current track point in historical track points, calculating the advancing distance of the vehicle according to the longitude and latitude coordinates of the initial track point and the longitude and latitude coordinates of the current track point, and calculating the transverse corner variation of the vehicle in the time interval of the advancing distance according to the transverse corner of the initial track point and the transverse corner of the current track point;
forming a coordinate point by the advancing distance and a transverse rotation angle variable corresponding to the advancing distance;
and fitting a plurality of coordinate points on the same travelable path, and obtaining a fitting function based on a minimum mean square error criterion.
In a specific embodiment, the calculating the advancing distance of the vehicle according to the longitude and latitude coordinates of the initial track point and the longitude and latitude coordinates of the current track point specifically includes:
acquiring longitude and latitude coordinates of middle track points of the time labels between the time labels of the initial track points and the current track points, and sequencing the initial track points, the current track points and the middle track points according to the sequence of the time labels to form a track point set;
calculating a distance value between two adjacent track points in the track point set;
summing the distance values, the sum being a distance traveled by the vehicle;
the calculating the transverse rotation angle variation of the vehicle in the time interval of the advance distance according to the transverse rotation angle of the initial track point and the transverse rotation angle of the current track point specifically comprises:
and calculating a difference value between the current transverse rotation angle and the initial transverse rotation angle, wherein the difference value is the transverse rotation angle variation.
In a specific embodiment, the obtaining a tag output vector corresponding to the historical intersection image according to the tag output sub-vector specifically includes:
acquiring the maximum number m of the travelable paths of the vehicle at the historical intersection and n-dimensional label output sub-vectors corresponding to each travelable path, wherein if the probability of one travelable path in the historical intersection image is zero, the label output sub-vector corresponding to the travelable path with the probability of zero is recorded as an n-dimensional zero vector, and n is the number of track points on the travelable path;
and connecting the m n-dimensional label output sub-vectors to form an (m x n) -dimensional vector according to a clockwise direction by taking the advancing direction of the vehicle when the vehicle shoots the historical intersection image as a positive direction, wherein the (m x n) -dimensional vector is a label output vector corresponding to the historical intersection image.
In one specific embodiment, the obtaining of the travelable path of the vehicle at the intersection and the lateral turning angle of the vehicle at the track point on the travelable path specifically includes:
outputting the number codes of the travelable paths according to the set encoding rule of the travelable paths, and outputting the (m x n)-dimensional transverse corner variation vector corresponding to the front intersection image.
In a specific embodiment, the receiving the route guidance signal, determining a final travel route from the travelable routes according to the route guidance signal, and acquiring coordinates of track points of the final travel route specifically includes:
if the guiding signal is a left turn, selecting a left travelable path, and acquiring a transverse corner variable quantity corresponding to the left travelable path from an output transverse corner variable quantity vector;
if the guiding signal is straight, selecting a middle travelable path, and acquiring the transverse corner variable quantity corresponding to the middle travelable path from the output transverse corner variable quantity vector;
and if the guiding signal is a right turn, selecting a right driving path, and acquiring the transverse corner variable quantity corresponding to the right driving path from the output transverse corner variable quantity vector.
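As a sketch, the selection logic above can be expressed as follows, assuming m = 3 travelable paths ordered left, middle, right (clockwise) with n track points each; the names (`select_path`, `PATH_INDEX`) are illustrative, not from the patent:

```python
# Slice the (m*n)-dimensional transverse corner variation vector by guidance
# signal. Assumes m = 3 paths in left/middle/right clockwise order with
# n track points each; an all-zero sub-vector marks an absent path.
PATH_INDEX = {"left": 0, "straight": 1, "right": 2}

def select_path(guidance, angle_vector, n=5):
    """Return the n angle variations for the path chosen by the guidance signal."""
    i = PATH_INDEX[guidance]
    sub = list(angle_vector[i * n:(i + 1) * n])
    if all(v == 0 for v in sub):  # zero sub-vector: path absent at this intersection
        raise ValueError(f"no travelable path for guidance {guidance!r}")
    return sub
```

The zero-vector check mirrors the labelling convention described later, where a non-existent path is encoded as an n-dimensional zero vector.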
A second aspect of the present invention provides a running control system of a vehicle at an intersection, including:
an acquisition unit for acquiring an intersection image in front of a vehicle;
the transverse corner determining unit is used for inputting the front intersection image into a trained neural network model for processing to obtain a drivable path of the vehicle at the intersection and a transverse corner variable quantity corresponding to a preset advancing distance of the vehicle according to the drivable path;
a receiving unit for receiving the path directing signal;
the final driving path determining unit is used for determining a final driving path from the drivable paths according to the path guiding signals and acquiring coordinates of track points of the final driving path;
and the control instruction generating unit is used for generating a control instruction according to the final driving path and the coordinates of the track points of the final driving path, and sending the control instruction to a vehicle executing mechanism so as to control the executing mechanism to execute the control instruction.
In a third aspect, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer device, performs the aforementioned method steps.
The invention has the beneficial effects that: the driving control method of a vehicle at an intersection provided by the embodiments of the invention is based on a deep-learning prediction algorithm for intersection driving tracks. It can predict multiple driving tracks in a bifurcation intersection scene, receives a navigation instruction to select the final driving track and output the corresponding transverse corner variations, and outputs a steering wheel angle control signal to the vehicle end to end, so that driving behaviors such as turning left, turning right, going straight and turning around at a complex intersection are completed according to the guidance. This provides a new scheme for advanced automatic driving in a complex environment, at a cost greatly reduced compared with using a high-precision map for path planning and track prediction.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a method for controlling a vehicle to travel at a crossing according to a first embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a process of determining a training set for training the neural network model according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a specific process of step Sb in FIG. 2 according to a first embodiment of the present invention;
fig. 4 is a schematic structural diagram of a driving control system of a vehicle at a crossing according to a second embodiment of the present invention.
Detailed Description
The following description of the embodiments refers to the accompanying drawings, which are included to illustrate specific embodiments in which the invention may be practiced.
An embodiment of the present invention provides a method for controlling a vehicle to travel at a crossing, as shown in fig. 1, including the following steps:
and S1, acquiring an intersection image in front of the vehicle.
Specifically, the image of the intersection in front of the vehicle is acquired through a camera arranged at the front end of the vehicle. In a specific embodiment, the camera is arranged on the longitudinal symmetry axis of the vehicle, close to the upper edge of the windshield; the FOV angle of the camera is 60 degrees, the acquisition frequency is 30 Hz, and the size of the acquired image is not less than 640×480.
Because the picture captured by the camera is large, and a large proportion of its upper part contains information with little variation and low separability, such as the sky and roadside trees, the picture needs to be cropped so that the deep learning network pays more attention to the road scene and the identification efficiency of the model is improved: the irrelevant information in the upper part is removed and the road scene area ahead is retained. After cropping, the image is scaled to 240×120, and the RGB channel pixel values are normalized to [0, 1].
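The preprocessing described here (crop away the upper part, scale to 240×120, normalize RGB to [0, 1]) might be sketched as follows; the crop ratio and the nearest-neighbour scaling are assumptions, since the patent does not specify them:

```python
import numpy as np

def preprocess(image):
    """Crop the low-information upper part of an HxWx3 uint8 frame, resize
    to 240x120 (nearest neighbour, a stand-in for the unspecified scaling
    method), and normalize RGB values to [0, 1]."""
    h, w, _ = image.shape
    cropped = image[h // 4:, :, :]   # drop top quarter: sky / roadside trees (assumed ratio)
    ch, cw, _ = cropped.shape
    rows = np.arange(120) * ch // 120  # nearest-neighbour row indices
    cols = np.arange(240) * cw // 240  # nearest-neighbour column indices
    resized = cropped[rows][:, cols]   # shape (120, 240, 3)
    return resized.astype(np.float32) / 255.0
```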
And S2, inputting the front intersection image into the trained neural network model for processing to obtain a driving path of the vehicle at the intersection and a transverse corner of the vehicle at a track point on the driving path.
In one embodiment, obtaining the trained neural network model comprises: and determining a training set and a deep learning network for training the neural network model, and training and optimizing the deep learning network by using the training set to obtain the trained neural network model.
As shown in fig. 2, in a specific embodiment, determining the label data for training the neural network model includes:
and Sa acquires longitude and latitude coordinates of a plurality of track points of a historical driving path, transverse corners of the plurality of track points and a plurality of frames of historical intersection images collected at the plurality of track points, wherein the historical images, the longitude and latitude coordinates of the track points of the driving path and the transverse corners of the vehicles at the track points all adopt uniform time labels.
Specifically, multiple frames of historical front intersection images are acquired through a camera mounted at the front end of the vehicle, and the GPS coordinate data of the vehicle running track and the vehicle transverse corner are acquired through vehicle-mounted sensors. The environmental data in the front intersection images, the longitude and latitude coordinates of the track points where the historical intersection images were acquired, and the transverse corners adopt the same time labels, and the scenes in which the front intersection images are collected cover different times, roads, weather, light, traffic flow and the like.
And Sb, determining a functional relation between the advancing distance of the vehicle and the variation of the transverse corner of the vehicle in the time interval of the advancing distance according to the longitude and latitude coordinates and the variation of the transverse corner at a plurality of historical track points of a historical driving path.
In one embodiment, as shown in fig. 3, the step Sb includes:
sb1, determining an initial track point and a current track point in historical track points, calculating the advancing distance of the vehicle according to the longitude and latitude coordinates of the initial track point and the longitude and latitude coordinates of the current track point, and calculating the transverse corner variable of the vehicle according to the transverse corner of the initial track point and the transverse corner of the current track point.
Specifically, for a plurality of historical track points of a certain driving path, according to the sequence of time tags, longitude and latitude coordinates of intermediate track points of which the time tags are located between the time tags of the initial track points and the time tags of the current track points are obtained, and the initial track points, the current track points and the intermediate track points are sequenced according to the sequence of the time tags to form a track point set, the distance values between two adjacent track points in the track point set are calculated, and the distance values are summed, wherein the sum is the advancing distance of the vehicle.
Specifically, a difference between the current lateral rotation angle and the initial lateral rotation angle is calculated, and the difference is the amount of change in the lateral rotation angle.
Sb2, forming a coordinate point by the advancing distance of the vehicle and the amount of change in the lateral rotation angle corresponding to the advancing distance, and obtaining a plurality of the coordinate points.
Specifically, the advance distance is taken as an abscissa of a coordinate point, and a lateral rotation angle variation amount corresponding to the advance distance is taken as an ordinate of the coordinate point.
For example, assume the longitude and latitude coordinates of the initial track point A0 are (lon0, lat0) and those of the current track point A1 are (lon1, lat1). The distance d1 advanced by the vehicle in the interval between the time labels of the two position points A0 and A1 is calculated from their longitude and latitude coordinates as follows:
d1 = arccos(sin lat0 · sin lat1 + cos lat0 · cos lat1 · cos(lon1 − lon0)) × R
where R is the radius of the earth. Assuming the lateral rotation angle of the vehicle is θ0 at A0 and θ1 at A1, the change in the lateral rotation angle in the interval between the time labels of A0 and A1 is Δθ1 = θ1 − θ0, and the distance s1 = d1 and Δθ1 form the coordinate point (s1, Δθ1).
Suppose the longitude and latitude coordinates of a new current track point A2 are (lon2, lat2), and that A1, with coordinates (lon1, lat1), is the intermediate historical track point between the initial track point and the current track point. The three points form the track point set (A0, A1, A2) ordered by time label. The distance d2 advanced by the vehicle in the interval between the time labels of the two position points A1 and A2 is calculated from their longitude and latitude coordinates as follows:
d2 = arccos(sin lat1 · sin lat2 + cos lat1 · cos lat2 · cos(lon2 − lon1)) × R
The distance advanced by the vehicle in the interval between the time labels of the two position points A0 and A2 is then s2 = d1 + d2. Assuming the lateral rotation angle of the vehicle at A2 is θ2, the change in the lateral rotation angle in the interval between the time labels of A0 and A2 is Δθ2 = θ2 − θ0, and s2 and Δθ2 form the coordinate point (s2, Δθ2). Repeating this procedure yields a series of coordinate points (sm, Δθm) on the historical track.
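The distance accumulation above can be sketched as follows, using the great-circle formula from the text; the earth-radius constant and the function names are illustrative:

```python
import math

EARTH_RADIUS_M = 6371000.0  # assumed mean earth radius R, in metres

def great_circle(lat0, lon0, lat1, lon1):
    """d = arccos(sin lat0 sin lat1 + cos lat0 cos lat1 cos(lon1-lon0)) * R,
    with coordinates given in degrees."""
    lat0, lon0, lat1, lon1 = map(math.radians, (lat0, lon0, lat1, lon1))
    c = (math.sin(lat0) * math.sin(lat1)
         + math.cos(lat0) * math.cos(lat1) * math.cos(lon1 - lon0))
    return math.acos(min(1.0, max(-1.0, c))) * EARTH_RADIUS_M  # clamp for rounding

def advance_distance(track):
    """Sum of great-circle distances between adjacent (lat, lon) track
    points ordered by time label: s = d1 + d2 + ..."""
    return sum(great_circle(a[0], a[1], b[0], b[1])
               for a, b in zip(track, track[1:]))

def angle_change(theta_initial, theta_current):
    """Lateral rotation angle change over the interval: delta = current - initial."""
    return theta_current - theta_initial
```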
And Sb3, fitting the coordinate points to obtain a fitting function.
In one embodiment, an n-th degree polynomial Δθ = a0 + a1·s + … + an·s^n, with n ∈ [2, 6], is used to fit the formed coordinate points, and the optimal degree n is determined based on the minimum mean square error criterion, thereby obtaining the fitting function.
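A minimal sketch of this degree-selection fit, using `numpy.polyfit` as a stand-in for the unspecified fitting routine:

```python
import numpy as np

def fit_angle_curve(s, dtheta, degrees=range(2, 7)):
    """Fit dtheta = a0 + a1*s + ... + an*s^n for each n in [2, 6] and keep
    the degree with minimum mean squared error, per the criterion above."""
    s, dtheta = np.asarray(s, float), np.asarray(dtheta, float)
    best = None
    for n in degrees:
        coeffs = np.polyfit(s, dtheta, n)                      # least-squares fit
        mse = float(np.mean((np.polyval(coeffs, s) - dtheta) ** 2))
        if best is None or mse < best[0]:
            best = (mse, n, coeffs)
    return best[2], best[1]  # coefficients (highest degree first), chosen degree
```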
And Sc, respectively calculating and obtaining transverse rotation angle variable quantities corresponding to a plurality of predicted advancing distance values according to the functional relation, and forming a label output sub-vector according to the transverse rotation angle variable quantities.
After the fitting function is determined, once the future predicted advance distance is fixed and the number of sampling points is set, the change in the lateral rotation angle of the vehicle at each sampling point can be determined. Assuming the predicted advance distance is 30 meters with 5 corresponding sampling points, i.e. vehicle advance distances of (6, 12, 18, 24, 30) meters, the corresponding lateral rotation angles are (θ6, θ12, θ18, θ24, θ30) and the corresponding vector of lateral rotation angle changes is (Δθ6, Δθ12, Δθ18, Δθ24, Δθ30), where Δθs = θs − θ0 and θ0 is the lateral rotation angle at the initial track point.
After the lateral rotation angle change vector of each driving path is obtained, it is converted from the change of each point relative to the starting point into the change relative to the previous track point, i.e. (Δθ6, Δθ12 − Δθ6, Δθ18 − Δθ12, Δθ24 − Δθ18, Δθ30 − Δθ24), and each component is then normalized (the normalization range is given only as a formula image in the original document and is not reproduced here).
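The conversion from start-relative changes to point-to-point changes might look like this; the normalization scale is an assumed constant, since the patent gives the range only as a formula image:

```python
import numpy as np

def to_incremental(cum_changes):
    """Convert angle changes measured from the start point into changes
    relative to the previous sampling point, as described above."""
    cum = np.asarray(cum_changes, float)
    return np.concatenate(([cum[0]], np.diff(cum)))

def normalize(vec, scale):
    """Illustrative normalization: divide by an assumed scale constant
    (the actual range in the patent is only a formula image)."""
    return np.asarray(vec, float) / scale
```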
And Sd, obtaining a label output vector corresponding to the historical intersection image according to the label output sub-vector, wherein the historical intersection image and the corresponding label output vector form the training set.
Specifically, the maximum number m of the travelable paths of the vehicle at the historical intersection and n-dimensional label output sub-vectors corresponding to each travelable path are obtained, wherein if the probability of one travelable path in the historical intersection image is zero, the label output sub-vector corresponding to the travelable path with the probability of zero is recorded as an n-dimensional zero vector, and n is the number of track points on the travelable path; and connecting the m n-dimensional label output sub-vectors to form an (m x n) -dimensional vector according to a clockwise direction by taking the advancing direction of the vehicle when the vehicle shoots the historical intersection image as a positive direction, wherein the (m x n) -dimensional vector is a label output vector corresponding to the historical intersection image.
In a specific embodiment, assume the maximum number of feasible driving paths corresponding to the intersection is 3: a left, a middle and a right feasible driving path. The transverse corner variation vectors corresponding to the left, middle and right driving paths of the historical intersection image are acquired respectively; taking the advancing direction of the vehicle when it shot the historical intersection image as the positive direction, the transverse corner variation vectors of the left, middle and right driving paths are connected in clockwise order to form a (3 x n)-dimensional vector, which is the data label output corresponding to the historical intersection image. When a certain driving path does not exist, its transverse corner variation vector is recorded as an n-dimensional zero vector, where n is the number of track points set on each driving path.
For example, assume the maximum number of travelable paths at the intersection is 3: a left, a middle and a right travelable path. The normalized lateral rotation angle change vector corresponding to the left path is (Δ1θ6, Δ1θ12 − Δ1θ6, Δ1θ18 − Δ1θ12, Δ1θ24 − Δ1θ18, Δ1θ30 − Δ1θ24), that corresponding to the middle path is (Δ2θ6, Δ2θ12 − Δ2θ6, Δ2θ18 − Δ2θ12, Δ2θ24 − Δ2θ18, Δ2θ30 − Δ2θ24), and that corresponding to the right path is (Δ3θ6, Δ3θ12 − Δ3θ6, Δ3θ18 − Δ3θ12, Δ3θ24 − Δ3θ18, Δ3θ30 − Δ3θ24). Taking the forward direction of the historical intersection image as the positive direction, the clockwise order is left, middle, right, so the normalized output data label corresponding to the image is the concatenation (Δ1θ6, Δ1θ12 − Δ1θ6, …, Δ1θ30 − Δ1θ24, Δ2θ6, …, Δ2θ30 − Δ2θ24, Δ3θ6, …, Δ3θ30 − Δ3θ24). Here Δ1θi, Δ2θi and Δ3θi are the normalized lateral rotation angle changes corresponding to advancing i meters on the left, middle and right travel paths respectively, with i = 6, 12, 18, 24, 30.
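Assembling the (m x n)-dimensional label vector with zero sub-vectors for absent paths, as described above, might be sketched as (names illustrative):

```python
import numpy as np

def build_label(path_vectors, m=3, n=5):
    """Concatenate per-path n-dimensional label sub-vectors in clockwise
    order (left, middle, right); an absent path (None) contributes an
    n-dimensional zero vector. Returns the (m*n)-dimensional label."""
    label = np.zeros(m * n)
    for i, vec in enumerate(path_vectors):  # path_vectors: None or length-n sequence per path
        if vec is not None:
            label[i * n:(i + 1) * n] = vec
    return label
```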
In a specific embodiment, the number of travelable tracks corresponding to each historical intersection image is also labeled, and the number of travelable paths contained in the intersection data set is one-hot encoded. In the intersection data set used for algorithm verification, the number of travelable paths at the same intersection is at most 3, which can be represented by a three-dimensional vector; the correspondence between the number of paths and the encoding is 1 path: (1, 0, 0), 2 paths: (0, 1, 0), 3 paths: (0, 0, 1).
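A minimal sketch of this one-hot encoding of the travelable path count:

```python
def one_hot_path_count(count, max_paths=3):
    """One-hot encode the number of travelable paths: 1 -> (1, 0, 0),
    2 -> (0, 1, 0), 3 -> (0, 0, 1), matching the mapping above."""
    if not 1 <= count <= max_paths:
        raise ValueError("path count out of range")
    return tuple(1 if i == count - 1 else 0 for i in range(max_paths))
```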
In a specific embodiment, the feature extraction part of the neural network model is realized based on ResNet-50, the output layer is divided into two branches, one branch identifies the number p of travelable paths of the intersection picture, and the other branch regresses the transverse corner change vector corresponding to each track. And finally, correspondingly selecting and activating different regression transverse corner change vector output neurons according to the number of the travelable paths predicted by the model, thereby obtaining a transverse corner control signal output to the vehicle.
The input layer of the convolution feature extraction layer network is series image data collected by a camera in front of a vehicle, the image type is RGB three-channel data, the size of an original image is 960x604, the original image is cut into 960x 480 through preprocessing, and the original image is scaled to 240x 120. The dimension of the input layer is 240,120,3, the intermediate network structure comprises 5 convolution block structures, the specific structure is shown in a table, and the dimension of the image feature vector can be 4,8,2048 after ResNet-50 convolution feature extraction.
(The 5 convolution block structures of the ResNet-50 feature extraction layer are listed in a table rendered as image BDA0002586606930000101 in the original document.)
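The crop-and-scale preprocessing can be sketched as follows; which rows the 960x604 to 960x480 crop drops is not specified in the text, so keeping the bottom 480 rows here is an assumption, as is the nearest-neighbour downsampling:

```python
import numpy as np

def preprocess(frame):
    """Crop a 604x960x3 RGB frame to 480x960x3, then downsample by 4
    to 120x240x3 (a 240x120 image in width-by-height terms)."""
    assert frame.shape == (604, 960, 3)
    cropped = frame[-480:, :, :]   # keep the bottom 480 rows (assumption)
    small = cropped[::4, ::4, :]   # nearest-neighbour 4x downsample
    return small
```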
In one embodiment, the output layer of the neural network model comprises a travelable path number prediction branch and a transverse corner variation vector regression branch. The path number prediction branch performs global mean pooling on the features obtained by the feature extraction layer to obtain a [1, 2048] vector, which is fully connected to output a 3-dimensional vector; this vector is activated with a softmax function so that each dimension lies in [0, 1] and represents the probability that the number of predicted tracks is 1, 2 or 3, respectively. The transverse corner variation vector regression branch likewise performs global mean pooling on the extracted features to obtain a [1, 2048] vector, which is fully connected to output a 15-dimensional vector; this vector is activated with an arctangent function so that each output lies in [-1, 1]. The proportional factor between the normalized output and the steering angle, and the actual corresponding range of steering angle variation, are given by expressions rendered as images (BDA0002586606930000102 and BDA0002586606930000103) in the original document.
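The two-branch output head can be illustrated with an untrained numpy toy; the random weights and the use of tanh as the [-1, 1] squashing activation are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W_count = rng.normal(size=(2048, 3))    # path-count branch weights (random)
W_angle = rng.normal(size=(2048, 15))   # angle-regression branch weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def output_head(features):
    """features: [4, 8, 2048] map from the feature extractor."""
    pooled = features.mean(axis=(0, 1))      # global mean pooling -> [2048]
    count_probs = softmax(pooled @ W_count)  # 3 probs for 1/2/3 paths
    angles = np.tanh(pooled @ W_angle)       # 15 values in (-1, 1)
    return count_probs, angles
```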
In a specific embodiment, the essence of the neural network is to find an optimal mapping from input to output. Let I denote the image input, W the optimization parameters of the whole network, F the overall function of the network, Y the output of the network, and y the actual value of the output label corresponding to the image. The network can then be expressed as Y = F(I, W), and optimizing the network means finding the parameters that minimize the loss function between Y and y.
In the model defined by this algorithm, wc denotes the optimization parameters of the convolution feature extraction layer, wfcp the network parameters of the travelable path number prediction branch in the output layer, and wfct the network parameters of the track point transverse corner variation vector regression branch in the output layer; fc denotes the function of the convolution feature extraction, ffcp the function of the path number prediction branch in the output layer, and ffct the function of the regression branch in the output layer. yp and yt denote the actual values of the travelable path number vector and the track point transverse corner variation vector corresponding to the intersection image.
Lossp represents the loss function of the travelable path number prediction branch, expressed as a cross-entropy loss:

Lossp = -Σ yp·log[ffcp(fc(I, wc), wfcp)]

where fc(I, wc) is the output of the convolution feature extraction layer and ffcp(fc(I, wc), wfcp) is the output of the travelable path number prediction branch;
Losst represents the loss function of the track point transverse corner variation vector regression branch, expressed as a minimum mean square error between F(yp, yt) and the regression output ffct(fc(I, wc), wfct); the exact expression is rendered as an image (BDA0002586606930000111) in the original document. Here F(yp, yt) denotes the operation of activating, according to the yp prediction result, the portion of yt corresponding to the effective tracks, so that the sub-vectors of non-existent paths contribute no regression error; the definition of F(·,·) is likewise rendered as an image (BDA0002586606930000112) in the original document.
the final loss function of the model is a joint loss function of the prediction branch of the number of the travelable paths and the regression branch of the transverse corner change vector of the track point, and is defined as follows:
Loss=Lossp+Losst
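Under the assumption that the active paths occupy the first sub-vectors of yt (the patent defines the masking operation F in an image not reproduced here), the joint loss can be sketched as:

```python
import numpy as np

def joint_loss(pred_count, pred_angles, yp, yt, n_points=5):
    """Cross-entropy on the path-count branch plus masked MSE on the
    angle branch; sub-vectors of non-existent paths contribute no error.
    The mask layout is an assumption, not the patent's exact F."""
    lossp = -float(np.sum(yp * np.log(pred_count + 1e-12)))
    n_paths = int(np.argmax(yp)) + 1          # label is one-hot over 1..3
    mask = np.zeros_like(yt)
    mask[: n_paths * n_points] = 1.0
    losst = float(np.mean((mask * (pred_angles - yt)) ** 2))
    return lossp + losst
```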
After the model is established, the obtained data set is divided into a training set (80%), a validation set (10%) and a test set (10%). During training, the Adam optimization algorithm is adopted for about 100,000 steps, and the learning rate follows an exponential decay schedule, i.e. it is gradually reduced as the number of training steps increases:

lcurrent = lbase · rd^(Cstep / Dstep)

where lcurrent is the current learning rate; lbase is the base learning rate, set to 1e-4; rd is the decay coefficient, set to 0.95; Cstep is the current training step number; and Dstep is the decay interval, set to 5000 steps.
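The schedule reconstructed from these parameters (the continuous, non-staircase form is an assumption):

```python
def learning_rate(step, base=1e-4, rd=0.95, decay_step=5000):
    """Exponential decay: base * rd ** (step / decay_step)."""
    return base * rd ** (step / decay_step)
```

For example, the rate falls to 0.95e-4 after 5000 steps and keeps shrinking toward zero as training proceeds.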
After the trained neural network model is obtained, the front intersection image collected in step S1 is input into it, yielding the number of travelable paths, the probability corresponding to each travelable path, and the transverse corner variation vector.
And S3, receiving the path guiding signal.
Receiving a route guidance signal from a vehicle navigation system or a turn signal from a vehicle user, wherein in one embodiment, the route guidance signal includes a vehicle left turn, a vehicle straight run, or a vehicle right turn.
And S4, determining a final driving path from the drivable paths according to the path guiding signals, and acquiring the transverse steering angle variation corresponding to the preset distance of the vehicle advancing from the current position according to the final drivable path.
Specifically, if the guidance signal generated by the vehicle's navigation software is a left turn, the left travelable path is selected and the transverse corner variations corresponding to the set forward distances on that path are acquired. If the guidance signal is straight travel, the middle travelable path is selected and its transverse corner variations at the set forward distances are acquired. If the guidance signal is a right turn, the right travelable path is selected and its transverse corner variations at the set forward distances are acquired.
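This selection amounts to indexing into the 15-dimensional output by path slot; the mapping below is a sketch with invented names:

```python
SLOT = {"left": 0, "straight": 1, "right": 2}  # left/middle/right ordering

def select_subvector(signal, angle_vector, n_points=5):
    """Return the 5 transverse corner variations of the chosen path."""
    i = SLOT[signal]
    return angle_vector[i * n_points : (i + 1) * n_points]
```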
And S5, generating a control command according to the final running path and the transverse steering angle variation corresponding to the advancing set distance of the final running path, sending the control command to a vehicle execution mechanism, and controlling the execution mechanism to execute the control command.
After the transverse corner variation vector corresponding to the final travel path is obtained, each set forward distance and its corresponding transverse corner variation form a coordinate point, giving a plurality of coordinate points. These points are fitted to obtain a fitting function, from which the transverse corner variation corresponding to any forward distance can be obtained. The transverse corner variation signal for the track is output by the system to the vehicle's steering wheel controller, so that the vehicle can smoothly complete left turns, right turns, straight travel, U-turns and similar maneuvers through the intersection under the guidance of the navigation path.
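The fitting step can be sketched with a polynomial fit; the degree (3 here) is an assumption, since the text only says a fitting function is obtained from the coordinate points:

```python
import numpy as np

distances = np.array([6.0, 12.0, 18.0, 24.0, 30.0])  # set forward distances

def angle_at(forward_m, angle_changes, degree=3):
    """Fit (distance, transverse corner variation) points and evaluate
    the fitted polynomial at an arbitrary forward distance."""
    coeffs = np.polyfit(distances, np.asarray(angle_changes, float), degree)
    return float(np.polyval(coeffs, forward_m))
```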
According to the above method for controlling a vehicle at an intersection, the image of the intersection in front of the vehicle is acquired and input into a trained neural network model, yielding the travelable paths of the vehicle at the intersection and the transverse corners of the vehicle at the track points on each path. A path guidance signal is received, the final travel path is determined from the travelable paths according to that signal, the transverse corners at the track points on the final path are obtained, and a control command generated from the final path and those transverse corners is sent to the vehicle actuator, which is controlled to execute it. The method is a deep-learning-based prediction algorithm for travelable tracks at intersections: it can predict multiple travelable tracks at a forked intersection, receive a navigation instruction to select the final travel path and output the corresponding transverse corner variations, and output the steering wheel control signal to the vehicle end to end, so that left turns, right turns, straight travel, U-turns and other driving behaviors at complex intersections are completed according to guidance. This provides a new scheme for advanced automatic driving in complex environments and greatly reduces cost compared with schemes that use a high-precision map for path planning and track prediction.
Based on the first embodiment of the present invention, the second embodiment provides a driving control system 100 for a vehicle at an intersection, as shown in Fig. 4. The system comprises: an acquiring unit 10, a travelable path and track point transverse corner determining unit 20, a receiving unit 30, a final travel path determining unit 40, and a control instruction generating unit 50. The acquiring unit 10 acquires an intersection image in front of the vehicle. The determining unit 20 inputs the front intersection image into a trained neural network model for processing, so as to obtain the travelable paths of the vehicle at the intersection and the transverse corner variations corresponding to the set forward distances of the vehicle along each travelable path. The receiving unit 30 receives a path guidance signal. The final travel path determining unit 40 determines the final travel path from the travelable paths according to the path guidance signal and acquires the coordinates of the track points of the final travel path. The control instruction generating unit 50 generates a control instruction according to the final travel path and the coordinates of its track points, and sends the control instruction to the vehicle actuator so as to control the actuator to execute it.
Based on the first embodiment of the present invention, a third embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a computer device, implements the method steps of the first embodiment.
For the working principle and the advantageous effects thereof, please refer to the description of the first embodiment of the present invention, which will not be described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus DRAM (RDRAM), and direct Rambus DRAM (DRDRAM).
The above disclosure describes only preferred embodiments of the present invention and of course cannot be taken to limit the scope of the invention; the scope of protection is defined by the appended claims, and equivalent variations made according to the claims still fall within the scope of the invention.

Claims (10)

1. A running control method of a vehicle at an intersection, characterized by comprising:
acquiring an image of a crossing in front of a vehicle;
inputting the front intersection image into a trained neural network model for processing to obtain a travelable path of the vehicle at the current intersection and a transverse corner variable quantity vector of the vehicle at a track point on the travelable path;
receiving a path directing signal;
determining a final driving path from the drivable paths according to the path guiding signals, and acquiring transverse corner variation corresponding to a plurality of track points on the final drivable path from the transverse corner variation vector;
and generating a control command according to the final driving path and the transverse corner variation corresponding to the plurality of track points on the final driving path, and sending the control command to a vehicle executing mechanism so as to control the executing mechanism to execute the control command.
2. The method of claim 1, wherein training the neural network model specifically comprises:
determining a training set and a deep learning network for training the neural network model;
and training and optimizing the deep learning network by using the training set to obtain the trained neural network model.
3. The method of claim 2, wherein the determining a training set for training the neural network model specifically comprises:
acquiring longitude and latitude coordinates of a plurality of historical track points of a historical driving path when a vehicle passes through an intersection, transverse corners of the plurality of historical track points and a plurality of frames of historical intersection images collected at the plurality of historical track points, wherein the historical images, the longitude and latitude coordinates of the plurality of historical track points and the transverse corners of the vehicle at the plurality of historical track points all adopt uniform time labels;
determining a functional relation between the advancing distance of the vehicle and the corresponding transverse corner variable quantity according to longitude and latitude coordinates and the transverse corner variable quantity at a plurality of historical track points of a historical driving path;
respectively calculating and obtaining transverse corner variation corresponding to a plurality of predicted advancing distance values according to the functional relation, and forming a label output sub-vector according to the transverse corner variation;
and obtaining a label output vector corresponding to the historical intersection image according to the label output sub-vector, wherein the historical intersection image and the corresponding label output vector form the training set.
4. The method according to claim 3, wherein the determining the functional relationship between the advancing distance of the vehicle and the corresponding lateral rotation angle variation amount according to the longitude and latitude coordinates and the lateral rotation angle variation amount at a plurality of historical track points of a historical driving path specifically comprises:
determining an initial track point and a current track point in historical track points, and calculating the advancing distance of the vehicle according to the longitude and latitude coordinates of the initial track point and the longitude and latitude coordinates of the current track point; calculating the corresponding transverse corner variation of the vehicle in the time interval of the advancing distance according to the transverse corner of the initial track point and the transverse corner of the current track point;
forming a coordinate point by the advancing distance and the transverse rotation angle variation corresponding to the advancing distance;
and fitting a plurality of coordinate points on the same historical driving path, and obtaining a fitting function based on a minimum mean square error criterion.
5. The method according to claim 4, wherein the calculating the advancing distance of the vehicle according to the longitude and latitude coordinates of the initial track point and the longitude and latitude coordinates of the current track point specifically comprises:
acquiring longitude and latitude coordinates of middle track points of the time labels between the time labels of the initial track points and the current track points, and sequencing the initial track points, the current track points and the middle track points according to the sequence of the time labels to form a track point set;
calculating a distance value between two adjacent track points in the track point set;
summing the distance values, the sum being a distance traveled by the vehicle;
the calculating the transverse rotation angle variation of the vehicle in the time interval of the advance distance according to the transverse rotation angle of the initial track point and the transverse rotation angle of the current track point specifically comprises:
and calculating a difference value between the current transverse rotation angle and the initial transverse rotation angle, wherein the difference value is the transverse rotation angle variation.
6. The method according to claim 5, wherein the obtaining the tag output vector corresponding to the historical intersection image according to the tag output sub-vector specifically comprises:
acquiring the maximum number m of the travelable paths of the historical intersection and n-dimensional label output sub-vectors corresponding to each travelable path, wherein if the probability of one travelable path in the historical intersection image is zero, the label output sub-vector corresponding to the travelable path with the probability of zero is recorded as an n-dimensional zero vector, and n is the number of track points on the travelable path;
and connecting the m n-dimensional label output sub-vectors to form an (m x n) -dimensional vector according to a clockwise direction by taking the advancing direction of the vehicle when the vehicle shoots the historical intersection image as a positive direction, wherein the (m x n) -dimensional vector is a label output vector corresponding to the historical intersection image.
7. The method according to claim 6, wherein the obtaining of the travelable path of the vehicle at the intersection and the transverse corner of the vehicle at the track points on the travelable path specifically comprises:
outputting the number code of the travelable paths according to the set travelable path encoding rule, and outputting the (m x n)-dimensional transverse corner variation vector corresponding to the front intersection image.
8. The method according to any one of claims 1 to 7, wherein the receiving a route guidance signal, determining a final travel route from the travelable routes according to the route guidance signal, and acquiring coordinates of track points of the final travel route specifically comprises:
if the guiding signal is a left turn, selecting a left travelable path, and acquiring transverse corner variation corresponding to a plurality of track points of the left travelable path from the output transverse corner variation vector;
if the guide signal is straight, selecting a middle travelable path, and acquiring transverse corner variation corresponding to a plurality of track points of the middle travelable path from the output transverse corner variation vector;
and if the guiding signal is a right turn, selecting a right driving path, and acquiring transverse turning angle variable quantities corresponding to a plurality of track points of the right driving path from the output transverse turning angle variable quantity vector.
9. A running control system of a vehicle at an intersection, characterized by comprising:
an acquisition unit for acquiring an intersection image in front of a vehicle;
the transverse corner determining unit is used for inputting the front intersection image into a trained neural network model for processing to obtain a drivable path of the vehicle at the intersection and a transverse corner variable quantity corresponding to a preset advancing distance of the vehicle according to the drivable path;
a receiving unit for receiving the path directing signal;
the final driving path determining unit is used for determining a final driving path from the drivable paths according to the path guiding signals and acquiring coordinates of track points of the final driving path;
and the control instruction generating unit is used for generating a control instruction according to the final driving path and the coordinates of the track points of the final driving path, and sending the control instruction to a vehicle executing mechanism so as to control the executing mechanism to execute the control instruction.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program realizes the method steps of any of the preceding claims 1 to 8 when executed by a computer device.
CN202010683223.8A 2020-07-15 2020-07-15 Driving control method and system for vehicle at intersection and computer readable storage medium Pending CN114018275A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010683223.8A CN114018275A (en) 2020-07-15 2020-07-15 Driving control method and system for vehicle at intersection and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114018275A true CN114018275A (en) 2022-02-08

Family

ID=80053999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010683223.8A Pending CN114018275A (en) 2020-07-15 2020-07-15 Driving control method and system for vehicle at intersection and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114018275A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190034794A1 (en) * 2017-07-27 2019-01-31 Waymo Llc Neural Networks for Vehicle Trajectory Planning
CN109712128A (en) * 2018-12-24 2019-05-03 上海联影医疗科技有限公司 Feature point detecting method, device, computer equipment and storage medium
CN109934119A (en) * 2019-02-19 2019-06-25 平安科技(深圳)有限公司 Adjust vehicle heading method, apparatus, computer equipment and storage medium
CN110188683A (en) * 2019-05-30 2019-08-30 北京理工大学 A kind of automatic Pilot control method based on CNN-LSTM
CN110618678A (en) * 2018-06-19 2019-12-27 辉达公司 Behavioral guided path planning in autonomous machine applications
CN110646009A (en) * 2019-09-27 2020-01-03 北京邮电大学 DQN-based vehicle automatic driving path planning method and device
CN111191607A (en) * 2019-12-31 2020-05-22 上海眼控科技股份有限公司 Method, apparatus, and storage medium for determining steering information of vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MOHAMMAD SHOKROLAH SHIRAZI 等: "Trajectory prediction of vehicles turning at intersections using deep neural networks", 《MACHINE VISION AND APPLICATIONS (2019)》, vol. 30, pages 1097 - 1109, XP036867650, DOI: 10.1007/s00138-019-01040-w *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination