CN110688920B - Unmanned control method and device and server - Google Patents


Info

Publication number
CN110688920B
CN110688920B (application CN201910874675.1A)
Authority
CN
China
Prior art keywords
data
working condition
control
state
current vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910874675.1A
Other languages
Chinese (zh)
Other versions
CN110688920A (en)
Inventor
郑鑫宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Ningbo Geely Automobile Research and Development Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN201910874675.1A priority Critical patent/CN110688920B/en
Publication of CN110688920A publication Critical patent/CN110688920A/en
Application granted granted Critical
Publication of CN110688920B publication Critical patent/CN110688920B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Feedback Control In General (AREA)

Abstract

The application discloses an unmanned driving control method, device and server. The method comprises: obtaining reference working condition data for the environment in which the current vehicle is located; determining an error between the working condition data of the current vehicle in the current time period and the reference working condition data; optimizing the reference working condition data based on an optimization model and the error to obtain a state weight of the state data in the reference working condition data used to control the driving of the current vehicle and a control weight of the control data in the reference working condition data; determining the control data of the current vehicle based on the state weight, the control weight, the state data and control data in the reference working condition data, and the state data of the current vehicle in the current time period; and controlling the driving of the current vehicle using the determined control data.

Description

Unmanned control method and device and server
Technical Field
The present disclosure relates to an unmanned driving control method and apparatus, and more particularly, to a method and apparatus that use a neural network to optimize and pre-train an execution control module.
Background
The key technologies of unmanned driving are environment perception and decision control. Environment perception is the basis of unmanned vehicle driving and comprises vehicle-mounted sensors and perception and positioning algorithms; decision control is the core of unmanned vehicle driving and comprises two links: decision planning and control execution. The prior art mainly adopts proportional-integral-derivative control algorithms, linear quadratic regulation algorithms, and model predictive control algorithms. The model predictive control algorithm is developed from the state space equations of modern control theory, and the analysis and design of the control system is carried out mainly through the description of the system's state variables. The algorithm involves a large number of matrix operations, so it is well suited to execution on a digital computer, and it also handles constrained, nonlinear, time-varying systems very well, such as an unmanned vehicle. Modern control theory also makes optimal control with respect to specific performance metrics possible.
Existing work improves on combinations or optimizations of several control algorithms, for example pairing a proportional-integral-derivative control algorithm that focuses on the longitudinal vehicle speed with a linear quadratic regulation algorithm that controls the lateral and heading errors. In fact, the lateral and longitudinal control of the vehicle is highly coupled, so splitting it between two different controllers that must coordinate to control the vehicle brings more complexity and more uncertainty to the system. Moreover, compared with the model predictive control algorithm, the linear quadratic regulation algorithm lacks the ability to apply constraints to the controlled vehicle and to prevent model mismatch through rolling optimization. In another approach, a model predictive control algorithm is used to track and control the lateral and heading errors of agricultural machinery based on a two-degree-of-freedom dynamic model; but the dynamic model adopted is complex, places high demands on the computing power of the main chip, and exhibits a certain delay in practical application. In addition, that method can only adjust the corresponding weight parameters in the algorithm by experience, which is inefficient and makes it difficult to achieve an optimal effect. In yet another approach, a neural network is combined with a proportional-integral-derivative algorithm for driving control of an unmanned vehicle; although better robustness and adaptability can be obtained, the driving trajectory is predicted using only a kinematic model, so the vehicle performs poorly in high-speed application scenarios.
Disclosure of Invention
In order to solve the problem that, in unmanned driving, it is difficult to obtain the optimal control quantity by experience to control the vehicle, the present application provides an unmanned control method, an unmanned control apparatus, and a server:
in a first aspect, the present application provides an unmanned control method, the method comprising:
acquiring reference working condition data of the current vehicle under the environment;
determining an error between the working condition data of the current time period of the current vehicle and the reference working condition data;
optimizing the reference working condition data based on an optimization model and the error to obtain a state weight of state data in the reference working condition data of the current vehicle driving control and a control weight of control data in the reference working condition data;
determining control data of the current vehicle based on the state weight, the control weight, state data and control data in the reference working condition data and state data of the current time period of the current vehicle;
controlling driving of the current vehicle using the control data;
the optimization model comprises a model obtained by training a preset deep learning network based on working condition training data, wherein the working condition training data comprise working condition data of different time periods under different environments, state weights of state data in the working condition data and control weights of control data in reference working condition data.
Wherein, the acquiring of the reference working condition data of the current vehicle under the environment comprises:
acquiring environmental data of a current vehicle through an environmental perception model of the vehicle;
matching a proper working condition label in a working condition training database according to the environmental data of the current vehicle, wherein the working condition training database comprises the environmental data of the vehicle, the working condition label and reference working condition data indexed by the label;
and indexing to corresponding reference working condition data in the working condition training database according to the working condition labels.
Specifically, the method further comprises:
acquiring state data of working condition training data in different environments and control data corresponding to the state data of the working condition training data;
determining an error between the working condition training data and the reference working condition data;
training a deep learning network model by the working condition training data based on the error between the working condition training data and the reference working condition data to obtain predicted state weights of state data and predicted control weights of control data in the working condition training data under different environments;
determining the predictive control data of the current vehicle by using the predictive state weight, the predictive control weight, the working condition training data and the reference working condition data;
determining errors of the reference working condition data and the working condition training data, wherein the errors comprise errors of state data in the reference working condition data and state data in the working condition training data, and errors of control data in the reference working condition data and predictive control data in the working condition training data;
judging whether the error value meets a preset condition or not;
when the judgment result is negative, adjusting the network parameters in the deep learning network model, and repeating the steps of training the deep learning network model and determining the predictive control data;
and when the judgment result is yes, taking the current deep learning network model as the current optimization model.
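The iterative procedure above, train the network, evaluate the error, adjust the parameters when the preset condition is not met, and stop when it is met, can be sketched as follows. The one-parameter model, learning rate, and error threshold are illustrative assumptions standing in for the deep learning network and its preset condition, not values from this disclosure.

```python
# Minimal sketch of the disclosed train / check-error / adjust-parameters loop.
# A one-parameter linear model stands in for the deep learning network model.

def train_until_condition(samples, threshold=1e-4, lr=0.1, max_iters=10000):
    """Fit w so that the prediction w*x matches the reference y for each sample.

    Mirrors the loop in the disclosure: compute the error against the
    reference data; if it does not meet the preset condition, adjust the
    network parameter and repeat; otherwise keep the current model.
    """
    w = 0.0  # initial "network parameter"
    err = float("inf")
    for _ in range(max_iters):
        # mean squared error between predictions and the reference data
        err = sum((w * x - y) ** 2 for x, y in samples) / len(samples)
        if err < threshold:  # preset condition met: keep the current model
            break
        # gradient of the error w.r.t. w; adjust the parameter and repeat
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w, err

# "Reference working condition data" generated by y = 2x, for illustration.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, err = train_until_condition(samples)
```

With this toy data the loop converges in a few iterations, illustrating how the judgment step terminates training once the error meets the preset condition.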
Specifically, determining the control data of the current vehicle based on the state weight, the control weight, and the state data and control data in the reference working condition data includes:
setting an optimization objective function

$$J = \sum_{i=1}^{N_p} \left\| \tilde{\chi}(k+i) \right\|_Q^2 + \sum_{i=0}^{N_c-1} \left\| \tilde{u}(k+i) \right\|_R^2$$

wherein $N_p$ is the prediction time domain, $N_c$ is the control time domain, $Q$ represents the state weight of the state data in the reference working condition data of the current vehicle driving control, $R$ represents the control weight of the control data in the reference working condition data, and $k+i$ represents the $i$th step after the time $k$; $\tilde{\chi}(k+i)$ is the deviation of the vehicle's horizontal and vertical coordinates $x$, $y$ and heading angle $\varphi$ at the moment $k+i$ from the coordinates $x_{ref}$, $y_{ref}$ and heading angle $\varphi_{ref}$ in the reference working condition data, and $\tilde{u}(k+i)$ is the deviation of the control quantity at the moment $k+i$ from the control quantity in the reference working condition data, wherein the control quantity comprises the vehicle center speed $v$ and the vehicle front wheel steering angle $\delta$;
determining driving control constraint conditions;
performing quadratic programming on the optimization objective function based on the driving control constraint condition to obtain a control quantity at the k + i moment;
and taking the control quantity at the k + i moment as the driving control data of the current vehicle.
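As a toy illustration of minimizing a state-plus-control quadratic cost under a driving control constraint, consider a one-step horizon with a scalar state deviation and a one-step model x(k+1) = x0 + u: the unconstrained minimizer of Q·(x0+u)² + R·u² has a closed form, and the constraint is enforced by projection. The one-step model and the numeric values are assumptions for illustration, not the patent's full quadratic program.

```python
def one_step_mpc(x0, Q, R, u_min, u_max):
    """Minimize J(u) = Q*(x0 + u)**2 + R*u**2 subject to u_min <= u <= u_max.

    x0 is the current deviation from the reference working condition.
    Setting dJ/du = 2*Q*(x0 + u) + 2*R*u = 0 gives the unconstrained
    optimum; the driving control constraint is then applied by clipping.
    """
    u_star = -Q * x0 / (Q + R)             # unconstrained optimum
    return min(max(u_star, u_min), u_max)  # project onto the constraint set

# A larger state weight Q drives the deviation down more aggressively,
# until the control constraint (e.g. a steering limit) becomes active.
u = one_step_mpc(x0=1.0, Q=10.0, R=1.0, u_min=-0.5, u_max=0.5)
```

Here the unconstrained optimum (about -0.909) violates the bound, so the constrained solution sits on the boundary, which is exactly the role the driving control constraint plays in the quadratic program above.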
Specifically, the preset deep learning network is configured to include:
a neural network of an input layer, a hidden layer, and an output layer;
each neuron of the input layer is connected with each neuron of the hidden layer, and each neuron of the hidden layer is connected with each neuron of the output layer.
Another aspect provides an unmanned control apparatus, the apparatus comprising:
the reference working condition data acquisition module is used for acquiring reference working condition data under the environment where the current vehicle is located;
the first error determination module is used for determining an error between the working condition data of the current time period of the current vehicle and the reference working condition data;
the optimization module is used for optimizing the reference working condition data based on an optimization model and the error to obtain the state weight of the state data in the reference working condition data of the current vehicle driving control and the control weight of the control data in the reference working condition data;
and the control module is used for determining the control data of the current vehicle based on the state weight, the control weight, the state data and the control data in the reference working condition data and the state data of the current time period of the current vehicle.
Specifically, the reference condition data obtaining module includes:
the data acquisition unit is used for acquiring environmental data of the current vehicle through an environmental perception model of the vehicle;
the label matching unit is used for matching a proper working condition label in a working condition training database according to the environmental data of the current vehicle;
and the index unit is used for indexing to corresponding reference working condition data in the working condition training database according to the working condition labels.
Specifically, the apparatus further comprises:
the working condition training data acquisition module is used for acquiring state data of working condition training data in different environments and control data corresponding to the state data of the working condition training data;
the second error determining module is used for determining the error between the working condition training data and the reference working condition data;
the training module is used for training the deep learning network model based on the error between the working condition training data and the reference working condition data to obtain the predicted state weight of the state data and the predicted control weight of the control data in the working condition training data under different environments;
the control data determining module is used for determining the predictive control data of the current vehicle by utilizing the predictive state weight, the predictive control weight, the working condition training data and the reference working condition data;
the third error determination module is used for determining errors of the reference working condition data and the working condition training data, wherein the errors comprise errors of state data in the reference working condition data and state data in the working condition training data, and errors of control data in the reference working condition data and predictive control data in the working condition training data;
the judging module is used for judging whether the error value meets a preset condition or not;
the adjusting module is used for adjusting the network parameters in the deep learning network model when the judgment result is negative, and repeating the steps of training the deep learning network model and determining the predictive control data;
and the optimization model determining module is used for taking the current deep learning network model as the current optimization model when the judgment result is positive.
Specifically, the control module includes:
an optimization target setting unit for setting an optimization target function;
a constraint condition setting unit for determining a driving control constraint condition;
a control quantity obtaining unit, configured to perform quadratic programming on the optimization objective function based on the driving control constraint condition, to obtain a control quantity at a k + i moment;
and the driving control unit is used for taking the control quantity at the moment k + i as the driving control data of the current vehicle.
A third aspect provides an unmanned control server comprising a processor and a memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by the processor to implement the unmanned control method of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present application or the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart of an unmanned control method provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of determining control data of a vehicle according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a training process of an optimization model for unmanned control provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an unmanned control device provided by an embodiment of the present application;
FIG. 5 is a block diagram of a reference condition data obtaining module according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of a control module provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a training apparatus for an optimization model of unmanned control according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
To enable more engineers to easily understand and apply the technical solution of the present application, its working principle will be further explained below with reference to specific embodiments.
The unmanned system comprises an environment sensing device, a perception and positioning device, a decision planning device, and an execution control device. The environment sensing device collects environmental parameters through the vehicle-mounted sensors and passes them to the next-level module. The perception and positioning device uses the received environmental parameters to perform lane line detection, traffic sign or signal lamp detection and classification, target identification and tracking, free-space detection and positioning, and the like, and passes the processed results on. The decision planning device receives and analyzes the processed results and performs path planning in the map, motion prediction, behavior decision, or trajectory planning. The execution control device calculates the control data of the vehicle according to the path planning, motion prediction, behavior decision, or trajectory planning of the decision planning device, and performs predictive control on the vehicle with the calculated control data. At present, in the aspect of execution control, a method combining longitudinal proportional-integral-derivative control with lateral linear quadratic control is applied; splitting the lateral and longitudinal control quantities between two different controllers that must cooperatively control the vehicle brings great complexity and instability to the system, and lacks the ability to apply constraints to the controlled vehicle and to continuously optimize.
To address the problems of the method combining longitudinal proportional-integral-derivative control with lateral linear quadratic control, a deep learning training model can be combined with MPC (Model Predictive Control): an optimization model is obtained in advance by training on a large amount of working condition training data, and the state weight and control weight for the actual driving environment are obtained using the optimization model, so that model predictive control derives the control data under preset constraints and controls the driving of the vehicle.
The following specifically describes the implementation process of the unmanned control combining the deep learning training model and the MPC with reference to fig. 1:
s101: and acquiring reference working condition data of the current vehicle under the environment.
The working condition data refers to the operating state of the vehicle under conditions directly related to the vehicle's actions, and comprises state data and control data. The state data comprise the driving state of the vehicle, such as the position of the vehicle and the heading of the vehicle; the control data comprise the data output by the vehicle's control device to control the vehicle, such as the commanded driving speed and the steering wheel angle.
Specifically, the obtaining of the reference working condition data of the current vehicle in the environment includes:
s1011: and acquiring the environmental data of the current vehicle through the environmental perception model of the vehicle.
The environmental data of the vehicle are collected through the vehicle-mounted sensors, which include a camera, millimeter-wave/ultrasonic radar, lidar, a global positioning system, and the like. The environmental data can include driving lanes, traffic signs, signal lamps, weather conditions, and the like. An environment perception model is established using the collected environmental data, which are then processed and classified.
S1013: and matching a proper working condition label in the working condition training database according to the environmental data of the current vehicle.
The working condition training database comprises environmental data of the vehicle, working condition labels, and reference working condition data indexed by those labels. The working condition label matching the environmental data processed and classified in the previous step is looked up in the working condition training database.
S1015: and indexing the working condition labels to corresponding reference working condition data in the working condition training database.
S103: and determining the error between the working condition data of the current time period of the current vehicle and the reference working condition data.
Specifically, the error includes the error between the state data of the working condition data of the current time period and the state data in the reference working condition data, denoted

$$\tilde{\chi} = \begin{bmatrix} x - x_{ref} & y - y_{ref} & \varphi - \varphi_{ref} \end{bmatrix}^T$$

and the error between the control data of the working condition data of the current time period and the control data in the reference working condition data, denoted

$$\tilde{u} = \begin{bmatrix} v - v_{ref} & \delta - \delta_{ref} \end{bmatrix}^T$$

wherein $x$ and $y$ are the coordinates of the vehicle on the x-axis and y-axis in a two-dimensional coordinate system, $\varphi$ is the heading angle of the vehicle, $x_{ref}$, $y_{ref}$ and $\varphi_{ref}$ are respectively the reference abscissa, ordinate and heading angle of the state data in the reference working condition data, and $v_{ref}$ and $\delta_{ref}$ are respectively the reference speed and reference vehicle front wheel steering angle in the control data.
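The two deviation vectors above can be computed directly from the current state and control data and the indexed reference data. A minimal sketch, where the tuple layout and dictionary keys are assumptions made for illustration:

```python
def condition_errors(state, control, ref):
    """Deviation of the current working condition data from the reference.

    state   = (x, y, phi): position and heading angle of the vehicle
    control = (v, delta):  center speed and front wheel steering angle
    ref carries the corresponding reference quantities.
    """
    x, y, phi = state
    v, delta = control
    state_err = (x - ref["x_ref"], y - ref["y_ref"], phi - ref["phi_ref"])
    control_err = (v - ref["v_ref"], delta - ref["delta_ref"])
    return state_err, control_err

ref = {"x_ref": 0.0, "y_ref": 0.0, "phi_ref": 0.0,
       "v_ref": 10.0, "delta_ref": 0.0}
s_err, c_err = condition_errors((1.0, 2.0, 0.1), (12.0, 0.05), ref)
```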
S105: and optimizing the reference working condition data based on the optimization model and the error to obtain the state weight of the state data in the reference working condition data of the current vehicle driving control and the control weight of the control data in the reference working condition data. The state weight and the control weight are bases for model predictive control in the following step, the state weight is a weight optimized according to the state data in the unmanned vehicle control, and correspondingly, the control weight is a weight optimized according to the control data.
Specifically, the optimization model includes a model obtained by training a preset deep learning network based on working condition training data. The deep learning network is preset into a neural network comprising an input layer, a hidden layer and an output layer, wherein each neuron of the input layer is respectively connected with each neuron of the hidden layer, and each neuron of the hidden layer is respectively connected with each neuron of the output layer. The working condition training data comprises working condition data of different time periods under different environments, state weights in the working condition data and control weights of control data in the reference working condition data.
For example, the deep learning network can be configured as a BP neural network, a multi-layer feedforward network with one-way signal propagation; as shown in the drawings, the network can be divided into an input layer, a hidden layer, and an output layer. In a BP neural network, each layer's output is a linear function of the previous layer's input. Considering that data is not linearly separable in practical applications, a nonlinear factor can be introduced by adding activation functions. For example, the activation function of the hidden layer can be set to the Sigmoid (S-shaped growth curve) function, which maps a real number into the interval (0, 1) and can be used for classification; the activation function of the output layer can be set to the ReLU (rectified linear unit) function, which corrects the previous layer's results: all inputs less than 0 are set to 0, and inputs greater than 0 are passed through unchanged.
It should be noted that the BP neural network described above is only an example of a deep learning network, and in practical applications, the type of the deep learning network is not limited to the above.
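A minimal forward pass of such a fully connected network, with sigmoid activation on the hidden layer and ReLU on the output layer, might look like the following. The layer sizes and weight values are arbitrary illustrative assumptions, and backpropagation training is omitted.

```python
import math

def sigmoid(z):
    """S-shaped growth curve: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    """Rectified linear unit: inputs < 0 become 0, others pass unchanged."""
    return max(0.0, z)

def forward(inputs, w_hidden, w_output):
    """Fully connected forward pass: every input neuron feeds every hidden
    neuron (sigmoid), every hidden neuron feeds every output neuron (ReLU)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return [relu(sum(w * h for w, h in zip(row, hidden)))
            for row in w_output]

# 2 inputs -> 3 hidden -> 2 outputs, with arbitrary illustrative weights.
w_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
w_output = [[1.0, -0.5, 0.2], [0.3, 0.7, -1.0]]
out = forward([1.0, 0.5], w_hidden, w_output)
```

Because of the ReLU output layer, every element of `out` is non-negative, matching the correction behavior described above.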
S107: and determining the control data of the current vehicle based on the state weight, the control weight, the state data and the control data in the reference working condition data and the state data of the current time period of the current vehicle.
Firstly, a mathematical prediction model of the unmanned control of the vehicle is established. The mathematical prediction model for calculating the horizontal and vertical coordinates of the center of the vehicle and its heading angle in a two-dimensional coordinate system can be set as:

$$\begin{cases} \dot{x} = v\cos\varphi \\ \dot{y} = v\sin\varphi \\ \dot{\varphi} = \dfrac{v\tan\delta}{l} \end{cases}$$

wherein $l$ represents the wheelbase, $v$ is the center vehicle speed, $\delta$ is the front wheel steering angle, and $\varphi$ is the heading angle; $\dot{x}$ and $\dot{y}$ respectively represent the derivatives of the horizontal and vertical coordinates of the center of the vehicle, from which the coordinates $x$ and $y$ are obtained. This mathematical prediction model is simple, which improves the real-time performance of the control method in the application.
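Integrating the kinematic model above with a simple forward-Euler step gives the predicted pose at the next instant. The step size and vehicle parameters below are illustrative assumptions.

```python
import math

def kinematic_step(x, y, phi, v, delta, wheelbase, dt):
    """One forward-Euler step of the kinematic model:
    x' = v*cos(phi), y' = v*sin(phi), phi' = v*tan(delta)/l."""
    x += v * math.cos(phi) * dt
    y += v * math.sin(phi) * dt
    phi += v * math.tan(delta) / wheelbase * dt
    return x, y, phi

# Driving straight (delta = 0) along the x-axis for one 0.1 s step at 10 m/s.
pose = kinematic_step(0.0, 0.0, 0.0, 10.0, 0.0, wheelbase=2.7, dt=0.1)
```

With zero steering angle the heading stays constant and the vehicle advances purely along the x-axis, as the model predicts.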
Then, the state space equation of the vehicle is selected as

$$\tilde{\chi}(k+1) = A(k)\,\tilde{\chi}(k) + B(k)\,\tilde{u}(k)$$

and the state space equation is linearized and discretized.

Wherein $\tilde{\chi}(k+1)$ is the deviation of the vehicle's horizontal and vertical coordinates $x$, $y$ and heading angle $\varphi$ at the moment $k+1$ from the coordinates $x_{ref}$, $y_{ref}$ and heading angle $\varphi_{ref}$ in the reference working condition data; $\tilde{\chi}(k)$ indicates the deviation of the vehicle state data at time $k$ from the state data in the reference working condition data; $\tilde{u}(k)$ represents the deviation of the vehicle control data at time $k$ from the control data in the reference working condition data; and $A(k)$, $B(k)$ are respectively the coefficients of $\tilde{\chi}(k)$ and $\tilde{u}(k)$ in the state space equation. Since $\tilde{\chi}(k)$, $\tilde{u}(k)$, $A(k)$ and $B(k)$ are known quantities, substituting them into the equation yields $\tilde{\chi}(k+1)$; the coordinates $x_{ref}$, $y_{ref}$ and heading angle $\varphi_{ref}$ in the reference working condition data are also known quantities, so the vehicle's horizontal and vertical coordinates $x$, $y$ and heading angle $\varphi$ at the moment $k+1$ can be obtained by solving, i.e. the control data of the current vehicle.
The equation is linearized so that a linear solving method such as the Simplex algorithm can be applied; a benefit of the linearization is that globally optimal control data can be obtained efficiently. The equation is also discretized, for example by using the STL algorithm, which can reduce the complexity of the calculation, improve the calculation efficiency, and improve the real-time performance of the control method.
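One common way to carry out this linearization and discretization is to take the Jacobians of the kinematic model about a reference state and control, then apply a forward-Euler discretization with sampling time T. The sketch below follows that standard approach (the patent's STL algorithm is not reproduced here, and the reference values are hypothetical):

```python
import math
import numpy as np

def linearize_discretize(v_ref, phi_ref, delta_ref, wheelbase, T):
    """Linearize dx/dt = v*cos(phi), dy/dt = v*sin(phi),
    dphi/dt = (v/l)*tan(delta) about a reference state/control,
    then discretize: A(k) = I + T*Ac, B(k) = T*Bc."""
    # Continuous-time Jacobian with respect to the state [x, y, phi].
    Ac = np.array([[0.0, 0.0, -v_ref * math.sin(phi_ref)],
                   [0.0, 0.0,  v_ref * math.cos(phi_ref)],
                   [0.0, 0.0,  0.0]])
    # Continuous-time Jacobian with respect to the control [v, delta].
    Bc = np.array([[math.cos(phi_ref), 0.0],
                   [math.sin(phi_ref), 0.0],
                   [math.tan(delta_ref) / wheelbase,
                    v_ref / (wheelbase * math.cos(delta_ref) ** 2)]])
    A = np.eye(3) + T * Ac   # discrete-time state coefficient A(k)
    B = T * Bc               # discrete-time control coefficient B(k)
    return A, B

A, B = linearize_discretize(v_ref=10.0, phi_ref=0.0, delta_ref=0.0,
                            wheelbase=2.7, T=0.1)
```

The resulting A(k) and B(k) are exactly the known coefficient matrices that are substituted into the deviation-form state space equation above.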
Specifically, after ξ̃(k+1) is obtained, the steps shown in FIG. 2 are performed:

S1071: setting an optimization objective function

J(k) = Σ_{i=1}^{Np} ‖ξ̃(k+i)‖²_Q + Σ_{i=1}^{Nc} ‖ũ(k+i)‖²_R

wherein Np is the prediction time domain, Nc is the control time domain, Q represents the state weight of the state data in the reference working condition data referring to the current vehicle driving control, R represents the control weight of the control data in the reference working condition data, k+i represents the ith step after time k, ξ̃(k+i) is the deviation of the horizontal and vertical coordinates x, y and heading angle φ of the vehicle at time k+i from the horizontal and vertical coordinates x_ref, y_ref and heading angle φ_ref in the reference working condition data, and ũ(k+i) is the deviation of the control quantity at time k+i from the control quantity in the reference working condition data, wherein the control quantity comprises the vehicle center vehicle speed v and the vehicle front wheel steering angle δ.
S1073: a driving control constraint is determined.
Specifically, the driving control constraint condition is a constraint condition on the control quantity U, and may be set to U_MIN ≤ U ≤ U_MAX.
S1075: and carrying out quadratic programming on the optimization objective function based on the driving control constraint condition to obtain the control quantity at the k + i moment.
Specifically, quadratic programming is a class of mathematical programming problems within nonlinear programming. In a specific embodiment, the control quantity U that minimizes the optimization objective function may be solved by using an active set method of quadratic programming, and the control quantity at time k+i is then obtained subject to the constraint condition on the control quantity U.
S1077: and taking the control quantity at the k + i moment as the driving control data of the current vehicle.
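The quadratic program in steps S1071 to S1075 can be illustrated with a deliberately small stand-in: a projected-gradient solver for a box-constrained quadratic cost. This is not the active set method named above, and the cost, bounds, and step size are hypothetical; it only shows how a constraint U_MIN ≤ U ≤ U_MAX shapes the optimal control quantity:

```python
import numpy as np

def solve_box_qp(H, f, u_min, u_max, steps=500, lr=0.05):
    """Minimize 0.5*u^T H u + f^T u subject to u_min <= u <= u_max,
    by projected gradient descent (a simple stand-in for an active set method)."""
    u = np.clip(np.zeros_like(f), u_min, u_max)
    for _ in range(steps):
        grad = H @ u + f                      # gradient of the quadratic cost
        u = np.clip(u - lr * grad, u_min, u_max)  # project back into the box
    return u

# Scalar example: cost (u - 2)^2 = 0.5*u^T*(2I)*u - 4*u + const.
# The unconstrained optimum is u = 2, but the driving control constraint
# U_MIN <= U <= U_MAX clips the solution to the upper bound 1.0.
H = np.array([[2.0]])
f = np.array([-4.0])
u_opt = solve_box_qp(H, f, u_min=np.array([-1.0]), u_max=np.array([1.0]))
```

When the unconstrained minimizer lies outside the box, the constrained optimum sits on the boundary, which is exactly the situation an active set method detects by tracking which constraints are active.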
S109: the driving of the current vehicle is controlled using the control data.
In other embodiments, in order to improve the real-time performance of the unmanned control method, in the embodiment of the application the vehicle performs extensive training of the preset deep learning network in different time periods under different environments to obtain a large number of optimization models, and the optimization models are classified and stored according to labels. With reference to fig. 3, the unmanned control method further includes:
s201: and acquiring state data of the working condition training data in different environments and control data corresponding to the state data of the working condition training data.
S203: and training the deep learning network model by the working condition training data based on the error between the working condition training data and the reference working condition data to obtain the predicted state weight of the state data and the predicted control weight of the control data in the working condition training data under different environments.
Specifically, the hierarchy of the neural network in the deep learning network model and the activation function of each hierarchy are preset. A target loss function is set, for example M(k) = (r(k) − Y(k))²; the target loss function is a function of the error between the working condition training data and the reference working condition data and is not limited to the form M(k) = (r(k) − Y(k))². Then a learning rate is set, and the weight coefficients of each layer of the deep learning network are adjusted by using the loss function combined with a gradient descent method, wherein the correction coefficient used in the adjustment process is obtained based on the learning rate. After adjustment, the predicted state weight of the state data and the predicted control weight of the control data are obtained by using the updated deep learning network.
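A minimal sketch of the weight adjustment just described: one gradient-descent step on a squared target loss M(k) = (r(k) − Y(k))² for a single linear weight. The real network adjusts every layer's weight coefficients the same way; the learning rate and data here are hypothetical:

```python
def gradient_descent_step(w, x, r, learning_rate):
    """One update of a scalar weight w with prediction Y = w*x and target r,
    using the loss M = (r - Y)**2, whose gradient is dM/dw = -2*(r - Y)*x."""
    y = w * x
    grad = -2.0 * (r - y) * x
    return w - learning_rate * grad

# Repeated updates drive the prediction toward the target r = 3.0.
w = 0.0
for _ in range(100):
    w = gradient_descent_step(w, x=1.0, r=3.0, learning_rate=0.1)
```

Each step moves the weight a fraction (set by the learning rate) of the way toward the loss minimum, which is the "correction coefficient obtained based on the learning rate" role described above.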
S205: and determining the predictive control data of the current vehicle by using the predictive state weight, the predictive control weight, the working condition training data and the reference working condition data.
Specifically, the MPC is used to obtain the predictive control data according to the predicted state weight and the predictive control weight obtained in the previous step, the working condition training data, and the reference working condition data.
S207: and determining the error of the reference working condition data and the working condition training data.
The current condition training data is the condition training data including the updated predictive control data, and therefore the error in this step is not equal to the error in step S203.
S209: and judging whether the error value meets a preset condition or not.
S211: and when the judgment result is negative, adjusting the network parameters in the deep learning network model, and repeating the steps of training the deep learning network model and determining the predictive control data.
Specifically, adjusting the network parameters in the deep learning network model includes adjusting the weighting coefficients of each layer in the network.
S213: and when the judgment result is yes, taking the current deep learning network model as the current optimization model.
After a large amount of training working condition data is obtained through training, it is stored in a database in a classified manner, and working condition labels and indexes corresponding one-to-one to the reference working condition data are established, so that when the method is applied in an actual scene a working condition label can be matched directly and the matched label indexed to the corresponding reference working condition data, improving control efficiency.
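The classify-and-index scheme described above can be sketched as a simple mapping from working condition labels to stored reference working condition data (the labels and weight values below are hypothetical placeholders, not data from the patent):

```python
# Hypothetical working condition database: each label indexes the reference
# working condition data (here, trained weights) for that environment.
condition_database = {
    "urban_rain_night": {"state_weight": 0.8, "control_weight": 0.2},
    "highway_clear_day": {"state_weight": 0.5, "control_weight": 0.5},
}

def match_reference_data(environment_label):
    """Match the current environment to a working condition label and index
    the corresponding reference working condition data, or None if unmatched."""
    return condition_database.get(environment_label)

ref = match_reference_data("highway_clear_day")
```

A dictionary lookup is O(1), which is why indexing pre-trained models by label, rather than retraining online, helps the real-time performance the patent emphasizes.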
It can be seen from the above embodiments that, by obtaining reference working condition data for the environment of the current vehicle, optimizing the reference working condition data to obtain a state weight and a control weight, determining the control data of the current vehicle with the MPC from the obtained state weight, the control weight, the state data and control data in the reference working condition data, and the state data of the current time period of the current vehicle, and finally controlling the driving of the vehicle with that control data, the MPC can be optimized: weights are distributed reasonably over the state data and control data, and parameters are output in a targeted manner and fed into the MPC for optimal control. In the above embodiments, the unmanned vehicle can train deep learning network models of various forms under different working conditions, and appropriate control data is calculated by using the generalization and fitting capability of the deep learning network for nonlinear parameter combinations, so that the vehicle is controlled reasonably.
An embodiment of the present application further provides an unmanned control device, as shown in fig. 4, the device includes:
the reference working condition data acquisition module 1 is used for acquiring reference working condition data of the current vehicle in the environment;
the first error determination module 2 is used for determining an error between the working condition data of the current time period of the current vehicle and the reference working condition data;
the optimization module 3 is used for optimizing the reference working condition data based on an optimization model and the error to obtain the state weight of the state data in the reference working condition data of the current vehicle driving control and the control weight of the control data in the reference working condition data;
and the control module 4 is used for determining the control data of the current vehicle based on the state weight, the control weight, the state data and the control data in the reference working condition data and the state data of the current time period of the current vehicle.
As shown in fig. 5, the reference condition data acquisition module 1 includes:
the data acquisition unit 6 is used for acquiring environmental data of the current vehicle through an environmental perception model of the vehicle;
the label matching unit 7 is used for matching a proper working condition label in a working condition training database according to the environmental data of the current vehicle;
and the index unit 8 is used for indexing the working condition labels to corresponding reference working condition data in the working condition training database.
As shown in fig. 6, the control module 4 includes:
an optimization target setting unit 9 for setting an optimization target function;
a constraint condition setting unit 10 for determining a driving control constraint condition;
a control quantity obtaining unit 11, configured to perform quadratic programming on the optimization objective function based on the driving control constraint condition, to obtain a control quantity at a k + i moment;
and a driving control unit 12 for using the control amount at the time k + i as driving control data of the current vehicle.
As shown in fig. 7, the apparatus further includes a training apparatus for the unmanned control optimization model:
the working condition training data acquisition module 13 is used for acquiring control data corresponding to the state data of the working condition training data;
a second error determination module 14, configured to determine an error between the operating condition training data and the reference operating condition data;
the training module 15 is configured to train a deep learning network model based on an error between the working condition training data and the reference working condition data, and obtain predicted state weights of state data and predicted control weights of control data in the working condition training data in different environments;
the control data determining module 16 is configured to determine the predictive control data of the current vehicle by using the predicted state weight, the predictive control weight, the working condition training data, and the reference working condition data;
a third error determining module 17, configured to determine an error between the reference working condition data and the working condition training data, where the error includes an error between state data in the reference working condition data and state data in the working condition training data, and an error between control data in the reference working condition data and predictive control data in the working condition training data;
a judging module 18, configured to judge whether the error value satisfies a preset condition;
an adjusting module 19, configured to adjust network parameters in the deep learning network model, and repeat the steps of training the deep learning network model and determining prediction control data;
and the optimization model determining module 20 is used for taking the current deep learning network model as the current optimization model.
An embodiment of the present application further provides an unmanned control server, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the above unmanned control method.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by operating the software programs and modules stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs needed by functions and the like; the storage data area may store data created according to use of the apparatus, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor access to the memory.
Referring to fig. 8, the server 800 is configured to implement the unmanned control method provided in the foregoing embodiments; specifically, the server structure may include the unmanned control device. The server 800 may vary considerably in configuration or performance, and may include one or more Central Processing Units (CPUs) 810 (e.g., one or more processors), memory 830, and one or more storage media 820 (e.g., one or more mass storage devices) storing application programs 823 or data 822. The memory 830 and the storage medium 820 may be transient or persistent storage. The program stored in the storage medium 820 may include one or more modules, each of which may include a series of instruction operations for the server. Still further, the central processing unit 810 may be configured to communicate with the storage medium 820 to execute the series of instruction operations in the storage medium 820 on the server 800. The server 800 may also include one or more power supplies 860, one or more wired or wireless network interfaces 850, one or more input/output interfaces 840, and/or one or more operating systems 821, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
Embodiments of the present application further provide a storage medium, which may be disposed in a server to store at least one instruction, at least one program, a code set, or a set of instructions related to implementing an unmanned control method according to the method embodiments, where the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the unmanned control method provided by the above method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server among a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program codes.
The embodiment of the application provides a training server, which comprises a processor and a memory, wherein at least one instruction, at least one program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to realize the training method of the unmanned control optimization model provided by the above method embodiment.
Embodiments of the present application further provide a storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the method for training the unmanned control optimization model provided by the above method embodiments.
As can be seen from the above embodiments of the method, device, server, or storage medium for optimizing and training unmanned control and unmanned control, the method of combining the control mathematical prediction model MPC with the optimization training model in the present application can greatly improve the control capability of the unmanned vehicle, compared with the prior art, and has the characteristics of excellent real-time performance and high efficiency.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. An unmanned control method, the method comprising:
acquiring reference working condition data of the current vehicle under the environment;
determining an error between the working condition data of the current time period of the current vehicle and the reference working condition data;
optimizing the reference working condition data based on an optimization model and the error to obtain a state weight of state data in the reference working condition data of the current vehicle driving control and a control weight of control data in the reference working condition data;
determining control data of the current vehicle based on the state weight, the control weight, state data and control data in the reference working condition data and state data of the current time period of the current vehicle;
controlling driving of the current vehicle using the control data;
the optimization model comprises a model obtained by training a preset deep learning network based on working condition training data, wherein the working condition training data comprise working condition data of different time periods under different environments, state weights of state data in the working condition data and control weights of control data in reference working condition data;
the determining the control data of the current vehicle based on the state weight, the control weight, the state data and the control data in the reference working condition data and the state data of the current time period of the current vehicle comprises:
establishing a mathematical prediction model of the current vehicle unmanned control;
inputting the state data of the current time period of the current vehicle into the mathematical prediction model, and determining the horizontal and vertical coordinates of the center of the current vehicle;
determining a state space equation of the current vehicle;
and inputting the state weight, the control weight, and the state data and the control data in the reference working condition data into the state space equation, and performing linearization and discretization processing on the state space equation to obtain the control data of the current vehicle.
2. The method of claim 1, wherein the obtaining reference operating condition data of the current vehicle environment comprises:
acquiring environmental data of a current vehicle through an environmental perception model of the vehicle;
matching a proper working condition label in a working condition training database according to the environmental data of the current vehicle, wherein the working condition training database comprises the environmental data of the vehicle, the working condition label and reference working condition data indexed by the label;
and indexing corresponding reference working condition data in the working condition training database according to the working condition labels.
3. The unmanned control method of claim 1, further comprising:
acquiring state data of working condition training data in different environments and control data corresponding to the state data of the working condition training data;
determining an error between the working condition training data and the reference working condition data;
training a deep learning network model by the working condition training data based on the error between the working condition training data and the reference working condition data to obtain predicted state weights of state data and predicted control weights of control data in the working condition training data under different environments;
determining the predictive control data of the current vehicle by using the predictive state weight, the predictive control weight, the working condition training data and the reference working condition data;
determining errors of the reference working condition data and the working condition training data, wherein the errors comprise errors of state data in the reference working condition data and state data in the working condition training data, and errors of control data in the reference working condition data and predictive control data in the working condition training data;
judging whether the error value meets a preset condition or not;
when the judgment result is negative, adjusting the network parameters in the deep learning network model, and repeating the steps of training the deep learning network model and determining the predictive control data;
and when the judgment result is yes, taking the current deep learning network model as the current optimization model.
4. The unmanned control method of claim 1, wherein determining control data for the current vehicle based on the state weight, the control weight, state data in the reference condition data, and control data comprises:
setting an optimization objective function

J(k) = Σ_{i=1}^{Np} ‖ξ̃(k+i)‖²_Q + Σ_{i=1}^{Nc} ‖ũ(k+i)‖²_R

wherein Np is the prediction time domain, Nc is the control time domain, Q represents the state weight of the state data in the reference working condition data referring to the current vehicle driving control, R represents the control weight of the control data in the reference working condition data, k+i represents the ith step after time k, ξ̃(k+i) is the deviation of the horizontal and vertical coordinates x, y and heading angle φ of the vehicle at time k+i from the horizontal and vertical coordinates x_ref, y_ref and heading angle φ_ref in the reference working condition data, and ũ(k+i) is the deviation of the control quantity at time k+i from the control quantity in the reference working condition data, wherein the control quantity comprises the vehicle center vehicle speed v and the vehicle front wheel steering angle δ;
determining driving control constraint conditions;
performing quadratic programming on the optimization objective function based on the driving control constraint condition to obtain a control quantity at the k + i moment;
and taking the control quantity at the k + i moment as the driving control data of the current vehicle.
5. The method according to any one of claims 1-4, wherein the pre-defined deep learning network is configured to include:
a neural network of an input layer, a hidden layer, and an output layer;
each neuron of the input layer is connected with each neuron of the hidden layer, and each neuron of the hidden layer is connected with each neuron of the output layer.
6. An unmanned control device, the device comprising:
the reference working condition data acquisition module is used for acquiring reference working condition data under the environment where the current vehicle is located;
the first error determination module is used for determining an error between the working condition data of the current time period of the current vehicle and the reference working condition data;
the optimization module is used for optimizing the reference working condition data based on an optimization model and the error to obtain the state weight of the state data in the reference working condition data of the current vehicle driving control and the control weight of the control data in the reference working condition data;
the control module is used for determining control data of the current vehicle based on the state weight, the control weight, the state data and the control data in the reference working condition data and the state data of the current time period of the current vehicle, and controlling the driving of the current vehicle by using the control data;
the determining the control data of the current vehicle based on the state weight, the control weight, the state data and the control data in the reference working condition data and the state data of the current time period of the current vehicle comprises:
establishing a mathematical prediction model of the current vehicle unmanned control;
inputting the state data of the current time period of the current vehicle into the mathematical prediction model, and determining the horizontal and vertical coordinates of the center of the current vehicle;
determining a state space equation of the current vehicle;
and inputting the state weight, the control weight, and the state data and the control data in the reference working condition data into the state space equation, and performing linearization and discretization processing on the state space equation to obtain the control data of the current vehicle.
7. The unmanned control device of claim 6, wherein the reference condition data obtaining module comprises:
the data acquisition unit is used for acquiring environmental data of the current vehicle through an environmental perception model of the vehicle;
the label matching unit is used for matching a proper working condition label in a working condition training database according to the environmental data of the current vehicle;
and the index unit is used for indexing to corresponding reference working condition data in the working condition training database according to the working condition labels.
8. The unmanned control device of claim 6, further comprising:
the working condition training data acquisition module is used for acquiring control data corresponding to the state data of the working condition training data;
the second error determination module is used for determining the error between the working condition training data and the reference working condition data;
the training module is used for training the deep learning network model based on the error between the working condition training data and the reference working condition data to obtain the predicted state weight of the state data and the predicted control weight of the control data in the working condition training data under different environments;
the control data determining module is used for determining the predictive control data of the current vehicle by utilizing the predictive state weight, the predictive control weight, the working condition training data and the reference working condition data;
the third error determination module is used for determining errors of the reference working condition data and the working condition training data, wherein the errors comprise errors of state data in the reference working condition data and state data in the working condition training data, and errors of control data in the reference working condition data and predictive control data in the working condition training data;
the judging module is used for judging whether the error value meets a preset condition or not;
the adjusting module is used for adjusting network parameters in the deep learning network model, and repeating the steps of training the deep learning network model and determining the predictive control data;
and the optimization model determining module is used for taking the current deep learning network model as the current optimization model.
9. The unmanned control device of claim 6, wherein the control module comprises:
an optimization target setting unit for setting an optimization target function;
a constraint condition setting unit for determining a driving control constraint condition;
the control quantity obtaining unit is used for carrying out quadratic programming on the optimization objective function based on the driving control constraint condition to obtain the control quantity at the k + i moment;
and the driving control unit is used for taking the control quantity at the moment k + i as the driving control data of the current vehicle.
10. An unmanned control server, the server comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the unmanned control method of any of claims 1 to 4.
CN201910874675.1A 2019-09-17 2019-09-17 Unmanned control method and device and server Active CN110688920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910874675.1A CN110688920B (en) 2019-09-17 2019-09-17 Unmanned control method and device and server


Publications (2)

Publication Number Publication Date
CN110688920A CN110688920A (en) 2020-01-14
CN110688920B true CN110688920B (en) 2022-06-14

Family

ID=69109343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910874675.1A Active CN110688920B (en) 2019-09-17 2019-09-17 Unmanned control method and device and server

Country Status (1)

Country Link
CN (1) CN110688920B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113734182B (en) * 2020-05-29 2023-11-14 比亚迪股份有限公司 Vehicle self-adaptive control method and device
CN112462752A (en) * 2020-10-12 2021-03-09 星火科技技术(深圳)有限责任公司 Data acquisition method, equipment, storage medium and device of intelligent trolley
CN112464564A (en) * 2020-11-27 2021-03-09 北京罗克维尔斯科技有限公司 Method and device for determining vehicle dynamic parameters
CN113232658B (en) * 2021-06-28 2022-06-28 驭势(上海)汽车科技有限公司 Vehicle positioning method and device, electronic equipment and storage medium
CN113342005B (en) * 2021-08-04 2021-11-30 北京三快在线科技有限公司 Transverse control method and device for unmanned equipment
CN114670856B (en) * 2022-03-30 2022-11-25 湖南大学无锡智能控制研究院 Parameter self-tuning longitudinal control method and system based on BP neural network
CN115525054B (en) * 2022-09-20 2023-07-11 武汉理工大学 Method and system for controlling tracking of edge path of unmanned sweeper in large industrial park

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017207821A (en) * 2016-05-16 2017-11-24 トヨタ自動車株式会社 Automatic operation control system for mobile entity
CN107745709A (en) * 2017-09-26 2018-03-02 湖北文理学院 Preventing vehicle rollover pre-warning and control method, system and hardware-in-loop simulation method
CN108334086A (en) * 2018-01-25 2018-07-27 江苏大学 A kind of automatic driving vehicle path tracking control method based on soft-constraint quadratic programming MPC
CN109032131A (en) * 2018-07-05 2018-12-18 东南大学 A kind of dynamic applied to pilotless automobile is overtaken other vehicles barrier-avoiding method
CN109388138A (en) * 2017-08-08 2019-02-26 株式会社万都 Automatic driving vehicle, automatic Pilot control device and automatic Pilot control method based on deep learning
CN109624994A (en) * 2019-01-28 2019-04-16 浙江吉利汽车研究院有限公司 A kind of Vehicular automatic driving control method, device, equipment and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9760827B1 (en) * 2016-07-22 2017-09-12 Alpine Electronics of Silicon Valley, Inc. Neural network applications in resource constrained environments


Also Published As

Publication number Publication date
CN110688920A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110688920B (en) Unmanned control method and device and server
US20220363259A1 (en) Method for generating lane changing decision-making model, method for lane changing decision-making of unmanned vehicle and electronic device
CN110221611B (en) Trajectory tracking control method and device and unmanned vehicle
Farag Complex trajectory tracking using PID control for autonomous driving
US20200278686A1 (en) Iterative Feedback Motion Planning
US11092965B2 (en) Method and device for driving dynamics control for a transportation vehicle
McKinnon et al. Learn fast, forget slow: Safe predictive learning control for systems with unknown and changing dynamics performing repetitive tasks
Gao et al. Nonlinear and adaptive suboptimal control of connected vehicles: A global adaptive dynamic programming approach
CN111930015B (en) Unmanned vehicle control method and device
CN113467470B (en) Trajectory tracking control method of unmanned autonomous trolley
Ure et al. Enhancing situational awareness and performance of adaptive cruise control through model predictive control and deep reinforcement learning
US20200192307A1 (en) Control customization system, control customization method, and control customization program
Vallon et al. Data-driven strategies for hierarchical predictive control in unknown environments
Aalizadeh et al. Combination of particle swarm optimization algorithm and artificial neural network to propose an efficient controller for vehicle handling in uncertain road conditions
Joukov et al. Gaussian process based model predictive controller for imitation learning
US20240202393A1 (en) Motion planning
CN114559439B (en) Mobile robot intelligent obstacle avoidance control method and device and electronic equipment
CN114670856B (en) Parameter self-tuning longitudinal control method and system based on BP neural network
Engin et al. Neural optimal control using learned system dynamics
Tang et al. Actively learning Gaussian process dynamical systems through global and local explorations
Zhu et al. Autonomous driving vehicle control auto-calibration system: An industry-level, data-driven and learning-based vehicle longitudinal dynamic calibrating algorithm
CN114359349A (en) Lifelong learning method and system for vehicle adaptive path tracking
Zhao et al. Inverse Reinforcement Learning and Gaussian Process Regression-based Real-Time Framework for Personalized Adaptive Cruise Control
Németh et al. Hierarchical control design of automated vehicles for multi-vehicle scenarios in roundabouts
Alcalá et al. Gain scheduling lpv control scheme for the autonomous guidance problem using a dynamic modelling approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant