CN112417598A - Multi-source fusion vehicle state parallel estimation method - Google Patents

Multi-source fusion vehicle state parallel estimation method

Info

Publication number
CN112417598A
CN112417598A (application number CN202011309519.XA)
Authority
CN
China
Prior art keywords
vehicle
neural network
steering wheel
longitudinal acceleration
estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011309519.XA
Other languages
Chinese (zh)
Inventor
查云飞
刘鑫烨
侯乃仁
吴昊
权晓玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian University of Technology
Original Assignee
Fujian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian University of Technology filed Critical Fujian University of Technology
Priority to CN202011309519.XA
Publication of CN112417598A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/15 Vehicle, aircraft or watercraft design
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Biology (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Mathematics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Medical Informatics (AREA)
  • Steering Control In Accordance With Driving Conditions (AREA)

Abstract

The invention relates to a multi-source fusion vehicle state parallel estimation method, which comprises the following steps: step S1, acquiring, at each moment during driving, the steering wheel angle and longitudinal acceleration of the vehicle together with the corresponding true values, preprocessing them, and dividing them proportionally into a training set and a test set; step S2, constructing a neural network and training it on the training set; step S3, constructing an extended Kalman filter estimation model; step S4, during driving of the vehicle, acquiring the steering wheel angle and the longitudinal acceleration with a steering wheel angle sensor and a longitudinal acceleration sensor, and transmitting them to the trained neural network estimation model and the extended Kalman filter estimation model, respectively, for processing; step S5, performing signal fusion and optimization on the estimation results obtained from the two models to obtain the final estimation result. The method can quickly and effectively obtain vehicle state estimates, which is decisive for the subsequent precise control of vehicle active safety.

Description

Multi-source fusion vehicle state parallel estimation method
Technical Field
The invention relates to the field of vehicle state parameter estimation design, in particular to a multi-source fusion vehicle state parallel estimation method.
Background
With the development of the automobile industry, automobiles are no longer simple mechanical bodies; they are becoming increasingly intelligent and automated. Various sensors are installed on the vehicle to collect the external information the vehicle needs. From the collected information the vehicle can judge its current state and then invoke vehicle dynamics control systems such as ESP and ABS. On mass-production vehicles, manufacturing cost must be controlled, so expensive sensors cannot be used to collect all of the vehicle's external information. Parameters required for active or passive vehicle control therefore have to be estimated from the signals collected by the sensors already fitted to current mass-production vehicles, for example the centroid slip angle, the yaw rate, and the longitudinal vehicle speed.
Disclosure of Invention
In view of the above, the present invention provides a multi-source fusion vehicle state parallel estimation method that can quickly and effectively obtain vehicle state estimates.
To achieve this purpose, the invention adopts the following technical scheme:
a multi-source fusion vehicle state parallel estimation method comprises the following steps:
Step S1, acquiring, at each moment during driving, the steering wheel angle and longitudinal acceleration of the vehicle together with the corresponding true values, preprocessing them, and dividing them proportionally into a training set and a test set;
Step S2, constructing a neural network and training it on the training set to obtain a trained neural network estimation model;
Step S3, constructing an extended Kalman filter estimation model;
Step S4, during driving of the vehicle, acquiring the steering wheel angle and the longitudinal acceleration with a steering wheel angle sensor and a longitudinal acceleration sensor, and transmitting them to the trained neural network estimation model and the extended Kalman filter estimation model, respectively, for processing;
Step S5, performing signal fusion and optimization on the estimation results obtained from the trained neural network estimation model and the extended Kalman filter estimation model to obtain the final estimation result.
Further, the true values corresponding to the steering wheel angle and the longitudinal acceleration comprise the centroid slip angle, the yaw rate and the longitudinal vehicle speed.
Further, the neural network is a radial basis function (RBF) neural network used to predict the relevant vehicle parameters; its hidden layer has q = 9 nodes and its input and output layers each have 1 node.
Further, the neural network training process is divided into two stages: an unsupervised learning stage and a supervised learning stage.
Further, the unsupervised learning stage specifically comprises:
(1) Give the initial center vector c_i(0) of each hidden node (i = 1, 2, ..., q), the initial learning rate β(0) (0 < β(0) < 1), and the threshold ε used to decide when to stop the iteration.
(2) Compute the Euclidean distances and find the node with the minimum distance:
$$d_r(k) = \min_{1 \le i \le q} \lVert x(k) - c_i(k-1) \rVert$$
where k is the sample index and r is the index of the hidden node whose center vector c_i(k-1) is closest to the input sample x(k).
(3) Adjust the centers:
$$c_r(k) = c_r(k-1) + \beta(k)\,[x(k) - c_r(k-1)], \qquad c_i(k) = c_i(k-1) \ (i \ne r)$$
where β(k) is the learning rate, β(k) = β(k-1)/(1 + int(k/q))^{1/2}, and int(·) denotes rounding.
(4) Judge the clustering quality: repeat steps (2) and (3) for all samples k (k = 1, 2, ..., N) until the following condition is satisfied, at which point clustering is finished:
$$J_c = \sum_{k=1}^{N} \lVert x(k) - c_r(k) \rVert^{2} \le \varepsilon$$
Further, the supervised learning stage specifically comprises:
once the centers c_i are determined, the weights between the hidden layer and the output layer are trained; since the network output is linear in these weights, solving for them becomes a linear optimization problem. The learning algorithm for the connection weights w_{ki} (k = 1, 2, ..., l; i = 1, 2, ..., q) between the hidden layer and the output layer of the radial basis function neural network is
$$w_{ki}(k+1) = w_{ki}(k) + \eta\,(t_k - y_k)\,u_i(x)/(u^{T}u)$$
where u = [u_1(x), u_2(x), ..., u_q(x)]^T, u_i(x) is a Gaussian function, and η is the learning rate.
Further, the extended Kalman filter estimation model is specifically as follows: the vehicle three-degree-of-freedom dynamics model is used as the state equation and the observation equation, and the centroid slip angle, the yaw rate and the longitudinal vehicle speed are estimated with the extended Kalman filter algorithm:
$$\dot{v}_x = a_x + \beta v_x \omega$$
$$\dot{\beta} = \frac{k_1 + k_2}{m v_x}\beta + \left(\frac{a k_1 - b k_2}{m v_x^{2}} - 1\right)\omega - \frac{k_1}{m v_x}\delta$$
$$\dot{\omega} = \frac{a k_1 - b k_2}{I_z}\beta + \frac{a^{2} k_1 + b^{2} k_2}{I_z v_x}\omega - \frac{a k_1}{I_z}\delta$$
$$a_y = \frac{k_1 + k_2}{m}\beta + \frac{a k_1 - b k_2}{m v_x}\omega - \frac{k_1}{m}\delta$$
where ω is the yaw rate, β is the centroid slip angle, v_x is the longitudinal speed, k_1 and k_2 are the cornering stiffnesses of the front and rear axles, a and b are the distances from the vehicle center of mass to the front and rear axles, I_z is the yaw moment of inertia, m is the total vehicle mass, δ is the steering wheel angle, and a_x and a_y are the longitudinal and lateral accelerations.
Further, the signal fusion and optimization module compares the signals estimated by the different algorithms with a reference signal to obtain a confidence level for each of the two estimated signals; the signal with the higher confidence is given a larger weight and the signal with the lower confidence a smaller weight.
Compared with the prior art, the invention has the following beneficial effects:
the method can quickly and effectively obtain the vehicle state estimation, and plays a decisive role in the accurate control of the subsequent vehicle active safety.
Drawings
Fig. 1 is a schematic diagram of the principle of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the invention provides a multi-source fusion vehicle state parallel estimation method, which includes the following steps:
Step S1, acquiring, at each moment during driving, the steering wheel angle and longitudinal acceleration of the vehicle together with the corresponding true values, preprocessing them, and dividing them proportionally into a training set and a test set;
Step S2, constructing a neural network and training it on the training set to obtain a trained neural network estimation model;
Step S3, constructing an extended Kalman filter estimation model;
Step S4, during driving of the vehicle, acquiring the steering wheel angle and the longitudinal acceleration with a steering wheel angle sensor and a longitudinal acceleration sensor, and transmitting them to the trained neural network estimation model and the extended Kalman filter estimation model, respectively, for processing;
Step S5, performing signal fusion and optimization on the estimation results obtained from the trained neural network estimation model and the extended Kalman filter estimation model to obtain the final estimation result, as sketched in the code below.
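Purely as an illustration of the data flow in steps S1 to S5, a minimal Python sketch of the two parallel branches and the fusion step follows; the object and function names (nn_model, ekf_model, fuse) are hypothetical placeholders and do not reproduce the exact models defined in this embodiment.

```python
# Minimal sketch of the parallel estimation flow (illustration only).
# nn_model, ekf_model and fuse are hypothetical placeholders for the
# components described in detail below.

def estimate_states(steer_angle, accel_x, nn_model, ekf_model, fuse):
    """Return the fused (beta, yaw_rate, vx) estimate for one time step."""
    nn_est = nn_model.predict(steer_angle, accel_x)   # trained RBF network branch (step S2)
    ekf_est = ekf_model.step(steer_angle, accel_x)    # extended Kalman filter branch (step S3)
    return fuse(nn_est, ekf_est)                      # confidence-weighted fusion (step S5)
```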
In the present embodiment, the true values corresponding to the steering wheel angle and the longitudinal acceleration comprise the centroid slip angle, the yaw rate, and the longitudinal vehicle speed.
In this embodiment, the neural network is a radial basis function (RBF) neural network used to predict the relevant vehicle parameters; its hidden layer has q = 9 nodes and its input and output layers each have 1 node.
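A minimal sketch of the forward pass of such an RBF network, assuming Gaussian hidden units with an assumed width sigma (the patent does not specify the widths), might look as follows:

```python
import numpy as np

# Sketch of an RBF network with q = 9 Gaussian hidden nodes and one
# input / one output node.  The width sigma is an assumed value.

q = 9

def rbf_hidden(x, centers, sigma=1.0):
    """Hidden-layer outputs u_i(x) = exp(-(x - c_i)^2 / (2 * sigma^2))."""
    return np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2))

def rbf_forward(x, centers, weights):
    """Network output y = sum_i w_i * u_i(x)."""
    return float(weights @ rbf_hidden(x, centers))

centers = np.linspace(-1.0, 1.0, q)  # placeholder centers (learned in the unsupervised stage)
weights = np.zeros(q)                # output weights (learned in the supervised stage)
y = rbf_forward(0.3, centers, weights)
```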
The neural network training process is divided into two stages: an unsupervised learning stage and a supervised learning stage.
The unsupervised learning stage specifically comprises the following steps (a code sketch follows step (4)):
(1) Give the initial center vector c_i(0) of each hidden node (i = 1, 2, ..., q), the initial learning rate β(0) (0 < β(0) < 1), and the threshold ε used to decide when to stop the iteration.
(2) Compute the Euclidean distances and find the node with the minimum distance:
$$d_r(k) = \min_{1 \le i \le q} \lVert x(k) - c_i(k-1) \rVert$$
where k is the sample index and r is the index of the hidden node whose center vector c_i(k-1) is closest to the input sample x(k).
(3) Adjust the centers:
$$c_r(k) = c_r(k-1) + \beta(k)\,[x(k) - c_r(k-1)], \qquad c_i(k) = c_i(k-1) \ (i \ne r)$$
where β(k) is the learning rate, β(k) = β(k-1)/(1 + int(k/q))^{1/2}, and int(·) denotes rounding.
(4) Judge the clustering quality: repeat steps (2) and (3) for all samples k (k = 1, 2, ..., N) until the following condition is satisfied, at which point clustering is finished:
$$J_c = \sum_{k=1}^{N} \lVert x(k) - c_r(k) \rVert^{2} \le \varepsilon$$
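A minimal Python sketch of this center-learning procedure, for scalar inputs, is given below; the random initialization of the centers and the stopping criterion based on the total squared distance J_c are assumptions made for the sketch.

```python
import numpy as np

# Sketch of the unsupervised center-learning stage for scalar inputs.
# Random initialization and the J_c <= eps stopping rule are assumptions.

def learn_centers(samples, q=9, beta0=0.5, eps=1e-3, max_epochs=100):
    centers = samples[np.random.choice(len(samples), q, replace=False)].astype(float)
    for _ in range(max_epochs):
        beta = beta0
        for k, x in enumerate(samples, start=1):
            r = int(np.argmin(np.abs(centers - x)))   # nearest center (Euclidean distance)
            beta = beta / np.sqrt(1 + int(k / q))     # decaying learning rate beta(k)
            centers[r] += beta * (x - centers[r])     # move only the winning center
        j_c = sum(float(np.min((centers - x) ** 2)) for x in samples)
        if j_c <= eps:                                # clustering quality criterion
            break
    return centers

centers = learn_centers(np.random.randn(200))
```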
The supervised learning stage is as follows:
once the centers c_i are determined, the weights between the hidden layer and the output layer are trained; since the network output is linear in these weights, solving for them becomes a linear optimization problem. The learning algorithm for the connection weights w_{ki} (k = 1, 2, ..., l; i = 1, 2, ..., q) between the hidden layer and the output layer of the radial basis function neural network is
$$w_{ki}(k+1) = w_{ki}(k) + \eta\,(t_k - y_k)\,u_i(x)/(u^{T}u)$$
where u = [u_1(x), u_2(x), ..., u_q(x)]^T, u_i(x) is a Gaussian function, and η is the learning rate.
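A minimal sketch of this normalized, LMS-style weight update for scalar inputs is shown below; the Gaussian hidden-layer function is repeated from the earlier sketch, and the number of training epochs is an assumed value.

```python
import numpy as np

# Sketch of the supervised weight update
#   w(k+1) = w(k) + eta * (t_k - y_k) * u(x) / (u^T u)

def rbf_hidden(x, centers, sigma=1.0):
    return np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2))

def train_weights(samples, targets, centers, eta=0.1, epochs=50):
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            u = rbf_hidden(x, centers)           # hidden-layer outputs u_i(x)
            y = w @ u                            # current network output
            w = w + eta * (t - y) * u / (u @ u)  # normalized LMS step
    return w
```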
In this embodiment, the extended Kalman filter estimation model is specifically as follows: the vehicle three-degree-of-freedom dynamics model is used as the state equation and the observation equation, and the centroid slip angle, the yaw rate and the longitudinal vehicle speed are estimated with the extended Kalman filter algorithm:
$$\dot{v}_x = a_x + \beta v_x \omega$$
$$\dot{\beta} = \frac{k_1 + k_2}{m v_x}\beta + \left(\frac{a k_1 - b k_2}{m v_x^{2}} - 1\right)\omega - \frac{k_1}{m v_x}\delta$$
$$\dot{\omega} = \frac{a k_1 - b k_2}{I_z}\beta + \frac{a^{2} k_1 + b^{2} k_2}{I_z v_x}\omega - \frac{a k_1}{I_z}\delta$$
$$a_y = \frac{k_1 + k_2}{m}\beta + \frac{a k_1 - b k_2}{m v_x}\omega - \frac{k_1}{m}\delta$$
where ω is the yaw rate, β is the centroid slip angle, v_x is the longitudinal speed, k_1 and k_2 are the cornering stiffnesses of the front and rear axles, a and b are the distances from the vehicle center of mass to the front and rear axles, I_z is the yaw moment of inertia, m is the total vehicle mass, δ is the steering wheel angle, and a_x and a_y are the longitudinal and lateral accelerations.
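For illustration only, a discrete-time extended Kalman filter loop built on a single-track three-degree-of-freedom model of this kind is sketched below; the vehicle parameters, noise covariances, integration step, sign convention of the cornering stiffnesses, and the numerically computed Jacobians are assumptions of the sketch, not values taken from the patent.

```python
import numpy as np

# Illustrative EKF for the state x = [vx, beta, omega]; all parameter
# and covariance values below are placeholders.

m, Iz, a, b = 1500.0, 2500.0, 1.2, 1.4   # mass, yaw inertia, axle distances (assumed)
k1, k2 = -80000.0, -90000.0              # front/rear cornering stiffness (sign convention assumed)
dt = 0.01                                # integration step (assumed)

def f(x, u):
    """Discrete state transition; u = (delta, ax)."""
    vx, beta, omega = x
    delta, ax = u
    dvx = ax + beta * vx * omega
    dbeta = (k1 + k2) / (m * vx) * beta + ((a * k1 - b * k2) / (m * vx**2) - 1.0) * omega - k1 / (m * vx) * delta
    domega = (a * k1 - b * k2) / Iz * beta + (a**2 * k1 + b**2 * k2) / (Iz * vx) * omega - a * k1 / Iz * delta
    return x + dt * np.array([dvx, dbeta, domega])

def h(x, u):
    """Observation: lateral acceleration ay."""
    vx, beta, omega = x
    delta, _ = u
    return np.array([(k1 + k2) / m * beta + (a * k1 - b * k2) / (m * vx) * omega - k1 / m * delta])

def jacobian(func, x, u, eps=1e-6):
    """Numerical Jacobian of func with respect to the state x."""
    y0 = func(x, u)
    J = np.zeros((len(y0), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx, u) - y0) / eps
    return J

def ekf_step(x, P, u, z, Q=np.eye(3) * 1e-4, R=np.eye(1) * 1e-2):
    F = jacobian(f, x, u)                 # linearized state transition
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q              # prediction step
    H = jacobian(h, x_pred, u)            # linearized observation
    y = z - h(x_pred, u)                  # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred

x, P = np.array([20.0, 0.0, 0.0]), np.eye(3)
x, P = ekf_step(x, P, u=(0.05, 0.1), z=np.array([0.3]))
```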
In this embodiment, the signal fusion and optimization module compares the signals estimated by the different algorithms with a reference signal to obtain a confidence level for each of the two estimated signals; the signal with the higher confidence is given a larger weight and the signal with the lower confidence a smaller weight. Specifically, the difference between each algorithm's estimate and the reference signal is taken, the magnitudes of the two errors are compared within each estimation step, the signal with the smaller error is judged to have the higher confidence and is assigned the larger weight, and the other signal is assigned the smaller weight. The two weights sum to 1.
In this embodiment, the signal fusion and optimization process specifically includes:
Calculated values of the centroid slip angle, the yaw rate and the longitudinal vehicle speed can be obtained by a theoretical calculation method, but these calculated values suffer from accumulated sensor errors and accumulated integration errors. Over time their confidence decreases, so they can only be used as reference signals.
The signal fusion part takes the theoretical value as the calibration reference, takes the difference between each of the two algorithms' estimates and the reference signal, and uses a bisection search to find a suitable weight for each signal: the signal with the smaller error receives the larger weight and the signal with the larger error the smaller weight. The final updated value is the sum of each signal multiplied by its own weight.
Centroid slip angle (reference value):
$$\beta_{ref} = \int \left( \frac{a_y}{v_x} - \omega \right) dt$$
Yaw rate (reference value):
$$\omega_{ref} = \frac{a_y}{v_x}$$
Longitudinal vehicle speed (reference value):
$$v_{x,ref} = n_t \, l_w$$
where ω and v_x are the yaw rate and longitudinal speed estimated by the neural network, a_y is the lateral acceleration, n_t is the wheel rotational speed, and l_w is the tire circumference.
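The following sketch illustrates the per-step fusion of one state variable (for example the centroid slip angle). For simplicity it assigns the weights by normalizing the inverse errors instead of the bisection search described above; the function name and the eps guard are illustrative assumptions.

```python
# Sketch of the confidence-weighted fusion of one state variable (step S5).
# Inverse-error weighting is used here in place of the bisection search
# described above; the two weights still sum to 1.

def fuse_step(est_nn, est_ekf, ref, eps=1e-9):
    err_nn = abs(est_nn - ref)     # error of the neural-network estimate
    err_ekf = abs(est_ekf - ref)   # error of the EKF estimate
    w_nn = (err_ekf + eps) / (err_nn + err_ekf + 2 * eps)  # smaller error -> larger weight
    w_ekf = 1.0 - w_nn
    return w_nn * est_nn + w_ekf * est_ekf

# Example: fuse a centroid slip angle estimate against its integral-based reference
beta_fused = fuse_step(est_nn=0.021, est_ekf=0.018, ref=0.019)
```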
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (8)

1. A multi-source fusion vehicle state parallel estimation method is characterized by comprising the following steps:
step S1, acquiring, at each moment during driving, the steering wheel angle and longitudinal acceleration of the vehicle together with the corresponding true values, preprocessing them, and dividing them proportionally into a training set and a test set;
step S2, constructing a neural network and training it on the training set to obtain a trained neural network estimation model;
step S3, constructing an extended Kalman filter estimation model;
step S4, during driving of the vehicle, acquiring the steering wheel angle and the longitudinal acceleration with a steering wheel angle sensor and a longitudinal acceleration sensor, and transmitting them to the trained neural network estimation model and the extended Kalman filter estimation model, respectively, for processing;
step S5, performing signal fusion and optimization on the estimation results obtained from the trained neural network estimation model and the extended Kalman filter estimation model to obtain a final estimation result.
2. The multi-source fusion vehicle state parallel estimation method according to claim 1, wherein the true values corresponding to the steering wheel angle and the longitudinal acceleration comprise the centroid slip angle, the yaw rate and the longitudinal vehicle speed.
3. The multi-source fusion vehicle state parallel estimation method according to claim 1, wherein the neural network is a radial basis function neural network used to predict the relevant vehicle parameters, the hidden layer of the neural network having q = 9 nodes and the input and output layers each having 1 node.
4. The multi-source fusion vehicle state parallel estimation method according to claim 3, wherein the neural network training process is divided into two stages: an unsupervised learning stage and a supervised learning stage.
5. The multi-source fusion vehicle state parallel estimation method according to claim 4, wherein the unsupervised learning stage specifically comprises:
(1) giving the initial center vector c_i(0) of each hidden node (i = 1, 2, ..., q), the initial learning rate β(0) (0 < β(0) < 1), and the threshold ε used to decide when to stop the iteration;
(2) computing the Euclidean distances and finding the node with the minimum distance:
$$d_r(k) = \min_{1 \le i \le q} \lVert x(k) - c_i(k-1) \rVert$$
where k is the sample index and r is the index of the hidden node whose center vector c_i(k-1) is closest to the input sample x(k);
(3) adjusting the centers:
$$c_r(k) = c_r(k-1) + \beta(k)\,[x(k) - c_r(k-1)], \qquad c_i(k) = c_i(k-1) \ (i \ne r)$$
where β(k) is the learning rate, β(k) = β(k-1)/(1 + int(k/q))^{1/2}, and int(·) denotes rounding;
(4) judging the clustering quality: repeating steps (2) and (3) for all samples k (k = 1, 2, ..., N) until the following condition is satisfied, at which point clustering is finished:
$$J_c = \sum_{k=1}^{N} \lVert x(k) - c_r(k) \rVert^{2} \le \varepsilon$$
6. The multi-source fusion vehicle state parallel estimation method according to claim 4, wherein the supervised learning stage specifically comprises:
once the centers c_i are determined, training the weights between the hidden layer and the output layer; since the network output is linear in these weights, solving for them becomes a linear optimization problem, and the learning algorithm for the connection weights w_{ki} (k = 1, 2, ..., l; i = 1, 2, ..., q) between the hidden layer and the output layer of the radial basis function neural network is
$$w_{ki}(k+1) = w_{ki}(k) + \eta\,(t_k - y_k)\,u_i(x)/(u^{T}u)$$
where u = [u_1(x), u_2(x), ..., u_q(x)]^T, u_i(x) is a Gaussian function, and η is the learning rate.
7. The multi-source fusion vehicle state parallel estimation method according to claim 1, wherein the extended Kalman filter estimation model is specifically: the vehicle three-degree-of-freedom dynamics model is used as the state equation and the observation equation, and the centroid slip angle, the yaw rate and the longitudinal vehicle speed are estimated with the extended Kalman filter algorithm:
$$\dot{v}_x = a_x + \beta v_x \omega$$
$$\dot{\beta} = \frac{k_1 + k_2}{m v_x}\beta + \left(\frac{a k_1 - b k_2}{m v_x^{2}} - 1\right)\omega - \frac{k_1}{m v_x}\delta$$
$$\dot{\omega} = \frac{a k_1 - b k_2}{I_z}\beta + \frac{a^{2} k_1 + b^{2} k_2}{I_z v_x}\omega - \frac{a k_1}{I_z}\delta$$
$$a_y = \frac{k_1 + k_2}{m}\beta + \frac{a k_1 - b k_2}{m v_x}\omega - \frac{k_1}{m}\delta$$
where ω is the yaw rate, β is the centroid slip angle, v_x is the longitudinal speed, k_1 and k_2 are the cornering stiffnesses of the front and rear axles, a and b are the distances from the vehicle center of mass to the front and rear axles, I_z is the yaw moment of inertia, m is the total vehicle mass, δ is the steering wheel angle, and a_x and a_y are the longitudinal and lateral accelerations.
8. The multi-source fusion vehicle state parallel estimation method according to claim 1, wherein the signal fusion and optimization module compares the signals estimated by the different algorithms with a reference signal to obtain a confidence level for each of the two estimated signals, the signal with the higher confidence being given a larger weight and the signal with the lower confidence a smaller weight.
CN202011309519.XA 2020-11-20 2020-11-20 Multi-source fusion vehicle state parallel estimation method Pending CN112417598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011309519.XA CN112417598A (en) 2020-11-20 2020-11-20 Multi-source fusion vehicle state parallel estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011309519.XA CN112417598A (en) 2020-11-20 2020-11-20 Multi-source fusion vehicle state parallel estimation method

Publications (1)

Publication Number Publication Date
CN112417598A 2021-02-26

Family

ID=74774875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011309519.XA Pending CN112417598A (en) 2020-11-20 2020-11-20 Multi-source fusion vehicle state parallel estimation method

Country Status (1)

Country Link
CN (1) CN112417598A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108545081A (en) * 2018-03-20 2018-09-18 北京理工大学 Slip angle estimation method and system based on robust Unscented kalman filtering
CN108715166A (en) * 2018-04-28 2018-10-30 南京航空航天大学 Intact stability index method of estimation based on deep learning
CN109466558A (en) * 2018-10-26 2019-03-15 重庆邮电大学 A kind of coefficient of road adhesion estimation method based on EKF and BP neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
宗长富 et al.: "Application of information fusion technology based on extended Kalman filtering in vehicle state estimation", 《机械工程学报》 *
张凤娇 et al.: "Vehicle state estimation under extreme driving conditions based on deep learning", 《重庆理工大学学报(自然科学)》 *
王震坡 et al.: "Vehicle state parameter estimation for distributed-drive electric vehicles based on adaptive unscented Kalman filtering", 《北京理工大学学报》 *
胡盛斌: "RBF neural networks", 《非线性多关节机器人系统滑模控制》 *
赵万忠 et al.: "Vehicle state parameter estimation based on unscented Kalman filtering", 《华南理工大学学报(自然科学版)》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113937893A (en) * 2021-10-22 2022-01-14 大连交通大学 Rail vehicle running state sensing system and method based on machine learning
CN113916565A (en) * 2021-12-14 2022-01-11 禾多科技(北京)有限公司 Steering wheel zero deflection angle estimation method and device, vehicle and storage medium
CN113916565B (en) * 2021-12-14 2022-03-11 禾多科技(北京)有限公司 Steering wheel zero deflection angle estimation method and device, vehicle and storage medium

Similar Documents

Publication Publication Date Title
CN112417598A (en) Multi-source fusion vehicle state parallel estimation method
CN108715166B (en) Vehicle stability index estimation method based on deep learning
CN110688729B (en) LSTM-IDM (least squares-inverse discrete cosine transform) following characteristic fusion method based on adaptive Kalman filtering, storage medium and equipment
CN110588657B (en) Joint estimation method for vehicle motion state and road gradient
CN113447021B (en) MEMS inertial navigation system positioning enhancement method based on LSTM neural network model
CN113408047B (en) Vehicle dynamics prediction model based on time-lag feedback neural network, training data acquisition method and training method
CN108931233B (en) Road side slope value detection method and device
CN115406446A (en) Multi-axis special vehicle state estimation method based on neural network and unscented Kalman filtering
CN112373484B (en) Method for acquiring vehicle mass dynamics based on feedforward neural network
CN107741269B (en) Weighing sensor test compensation method based on fuzzy recognition
CN112270039A (en) Distributed asynchronous fusion-based nonlinear state estimation method for drive-by-wire chassis vehicle
CN117719519A (en) Vehicle running state estimation method
CN112550294B (en) Path tracking control method based on vehicle fault signal isolation
CN113030940B (en) Multi-star convex type extended target tracking method under turning maneuver
CN109916484B (en) Combined weighing method and device for weighing equipment
CN113705865B (en) Automobile stability factor prediction method based on deep neural network
CN109916483B (en) Weighing equipment combined monitoring method and device
CN114001759B (en) Array MEMS sensor control method and system
CN114061592B (en) Adaptive robust AUV navigation method based on multiple models
CN111461288B (en) Full-speed segment detection method and system for geometric parameters of track
CN112965965A (en) Outlier elimination method and system based on fuzzy prediction system and computer related product
CN113386781B (en) Intelligent vehicle track tracking control method based on data-driven vehicle dynamics model
Ghosn et al. A Robust Hybrid Observer for Side-slip Angle Estimation
CN113704684B (en) Centralized fusion robust filtering method
CN117574016A (en) Rear wheel steering angle calculation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210226)