CN114494360A - Lane keeping control method, device and equipment and readable storage medium - Google Patents

Lane keeping control method, device and equipment and readable storage medium Download PDF

Info

Publication number
CN114494360A
CN114494360A (application CN202210088859.7A)
Authority
CN
China
Prior art keywords
predicted value
state quantity
imu
error covariance
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210088859.7A
Other languages
Chinese (zh)
Inventor
袁沐
许渊
万四禧
管杰
毕雅梦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Commercial Vehicle Co Ltd
Original Assignee
Dongfeng Commercial Vehicle Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Commercial Vehicle Co Ltd filed Critical Dongfeng Commercial Vehicle Co Ltd
Priority to CN202210088859.7A priority Critical patent/CN114494360A/en
Publication of CN114494360A publication Critical patent/CN114494360A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/15 Vehicle, aircraft or watercraft design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00 Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/14 Force analysis or force optimisation, e.g. static or dynamic forces

Abstract

The application relates to a lane keeping control method, device, and equipment, and a readable storage medium, in the technical field of intelligent driving of automobiles. The method comprises: detecting whether an IMU signal and/or a camera signal is received; when neither the IMU signal nor the camera signal is received, updating the state quantity predicted values and error covariance predicted values of the IMU and the camera through a Kalman filtering algorithm based on the time property; performing data fusion on the updated state quantity predicted values and error covariance predicted values of the IMU and the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target; and inputting the fusion target as an observed quantity into a vehicle lateral error dynamics model and solving that model based on a linear quadratic regulator to obtain the input control quantity of the LKA. In this way, an accurate LKA input control quantity is obtained, enabling more stable and accurate lane keeping motion control.

Description

Lane keeping control method, device and equipment and readable storage medium
Technical Field
The application relates to the technical field of intelligent driving of automobiles, in particular to a lane keeping control method, device, and equipment, and a readable storage medium.
Background
In the current intelligent driving field, the mainstream scheme for realizing unmanned vehicle functions is to use various sensors to perceive environmental information, perform data fusion on the multi-sensor information, and finally carry out road planning and vehicle body motion control according to the fused information. Among these functions, LKA (Lane Keeping Assist) is a technology that helps the driver keep the vehicle within the lane lines; it is an Advanced Driver Assistance System (ADAS) function developed on the basis of the LDW (Lane Departure Warning) function.
The LKA mainly uses a camera to identify the boundary lines of the lane, and switches the lane keeping system on or off depending on whether the boundary lines of the current lane are identified. When the camera recognizes from the boundary lines that the vehicle is drifting, the corresponding white line displayed on the instrument panel turns red, the steering wheel vibrates to alert the driver, and a steering correction is applied automatically to return the vehicle to the lane. That is, when the distance between the vehicle and the lane line on the departing side falls below a certain threshold, or a wheel has already crossed the lane line, the electronic control unit calculates an assisting steering force and a deceleration, commands the steering and brake control modules according to the degree of deviation, and applies steering and braking forces to bring the vehicle smoothly back onto its normal track. However, if the driver has switched on the turn signal and is performing a normal lane change, the system does not intervene.
As can be seen, current LKA relies mainly on vision sensors such as cameras to assist the lateral motion control of the vehicle. However, the transmission period of the vehicle control signal is generally 10-20 ms, while the transmission period of the vision sensor signal is generally longer than the control period, at about 50-100 ms. As a result, the real-time vision sensor signal cannot be received in some control periods, making it impossible to control the lateral motion of the vehicle stably and accurately. Therefore, how to guarantee the stability and accuracy of LKA motion control at moments when no sensing signal input is received is a problem that needs to be solved.
Disclosure of Invention
The application provides a lane keeping control method, a lane keeping control device, lane keeping control equipment and a readable storage medium, and aims to solve the problem that stability and accuracy of LKA motion control cannot be effectively guaranteed in the related art.
In a first aspect, there is provided a lane keeping control method including the steps of:
detecting whether an IMU signal and/or a camera signal is received;
when the IMU signal and the camera signal are not received, updating the state quantity predicted value and the error covariance predicted value of the IMU and the state quantity predicted value and the error covariance predicted value of the camera through a Kalman filtering algorithm and based on time properties;
performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
and inputting the fusion target serving as an observed quantity into a vehicle transverse error dynamic model, and solving the vehicle transverse error dynamic model based on a linear quadratic regulator to obtain an input control quantity of the LKA.
In some embodiments, before the step of detecting whether the IMU signal and the camera signal are received, the method further includes:
determining a vehicle lateral dynamics model based on the mechanical equilibrium state equation of the vehicle's center of mass and the moment balance state equation of the vehicle body rotating about the Z axis;
and inputting the lateral position deviation and heading angle deviation between the vehicle's center of mass and the lane centerline into the vehicle lateral dynamics model to generate the vehicle lateral error dynamics model.
In some embodiments, the state space equation of the vehicle lateral error dynamics model is:

$$
\frac{d}{dt}\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & -\frac{C_{af}+C_{ar}}{mV_x} & \frac{C_{af}+C_{ar}}{m} & \frac{-C_{af}l_f+C_{ar}l_r}{mV_x} \\
0 & 0 & 0 & 1 \\
0 & \frac{-C_{af}l_f+C_{ar}l_r}{I_zV_x} & \frac{C_{af}l_f-C_{ar}l_r}{I_z} & -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x}
\end{bmatrix}
\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
+
\begin{bmatrix} 0 \\ \frac{C_{af}}{m} \\ 0 \\ \frac{C_{af}l_f}{I_z} \end{bmatrix}\delta
+
\begin{bmatrix} 0 \\ \frac{-C_{af}l_f+C_{ar}l_r}{mV_x}-V_x \\ 0 \\ -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x} \end{bmatrix}\dot{\psi}_{des}
$$

where $e_1$ is the lateral position deviation between the actual lateral displacement of the vehicle's center of mass and the desired displacement relative to the lane centerline, $\dot{e}_1$ is the first derivative of the lateral position deviation, $e_2$ is the heading angle deviation between the actual heading angle and the desired heading angle, $\dot{e}_2$ is the first derivative of the heading angle deviation, $V_x$ is the longitudinal speed of the vehicle's center of mass along the X axis, $m$ is the vehicle mass, $\dot{\psi}$ is the yaw rate of the vehicle's center of mass about the center of rotation ($\dot{\psi}_{des}$ being its desired value, determined by the road curvature), $C_{af}$ is the cornering stiffness of the vehicle's front axle, $C_{ar}$ is the cornering stiffness of the rear axle, $l_f$ is the distance from the front axle to the center of mass of the whole vehicle, $l_r$ is the distance from the rear axle to the center of mass of the whole vehicle, $\delta$ is the front wheel steering angle, and $I_z$ is the yaw moment of inertia of the whole vehicle.
In some embodiments, the multi-layer kalman filter data fusion algorithm includes a first layer data fusion algorithm and a second layer data fusion algorithm.
In some embodiments, the performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on the multi-layer kalman filtering data fusion algorithm to obtain a fusion target includes:
performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a first-layer data fusion algorithm to obtain a cross-correlation covariance matrix among the sensors;
and performing data fusion on the cross-correlation covariance matrix among the sensors, the updated state quantity predicted value and error covariance predicted value of the IMU, and the updated state quantity predicted value and error covariance predicted value of the camera based on a second-layer data fusion algorithm to obtain a fusion target, wherein the fusion target comprises a system state quantity and an error covariance matrix.
In some embodiments, after the step of detecting whether the IMU signal and/or the camera signal is received, the method further includes:
when the IMU signal and the camera signal are received, updating the state quantity measurement value and the error covariance measurement value of the IMU and updating the state quantity measurement value and the error covariance measurement value of the camera through a Kalman filtering algorithm based on the time property and the measurement property;
and performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity measurement value and error covariance measurement value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
In some embodiments, after the step of detecting whether the IMU signal and/or the camera signal is received, the method further includes:
when the IMU signal is received and the camera signal is not received, updating the state quantity measured value and the error covariance measured value of the IMU through a Kalman filtering algorithm based on the time property and the measurement property, and updating the state quantity predicted value and the error covariance predicted value of the camera based on the time property;
performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity predicted value and error covariance predicted value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
when the IMU signal is not received and the camera signal is received, updating the state quantity predicted value and the error covariance predicted value of the IMU through a Kalman filtering algorithm and based on time properties, and updating the state quantity measured value and the error covariance measured value of the camera based on the time properties and the measurement properties;
and performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity measured value and error covariance measured value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
In a second aspect, there is provided a lane keeping control apparatus comprising:
the detection unit is used for detecting whether the IMU signal and/or the camera signal are/is received;
the updating unit is used for updating the state quantity predicted value and the error covariance predicted value of the IMU and the state quantity predicted value and the error covariance predicted value of the camera through a Kalman filtering algorithm and based on time properties when the IMU signal and the camera signal are not received;
the fusion unit is used for carrying out data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
and the calculation unit is used for inputting the fusion target serving as an observed quantity to a vehicle lateral error dynamic model, and solving the vehicle lateral error dynamic model based on a linear quadratic regulator to obtain an input control quantity of the LKA.
In a third aspect, there is provided a lane keeping control apparatus, comprising: a memory and a processor, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the aforementioned lane keeping control method.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the aforementioned lane-keeping control method.
The beneficial effects brought by the technical solution provided by the present application include: the stability and accuracy of LKA motion control can be effectively guaranteed.
The application provides a lane keeping control method, device, and equipment, and a readable storage medium. For the scenario in which neither the IMU signal nor the camera signal can be received, the application updates the state quantity predicted values and error covariance predicted values of the IMU and the camera based on the time property of the Kalman filtering algorithm, performs data fusion on the updated predicted values using a multi-layer Kalman filtering data fusion algorithm, inputs the fusion target as an observed quantity into the vehicle lateral error dynamics model, and solves the error dynamics model using a linear quadratic regulator to obtain an accurate LKA input control quantity, so that the LKA can perform lane keeping motion control more stably and accurately based on that control quantity.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of a lane keeping control method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a two-layer Kalman filtering data fusion algorithm provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of a lane keeping control device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a lane keeping control apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without creative effort shall fall within the protection scope of the present application.
The embodiment of the application provides a lane keeping control method, a lane keeping control device, lane keeping control equipment and a readable storage medium, and solves the problem that stability and accuracy of LKA motion control cannot be effectively guaranteed in the related art.
Fig. 1 shows a lane keeping control method provided in an embodiment of the present application, which includes the following steps:
step S10: detecting whether an IMU (Inertial Measurement Unit) signal and/or a camera signal is received;
further, before the step of detecting whether the IMU signal and the camera signal are received, the method further includes:
determining a vehicle lateral dynamics model based on the mechanical equilibrium state equation of the vehicle's center of mass and the moment balance state equation of the vehicle body rotating about the Z axis;
and inputting the lateral position deviation and heading angle deviation between the vehicle's center of mass and the lane centerline into the vehicle lateral dynamics model to generate the vehicle lateral error dynamics model.
Further, the state space equation of the vehicle lateral error dynamics model is:

$$
\frac{d}{dt}\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & -\frac{C_{af}+C_{ar}}{mV_x} & \frac{C_{af}+C_{ar}}{m} & \frac{-C_{af}l_f+C_{ar}l_r}{mV_x} \\
0 & 0 & 0 & 1 \\
0 & \frac{-C_{af}l_f+C_{ar}l_r}{I_zV_x} & \frac{C_{af}l_f-C_{ar}l_r}{I_z} & -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x}
\end{bmatrix}
\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
+
\begin{bmatrix} 0 \\ \frac{C_{af}}{m} \\ 0 \\ \frac{C_{af}l_f}{I_z} \end{bmatrix}\delta
+
\begin{bmatrix} 0 \\ \frac{-C_{af}l_f+C_{ar}l_r}{mV_x}-V_x \\ 0 \\ -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x} \end{bmatrix}\dot{\psi}_{des}
$$

where $e_1$ is the lateral position deviation between the actual lateral displacement of the vehicle's center of mass and the desired displacement relative to the lane centerline, $\dot{e}_1$ is the first derivative of the lateral position deviation, $e_2$ is the heading angle deviation between the actual heading angle and the desired heading angle, $\dot{e}_2$ is the first derivative of the heading angle deviation, $V_x$ is the longitudinal speed of the vehicle's center of mass along the X axis, $m$ is the vehicle mass, $\dot{\psi}$ is the yaw rate of the vehicle's center of mass about the center of rotation ($\dot{\psi}_{des}$ being its desired value, determined by the road curvature), $C_{af}$ is the cornering stiffness of the vehicle's front axle, $C_{ar}$ is the cornering stiffness of the rear axle, $l_f$ is the distance from the front axle to the center of mass of the whole vehicle, $l_r$ is the distance from the rear axle to the center of mass of the whole vehicle, $\delta$ is the front wheel steering angle, and $I_z$ is the yaw moment of inertia of the whole vehicle.
In an exemplary embodiment, it is assumed that the vehicle is a rigid body, the driving surface is horizontal, the vehicle is steered by its front wheels, and the longitudinal speed $V_x$ is constant. The vehicle lateral error dynamics model is then built as follows.

Analyzing the mechanical equilibrium state of the vehicle's center of mass gives:

$$ m\left(\ddot{y} + V_x\dot{\psi}\right) = C_{af}\left(\delta - \frac{\dot{y} + l_f\dot{\psi}}{V_x}\right) + C_{ar}\,\frac{l_r\dot{\psi} - \dot{y}}{V_x} \tag{1} $$

where $m$ is the mass of the whole vehicle, $V_x$ is the longitudinal velocity of the center of mass along the X axis, $\dot{\psi}$ is the yaw rate of the center of mass about the center of rotation, $\dot{y}$ and $\ddot{y}$ are the lateral linear velocity and lateral linear acceleration of the center of mass along the Y axis, $l_f$ and $l_r$ are the distances from the front and rear axles to the center of mass of the whole vehicle, $C_{af}$ and $C_{ar}$ are the cornering stiffnesses of the front and rear axles, and $\delta$ is the front wheel steering angle.
Analyzing the moment balance of the vehicle body rotating about the Z axis, accounting for the front and rear tire forces, then gives:

$$ I_z\ddot{\psi} = l_f C_{af}\left(\delta - \frac{\dot{y} + l_f\dot{\psi}}{V_x}\right) - l_r C_{ar}\,\frac{l_r\dot{\psi} - \dot{y}}{V_x} \tag{2} $$

where $I_z$ is the yaw moment of inertia of the whole vehicle and $\ddot{\psi}$ is the yaw angular acceleration of the center of mass about the center of rotation.
From equations (1) and (2), the state space equation of the vehicle lateral dynamics model is obtained as:

$$
\frac{d}{dt}\begin{bmatrix} y \\ \dot{y} \\ \psi \\ \dot{\psi} \end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & -\frac{C_{af}+C_{ar}}{mV_x} & 0 & -V_x - \frac{C_{af}l_f - C_{ar}l_r}{mV_x} \\
0 & 0 & 0 & 1 \\
0 & -\frac{C_{af}l_f - C_{ar}l_r}{I_zV_x} & 0 & -\frac{C_{af}l_f^2 + C_{ar}l_r^2}{I_zV_x}
\end{bmatrix}
\begin{bmatrix} y \\ \dot{y} \\ \psi \\ \dot{\psi} \end{bmatrix}
+
\begin{bmatrix} 0 \\ \frac{C_{af}}{m} \\ 0 \\ \frac{C_{af}l_f}{I_z} \end{bmatrix}\delta
\tag{3}
$$

where $y$ is the actual lateral displacement of the center of mass and $\psi$ is the actual heading angle between the vehicle's longitudinal axis and the tangent of the lane line.
When the vehicle performs lateral motion control, the deviation signals $e_y$ and $e_\psi$ can be mapped to the system state quantities $e_1$ and $e_2$, respectively, where $e_y$ is the lateral position deviation between the actual lateral displacement $y$ of the vehicle's center of mass and the desired displacement $y_{des}$ relative to the lane centerline, and $e_\psi$ is the heading angle deviation between the actual heading angle $\psi$ and the desired heading angle $\psi_{des}$, namely:

$$ e_1 = e_y = y - y_{des},\qquad e_2 = e_\psi = \psi - \psi_{des} \tag{4} $$

Using these deviation signals as the system state quantities of the vehicle lateral dynamics model yields the state space equation of the vehicle lateral error dynamics model written out above, i.e.:

$$ \dot{X}_e = A_e X_e + B_e\,\delta + B_d\,\dot{\psi}_{des},\qquad X_e = \left[\,e_1,\ \dot{e}_1,\ e_2,\ \dot{e}_2\,\right]^T \tag{5} $$

where $A_e$, $B_e$, and $B_d$ are the matrices of the state space equation given above, and $\dot{\psi}_{des}$ is the desired yaw rate determined by the road curvature.
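For illustration only (not part of the original disclosure), the sketch below shows one way the continuous matrices of equation (5) could be assembled from the vehicle parameters defined above; the function name is an assumption, and the road-curvature disturbance term $B_d\,\dot{\psi}_{des}$ is omitted for brevity:

```python
import numpy as np

def lateral_error_matrices(m, Iz, Caf, Car, lf, lr, Vx):
    """Continuous-time matrices A_e, B_e of the lateral error dynamics, eq. (5).

    State X_e = [e1, e1_dot, e2, e2_dot]; input u_e = delta (front wheel angle).
    Sketch of the standard bicycle-model error formulation assumed above;
    the desired-yaw-rate disturbance column B_d is not included.
    """
    A = np.array([
        [0.0, 1.0, 0.0, 0.0],
        [0.0, -(Caf + Car) / (m * Vx), (Caf + Car) / m,
         (-Caf * lf + Car * lr) / (m * Vx)],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, (-Caf * lf + Car * lr) / (Iz * Vx), (Caf * lf - Car * lr) / Iz,
         -(Caf * lf**2 + Car * lr**2) / (Iz * Vx)],
    ])
    B = np.array([[0.0], [Caf / m], [0.0], [Caf * lf / Iz]])
    return A, B
```

All parameter values would come from the vehicle data sheet and the CAN network at runtime, as described below.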
the time-varying parameters are derived from a camera unit, an inertia measurement unit and a Controller Area Network (CAN) network of the whole vehicle, and are specificallyTransverse linear velocity
Figure BDA0003488266980000091
Lateral linear acceleration
Figure BDA0003488266980000092
And yaw rate
Figure BDA0003488266980000093
The data are acquired through a camera unit; yaw angular acceleration
Figure BDA0003488266980000094
The data are acquired by an inertia measurement unit; longitudinal velocity VxAnd the front wheel rotation angle delta is acquired through the whole vehicle CAN network. Therefore, the inertial measurement unit is used for collecting the yaw angular velocity signal and the transverse linear acceleration signal, time does not need to be integrated, and accumulated errors do not exist, so that the measurement precision is high; meanwhile, system input signals are obtained through the whole vehicle CAN network and the intelligent vehicle sensor, additional equipment does not need to be additionally installed, and the arrangement is easy.
Since the transmission period of the vehicle control signal is generally 10-20 ms, while the transmission period of the camera signal is generally longer than the control period (about 50-100 ms), the real-time vision sensor signal cannot be received in some control periods. Therefore, this embodiment detects whether an IMU signal and/or a camera signal is received before performing lane keeping control.
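As a rough illustration of step S10, one way the per-cycle reception check could be implemented is with simple timestamp timeouts; the class, method names, and timeout values below are hypothetical and not taken from the patent:

```python
class SignalMonitor:
    """Hypothetical helper for step S10: decides, at each control cycle,
    whether a fresh sensor frame has arrived within a timeout window."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_stamp = None

    def on_frame(self, stamp_s: float) -> None:
        # called by the sensor driver whenever a frame is delivered
        self.last_stamp = stamp_s

    def received(self, now_s: float) -> bool:
        # the signal counts as "received" only if a frame arrived recently
        return (self.last_stamp is not None
                and now_s - self.last_stamp <= self.timeout_s)

# e.g. with a 20 ms control cycle: IMU expected every cycle,
# camera only every 50-100 ms (illustrative timeout values)
imu_monitor = SignalMonitor(timeout_s=0.025)
cam_monitor = SignalMonitor(timeout_s=0.120)
```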
Step S20: when the IMU signal and the camera signal are not received, updating the state quantity predicted value and the error covariance predicted value of the IMU and the state quantity predicted value and the error covariance predicted value of the camera through a Kalman filtering algorithm and based on time properties;
further, after the step of detecting whether the IMU signal and/or the camera signal is received, the method further includes:
when the IMU signal and the camera signal are received, updating the state quantity measurement value and the error covariance measurement value of the IMU and updating the state quantity measurement value and the error covariance measurement value of the camera through a Kalman filtering algorithm based on the time property and the measurement property;
and performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity measurement value and error covariance measurement value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
Further, after the step of detecting whether the IMU signal and/or the camera signal is received, the method further includes:
when the IMU signal is received and the camera signal is not received, updating the state quantity measured value and the error covariance measured value of the IMU through a Kalman filtering algorithm based on the time property and the measurement property, and updating the state quantity predicted value and the error covariance predicted value of the camera based on the time property;
performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity predicted value and error covariance predicted value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
when the IMU signal is not received and the camera signal is received, updating the state quantity predicted value and the error covariance predicted value of the IMU through a Kalman filtering algorithm and based on time properties, and updating the state quantity measured value and the error covariance measured value of the camera based on the time properties and the measurement properties;
and performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity measured value and error covariance measured value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
Step S30: performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
further, the multi-layer kalman filter data fusion algorithm comprises a first-layer data fusion algorithm and a second-layer data fusion algorithm.
Further, the data fusion of the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on the multi-layer kalman filtering data fusion algorithm to obtain a fusion target includes:
performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a first layer of data fusion algorithm to obtain a cross-correlation covariance matrix among the sensors;
and performing data fusion on the cross-correlation covariance matrix among the sensors, the updated state quantity predicted value and error covariance predicted value of the IMU, and the updated state quantity predicted value and error covariance predicted value of the camera based on a second-layer data fusion algorithm to obtain a fusion target, wherein the fusion target comprises a system state quantity and an error covariance matrix.
Exemplarily, referring to fig. 2, whether the IMU and the camera unit have signal output is detected. Taking the IMU as an example: when an IMU signal is detected, it is first judged whether the signal passes the validity check; if so, the state quantity measurement value and error covariance measurement value of the IMU are updated through the Kalman filtering algorithm based on both the time property and the measurement property. When no IMU signal is detected, or the signal fails the check, the state quantity predicted value and error covariance predicted value of the IMU are updated through the Kalman filtering algorithm based on the time property alone. Likewise for the camera unit: when a camera signal is detected and passes the check, the state quantity measurement value and error covariance measurement value of the camera are updated based on the time property and the measurement property; when no camera signal is detected, or the signal fails the check, the state quantity predicted value and error covariance predicted value of the camera are updated based on the time property alone.
Then, data fusion is performed on the updated state quantity predicted/measured values and error covariance predicted/measured values based on the first-layer data fusion algorithm to obtain the cross-correlation covariance matrix between the sensors. Finally, data fusion is performed on the cross-correlation covariance matrix together with the updated state quantity and error covariance values based on the second-layer data fusion algorithm to obtain the system state quantity and error covariance matrix.
Specifically, the classical Kalman filtering algorithm applies to linear systems whose measurement process can be described by a state equation and an output equation of the form:

$$ \begin{cases} \dot{X}(t) = A\,X(t) + B\,u(t) + w(t) \\ Y(t) = C\,X(t) + z(t) \end{cases} \tag{6} $$

where $A$, $B$, and $C$ are constant matrices; $X(t)$, $u(t)$, and $Y(t)$ are the system state, system input, and system output at time $t$; $w(t)$ is the process noise; and $z(t)$ is the measurement noise.
The update equations of the classical Kalman filtering algorithm are:

$$
\begin{aligned}
\hat{X}^-(t+1) &= A\,\hat{X}(t) + B\,u(t) \\
P^-(t+1) &= A\,P(t)\,A^T + Q \\
K(t+1) &= P^-(t+1)\,C^T\left(C\,P^-(t+1)\,C^T + R\right)^{-1} \\
\hat{X}(t+1) &= \hat{X}^-(t+1) + K(t+1)\left(Y(t+1) - C\,\hat{X}^-(t+1)\right) \\
P(t+1) &= \left(I_n - K(t+1)\,C\right)P^-(t+1)
\end{aligned}
\tag{7}
$$

where $\hat{X}^-(t+1)$ and $P^-(t+1)$ are the predicted value of the system state and the predicted value of the error covariance at time $t+1$; $\hat{X}(t)$ and $P(t)$ are the state estimate and the error covariance estimate at time $t$; $K(t+1)$ is the Kalman gain at time $t+1$; $I_n$ is the identity matrix; $Q$ is the covariance of the process noise; and $R$ is the covariance of the measurement noise. The first two equations constitute the update based on the time property; when a valid measurement is available, the remaining three additionally perform the update based on the measurement property.
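For reference, a minimal NumPy implementation of the update equations (7); the class interface is an assumption of this sketch, not part of the patent:

```python
import numpy as np

class KalmanFilter:
    """Minimal linear Kalman filter implementing the update equations (7)."""

    def __init__(self, A, B, C, Q, R, x0, P0):
        self.A, self.B, self.C, self.Q, self.R = A, B, C, Q, R
        self.x, self.P = x0, P0
        self.K = np.zeros((A.shape[0], C.shape[0]))

    def predict(self, u=None):
        # time update: state and error covariance prediction
        self.x = self.A @ self.x + (self.B @ u if u is not None else 0.0)
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x, self.P

    def correct(self, y):
        # measurement update: Kalman gain, state and covariance correction
        S = self.C @ self.P @ self.C.T + self.R
        self.K = self.P @ self.C.T @ np.linalg.inv(S)
        self.x = self.x + self.K @ (y - self.C @ self.x)
        self.P = (np.eye(self.P.shape[0]) - self.K @ self.C) @ self.P
        return self.x, self.P
```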
The cross-correlation covariance matrix is calculated as:

$$ P_{ij}(t+1) = \left(I_n - K_i(t+1)\,C_i\right)\left(A\,P_{ij}(t)\,A^T + Q\right)\left(I_n - K_j(t+1)\,C_j\right)^T \tag{8} $$

where $P_{ij}(t)$ and $P_{ij}(t+1)$ are the cross-correlation covariance matrices of sensor $i$ and sensor $j$ at times $t$ and $t+1$, respectively; $K_i(t+1)$ and $K_j(t+1)$ are the Kalman gains of sensor $i$ and sensor $j$ at time $t+1$; and $C_i$, $C_j$ are the output matrices of the two sensors.
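Continuing the same sketch, the recursion (8) is a single matrix expression per step; the function name is an assumption, and the reconstruction of (8) above is followed as written:

```python
import numpy as np

def cross_covariance_step(Pij, A, Q, Ki, Ci, Kj, Cj):
    """One step of the cross-correlation covariance recursion, eq. (8),
    between sensor i and sensor j (first-layer data fusion)."""
    I = np.eye(A.shape[0])
    return (I - Ki @ Ci) @ (A @ Pij @ A.T + Q) @ (I - Kj @ Cj).T
```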
The system state quantity and error covariance matrix are calculated as:

$$
\begin{aligned}
P_O &= \left(I_N^T\,\Sigma^{-1}\,I_N\right)^{-1} \\
F &= \left[F_1, F_2, \ldots, F_l\right]^T = P_O\,I_N^T\,\Sigma^{-1} \\
\hat{X}_O &= \sum_{i=1}^{l} F_i\,\hat{X}_i
\end{aligned}
\tag{9}
$$

where $F_i\ (i = 1, 2, \ldots, l)$ is the fusion coefficient of the $i$-th sensor; $I_N = \left[I_{n1}, I_{n2}, \ldots, I_{nl}\right]^T$, with $I_{ni}$ the identity matrix of the $i$-th sensor; $\hat{X}_O$ and $P_O$ are the fused system state quantity and error covariance matrix, respectively; $\hat{X}_i$ is the system state quantity of the $i$-th sensor; and $\Sigma$ is the block matrix whose $(i,j)$ block is the predicted cross-correlation covariance matrix $P_{ij}^-$ of sensor $i$ and sensor $j$.
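A sketch of the second-layer fusion (9) specialized to the two-sensor case ($l = 2$), under the block-covariance reading of $\Sigma$ given above; this reconstruction of the fusion weights is an assumption of this sketch:

```python
import numpy as np

def fuse_two(x_i, x_j, P_ii, P_jj, P_ij):
    """Second-layer fusion per eq. (9) for l = 2 sensors.

    Stacks both estimates, builds the joint covariance Sigma from the
    self- and cross-covariances, and applies the assumed fusion weights
    F = (I_N^T Sigma^-1 I_N)^-1 I_N^T Sigma^-1.
    """
    n = x_i.shape[0]
    Sigma = np.block([[P_ii, P_ij], [P_ij.T, P_jj]])
    I_N = np.vstack([np.eye(n), np.eye(n)])
    Sigma_inv = np.linalg.inv(Sigma)
    P_O = np.linalg.inv(I_N.T @ Sigma_inv @ I_N)   # fused error covariance
    F = P_O @ I_N.T @ Sigma_inv                    # fusion coefficient blocks
    x_O = F @ np.concatenate([x_i, x_j])           # fused system state quantity
    return x_O, P_O
```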
Therefore, by performing data fusion on the IMU signal and the camera signal through formulas (7) to (9), the fused system state information of the IMU and the camera is obtained.
Step S40: and inputting the fusion target serving as an observed quantity into a vehicle transverse error dynamic model, and solving the vehicle transverse error dynamic model based on a linear quadratic regulator to obtain an input control quantity of the LKA.
Exemplarily, in the present embodiment, the vehicle lateral error dynamics model of equation (5) is decomposed, according to the source of each system state quantity signal, into two state space equations, denoted (5-1) and (5-2) [formula images not recoverable from the source text]. Equations (5-1) and (5-2) can then be written together as:

$$ \begin{cases} \dot{X}_e(t) = A_e(t)\,X_e(t) + B_e(t)\,u_e(t) \\ Y_e(t) = C_e\,X_e(t) \end{cases} \tag{10} $$

It can be seen that the output part of equation (10) conforms to the output equation form of equation (6).
Discretizing the state equation in (10) with sampling period $T$ then gives:

$$ X_e(t+1) = \left(T\,A_e(t) + I\right)X_e(t) + T\,B_e(t)\,u_e(t) \tag{11} $$

The form of equation (11) corresponds to the state equation form of equation (6). Therefore, the system state quantity and error covariance matrix obtained in step S30 are input as observed quantities into the vehicle lateral error dynamics model (i.e., equations (10) and (11)), and the model is solved based on the LQR (Linear Quadratic Regulator) to obtain the input control quantity of the LKA.
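For reference, a sketch of step S40: Euler discretization per equation (11) followed by a standard LQR gain computation via the discrete algebraic Riccati equation. The weight matrices Q and R here are LQR design parameters (assumptions of this sketch), distinct from the noise covariances in equation (7):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lka_lqr_gain(Ae, Be, T, Q, R):
    """Euler-discretize the error model per eq. (11), then compute the LQR
    state-feedback gain from the discrete algebraic Riccati equation."""
    Ad = T * Ae + np.eye(Ae.shape[0])   # eq. (11): X_e(t+1) = (T*Ae + I) X_e(t) + T*Be u_e(t)
    Bd = T * Be
    P = solve_discrete_are(Ad, Bd, Q, R)
    K = np.linalg.inv(R + Bd.T @ P @ Bd) @ (Bd.T @ P @ Ad)
    return K

# usage: with x_fused the fused observed state [e1, e1_dot, e2, e2_dot],
# the LKA input control quantity is delta_cmd = float(-(K @ x_fused))
```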
For the scenario in which the IMU signal and the camera signal cannot both be received, this embodiment updates the state quantity predicted values and error covariance predicted values of the IMU and the camera based on the time property of the classical Kalman filtering algorithm, fuses the updated predicted values with the multi-layer Kalman filtering data fusion algorithm, inputs the fusion target as an observed quantity into the vehicle lateral error dynamics model, and solves the error dynamics model with the linear quadratic regulator to obtain an accurate LKA input control quantity, so that the LKA can perform lane keeping motion control more stably and accurately based on that control quantity. Because the two-layer Kalman filtering data fusion algorithm is used to fuse the IMU signal and the camera signal, the system state quantity and error covariance matrix can still be updated when a sensor signal is missing or fails the check, and the multi-sensor information fusion is completed.
Referring to fig. 3, an embodiment of the present application further provides a lane keeping control apparatus, including:
the detection unit is used for detecting whether the IMU signal and/or the camera signal are/is received;
the updating unit is used for updating the state quantity predicted value and the error covariance predicted value of the IMU and the state quantity predicted value and the error covariance predicted value of the camera through a Kalman filtering algorithm and based on the time property when the IMU signal and the camera signal are not received;
the fusion unit is used for carrying out data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
and the calculation unit is used for inputting the fusion target serving as an observed quantity to a vehicle lateral error dynamic model, and solving the vehicle lateral error dynamic model based on a linear quadratic regulator to obtain an input control quantity of the LKA.
Further, the apparatus comprises a construction unit for:
determining a vehicle lateral dynamics model based on the mechanical equilibrium state equation of the vehicle's center of mass and the moment balance state equation of the vehicle body rotating about the Z axis;
and inputting the lateral position deviation and heading angle deviation between the vehicle's center of mass and the lane centerline into the vehicle lateral dynamics model to generate the vehicle lateral error dynamics model.
The state space equation of the vehicle lateral error dynamics model is:

$$
\frac{d}{dt}\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & -\frac{C_{af}+C_{ar}}{mV_x} & \frac{C_{af}+C_{ar}}{m} & \frac{-C_{af}l_f+C_{ar}l_r}{mV_x} \\
0 & 0 & 0 & 1 \\
0 & \frac{-C_{af}l_f+C_{ar}l_r}{I_zV_x} & \frac{C_{af}l_f-C_{ar}l_r}{I_z} & -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x}
\end{bmatrix}
\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
+
\begin{bmatrix} 0 \\ \frac{C_{af}}{m} \\ 0 \\ \frac{C_{af}l_f}{I_z} \end{bmatrix}\delta
+
\begin{bmatrix} 0 \\ \frac{-C_{af}l_f+C_{ar}l_r}{mV_x}-V_x \\ 0 \\ -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x} \end{bmatrix}\dot{\psi}_{des}
$$

where $e_1$ is the lateral position deviation between the actual lateral displacement of the vehicle's center of mass and the desired displacement relative to the lane centerline, $\dot{e}_1$ is the first derivative of the lateral position deviation, $e_2$ is the heading angle deviation between the actual heading angle and the desired heading angle, $\dot{e}_2$ is the first derivative of the heading angle deviation, $V_x$ is the longitudinal speed of the vehicle's center of mass along the X axis, $m$ is the vehicle mass, $\dot{\psi}$ is the yaw rate of the vehicle's center of mass about the center of rotation ($\dot{\psi}_{des}$ being its desired value, determined by the road curvature), $C_{af}$ is the cornering stiffness of the vehicle's front axle, $C_{ar}$ is the cornering stiffness of the rear axle, $l_f$ is the distance from the front axle to the center of mass of the whole vehicle, $l_r$ is the distance from the rear axle to the center of mass of the whole vehicle, $\delta$ is the front wheel steering angle, and $I_z$ is the yaw moment of inertia of the whole vehicle.
Further, the multi-layer kalman filter data fusion algorithm comprises a first-layer data fusion algorithm and a second-layer data fusion algorithm.
Further, the fusion unit is specifically configured to:
performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a first layer of data fusion algorithm to obtain a cross-correlation covariance matrix among the sensors;
and performing data fusion on the cross-correlation covariance matrix among the sensors, the updated state quantity predicted value and error covariance predicted value of the IMU, and the updated state quantity predicted value and error covariance predicted value of the camera based on a second-layer data fusion algorithm to obtain a fusion target, wherein the fusion target comprises a system state quantity and an error covariance matrix.
Further, the update unit is further configured to: when the IMU signal and the camera signal are received, updating the state quantity measurement value and the error covariance measurement value of the IMU and updating the state quantity measurement value and the error covariance measurement value of the camera through a Kalman filtering algorithm based on the time property and the measurement property; the fusion unit is further configured to: and performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity measurement value and error covariance measurement value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
Further, when the IMU signal is received and the camera signal is not received, the updating unit is further configured to: updating the state quantity measured value and the error covariance measured value of the IMU through a Kalman filtering algorithm based on the time property and the measurement property, and updating the state quantity predicted value and the error covariance predicted value of the camera based on the time property; the fusion unit is further configured to: performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity predicted value and error covariance predicted value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
when the IMU signal is not received and the camera signal is received, the updating unit is further configured to: updating the state quantity predicted value and the error covariance predicted value of the IMU through a Kalman filtering algorithm based on the time property, and updating the state quantity measured value and the error covariance measured value of the camera based on the time property and the measurement property; the fusion unit is further configured to: and performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity measured value and error covariance measured value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
According to the embodiment of the application, a two-layer Kalman filtering data fusion algorithm is used to fuse the IMU signal and the camera signal; the fused signal is input as an observed quantity into the vehicle lateral error dynamics model; and the model is solved using the LQR to obtain the control quantity input to the system, realizing LKA lane keeping of the vehicle.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the above-described apparatus and units may refer to the corresponding processes in the foregoing embodiments of the lane keeping control method, and are not described herein again.
The lane-keeping control apparatus provided by the above-described embodiment may be implemented in the form of a computer program that can be run on the lane-keeping control device shown in fig. 4.
An embodiment of the present application further provides a lane keeping control apparatus, comprising: a memory, a processor, and a network interface connected through a system bus, wherein at least one instruction is stored in the memory and is loaded and executed by the processor to implement all or part of the steps of the foregoing lane keeping control method.
The network interface is used for network communication, such as sending distributed tasks. Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of partial structures related to the solution of the present application and does not limit the computing devices to which the solution applies; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The processor may be a CPU, another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or any conventional processor; the processor is the control center of the computer device and connects the various parts of the whole computer device through various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a video playing function, an image playing function, etc.), and the data storage area may store data created according to use (such as video data, image data, etc.). In addition, the memory may include high speed random access memory and may also include non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements all or part of the steps of the aforementioned lane keeping control method.
All or part of the processes in the methods of the foregoing embodiments may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the foregoing methods. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, server, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A lane keeping control method, characterized by comprising the steps of:
detecting whether an IMU signal and/or a camera signal is received;
when the IMU signal and the camera signal are not received, updating the state quantity predicted value and the error covariance predicted value of the IMU and the state quantity predicted value and the error covariance predicted value of the camera through a Kalman filtering algorithm and based on time properties;
performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
and inputting the fusion target serving as an observed quantity into a vehicle transverse error dynamic model, and solving the vehicle transverse error dynamic model based on a linear quadratic regulator to obtain an input control quantity of the LKA.
2. The lane-keeping control method of claim 1, further comprising, before the step of detecting whether the IMU signal and the camera signal are received:
determining a vehicle transverse dynamic model based on a mechanical balance state equation of the vehicle mass center and a moment balance state equation of the vehicle body rotating around the Z axis;
and inputting the transverse position deviation and the course angle deviation between the center of mass of the vehicle and the central line of the lane line into the transverse vehicle dynamic model to generate a transverse vehicle error dynamic model.
3. The lane keeping control method of claim 2, wherein the state space equation of the vehicle lateral error dynamics model is:

$$
\frac{d}{dt}\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & -\frac{C_{af}+C_{ar}}{mV_x} & \frac{C_{af}+C_{ar}}{m} & \frac{-C_{af}l_f+C_{ar}l_r}{mV_x} \\
0 & 0 & 0 & 1 \\
0 & \frac{-C_{af}l_f+C_{ar}l_r}{I_zV_x} & \frac{C_{af}l_f-C_{ar}l_r}{I_z} & -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x}
\end{bmatrix}
\begin{bmatrix} e_1 \\ \dot{e}_1 \\ e_2 \\ \dot{e}_2 \end{bmatrix}
+
\begin{bmatrix} 0 \\ \frac{C_{af}}{m} \\ 0 \\ \frac{C_{af}l_f}{I_z} \end{bmatrix}\delta
+
\begin{bmatrix} 0 \\ \frac{-C_{af}l_f+C_{ar}l_r}{mV_x}-V_x \\ 0 \\ -\frac{C_{af}l_f^2+C_{ar}l_r^2}{I_zV_x} \end{bmatrix}\dot{\psi}_{des}
$$

where $e_1$ is the lateral position deviation between the actual lateral displacement of the vehicle's center of mass and the desired displacement relative to the lane centerline, $\dot{e}_1$ is the first derivative of the lateral position deviation, $e_2$ is the heading angle deviation between the actual heading angle and the desired heading angle, $\dot{e}_2$ is the first derivative of the heading angle deviation, $V_x$ is the longitudinal speed of the vehicle's center of mass along the X axis, $m$ is the vehicle mass, $\dot{\psi}$ is the yaw rate of the vehicle's center of mass about the center of rotation ($\dot{\psi}_{des}$ being its desired value, determined by the road curvature), $C_{af}$ is the cornering stiffness of the vehicle's front axle, $C_{ar}$ is the cornering stiffness of the rear axle, $l_f$ is the distance from the front axle to the center of mass of the whole vehicle, $l_r$ is the distance from the rear axle to the center of mass of the whole vehicle, $\delta$ is the front wheel steering angle, and $I_z$ is the yaw moment of inertia of the whole vehicle.
4. The lane-keeping control method of claim 1, wherein the multi-layer kalman filter data fusion algorithm includes a first-layer data fusion algorithm and a second-layer data fusion algorithm.
5. The lane keeping control method according to claim 4, wherein the data fusion of the updated IMU state quantity predicted value and error covariance predicted value and the updated camera state quantity predicted value and error covariance predicted value based on the multi-layer Kalman filtering data fusion algorithm to obtain a fusion target comprises:
performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a first layer of data fusion algorithm to obtain a cross-correlation covariance matrix among the sensors;
and performing data fusion on the cross-correlation covariance matrix among the sensors, the updated state quantity predicted value and error covariance predicted value of the IMU, and the updated state quantity predicted value and error covariance predicted value of the camera based on a second-layer data fusion algorithm to obtain a fusion target, wherein the fusion target comprises a system state quantity and an error covariance matrix.
6. The lane-keeping control method of claim 1, further comprising, after the step of detecting whether an IMU signal and/or a camera signal is received:
when the IMU signal and the camera signal are received, updating the state quantity measurement value and the error covariance measurement value of the IMU and updating the state quantity measurement value and the error covariance measurement value of the camera through a Kalman filtering algorithm based on the time property and the measurement property;
and performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity measurement value and error covariance measurement value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
7. The lane-keeping control method of claim 1, further comprising, after the step of detecting whether an IMU signal and/or a camera signal is received:
when the IMU signal is received and the camera signal is not received, updating the state quantity measured value and the error covariance measured value of the IMU through a Kalman filtering algorithm based on the time property and the measurement property, and updating the state quantity predicted value and the error covariance predicted value of the camera based on the time property;
performing data fusion on the updated IMU state quantity measurement value and error covariance measurement value and the updated camera state quantity predicted value and error covariance predicted value based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
when the IMU signal is not received and the camera signal is received, updating the state quantity predicted value and the error covariance predicted value of the IMU through a Kalman filtering algorithm and based on time properties, and updating the state quantity measured value and the error covariance measured value of the camera based on the time properties and the measurement properties;
and performing data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity measured value and error covariance measured value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target.
8. A lane keeping control apparatus, characterized by comprising:
the detection unit is used for detecting whether the IMU signal and/or the camera signal are/is received;
the updating unit is used for updating the state quantity predicted value and the error covariance predicted value of the IMU and the state quantity predicted value and the error covariance predicted value of the camera through a Kalman filtering algorithm and based on time properties when the IMU signal and the camera signal are not received;
the fusion unit is used for carrying out data fusion on the updated state quantity predicted value and error covariance predicted value of the IMU and the updated state quantity predicted value and error covariance predicted value of the camera based on a multi-layer Kalman filtering data fusion algorithm to obtain a fusion target;
and the calculation unit is used for inputting the fusion target serving as an observed quantity to a vehicle lateral error dynamic model, and solving the vehicle lateral error dynamic model based on a linear quadratic regulator to obtain an input control quantity of the LKA.
9. A lane keeping control apparatus, characterized by comprising: a memory and a processor, the memory having stored therein at least one instruction, the at least one instruction being loaded and executed by the processor to implement the lane keeping control method of any of claims 1 to 7.
10. A computer-readable storage medium characterized by: the computer storage medium stores a computer program that, when executed by a processor, implements the lane-keeping control method of any one of claims 1 to 7.
CN202210088859.7A 2022-01-25 2022-01-25 Lane keeping control method, device and equipment and readable storage medium Pending CN114494360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210088859.7A CN114494360A (en) 2022-01-25 2022-01-25 Lane keeping control method, device and equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210088859.7A CN114494360A (en) 2022-01-25 2022-01-25 Lane keeping control method, device and equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114494360A (en) 2022-05-13

Family

ID=81475462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210088859.7A Pending CN114494360A (en) 2022-01-25 2022-01-25 Lane keeping control method, device and equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114494360A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115071732A (en) * 2022-07-14 2022-09-20 东风商用车有限公司 SMC (sheet molding compound) commercial vehicle intelligent driving transverse control method based on LQR (Linear quadratic response)
CN115267868A (en) * 2022-09-27 2022-11-01 腾讯科技(深圳)有限公司 Positioning point processing method and device and computer readable storage medium
CN115267868B (en) * 2022-09-27 2023-09-19 腾讯科技(深圳)有限公司 Positioning point processing method and device and computer readable storage medium
CN116872926A (en) * 2023-08-16 2023-10-13 北京斯年智驾科技有限公司 Automatic driving lane keeping method, system, device and storage medium

Similar Documents

Publication Publication Date Title
CN114494360A (en) Lane keeping control method, device and equipment and readable storage medium
CN109760677B (en) Lane keeping auxiliary method and system
Tin Leung et al. A review of ground vehicle dynamic state estimations utilising GPS/INS
CN111873991B (en) Vehicle steering control method, device, terminal and storage medium
EP0615892B1 (en) Vehicle slip angle measuring method and a device therefor
CN102975716B (en) The system and method that override detects is turned to for the speed adaptive during automatization's track centering
DE102008026397A1 (en) Radar, lidar, and camera-assisted vehicle dynamics estimation methods
CN110316197B (en) Tilt estimation method, tilt estimation device, and non-transitory computer-readable storage medium storing program
CN110386189A (en) Interference signal is accessed into datum quantity in Cascade control
Lee et al. Multirate active steering control for autonomous vehicle lateral maneuvering
CN112433531A (en) Trajectory tracking method and device for automatic driving vehicle and computer equipment
US11731649B2 (en) High precision position estimation method through road shape classification-based map matching and autonomous vehicle thereof
CN114167470A (en) Data processing method and device
US11654966B2 (en) Method for controlling the lateral position of a motor vehicle
CN109204318B (en) Method and terminal equipment for judging rapid lane change of vehicle
CN111572551A (en) Course angle calculation method, device, equipment and storage medium under parking condition
JP2001134320A (en) Lane follow-up controller
KR101954510B1 (en) Lane keeping control method and apparatus
CN108860137A (en) Control method, device and the intelligent vehicle of unstability vehicle
WO2022203026A1 (en) Driving control device
JP2001280995A (en) Device and method for drift removal, and behavior detecting sensor for moving object
JP7234617B2 (en) Body attitude angle estimation device
CN112577513A (en) State quantity error determination method and vehicle-mounted terminal
JP7206875B2 (en) Vehicle lateral speed estimator
CN114212078B (en) Method and system for detecting positioning accuracy of self-vehicle in automatic parking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination