CN113361121B - Road adhesion coefficient estimation method based on time-space synchronization and information fusion - Google Patents

Road adhesion coefficient estimation method based on time-space synchronization and information fusion

Info

Publication number
CN113361121B
CN113361121B (application CN202110684077.5A)
Authority
CN
China
Prior art keywords
road surface
vehicle
image
value
adhesion coefficient
Prior art date
Legal status
Active
Application number
CN202110684077.5A
Other languages
Chinese (zh)
Other versions
CN113361121A (en)
Inventor
郭洪艳
赵旭
刘惠
刘俊
郭景征
孟庆瑜
刘畅
戴启坤
王连冰
谭中秋
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University
Priority to CN202110684077.5A
Publication of CN113361121A
Application granted
Publication of CN113361121B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00 - Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/14 - Force analysis or force optimisation, e.g. static or dynamic forces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a road adhesion coefficient estimation method based on time-space synchronization and information fusion. The method simultaneously acquires road surface image information ahead of the vehicle and vehicle dynamic response information; extracts the effective road surface area from the acquired image with a semantic segmentation network and feeds it into a road surface type recognition network to obtain a road surface type recognition result; obtains a road adhesion coefficient estimate from the acquired vehicle dynamic response information with an unscented Kalman filter estimation method; and screens, by a time-space synchronization method, the road surface type recognition results and road adhesion coefficient estimates that satisfy the fusion condition. Finally, the weighted probability value is compared against a preset confidence threshold for the road surface type recognition result, and the final road adhesion coefficient estimate is fused and output. The method achieves time-space synchronization between the road surface type data ahead of the vehicle and the vehicle dynamic response data, and ensures that the fused road adhesion coefficient estimation result is both predictive and accurate.

Description

Road adhesion coefficient estimation method based on time-space synchronization and information fusion
Technical Field
The invention belongs to the field of road adhesion coefficient identification and relates to a road adhesion coefficient estimation method, more particularly to a road adhesion coefficient estimation method based on time-space synchronization and information fusion.
Background
Road surface condition is the most direct factor affecting whether a vehicle can run safely. The road adhesion coefficient characterizes the maximum interaction force that can be generated between tire and road surface, and directly affects the drivability, braking performance and handling stability of the automobile. Accurately acquiring the road adhesion coefficient can expand the operating range of the vehicle's active safety systems, and sensing the state of the road ahead in advance helps those systems adjust their control strategies in time. Existing identification methods, however, each have drawbacks. Vision-based road surface type recognition can identify the type of the road ahead, and its result is generally a range of the road adhesion coefficient; it has good predictive ability but carries a certain recognition error. Estimation based on vehicle dynamic response information can accurately estimate the adhesion coefficient at the vehicle's current position, with good accuracy, but it requires sufficient excitation and suffers from hysteresis. How to combine the two methods so that each contributes its strengths, achieving both advance prediction and accurate estimation of the road adhesion coefficient, is therefore of great theoretical research significance and practical application value for vehicle active safety control and for advancing intelligent vehicle development.
Disclosure of Invention
The invention aims to solve the recognition error and hysteresis present in existing road adhesion coefficient identification methods. It combines a vision-based road surface type recognition method with a road adhesion coefficient estimation method based on vehicle dynamic response information, and provides a road adhesion coefficient estimation method based on time-space synchronization and information fusion that is both predictive and accurate. First, road surface image information ahead of the vehicle and vehicle dynamic response information are acquired simultaneously. The effective road surface area is extracted from the acquired image by a semantic segmentation network and fed into a road surface type recognition network to obtain a road surface type recognition result; meanwhile, a road adhesion coefficient estimate is obtained from the acquired vehicle dynamic response information with an unscented Kalman filter estimation method. Then, the road surface type recognition results and adhesion coefficient estimates that satisfy the fusion condition are screened out by the time-space synchronization method. Finally, the weighted probability value is compared against the preset confidence threshold of the road surface type recognition result, and the final road adhesion coefficient estimate is fused and output.
In order to solve the technical problems, the invention is realized by adopting the following technical scheme:
a road surface attachment coefficient estimation method based on time-space synchronization and information fusion comprises the steps of firstly, simultaneously collecting road surface image information in front of a vehicle and vehicle dynamic response information; extracting effective pavement areas in the collected pavement images in front of the vehicles through a semantic segmentation network, sending the effective pavement areas into a pavement type recognition network to obtain a pavement type recognition result, and meanwhile, obtaining a pavement adhesion coefficient estimation value by adopting an unscented Kalman filter estimation method according to the collected vehicle dynamic response information; then, screening by a time-space synchronization method to obtain a road surface type identification result and a road surface adhesion coefficient estimation value which meet the fusion condition; and finally, judging a confidence coefficient threshold value of a preset road surface type recognition result and a weighted probability value comparison result, and fusing and outputting a final road surface adhesion coefficient estimation value, wherein the method specifically comprises the following steps:
step one, collecting image information of a front road surface and vehicle dynamics response information at the same time
The image of the road surface ahead of the vehicle is acquired by a monocular vision sensor of model USB30-AR023ZWDR with a resolution of 1920 × 1080 at 30 frames per second. The monocular vision sensor is installed inside the vehicle directly above the front windshield. The viewing angle of the high-definition dynamic camera is adjusted so that the lower boundary of the captured image lies exactly at the edge of the engine hood and the road surface occupies more than three fifths of the image. Owing to the limits of the installation position and the maximum effective distance of the monocular vision sensor, the effective road surface area in the image is determined to lie 5-50 meters ahead of the vehicle centroid. The image information output by the monocular vision sensor is transmitted to the on-board industrial personal computer over USB and read at a fixed frequency of 10 Hz;
the vehicle dynamics response information comprises wheel speed information, steering wheel corner information, vehicle speed information, vehicle longitudinal acceleration information, vehicle lateral acceleration information and vehicle yaw velocity information of four wheels, and is acquired by a vehicle body sensor and a GPS/INS inertial navigation combination system respectively, wherein the vehicle body sensor comprises the wheel speed sensors and the steering wheel corner sensors of the four wheels of the vehicle, the wheel speed information and the steering wheel corner information of the four wheels of the vehicle are acquired respectively, the sampling frequency of the vehicle body sensor is set to be 100Hz, and the vehicle body sensor is connected to a vehicle-mounted industrial personal computer through a CAN bus;
the GPS/INS inertial navigation combination system is in a model of OXTS RT2500, is arranged at the mass center position of the vehicle and is rigidly connected with the vehicle, and is used for acquiring vehicle speed information, longitudinal acceleration information, lateral acceleration information and yaw angular velocity information;
the model of the vehicle-mounted industrial personal computer is Nuvo-6108GC, and a double-channel Kvaser mini PCI-Express CAN/CAN FD adapter is installed;
after information collected by the monocular vision sensor, the vehicle body sensor and the GPS/INS inertial navigation combined system is transmitted to the industrial personal computer, vehicle dynamics response information is stored in the same file in real time through data initialization and is stored in a csv format, meanwhile, the images and the vehicle dynamics response information are inserted into a timestamp taking UTC time output by the GPS/INS inertial navigation combined system as a reference, and timestamp data updating is achieved through a Visual Basic for Applications editor in Excel;
step two, obtaining a road surface type classification recognition result and a road surface adhesion coefficient estimation value
A pre-trained semantic segmentation network and a pre-trained road surface type classification network are loaded on the on-board industrial personal computer. The computer reads the acquired road surface images at a fixed frequency of 10 Hz and feeds them into the semantic segmentation network; environmental objects and other vehicles are removed from the image by semantic segmentation and mask processing, giving a sample image containing only the effective road surface area. This sample image is then fed into the road surface type recognition network, which identifies the road surface type;
the semantic segmentation network selects ERFNet for segmenting images to extract effective pavement image areas, a cityscaps automatic driving data set is adopted to pre-train a network model, the model is of an encoder and a decoder structure, the encoder comprises a decomposition convolution layer non-cottleneck-1D and a down sampling layer down sampler block, and the structure of the semantic segmentation network model ERFNet is shown in the following table 1:
TABLE 1 ERFNet structures
(Table available only as an image in the original publication.)
ERFNet is first pre-trained on the Cityscapes dataset, which contains 30 semantic classes including vehicles, pedestrians, traffic signs, buildings and road surface areas. The Cityscapes data are converted to TFRecord format to reduce storage occupancy and guarantee fast data reading by the network model, improving processing efficiency. For data augmentation, the read image tensors are randomly scaled between 0.5× and 2× in steps of 0.25, randomly cropped to 769 × 769 pixels and randomly flipped left-right, which improves the adaptability of the segmentation network. The Poly learning rate rule is selected during training, with the learning rate decay expression of formula (1):
LR = LR_initial × (1 - iter/max_iter)^power (1)

where the initial learning rate is LR_initial = 0.001, iter is the training iteration step, the maximum number of training steps is max_iter = 2000 × 10³, and power is set to 0.9;
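For illustration only (no code appears in the patent), the decay rule of formula (1) can be sketched as a small Python function; the function name and argument defaults simply restate the values given above:

```python
def poly_learning_rate(iter_step: int,
                       initial_lr: float = 0.001,
                       max_iter: float = 2000e3,
                       power: float = 0.9) -> float:
    """Poly learning-rate decay, formula (1): LR = LR_initial * (1 - iter/max_iter)^power."""
    return initial_lr * (1.0 - iter_step / max_iter) ** power

# e.g. halfway through training the rate has decayed to about 0.000536
print(poly_learning_rate(1_000_000))
```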
according to the hardware performance of the vehicle-mounted industrial personal computer, setting batch processing size as batch _ size ═ 8, storing model parameters once every 10 minutes, simultaneously using a verification set to evaluate the performance of the network, wherein an average Intersection over unit is adopted as an evaluation index, the average Intersection over unit is abbreviated as MIoU later, the ratio of the Intersection to the Union of each type of prediction result and true value is represented, and the result of averaging is summed, wherein the formula is as follows (2):
MIoU = (1/(k+1)) × Σ_(i=0)^(k) TP/(TP + FP + FN) (2)

where TP, FP, TN and FN are defined by the confusion matrix, as shown in Table 2:

TABLE 2 confusion matrix

                | Predicted positive   | Predicted negative
Actual positive | TP (true positive)   | FN (false negative)
Actual negative | FP (false positive)  | TN (true negative)
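A minimal sketch (not from the patent) of how the MIoU of formula (2) follows from a per-class confusion matrix; the convention that rows are ground truth and columns are predictions is an assumption:

```python
import numpy as np

def mean_iou(confusion: np.ndarray) -> float:
    """MIoU, formula (2): average over classes of TP / (TP + FP + FN).

    confusion[i, j] = number of pixels of true class i predicted as class j,
    so TP is the diagonal, FP the column sum minus TP, FN the row sum minus TP.
    """
    tp = np.diag(confusion).astype(float)
    fp = confusion.sum(axis=0) - tp
    fn = confusion.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-12)  # guard against empty classes
    return float(iou.mean())
```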
After pre-training, the MIoU index is 0.696, which meets the precision requirement. Processing a road surface image with the semantic segmentation network ERFNet yields the positions of the different semantic classes in the image, distinguished by different RGB values. The RGB color standard is a common image color system in which R denotes red, G green and B blue; by varying and superimposing the three color channels, almost every color perceivable by humans can be produced. Each channel takes values 0-255 and can be represented by 8 bits, so a full image has 24-bit depth. The RGB value assigned to the road surface area is (128, 64, 128), and the average processing time per image is 0.078 seconds;
the semantic segmentation network obtains the position of the road surface area in the image and the corresponding RGB value, the RGB value of the road surface area is regarded as an interesting area, and the road surface area is extracted from the image by means of a mask in a digital image processing method;
the mask processing process comprises the steps of firstly carrying out slicing operation on a semantic segmentation prediction result image to obtain a two-dimensional matrix of three color channels, wherein three channels respectively correspond to RGB when the image is processed in OpenCV, as the RGB values corresponding to a road surface area are (128,64,128), firstly making a mask of a red channel, namely, the mask value corresponding to a pixel point with the R value of 128 in an identification result image is 1, and the mask value corresponding to other positions is 0, then making a mask of a green channel, the mask value corresponding to a pixel point with the G value of 64 in the identification result image is 1, and the mask value corresponding to the other positions is 0, finally making a mask of a blue channel, the mask value corresponding to a pixel point with the B value of 128 in the identification result image is 1, the mask value corresponding to other positions is 0, only the pixel points with the three channel values being 1 are reserved, and when the pixel point value of a certain channel is 0, the pixel points corresponding to other channels are set to zero, so as to ensure that only the road surface area is extracted, finally, splicing the masks of the three color channels, and respectively performing matrix dot multiplication operation on the masks and the channels corresponding to the original image to obtain a sample image only containing a pavement area, wherein the non-pavement area is changed into black, and the average processing time of the masks is 0.0046 seconds;
obtaining a sample image only containing a road surface area through semantic segmentation and mask processing, sending the sample image into a road surface type identification network for identification, obtaining an identification result as a road surface type, wherein the road surface type comprises 8 road surface types which are common in daily life and respectively comprise dry asphalt, wet asphalt, dry cement, wet cement, brick paving, loose snow, compacted snow and an ice film, a road surface adhesion coefficient corresponding to the road surface type is not a fixed value but a range value, and the road surface adhesion coefficient is determined by referring to an automobile longitudinal sliding adhesion coefficient reference value table and an automobile longitudinal sliding adhesion coefficient reference value table of an ice and snow road surface in GA/T643-2006 typical traffic accident form vehicle driving speed technical identification, considering that the road surface adhesion coefficient is related to road surface abrasion, tire abrasion and air temperature and humidity influence factors, and the road surface adhesion coefficient range values are different under different driving speeds, for this purpose, a comparison table of different road surface types and adhesion coefficient range values is set for two cases of high-speed running and low-speed running as shown in table 3:
TABLE 3 Road surface types and adhesion coefficient range values

Road surface type | Adhesion coefficient (below 48 km/h) | Adhesion coefficient (above 48 km/h)
Dry asphalt       | 0.55-0.8  | 0.45-0.7
Wet asphalt       | 0.45-0.7  | 0.4-0.6
Dry cement        | 0.55-0.8  | 0.5-0.75
Wet cement        | 0.45-0.75 | 0.45-0.65
Brick paving      | 0.5-0.8   | 0.45-0.7
Loose snow        | 0.2-0.45  | 0.2-0.35
Compacted snow    | 0.1-0.25  | 0.1-0.2
Ice film          | 0.1-0.2   | 0.1-0.15
The network used for type recognition of the effective road surface sample images obtained by semantic segmentation and masking is YOLO-V3, a fully convolutional network that uses residual skip connections and performs downsampling with convolutions of stride 2; the structure of YOLO-V3 is shown in Table 4 below:
TABLE 4 structure of YOLO-V3
(Table available only as an image in the original publication.)
The dataset used to train YOLO-V3 consists of 8000 images collected according to the 8 road surface types defined in Table 3, 1000 images per class. The dataset is shuffled and 200 images per class (20%) are randomly drawn as the validation set, the remaining 800 per class forming the training set. The final training accuracy is 97.8% and the average recognition time for a single image is 0.0095 seconds; the total average processing time of the semantic segmentation network, the mask and the recognition network is 0.0921 seconds, meeting the precision and real-time requirements;
storing road surface type classification identification result data, firstly recording UTC time output by a current GPS/INS inertial navigation combination system, recording the road surface type identification result as index, then obtaining a road surface adhesion coefficient range value corresponding to the road surface type through a lookup table 3, recording the lower limit as a and the upper limit as b, and taking the average value as a prior value mu of the adhesion coefficient corresponding to different road surface typesPI.e. muP(a + b)/2 in matrix [ index a b μp]Storing road surface type classification recognition result data in a form;
meanwhile, vehicle dynamics response information is input into an unscented Kalman filter to estimate a road adhesion coefficient of the current position of the vehicle, a vehicle dynamics model is firstly established, and the method for estimating the road adhesion coefficient by adopting the dynamics response information pays attention to the longitudinal motion, the lateral motion and the yaw motion of the vehicle, so that the vehicle model is simplified and assumed: neglecting the smaller air resistance and rolling resistance; assuming that the vehicle is running on a horizontal road surface, ignoring roll and pitch motions; neglecting the effect of the suspension, assuming a rigid connection between the wheel and the sprung mass portion; the tire characteristics of each tire are consistent; in order to facilitate the study of the vehicle dynamic response characteristics, a set of normalized coordinate systems needs to be established, and the following vehicle dynamic coordinate systems are defined:
a vehicle body coordinate system: selecting an ISO standard coordinate system, taking the position of the mass center of the vehicle as a coordinate origin O, taking the driving direction of the vehicle as the positive direction of an x coordinate axis and parallel to a road surface, taking a z coordinate axis which is vertical to the road surface and faces upwards, and obtaining the positive direction of a y coordinate axis by a right-hand rule;
tire coordinate system: an ISO standard coordinate system is also adopted, a tire suspension center is selected as an origin of coordinates, and each tire has a reference coordinate system; the advancing direction of the tire is the positive direction of an x coordinate axis and is parallel to the road surface, a z coordinate axis is vertical to the road surface and is upward, and the positive direction of a y coordinate axis can be obtained by a right-hand rule; when the vehicle runs straight, the axial direction is consistent with the vehicle body coordinate system, and a tire steering angle exists during steering, namely the tire coordinate system rotates a certain angle relative to the vehicle body coordinate system;
according to the darenbell principle, the vehicle dynamics equation is derived:
vehicle motion along x-axis:
a_x = ((F_xfl + F_xfr)cosδ_f - (F_yfl + F_yfr)sinδ_f + F_xrl + F_xrr)/m (3)

where F_xfl is the left front wheel longitudinal tire force, F_xfr the right front wheel longitudinal tire force, F_xrl the left rear wheel longitudinal tire force, F_xrr the right rear wheel longitudinal tire force, F_yfl and F_yfr the left and right front wheel lateral tire forces, δ_f the front wheel steering angle and m the vehicle mass,
vehicle motion along y-axis:
a_y = ((F_xfl + F_xfr)sinδ_f + (F_yfl + F_yfr)cosδ_f + F_yrl + F_yrr)/m (4)
vehicle motion about the z-axis:
γ̇ = M_z/I_z (5)

where I_z is the moment of inertia about the z-axis and γ̇ the yaw angular acceleration; the yaw moment M_z is calculated as:

M_z = L_f[(F_xfl + F_xfr)sinδ_f + (F_yfl + F_yfr)cosδ_f] - L_r(F_yrl + F_yrr) + (B_f/2)[(F_xfr - F_xfl)cosδ_f + (F_yfl - F_yfr)sinδ_f] + (B_r/2)(F_xrr - F_xrl) (6)

where L_f is the distance from the vehicle center of mass to the front axle, L_r the distance from the center of mass to the rear axle, B_f the front wheel track and B_r the rear wheel track. The longitudinal and lateral tire forces in formulas (3), (4) and (6) are solved with the magic formula:
y = D·sin(C·arctan(B·x - E·(B·x - arctan(B·x)))) (7)

which satisfies:

x = X + S_H
Y(X) = y(x) + S_V (8)

where Y is the output variable, longitudinal or lateral tire force, and X is the input variable: when Y is a longitudinal tire force, X is the longitudinal slip ratio κ; when Y is a lateral tire force, X is the tire slip angle α. B is the stiffness factor, C the shape factor, D the peak factor, E the curvature factor, S_H the horizontal shift and S_V the vertical shift;
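A direct transcription of formulas (7)-(8) into Python (an editorial sketch; the parameter values B, C, D, E, S_H, S_V must come from tire data and are not given in the patent):

```python
import numpy as np

def magic_formula(X: float, B: float, C: float, D: float, E: float,
                  S_H: float = 0.0, S_V: float = 0.0) -> float:
    """Pacejka magic formula, formulas (7)-(8).

    X is the slip quantity (slip ratio kappa or slip angle alpha);
    the return value is the longitudinal or lateral tire force.
    """
    x = X + S_H                                              # horizontal shift, formula (8)
    y = D * np.sin(C * np.arctan(B * x - E * (B * x - np.arctan(B * x))))
    return y + S_V                                           # vertical shift, formula (8)
```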
in order to avoid unstable tire performance under the condition of low vehicle speed, a low-speed threshold value v is selectedlowCalculating the longitudinal slip ratio kappai
Figure GDA0003517881870000072
In the formula, wiAs the wheel speed, ReThe low speed threshold is set as v for the effective radius of the wheel rollinglow=0.1km/h,vxiThe longitudinal speed under the wheel center coordinate system;
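Formula (9) as reconstructed above can be sketched as follows (editorial illustration; the max-based low-speed protection is an assumption consistent with the stated role of v_low):

```python
def longitudinal_slip(w_i: float, v_xi: float,
                      R_e: float = 0.325, v_low_kmh: float = 0.1) -> float:
    """Longitudinal slip ratio with low-speed protection, formula (9).

    w_i: wheel angular speed [rad/s]; v_xi: wheel-centre longitudinal speed [m/s];
    R_e: effective rolling radius [m]; v_low guards against division by ~0.
    """
    v_low = v_low_kmh / 3.6                    # 0.1 km/h -> m/s
    wheel_speed = w_i * R_e
    return (wheel_speed - v_xi) / max(wheel_speed, v_xi, v_low)
```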
the vertical loads of the four wheels of the vehicle are:
F_zfl = m·g·L_r/(2(L_f + L_r)) - m·a_x·h/(2(L_f + L_r)) - (K_φf/(K_φf + K_φr))·m·a_y·h/B_f
F_zfr = m·g·L_r/(2(L_f + L_r)) - m·a_x·h/(2(L_f + L_r)) + (K_φf/(K_φf + K_φr))·m·a_y·h/B_f
F_zrl = m·g·L_f/(2(L_f + L_r)) + m·a_x·h/(2(L_f + L_r)) - (K_φr/(K_φf + K_φr))·m·a_y·h/B_r
F_zrr = m·g·L_f/(2(L_f + L_r)) + m·a_x·h/(2(L_f + L_r)) + (K_φr/(K_φf + K_φr))·m·a_y·h/B_r (10)

where h is the height of the vehicle center of mass above the ground, K_φf the front axle roll stiffness and K_φr the rear axle roll stiffness;
selecting the adhesion coefficients of four wheels as state variables of the estimation system, and recording as follows:
x = [μ_fl μ_fr μ_rl μ_rr]^T (11)
selecting the front wheel steering angle as the input signal of the estimation system, and recording as u ═ deltaf(ii) a Selecting longitudinal acceleration, lateral acceleration and yaw angular acceleration measured by a sensor as observation variables of an estimation system, recording the observation variables,
Figure GDA0003517881870000081
the estimation system is as follows:
ẋ = f(x, u) + w
y = h(x, u) + v (13)

where the state equation of the estimation system is f(x, u) = I_4×4·x, with I_4×4 the 4th-order identity matrix, and the measurement equation h(x, u) is composed of equations (3), (4) and (5). Discretizing equation (13) yields the nonlinear difference form:

x_(k+1) = f(x_k, u_k) + w_k
y_k = h(x_k, u_k) + v_k (14)

where the discretized state equation is f(x_k, u_k) = I_4×4·x_k, T is the sampling time, and w_k and v_k are the process noise and measurement noise respectively. The initial state of the unscented Kalman filter is x_0 = [1 1 1 1]^T, the initial estimation error covariance is P_0 = 0.01 × I_4×4, the initial process noise covariance is Q_k = I_4×4 and the initial measurement noise covariance is R_k = 0.01 × I_4×4, where I_4×4 is the identity matrix; the sampling time is T = 0.001 s, and the noise means and covariances satisfy:

E[w_k] = 0, E[w_k·w_k^T] = Q_k, E[v_k] = 0, E[v_k·v_k^T] = R_k (15)
the flow of estimating the road adhesion coefficient by the unscented kalman filter is as follows:
(1) initializing, setting parameters of unscented Kalman filter, including estimating initial value of state variable of system as x0The initial value of the covariance matrix of the estimation error is P0
(2) Time updating, establishing Sigma sampling point chii,k
Figure GDA0003517881870000086
In which λ represents the mean of the random variables
Figure GDA0003517881870000087
A scaling factor of the distance from the Sigma sample point, assuming that the state estimate obtained at time k-1 is
Figure GDA0003517881870000088
And an estimation error covariance of
Figure GDA0003517881870000089
And a weight of Wi mAnd Wi cSigma sampling point of
Figure GDA00035178818700000810
The sampling point passes through
Figure GDA0003517881870000091
After the equation, the prior estimated value at the k moment is obtained by weighted summation
Figure GDA0003517881870000092
Sum error covariance
Figure GDA0003517881870000093
Figure GDA0003517881870000094
Figure GDA0003517881870000095
(3) Measurement update: from the current best estimates x̂_k^- and P_k^- of the mean and covariance of the state variable x, a new set of Sigma sampling points χ_i,k is selected. These sampling points are passed through the nonlinear measurement equation, γ_i,k = h(χ_i,k, u_k), and the estimate ŷ_k of the measurement at time k and the associated covariance matrices are obtained by weighted summation:

ŷ_k = Σ_(i=0)^(2n) W_i^m·γ_i,k (19)

P_yy,k = Σ_(i=0)^(2n) W_i^c·(γ_i,k - ŷ_k)·(γ_i,k - ŷ_k)^T + R_k (20)

P_xy,k = Σ_(i=0)^(2n) W_i^c·(χ_i,k - x̂_k^-)·(γ_i,k - ŷ_k)^T (21)

Finally the UKF filter gain matrix K_k is computed and, combined with the sensor measurement y_k obtained at time k, the posterior state estimate x̂_k at time k and its estimation error covariance P_k are derived:

K_k = P_xy,k·P_yy,k^(-1) (22)

x̂_k = x̂_k^- + K_k·(y_k - ŷ_k) (23)

P_k = P_k^- - K_k·P_yy,k·K_k^T (24)
The whole estimation algorithm iterates as k increases. The estimated road adhesion coefficient at the current vehicle position is denoted μ_D, and the UTC time output by the current GPS/INS integrated inertial navigation system is recorded with it;
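For readers who want to trace equations (16)-(24), the following NumPy sketch implements one standard UKF cycle; it is an editorial illustration, and the sigma-point scaling parameters alpha, beta and kappa are conventional defaults not specified in the patent:

```python
import numpy as np

def ukf_step(x_hat, P, y_k, u_k, f, h, Q, R, alpha=1.0, beta=2.0, kappa=0.0):
    """One predict/update cycle of the unscented Kalman filter, equations (16)-(24)."""
    n = x_hat.size
    lam = alpha ** 2 * (n + kappa) - n
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

    def sigma_points(m, C):
        # equation (16): mean plus/minus the columns of the matrix square root
        S = np.linalg.cholesky((n + lam) * C)
        return np.column_stack([m, m[:, None] + S, m[:, None] - S])

    # time update, equations (16)-(18)
    chi = sigma_points(x_hat, P)
    chi_f = np.column_stack([f(chi[:, i], u_k) for i in range(2 * n + 1)])
    x_prior = chi_f @ Wm                                   # equation (17)
    P_prior = Q + sum(Wc[i] * np.outer(chi_f[:, i] - x_prior, chi_f[:, i] - x_prior)
                      for i in range(2 * n + 1))           # equation (18)

    # measurement update, equations (19)-(24)
    chi = sigma_points(x_prior, P_prior)
    gam = np.column_stack([h(chi[:, i], u_k) for i in range(2 * n + 1)])
    y_pred = gam @ Wm                                      # equation (19)
    P_yy = R + sum(Wc[i] * np.outer(gam[:, i] - y_pred, gam[:, i] - y_pred)
                   for i in range(2 * n + 1))              # equation (20)
    P_xy = sum(Wc[i] * np.outer(chi[:, i] - x_prior, gam[:, i] - y_pred)
               for i in range(2 * n + 1))                  # equation (21)
    K = P_xy @ np.linalg.inv(P_yy)                         # equation (22)
    x_post = x_prior + K @ (y_k - y_pred)                  # equation (23)
    P_post = P_prior - K @ P_yy @ K.T                      # equation (24)
    return x_post, P_post
```

With the identity state equation f(x, u) = x, h(x, u) stacking equations (3)-(5), and the initial values x_0, P_0, Q_k and R_k given above, calling ukf_step once per T = 0.001 s sample would correspond to the estimation loop described here.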
step three, screening the vehicle front road image type result and the road adhesion coefficient estimation value which meet the fusion condition through a space-time synchronization method
The vehicle reaches position x_k at time t_k, i.e. at time t_k the vehicle centroid is at x_k, at which moment the road adhesion coefficient estimate μ_D for position x_k is obtained;
Because the image of the road ahead captured by the monocular camera at point P_i is transmitted to the on-board industrial personal computer and the road type result only becomes available after online processing by the semantic segmentation network, the mask and the road type recognition network, the vehicle has travelled to point P_i' by the end of that processing time. Road surface images taken before time t_k are therefore sought that simultaneously satisfy the following conditions:
(1) the road surface type recognition is completed before time t_k:

P_i' ≤ x_k (25)
(2) at time t_k, the vehicle position x_k lies within the road region of the image taken at P_i:

P_i + 5 ≤ x_k ≤ P_i + 50 (26)
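A sketch of this screening step (editorial; the record layout is a hypothetical convention, with 'P' the camera position when the image was taken and 't_done' the UTC time at which recognition finished):

```python
def eligible_samples(image_log: list, x_k: float, t_k: float) -> list:
    """Return image samples satisfying fusion conditions (25) and (26)."""
    return [s for s in image_log
            if s['t_done'] <= t_k                      # condition (25): recognition done by t_k
            and s['P'] + 5.0 <= x_k <= s['P'] + 50.0]  # condition (26): x_k in the 5-50 m window
```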
step four, finally outputting the road adhesion coefficient estimated value with predictability and accuracy by the vehicle front road surface type recognition result and the road adhesion coefficient estimated value through a fusion strategy
At time t_k, the 10 road image sample points nearest before t_k are found according to the space-time synchronization method; they were taken at positions P_i (i = 1, 2, …, 10) with road surface type recognition results Index_i. The larger the value of i, the shorter the interval between the sample and time t_k and the closer the position where the image was taken is to x_k, so the higher the confidence of that image recognition result. Weight coefficients w_P,i are preset such that w_P,i increases with i, and they satisfy formula (27):

Σ_(i=1)^(10) w_P,i = 1 (27)
detecting weighted summation probability values of the same Index values in 10 sample points, wherein j is 0,1, …,7, and represents 8 pavement types;
Figure GDA0003517881870000102
finding the maximum probability value p therein by comparisonmaxAnd its corresponding Index value, which represents the most reliable road surface type identification result; finally obtain tkImage recognition result [ Index ] with time for fusion algorithmk ak bk]Presetting an image recognition confidence threshold value pCFThe following fusion rules are established:
(1) if p_max < p_CF, the road surface features are not obvious or the road image database has not yet established a sample set for this kind of image; the final adhesion coefficient result is output by the dynamic response estimation method, i.e. μ = μ_D. The group of road surface images and the prior value μ_p = μ_D at this moment are saved, and the updated image sample library is used to retrain and update the classification network;
(2) if (p_max ≥ p_CF) ∩ (a_k ≤ μ_D ≤ b_k), the road surface image features are obvious and the road surface type recognition result has high confidence and is consistent with the result of the dynamic response estimation method; the final adhesion coefficient output is μ = μ_D, and the prior value of the image recognition result is updated as μ_p = μ_D;
(3) if (p_max ≥ p_CF) ∩ ((μ_D < a_k) ∪ (μ_D > b_k)), the road surface type recognition result has high confidence but is inconsistent with the result of the dynamic response estimation method; the probability density function truncation method is adopted, using the adhesion coefficient range (a_k, b_k) as the constraint condition for correcting the dynamics estimation result, and the corrected adhesion coefficient estimate is the final result at this time, denoted μ = μ_C;
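The three rules can be summarized in a short decision function (an editorial sketch; p_CF = 0.6 is an illustrative threshold not taken from the patent, and truncate_estimate stands for the probability density function truncation method described next):

```python
def fuse(p_votes: dict, a_k: float, b_k: float, mu_D: float, p_CF: float = 0.6):
    """Fusion rules (1)-(3); p_votes maps road-type index j -> weighted probability p_j."""
    p_max = max(p_votes.values())
    if p_max < p_CF:                      # rule (1): low image confidence, trust dynamics
        return mu_D, 'save_images_and_retrain'
    if a_k <= mu_D <= b_k:                # rule (2): vision and dynamics agree
        return mu_D, 'update_prior'
    # rule (3): confident vision disagrees -> truncate the UKF estimate to (a_k, b_k)
    return truncate_estimate(mu_D, a_k, b_k), 'corrected'  # hypothetical helper, see below
```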
The probability density function truncation method is as follows: suppose that at time t_k the posterior state estimate x̂_k and the estimation error covariance P_k of the unscented Kalman filter have been obtained, see equations (23) and (24), together with s scalar state constraints:

a_ki ≤ φ_ki^T·x_k ≤ b_ki, i = 1, 2, …, s (29)

where φ_ki^T·x_k is a linear function of the state variable, and a_ki and b_ki are the minimum and maximum constraint boundary values respectively. The problem becomes truncating the probability density function at the s constraint boundaries and obtaining, from the truncated probability density function, the mean x̃_k and covariance P̃_k that satisfy the constraint conditions. The s constraints are processed one by one; the state estimate satisfying the first i constraints is denoted x̃_k^i with covariance P̃_k^i, and when i = 0:

x̃_k^0 = x̂_k, P̃_k^0 = P_k (30)
the probability density function truncation problem for multidimensional state variables can be solved by linear transformation:
Figure GDA00035178818700001111
in the formula, xkFor the random state variable to be estimated, zkiFor new random state variables after transformation, T and W are
Figure GDA00035178818700001112
Is a criterion of a proper decomposition matrix, i.e. satisfies
Figure GDA00035178818700001113
T is an orthogonal matrix, W is a diagonal matrix, a square root matrix of the orthogonal matrix is easy to obtain, rho is an n multiplied by n dimension orthogonal matrix, and the following conditions are satisfied:
Figure GDA00035178818700001114
according toThe above conclusion can convert the general bilateral linear constraint into the normalized scalar limitation, i.e. the conversion in the form of the random variable zkiUpper limit value d of the constraint boundary ofkiAnd a lower limit value cki
Figure GDA0003517881870000121
Figure GDA0003517881870000122
Because of the random variable zkiThe error covariance matrix of (2) is a unit matrix, the components are statistically independent of each other, and only the first element is constrained, see equations (33) and (34), so that x is originally xkIs converted into z by multi-dimensional joint probability density function truncationkiZ before being constrained, i.e. when i is 0kiObey the standard normal distribution N (0,1), i.e. satisfies:
Figure GDA0003517881870000123
knowing the new constraint boundaries, the new constraint boundary is determined by removing the original pdf (z)ki) And calculating the total area of the probability density function of the rest part outside the middle constraint boundary:
Figure GDA0003517881870000124
in the equation, the error function is defined as:
Figure GDA0003517881870000125
normalizing the probability density function after the constrained boundary is cut off to obtain zkiIs constrained by a first element ofThe latter probability density function, denoted pdf (z)k,i+1):
Figure GDA0003517881870000131
Wherein the content of the first and second substances,
Figure GDA0003517881870000132
zk,i+1the mean and variance calculation formula of (a) is as follows:
Figure GDA0003517881870000133
Figure GDA0003517881870000134
thus, the random variable z satisfying the first constraint condition is obtainedkiState estimation mean and covariance of (2):
Figure GDA0003517881870000135
performing inverse transformation on the formula (31) to obtain a random variable x meeting a first constraint conditionkState estimation mean and covariance of (2):
Figure GDA0003517881870000136
adding 1 to i, repeating the equations (31) to (43) until all s constraint conditions are met, and obtaining the state estimation error and the covariance finally meeting all the constraint conditions by a probability density function truncation method:
Figure GDA0003517881870000137
thus, the road surface adhesion coefficient output by the unscented Kalman filter can be obtained by a probability density function truncation method to meet the constraint condition (a)k,bk) Road surface adhesion coefficient muC
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a road adhesion coefficient estimation method based on space-time synchronization and information fusion, and the space-time synchronization method in the third step screens out the vehicle front road type identification result meeting the fusion condition and the road adhesion coefficient estimation value adopting an unscented Kalman filter, so that the space-time synchronization of the vehicle front road type data and the vehicle dynamic response information data can be realized. In the fourth step, a vehicle front road surface type identification result meeting the fusion condition and a road surface adhesion coefficient estimation value adopting an unscented Kalman filter are fused, so that the predictability and the accuracy of the fused road surface adhesion coefficient estimation result can be ensured.
Drawings
The invention is further described with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a road adhesion coefficient estimation method based on space-time synchronization and information fusion according to the present invention.
FIG. 2 is a schematic illustration of an experimental platform used in the embodiments.
FIG. 3 is a diagram illustrating spatio-temporal synchronization in an embodiment.
Fig. 4 is a flowchart of the fusion strategy of step four of the method.
Detailed Description
The invention is described in detail below with reference to the accompanying drawings, using a Dongfeng AX7 test vehicle as the platform; the main parameters of the Dongfeng AX7 test vehicle platform are shown in Table 5:
TABLE 5 Main parameters of the experimental platform
Parameter | Unit | Value
Vehicle mass m | kg | 1542
Distance from vehicle center of mass to front axle l_f | m | 1.082
Distance from vehicle center of mass to rear axle l_r | m | 1.63
Moment of inertia about z-axis I_z | kg·m² | 2315.3
Front wheel track B_f | m | 1.585
Rear wheel track B_r | m | 1.585
Wheel rolling radius R_e | m | 0.325
Height of monocular vision sensor above ground H | m | 1.52
Distance from monocular vision sensor to left side edge of vehicle L_3 | m | 0.92
As shown in fig. 1, the flow of the road adhesion coefficient estimation method based on time-space synchronization and information fusion is as follows: the image of the road surface ahead of the vehicle and the dynamic response information of the current vehicle position are acquired simultaneously, and the UTC time output by the GPS/INS integrated inertial navigation system is added to each item of information as a timestamp; the effective road surface area is extracted from the acquired image by the semantic segmentation network and the mask and fed into the road surface type recognition network to obtain a road surface type recognition result, while a road adhesion coefficient estimate is obtained from the acquired vehicle dynamic response information with the unscented Kalman filter estimation method; the road surface type recognition results and adhesion coefficient estimates satisfying the fusion condition are then screened out by the time-space synchronization method; finally the weighted probability value is compared against the preset confidence threshold of the road surface type recognition result, and the final road adhesion coefficient estimate is fused and output;
step one, collecting image information of a front road surface and vehicle dynamics response information at the same time
The image of the road surface ahead of the vehicle is acquired by a monocular vision sensor of model USB30-AR023ZWDR with a resolution of 1920 × 1080 at 30 frames per second. As shown in fig. 2, the monocular vision sensor is installed inside the vehicle directly above the front windshield, at a distance L_3 = 0.92 m from the left side edge of the vehicle and a height H = 1.52 m above the ground. The viewing angle of the high-definition dynamic camera is adjusted so that the lower boundary of the captured image lies exactly at the edge of the engine hood and the road surface occupies more than three fifths of the image. Owing to the limits of the installation position and the maximum effective distance of the monocular vision sensor, the effective road surface area in the image is determined to lie within 5-50 meters ahead of the vehicle center of mass (COG), i.e. L_1 is 5 meters long and L_2 is 45 meters long. The image information output by the monocular vision sensor is transmitted to the on-board industrial personal computer over USB and read at a fixed frequency of 10 Hz;
the vehicle dynamics response information comprises wheel speed information, steering wheel corner information, vehicle speed information, vehicle longitudinal acceleration information, vehicle lateral acceleration information and vehicle yaw velocity information of four wheels, and is acquired by a vehicle body sensor and a GPS/INS inertial navigation combination system respectively, wherein the vehicle body sensor comprises the wheel speed sensors and the steering wheel corner sensors of the four wheels of the vehicle, the wheel speed information and the steering wheel corner information of the four wheels of the vehicle are acquired respectively, the sampling frequency of the vehicle body sensor is set to be 100Hz, and the vehicle body sensor is connected to a vehicle-mounted industrial personal computer through a CAN bus;
the GPS/INS inertial navigation combination system is in a model of OXTS RT2500, is arranged at the mass center position of the vehicle and is rigidly connected with the vehicle, and is used for acquiring vehicle speed information, longitudinal acceleration information, lateral acceleration information and yaw angular velocity information;
the model of the vehicle-mounted industrial personal computer is Nuvo-6108GC, and a double-channel Kvaser mini PCI-Express CAN/CAN FD adapter is installed;
after information collected by the monocular vision sensor, the vehicle body sensor and the GPS/INS inertial navigation combined system is transmitted to the industrial personal computer, vehicle dynamics response information is stored in the same file in real time through data initialization and is stored in a csv format, meanwhile, the images and the vehicle dynamics response information are inserted into a timestamp taking UTC time output by the GPS/INS inertial navigation combined system as a reference, and timestamp data updating is achieved through a Visual Basic for Applications editor in Excel;
step two, obtaining a road surface type classification recognition result and a road surface adhesion coefficient estimation value
A pre-trained semantic segmentation network and a pre-trained road surface type classification network are loaded on the on-board industrial personal computer. The computer reads the acquired road surface images at a fixed frequency of 10 Hz and feeds them into the semantic segmentation network; environmental objects and other vehicles are removed from the image by semantic segmentation and mask processing, giving a sample image containing only the effective road surface area. This sample image is then fed into the road surface type recognition network, which identifies the road surface type;
the semantic segmentation network selects ERFNet for segmenting images to extract effective pavement image areas, a cityscaps automatic driving data set is adopted to pre-train a network model, the model is of an encoder and a decoder structure, the encoder comprises a decomposition convolution layer non-cottleneck-1D and a down sampling layer down sampler block, and the structure of the semantic segmentation network model ERFNet is shown in the following table 1:
TABLE 1 ERFNet structures
(Table available only as an image in the original publication.)
ERFNet is first pre-trained on the Cityscapes dataset, which contains 30 semantic classes including vehicles, pedestrians, traffic signs, buildings and road surface areas. The Cityscapes data are converted to TFRecord format to reduce storage occupancy and guarantee fast data reading by the network model, improving processing efficiency. For data augmentation, the read image tensors are randomly scaled between 0.5× and 2× in steps of 0.25, randomly cropped to 769 × 769 pixels and randomly flipped left-right, which improves the adaptability of the segmentation network. The Poly learning rate rule is selected during training, with the learning rate decay expression of formula (1):
LR = LR_initial × (1 - iter/max_iter)^power (1)

where the initial learning rate is LR_initial = 0.001, iter is the training iteration step, the maximum number of training steps is max_iter = 2000 × 10³, and power is set to 0.9;
according to the hardware performance of the vehicle-mounted industrial personal computer, setting batch processing size as batch _ size ═ 8, storing model parameters once every 10 minutes, simultaneously using a verification set to evaluate the performance of the network, wherein an average Intersection over unit is adopted as an evaluation index, the average Intersection over unit is abbreviated as MIoU later, the ratio of the Intersection to the Union of each type of prediction result and true value is represented, and the result of averaging is summed, wherein the formula is as follows (2):
MIoU = (1/(k+1)) × Σ_(i=0)^(k) TP/(TP + FP + FN) (2)

where TP, FP, TN and FN are defined by the confusion matrix, as shown in Table 2:

TABLE 2 confusion matrix

                | Predicted positive   | Predicted negative
Actual positive | TP (true positive)   | FN (false negative)
Actual negative | FP (false positive)  | TN (true negative)
After pre-training, the MIoU index is 0.696, which meets the precision requirement. Processing a road surface image with the semantic segmentation network ERFNet yields the positions of the different semantic classes in the image, distinguished by different RGB values. The RGB color standard is a common image color system in which R denotes red, G green and B blue; by varying and superimposing the three color channels, almost every color perceivable by humans can be produced. Each channel takes values 0-255 and can be represented by 8 bits, so a full image has 24-bit depth. The RGB value assigned to the road surface area is (128, 64, 128), and the average processing time per image is 0.078 seconds;
the semantic segmentation network obtains the position of the road surface area in the image and the corresponding RGB value, the RGB value of the road surface area is regarded as an interesting area, and the road surface area is extracted from the image by means of a mask in a digital image processing method;
the mask processing process comprises the steps of firstly carrying out slicing operation on a semantic segmentation prediction result image to obtain a two-dimensional matrix of three color channels, wherein three channels respectively correspond to RGB when the image is processed in OpenCV, as the RGB values corresponding to a road surface area are (128,64,128), firstly making a mask of a red channel, namely, the mask value corresponding to a pixel point with the R value of 128 in an identification result image is 1, and the mask value corresponding to other positions is 0, then making a mask of a green channel, the mask value corresponding to a pixel point with the G value of 64 in the identification result image is 1, and the mask value corresponding to the other positions is 0, finally making a mask of a blue channel, the mask value corresponding to a pixel point with the B value of 128 in the identification result image is 1, the mask value corresponding to other positions is 0, only the pixel points with the three channel values being 1 are reserved, and when the pixel point value of a certain channel is 0, the pixel points corresponding to other channels are set to zero, so as to ensure that only the road surface area is extracted, finally, splicing the masks of the three color channels, and respectively performing matrix dot multiplication operation on the masks and the channels corresponding to the original image to obtain a sample image only containing a pavement area, wherein the non-pavement area is changed into black, and the average processing time of the masks is 0.0046 seconds;
obtaining a sample image only containing a road surface area through semantic segmentation and mask processing, sending the sample image into a road surface type identification network for identification, obtaining an identification result as a road surface type, wherein the road surface type comprises 8 road surface types which are common in daily life and respectively comprise dry asphalt, wet asphalt, dry cement, wet cement, brick paving, loose snow, compacted snow and an ice film, a road surface adhesion coefficient corresponding to the road surface type is not a fixed value but a range value, and the road surface adhesion coefficient is determined by referring to an automobile longitudinal sliding adhesion coefficient reference value table and an automobile longitudinal sliding adhesion coefficient reference value table of an ice and snow road surface in GA/T643-2006 typical traffic accident form vehicle driving speed technical identification, considering that the road surface adhesion coefficient is related to road surface abrasion, tire abrasion and air temperature and humidity influence factors, and the road surface adhesion coefficient range values are different under different driving speeds, for this purpose, a comparison table of different road surface types and adhesion coefficient range values is set for two cases of high-speed running and low-speed running as shown in table 3:
TABLE 3 Road surface types and adhesion coefficient range values

Road surface type | Adhesion coefficient (below 48 km/h) | Adhesion coefficient (above 48 km/h)
Dry asphalt       | 0.55-0.8  | 0.45-0.7
Wet asphalt       | 0.45-0.7  | 0.4-0.6
Dry cement        | 0.55-0.8  | 0.5-0.75
Wet cement        | 0.45-0.75 | 0.45-0.65
Brick paving      | 0.5-0.8   | 0.45-0.7
Loose snow        | 0.2-0.45  | 0.2-0.35
Compacted snow    | 0.1-0.25  | 0.1-0.2
Ice film          | 0.1-0.2   | 0.1-0.15
The network used for type recognition of the effective road surface sample images obtained by semantic segmentation and masking is YOLO-V3, a fully convolutional network that uses residual skip connections and performs downsampling with convolutions of stride 2; the structure of YOLO-V3 is shown in Table 4 below:
TABLE 4 structure of YOLO-V3
(Table available only as an image in the original publication.)
The dataset used to train YOLO-V3 consists of 8000 images collected according to the 8 road surface types defined in Table 3, 1000 images per class. The dataset is shuffled and 200 images per class (20%) are randomly drawn as the validation set, the remaining 800 per class forming the training set. The final training accuracy is 97.8% and the average recognition time for a single image is 0.0095 seconds; the total average processing time of the semantic segmentation network, the mask and the recognition network is 0.0921 seconds, meeting the precision and real-time requirements;
storing road surface type classification identification result data, firstly recording UTC time output by a current GPS/INS inertial navigation combination system, recording the road surface type identification result as index, then obtaining a road surface adhesion coefficient range value corresponding to the road surface type through a lookup table 3, recording the lower limit as a and the upper limit as b, and taking the average value as different road surface type pairsShould be attached to the a priori value mu of the coefficientPI.e. muP(a + b)/2 in matrix [ index a b μp]Storing road surface type classification recognition result data in a form;
Meanwhile, the vehicle dynamics response information is input to an unscented Kalman filter to estimate the road adhesion coefficient at the current position of the vehicle. A vehicle dynamics model is first established. The estimation method based on dynamics response information focuses on the longitudinal, lateral and yaw motions of the vehicle, so the vehicle model is simplified under the following assumptions: the small air resistance and rolling resistance are neglected; the vehicle is assumed to run on a horizontal road surface, so roll and pitch motions are ignored; the effect of the suspension is neglected, and a rigid connection between the wheels and the sprung mass is assumed; the tire characteristics of all tires are identical. To facilitate the study of the vehicle dynamics response, a set of standard coordinate systems is established, and the following vehicle dynamics coordinate systems are defined:
a vehicle body coordinate system: selecting an ISO standard coordinate system, taking the position of the mass center of the vehicle as a coordinate origin O, taking the driving direction of the vehicle as the positive direction of an x coordinate axis and parallel to a road surface, taking a z coordinate axis which is vertical to the road surface and faces upwards, and obtaining the positive direction of a y coordinate axis by a right-hand rule;
tire coordinate system: an ISO standard coordinate system is likewise adopted, with the tire suspension center as the origin of coordinates; each tire has its own reference coordinate system. The advancing direction of the tire is the positive x axis, parallel to the road surface; the z axis is perpendicular to the road surface and points upward, and the positive y axis follows from the right-hand rule. When the vehicle runs straight the tire axes coincide with the vehicle body coordinate system; during steering there is a tire steering angle, i.e., the tire coordinate system is rotated by a certain angle relative to the vehicle body coordinate system;
According to d'Alembert's principle, the vehicle dynamics equations are derived:
vehicle motion along x-axis:
$$a_x = \left((F_{xfl}+F_{xfr})\cos\delta_f - (F_{yfl}+F_{yfr})\sin\delta_f + F_{xrl} + F_{xrr}\right)/m \quad (3)$$
where $F_{xfl}$, $F_{xfr}$, $F_{xrl}$ and $F_{xrr}$ denote the longitudinal tire forces of the front-left, front-right, rear-left and rear-right wheels respectively, $\delta_f$ is the front wheel steering angle, and $m$ is the total vehicle mass,
vehicle motion along y-axis:
$$a_y = \left((F_{xfl}+F_{xfr})\sin\delta_f + (F_{yfl}+F_{yfr})\cos\delta_f + F_{yrl} + F_{yrr}\right)/m \quad (4)$$
vehicle motion about the z-axis:
$$\dot{\gamma} = M_z / I_z \quad (5)$$
where $I_z$ is the moment of inertia about the z-axis and $\gamma$ the yaw rate; the yaw moment $M_z$ is calculated as:
$$M_z = L_f\left[(F_{xfl}+F_{xfr})\sin\delta_f + (F_{yfl}+F_{yfr})\cos\delta_f\right] - L_r\left(F_{yrl}+F_{yrr}\right) + \frac{B_f}{2}\left[(F_{xfr}-F_{xfl})\cos\delta_f + (F_{yfl}-F_{yfr})\sin\delta_f\right] + \frac{B_r}{2}\left(F_{xrr}-F_{xrl}\right) \quad (6)$$
where $L_f$ is the distance from the vehicle center of mass to the front axle, $L_r$ the distance from the center of mass to the rear axle, $B_f$ the front track and $B_r$ the rear track; the longitudinal and lateral tire forces in equations (3), (4) and (6) are solved through the magic formula:
$$y = D\sin\left(C\arctan\left(Bx - E\left(Bx - \arctan Bx\right)\right)\right) \quad (7)$$
satisfies the following conditions:
$$x = X + S_H, \qquad Y = y + S_V \quad (8)$$
where $Y$ is the output variable, either longitudinal or lateral tire force, and $X$ is the input variable: when $Y$ is the longitudinal tire force, $X$ is the longitudinal slip ratio $\kappa$; when $Y$ is the lateral tire force, $X$ is the tire slip angle $\alpha$; $B$ is the stiffness factor, $C$ the shape factor, $D$ the peak factor, $E$ the curvature factor, $S_H$ the horizontal shift and $S_V$ the vertical shift;
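For illustration, a minimal Python sketch of the magic formula of equations (7)-(8) is given below; the coefficient values B, C, D, E, S_H, S_V are tire-specific and are not specified here, so they are plain parameters:

```python
import math

def magic_formula(x: float, B: float, C: float, D: float, E: float,
                  S_H: float = 0.0, S_V: float = 0.0) -> float:
    """Pacejka magic formula, equations (7)-(8): Y(X) for longitudinal
    (X = slip ratio kappa) or lateral (X = slip angle alpha) tire force."""
    xs = x + S_H                                  # horizontal shift, eq. (8)
    y = D * math.sin(C * math.atan(B * xs - E * (B * xs - math.atan(B * xs))))
    return y + S_V                                # vertical shift, eq. (8)
```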
To avoid unstable tire behavior at low vehicle speeds, a low-speed threshold $v_{low}$ is used when calculating the longitudinal slip ratio $\kappa_i$:
$$\kappa_i = \frac{\omega_i R_e - v_{xi}}{\max\left(\omega_i R_e,\ v_{xi},\ v_{low}\right)} \quad (9)$$
where $\omega_i$ is the wheel rotational speed, $R_e$ the effective rolling radius of the wheel, $v_{xi}$ the longitudinal velocity in the wheel-center coordinate system, and the low-speed threshold is set to $v_{low} = 0.1$ km/h;
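A hedged sketch of this slip-ratio computation follows; since the exact denominator convention of equation (9) is not legible in the extracted text, the common max() protection form is assumed:

```python
def slip_ratio(omega: float, v_x: float, R_e: float,
               v_low: float = 0.1 / 3.6) -> float:
    """Longitudinal slip ratio with the low-speed floor v_low (0.1 km/h
    converted to m/s). The max() denominator is an assumption standing in
    for the patent's equation (9)."""
    wheel_speed = omega * R_e
    denom = max(abs(wheel_speed), abs(v_x), v_low)
    return (wheel_speed - v_x) / denom
```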
the vertical loads of the four wheels of the vehicle are:
$$\begin{aligned} F_{zfl} &= \frac{m g L_r - m a_x h}{2(L_f+L_r)} - \frac{K_{\phi f}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_f} \\ F_{zfr} &= \frac{m g L_r - m a_x h}{2(L_f+L_r)} + \frac{K_{\phi f}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_f} \\ F_{zrl} &= \frac{m g L_f + m a_x h}{2(L_f+L_r)} - \frac{K_{\phi r}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_r} \\ F_{zrr} &= \frac{m g L_f + m a_x h}{2(L_f+L_r)} + \frac{K_{\phi r}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_r} \end{aligned} \quad (10)$$
where $h$ is the height of the vehicle center of mass above the ground, $K_{\phi f}$ the front-axle roll stiffness and $K_{\phi r}$ the rear-axle roll stiffness;
selecting the adhesion coefficients of four wheels as state variables of the estimation system, and recording as follows:
$$x = \left[\mu_{fl}\ \ \mu_{fr}\ \ \mu_{rl}\ \ \mu_{rr}\right]^T \quad (11)$$
The front wheel steering angle is selected as the input signal of the estimation system, $u = \delta_f$; the longitudinal acceleration, lateral acceleration and yaw angular acceleration measured by the sensors are selected as the observation variables of the estimation system, denoted
$$y = \left[a_x\ \ a_y\ \ \dot{\gamma}\right]^T \quad (12)$$
the estimation system is as follows:
$$\begin{cases} \dot{x} = f(x,u) + w \\ y = h(x,u) + v \end{cases} \quad (13)$$
where the state equation of the estimation system is $f(x,u) = I_{4\times4}\cdot x$, with $I_{4\times4}$ the 4th-order identity matrix; the measurement equation $h(x,u)$ is composed of equations (3), (4) and (5); discretizing equation (13) yields the nonlinear difference equation form:
$$\begin{cases} x_{k+1} = f(x_k,u_k) + w_k \\ y_k = h(x_k,u_k) + v_k \end{cases} \quad (14)$$
where the discretized state equation is $f(x_k,u_k) = I_{4\times4}\cdot x_k$, $T$ is the sampling time, and $w_k$ and $v_k$ are the process noise and measurement noise respectively. The initial value of the state variable of the unscented Kalman filter is $x_0 = [1\ 1\ 1\ 1]^T$, the initial estimation error covariance matrix is $P_0 = 0.01\times I_{4\times4}$, the initial process noise covariance matrix is $Q_k = I_{4\times4}$, and the initial measurement noise covariance is $R_k = 0.01\times I_{4\times4}$, where $I_{4\times4}$ is the identity matrix; the sampling time is set to $T = 0.001$ seconds, and the noise mean and covariance matrices satisfy:
$$E[w_k] = 0,\quad E\left[w_k w_k^T\right] = Q_k,\quad E[v_k] = 0,\quad E\left[v_k v_k^T\right] = R_k \quad (15)$$
the flow of estimating the road adhesion coefficient by the unscented kalman filter is as follows:
(1) Initialization: the parameters of the unscented Kalman filter are set, including the initial value $x_0$ of the state variable of the estimation system and the initial estimation error covariance matrix $P_0$;

(2) Time update: Sigma sampling points $\chi_{i,k}$ are established. Let $\lambda$ be the scaling parameter controlling the distance of the Sigma points from the mean of the random variable, and let the state estimate obtained at time k-1 be $\hat{x}_{k-1}$ with estimation error covariance $P_{k-1}$. The Sigma points, with weights $W_i^m$ (for the mean) and $W_i^c$ (for the covariance), are

$$\chi_{i,k-1} = \left[\hat{x}_{k-1},\ \ \hat{x}_{k-1} \pm \left(\sqrt{(n+\lambda)P_{k-1}}\right)_i\right], \qquad i = 0, 1, \ldots, 2n \quad (16)$$

After the sampling points are propagated through the state equation, $\chi_{i,k|k-1} = f(\chi_{i,k-1}, u_{k-1})$, the prior estimate $\hat{x}_k^-$ at time k and its error covariance $P_k^-$ are obtained by weighted summation:

$$\hat{x}_k^- = \sum_{i=0}^{2n} W_i^m\, \chi_{i,k|k-1} \quad (17)$$

$$P_k^- = \sum_{i=0}^{2n} W_i^c \left(\chi_{i,k|k-1} - \hat{x}_k^-\right)\left(\chi_{i,k|k-1} - \hat{x}_k^-\right)^T + Q_k \quad (18)$$

(3) Measurement update: from the best estimate at this time, given by the mean $\hat{x}_k^-$ and covariance $P_k^-$ of the state variable x, a set of Sigma sampling points is selected again:

$$\chi_{i,k} = \left[\hat{x}_k^-,\ \ \hat{x}_k^- \pm \left(\sqrt{(n+\lambda)P_k^-}\right)_i\right] \quad (19)$$

The sampling points are then passed through the nonlinear measurement equation,

$$\gamma_{i,k} = h\left(\chi_{i,k}, u_k\right) \quad (20)$$

and the estimate $\hat{y}_k$ of the measurement at time k and the corresponding covariance matrices are obtained by weighted summation:

$$\hat{y}_k = \sum_{i=0}^{2n} W_i^m\, \gamma_{i,k} \quad (21)$$

$$P_{y_k} = \sum_{i=0}^{2n} W_i^c \left(\gamma_{i,k} - \hat{y}_k\right)\left(\gamma_{i,k} - \hat{y}_k\right)^T + R_k, \qquad P_{x_k y_k} = \sum_{i=0}^{2n} W_i^c \left(\chi_{i,k} - \hat{x}_k^-\right)\left(\gamma_{i,k} - \hat{y}_k\right)^T \quad (22)$$

Finally, the UKF filter gain matrix $K_k = P_{x_k y_k} P_{y_k}^{-1}$ is computed, and combining the sensor measurement $y_k$ obtained at time k, the posterior state estimate $\hat{x}_k^+$ of the state variable at time k and its estimation error covariance $P_k^+$ are deduced:

$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left(y_k - \hat{y}_k\right) \quad (23)$$

$$P_k^+ = P_k^- - K_k P_{y_k} K_k^T \quad (24)$$
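The following is a generic textbook UKF step in Python (numpy) matching the flow above; it is a sketch, not the patent's exact implementation. The measurement function h is assumed to evaluate equations (3)-(5) through the tire model for a given adhesion-coefficient state:

```python
import numpy as np

def ukf_step(x_hat, P, y_meas, h, Q, R, alpha=1e-3, beta=2.0, kappa=0.0):
    """One UKF iteration for the random-walk state model f(x) = x used above.
    h(x) must return the predicted measurement [ax, ay, yaw_acc]. For brevity
    the sigma points are not re-drawn before the measurement update, which is
    equivalent here because the state equation is the identity."""
    n = x_hat.size
    lam = alpha**2 * (n + kappa) - n
    # weights for mean (Wm) and covariance (Wc)
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    # sigma points around the current estimate, equation (16)
    S = np.linalg.cholesky((n + lam) * P)
    chi = np.vstack([x_hat, x_hat + S.T, x_hat - S.T])     # (2n+1, n)
    # time update: identity state equation, equations (17)-(18)
    x_prior = Wm @ chi
    P_prior = Q + sum(Wc[i] * np.outer(chi[i] - x_prior, chi[i] - x_prior)
                      for i in range(2 * n + 1))
    # measurement update, equations (20)-(22)
    gamma = np.array([h(chi[i]) for i in range(2 * n + 1)])
    y_pred = Wm @ gamma
    P_yy = R + sum(Wc[i] * np.outer(gamma[i] - y_pred, gamma[i] - y_pred)
                   for i in range(2 * n + 1))
    P_xy = sum(Wc[i] * np.outer(chi[i] - x_prior, gamma[i] - y_pred)
               for i in range(2 * n + 1))
    # gain and posterior, equations (23)-(24)
    K = P_xy @ np.linalg.inv(P_yy)
    x_post = x_prior + K @ (y_meas - y_pred)
    P_post = P_prior - K @ P_yy @ K.T
    return x_post, P_post

# Usage with the initial values given above:
#   x, P = np.ones(4), 0.01 * np.eye(4)
#   Q, R = np.eye(4), 0.01 * np.eye(3)
#   x, P = ukf_step(x, P, y_meas, h, Q, R)   # called once per 0.001 s sample
```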
And iterating to complete the whole estimation algorithm along with the increasing of the k value, and recording the estimated value of the road adhesion coefficient of the current position of the vehicle as muDRecording the UTC time output by the current GPS/INS inertial navigation combination system;
step three, screening the vehicle front road image type result and the road adhesion coefficient estimation value which meet the fusion condition through a space-time synchronization method
During driving, 4 different situations arise in the relative timing of acquiring and processing the road surface image information ahead of the vehicle and obtaining the adhesion coefficient estimate from the vehicle dynamics response information. First, at time $t_1$ the vehicle is at position $x_1$; the camera captures image information at 10 Hz, and the effective road surface area in a captured image lies roughly 5-50 m ahead of the vehicle center of mass. After a period of driving, the vehicle reaches position $x_2$ at time $t_2$, as shown in FIG. 3; the dashed lines at $x_1$ and $x_2$ mark the dynamics-based estimates $\mu_{D1}$ and $\mu_{D2}$ obtained at those two positions. The four dots $P_1, P_2, P_3, P_4$ mark the vehicle positions at which four different images were captured before time $t_2$. Each captured image must be preprocessed by the semantic segmentation network and recognized online by the classification network to yield the road surface type; the processing time $t_{cnn}$ is less than 0.01 seconds, and after $t_{cnn}$ elapses the vehicle has travelled to the positions $P'_1, P'_2, P'_3, P'_4$ marked by triangles. Differences in driving speed change the relative distance between $P_i$ and $P'_i$, which gives rise to 4 cases in the relative positions of the vehicle when the road surface image information and the dynamics response information are obtained. In the first case, the image captured at $P_1$ finishes processing at $P'_1$, before time $t_1$ (i.e., before position $x_1$), but $x_1$ does not lie within the road surface area of the image captured at $P_1$; obviously the road surface image information from $P_1$ cannot be fused with the dynamics estimate $\mu_{D1}$. In the second case, the image captured at $P_2$ finishes acquisition and processing before time $t_2$, and $x_2$ lies within the road surface area of that image, so the image information from $P_2$ can be fused with the dynamics estimate $\mu_{D2}$. In the third case, the road surface area of the image captured at $P_3$ also contains $x_2$, but the road surface type result only becomes available at $P'_3$, after time $t_2$, so the image information from $P_3$ cannot be fused with $\mu_{D2}$. In the fourth case, the road surface area of the image captured at $P_4$ does not contain $x_2$, so even though processing finishes before $t_2$, the image information from $P_4$ still cannot be fused with $\mu_{D2}$. Under actual driving conditions the image classification and recognition time $t_{cnn}$ is small and the distance travelled within it is less than 5 m, i.e. $P'_i - P_i < 5$, so the second and fourth cases occur most often;
From the above analysis, the vehicle reaches position $x_k$ at time $t_k$, i.e., at time $t_k$ the vehicle center of mass is at $x_k$, where the road adhesion coefficient estimate $\mu_D$ is obtained. Because the road image ahead of the vehicle captured by the monocular camera at point $P_i$ is transmitted to the on-board industrial personal computer, and the road surface type result is only available after online processing through the semantic segmentation network, the mask and the road surface type recognition network, by which time the vehicle has travelled to point $P'_i$, the road surface images satisfying both of the following conditions before time $t_k$ are sought:
(1) the road surface type recognition is completed before time $t_k$:

$$P'_i \le x_k \quad (25)$$

(2) at time $t_k$, the vehicle position $x_k$ lies within the road surface region of the image captured at $P_i$:

$$P_i + 5 \le x_k \le P_i + 50 \quad (26)$$
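A minimal sketch of this screening step follows; the field names P and P_prime are illustrative, not from the patent:

```python
def synchronised_samples(samples, x_k):
    """Select image samples that satisfy conditions (25)-(26): recognition
    finished before t_k (P'_i <= x_k) and x_k inside the 5-50 m road region
    of the image. Each sample is a dict with positions in metres."""
    return [s for s in samples
            if s["P_prime"] <= x_k                      # (25) processed in time
            and s["P"] + 5.0 <= x_k <= s["P"] + 50.0]   # (26) inside road area
```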
Step four: the road surface type recognition results and the road adhesion coefficient estimate are fused through a fusion strategy to output a final road adhesion coefficient estimate that is both predictive and accurate
The flow of the whole fusion process is shown in FIG. 4. At time $t_k$, the 10 most recent road surface image sample points before $t_k$ are found according to the space-time synchronization method; their positions are $P_i$ $(i = 1,2,\ldots,10)$ and their road surface type recognition results are $\mathrm{Index}_i$. The larger the value of i, the shorter the time interval between the sample and $t_k$ and the closer the capture position is to $x_k$, hence the higher the confidence of the image recognition result. Weight coefficients $w_{P,i}$ are preset such that $w_{P,i}$ increases with i and formula (27) is satisfied:
$$\sum_{i=1}^{10} w_{P,i} = 1 \quad (27)$$
The weighted summation probability of each Index value over the 10 sample points is computed, where $j = 0,1,\ldots,7$ indexes the 8 road surface types:
$$p_j = \sum_{\{i:\ \mathrm{Index}_i = j\}} w_{P,i} \quad (28)$$
The maximum probability value $p_{max}$ and its corresponding Index value are found by comparison; this Index represents the most reliable road surface type recognition result. The image recognition result $[\mathrm{Index}_k\ a_k\ b_k]$ used by the fusion algorithm at time $t_k$ is thus obtained. An image recognition confidence threshold $p_{CF}$ is preset and the following fusion rules are established:
(1) If $p_{max} < p_{CF}$, the road surface features are not distinctive, or the road surface image database has not yet established such an image sample set; the final adhesion coefficient is output based on the dynamics estimate, i.e. $\mu = \mu_D$. The current group of road surface images is saved with the prior value $\mu_p = \mu_D$, and the updated image sample library is used to retrain the classification network;
(2) If $(p_{max} \ge p_{CF}) \cap (a_k \le \mu_D \le b_k)$, the road surface image features are distinctive and the road surface type recognition result has high confidence and is consistent with the dynamics estimate; the final adhesion coefficient is output as $\mu = \mu_D$ and the prior value of the image recognition result is updated, $\mu_p = \mu_D$;
(3) If $(p_{max} \ge p_{CF}) \cap ((\mu_D < a_k) \cup (\mu_D > b_k))$, the road surface type recognition result has high confidence but is inconsistent with the dynamics estimate; the probability density function truncation method is adopted, using the adhesion coefficient range $(a_k, b_k)$ as a constraint to correct the dynamics estimate, and the corrected adhesion coefficient estimate is the final result at that time, denoted $\mu = \mu_C$;
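A hedged sketch of these three fusion rules follows; the sample dictionaries and the helper pdf_truncate (sketched after the truncation derivation below) are illustrative names, not from the patent:

```python
def fuse(samples, mu_D, p_CF, weights):
    """Fusion rules (1)-(3) above. `samples` are the 10 screened image
    results with keys index, a, b; `weights` are w_P,i summing to 1,
    larger for more recent samples."""
    p = {}
    for w, s in zip(weights, samples):
        p[s["index"]] = p.get(s["index"], 0.0) + w       # equation (28)
    index_max, p_max = max(p.items(), key=lambda kv: kv[1])
    a_k, b_k = next((s["a"], s["b"]) for s in samples
                    if s["index"] == index_max)
    if p_max < p_CF:                      # rule (1): trust dynamics estimate
        return mu_D
    if a_k <= mu_D <= b_k:                # rule (2): consistent, keep mu_D
        return mu_D
    return pdf_truncate(mu_D, a_k, b_k)   # rule (3): constrain to (a_k, b_k)
```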
The probability density function truncation method is as follows. Assume that at time $t_k$ the posterior state estimate $\hat{x}_k^+$ of the unscented Kalman filter and its estimation error covariance $P_k^+$ are given by equations (23) and (24), together with s scalar state constraints:

$$a_{ki} \le \phi_{ki}^T x_k \le b_{ki}, \qquad i = 1, 2, \ldots, s \quad (29)$$
where $\phi_{ki}^T x_k$ is a linear function of the state variables and $a_{ki}$ and $b_{ki}$ are the minimum and maximum constraint boundary values. The problem is converted into truncating the probability density function of $\hat{x}_k^+$ at the s constraint boundaries; by solving the truncated probability density function, the mean $\tilde{x}_k$ and covariance $\tilde{P}_k$ satisfying the constraints are obtained. The s constraints are processed one by one: the state estimate satisfying the first i constraints is denoted $\tilde{x}_k^{(i)}$ with covariance $\tilde{P}_k^{(i)}$; when $i = 0$:

$$\tilde{x}_k^{(0)} = \hat{x}_k^+, \qquad \tilde{P}_k^{(0)} = P_k^+ \quad (30)$$
The probability density function truncation problem for multidimensional state variables can be solved by the linear transformation:

$$z_{ki} = \rho_{ki}\, W^{-1/2}\, T^T \left(x_k - \tilde{x}_k^{(i)}\right) \quad (31)$$

where $x_k$ is the random state variable to be estimated and $z_{ki}$ is the new random state variable after the transformation. $T$ and $W$ form a canonical decomposition of $\tilde{P}_k^{(i)}$, i.e. they satisfy $T W T^T = \tilde{P}_k^{(i)}$, with $T$ an orthogonal matrix and $W$ a diagonal matrix whose square root matrix is easy to obtain; $\rho_{ki}$ is an $n \times n$ orthogonal matrix satisfying:

$$\rho_{ki}\, W^{1/2}\, T^T \phi_{ki} = \left[\left(\phi_{ki}^T \tilde{P}_k^{(i)} \phi_{ki}\right)^{1/2},\ 0,\ \cdots,\ 0\right]^T \quad (32)$$

From the above, the general bilateral linear constraint can be converted into a normalized scalar constraint, i.e. the upper limit $d_{ki}$ and lower limit $c_{ki}$ of the constraint boundary of the random variable $z_{ki}$ are obtained:

$$c_{ki} = \frac{a_{ki} - \phi_{ki}^T \tilde{x}_k^{(i)}}{\sqrt{\phi_{ki}^T \tilde{P}_k^{(i)} \phi_{ki}}} \quad (33)$$

$$d_{ki} = \frac{b_{ki} - \phi_{ki}^T \tilde{x}_k^{(i)}}{\sqrt{\phi_{ki}^T \tilde{P}_k^{(i)} \phi_{ki}}} \quad (34)$$
Because the error covariance matrix of the random variable $z_{ki}$ is the identity matrix, its components are statistically independent of each other and only the first element is constrained, see equations (33) and (34); thus the multidimensional joint truncation of the probability density function of $x_k$ is converted into a scalar truncation of $z_{ki}$. Before being constrained, i.e. when $i = 0$, $z_{ki}$ obeys the standard normal distribution N(0,1), i.e. it satisfies:

$$\mathrm{pdf}(z_{ki}) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z_{ki}^2}{2}\right) \quad (35)$$

Knowing the new constraint boundaries, the total area of the probability density function remaining after the part of the original $\mathrm{pdf}(z_{ki})$ outside the constraint boundaries is removed is calculated:

$$\alpha = \frac{1}{2}\left[\mathrm{erf}\left(\frac{d_{ki}}{\sqrt{2}}\right) - \mathrm{erf}\left(\frac{c_{ki}}{\sqrt{2}}\right)\right] \quad (36)$$

where the error function is defined as:

$$\mathrm{erf}(t) = \frac{2}{\sqrt{\pi}}\int_0^t e^{-\tau^2}\,\mathrm{d}\tau \quad (37)$$

Normalizing the truncated probability density function gives the constrained probability density function of $z_{ki}$, denoted $\mathrm{pdf}(z_{k,i+1})$:

$$\mathrm{pdf}(z_{k,i+1}) = \begin{cases} \alpha^{-1}\dfrac{1}{\sqrt{2\pi}}\exp\left(-\dfrac{z^2}{2}\right), & c_{ki} \le z \le d_{ki} \\ 0, & \text{otherwise} \end{cases} \quad (38)$$

The mean and variance of $z_{k,i+1}$ are calculated as follows:

$$\mu_z = \frac{\alpha^{-1}}{\sqrt{2\pi}}\left[\exp\left(-\frac{c_{ki}^2}{2}\right) - \exp\left(-\frac{d_{ki}^2}{2}\right)\right] \quad (39)$$

$$\sigma_z^2 = 1 + \frac{\alpha^{-1}}{\sqrt{2\pi}}\left[c_{ki}\exp\left(-\frac{c_{ki}^2}{2}\right) - d_{ki}\exp\left(-\frac{d_{ki}^2}{2}\right)\right] - \mu_z^2 \quad (40)$$
Thus the state estimation mean and covariance of the random variable $z_{ki}$ satisfying the first constraint are obtained:

$$\tilde{z}_k^{(i+1)} = \left[\mu_z\ 0\ \cdots\ 0\right]^T, \qquad \mathrm{Cov}\left(z_k^{(i+1)}\right) = \mathrm{diag}\left(\sigma_z^2, 1, \cdots, 1\right) \quad (41)$$

Applying the inverse of transformation (31) yields the state estimation mean and covariance of the random variable $x_k$ satisfying the first constraint:

$$\tilde{x}_k^{(i+1)} = T\,W^{1/2}\rho_{ki}^T\,\tilde{z}_k^{(i+1)} + \tilde{x}_k^{(i)} \quad (42)$$

$$\tilde{P}_k^{(i+1)} = T\,W^{1/2}\rho_{ki}^T\,\mathrm{Cov}\left(z_k^{(i+1)}\right)\rho_{ki}\,W^{1/2}\,T^T \quad (43)$$

i is increased by 1 and equations (31) to (43) are repeated until all s constraints are satisfied; the probability density function truncation method finally yields the state estimate and covariance satisfying all the constraints:

$$\tilde{x}_k = \tilde{x}_k^{(s)}, \qquad \tilde{P}_k = \tilde{P}_k^{(s)} \quad (44)$$
Thus, applying the probability density function truncation method to the road adhesion coefficient output by the unscented Kalman filter yields the road adhesion coefficient $\mu_C$ satisfying the constraint $(a_k, b_k)$.
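For the scalar special case relevant to a single wheel's adhesion coefficient, the truncation reduces to constraining a one-dimensional Gaussian. A sketch under that assumption follows; the default variance value is a stand-in for the UKF error covariance of the wheel in question:

```python
import math

def truncated_normal_stats(c: float, d: float):
    """Mean and variance of a standard normal truncated to [c, d],
    following equations (36)-(40)."""
    area = 0.5 * (math.erf(d / math.sqrt(2)) - math.erf(c / math.sqrt(2)))
    phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    mean = (phi(c) - phi(d)) / area
    var = 1.0 + (c * phi(c) - d * phi(d)) / area - mean**2
    return mean, var

def pdf_truncate(mu: float, a: float, b: float,
                 sigma2: float = 0.01) -> float:
    """Scalar special case of the truncation method: constrain a Gaussian
    estimate N(mu, sigma2) to [a, b] and return the constrained mean."""
    s = math.sqrt(sigma2)
    c, d = (a - mu) / s, (b - mu) / s        # normalised bounds, (33)-(34)
    z_mean, _ = truncated_normal_stats(c, d)
    return mu + s * z_mean                   # inverse transform, cf. (42)
```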

Claims (1)

1. A road surface adhesion coefficient estimation method based on time-space synchronization and information fusion, comprising: first, simultaneously collecting road surface image information in front of a vehicle and vehicle dynamics response information; extracting the effective road surface area in the collected road surface images through a semantic segmentation network and sending it into a road surface type recognition network to obtain a road surface type recognition result, while obtaining a road adhesion coefficient estimate from the collected vehicle dynamics response information by an unscented Kalman filter estimation method; then screening, by a time-space synchronization method, the road surface type recognition results and road adhesion coefficient estimates that satisfy the fusion condition; and finally, comparing the weighted probability value against a preset confidence threshold of the road surface type recognition result and outputting a fused final road adhesion coefficient estimate, characterized by the following specific steps:
step one, collecting image information of a front road surface and vehicle dynamics response information at the same time
Image information of the road surface in front of the vehicle is collected by a monocular vision sensor of model USB30-AR023ZWDR with a resolution of 1920 x 1080, acquiring 30 images per second. The monocular vision sensor is installed inside the vehicle directly above the front windshield, and the viewing angle of the high-definition dynamic camera is adjusted so that the lower boundary of the captured image falls exactly at the edge of the engine hood and the road surface occupies more than three-fifths of the image. Owing to the limits of the installation position and the maximum effective distance of the monocular vision sensor, the effective road surface area in the forward road image is determined to lie within 5-50 meters ahead of the vehicle center of mass. The image information output by the monocular vision sensor is transmitted to the on-board industrial personal computer over USB and read at a fixed frequency of 10 Hz;
the vehicle dynamics response information comprises wheel speed information, steering wheel corner information, vehicle speed information, vehicle longitudinal acceleration information, vehicle lateral acceleration information and vehicle yaw velocity information of four wheels, and is acquired by a vehicle body sensor and a GPS/INS inertial navigation combination system respectively, wherein the vehicle body sensor comprises the wheel speed sensors and the steering wheel corner sensors of the four wheels of the vehicle, the wheel speed information and the steering wheel corner information of the four wheels of the vehicle are acquired respectively, the sampling frequency of the vehicle body sensor is set to be 100Hz, and the vehicle body sensor is connected to a vehicle-mounted industrial personal computer through a CAN bus;
the GPS/INS inertial navigation combination system is in a model of OXTS RT2500, is arranged at the mass center position of the vehicle and is rigidly connected with the vehicle, and is used for acquiring vehicle speed information, longitudinal acceleration information, lateral acceleration information and yaw angular velocity information;
the model of the vehicle-mounted industrial personal computer is Nuvo-6108GC, and a double-channel Kvaser mini PCI-Express CAN/CAN FD adapter is installed;
After the information collected by the monocular vision sensor, the vehicle body sensors and the GPS/INS inertial navigation system is transmitted to the industrial personal computer, the vehicle dynamics response information is stored in real time in a single file in csv format after data initialization; meanwhile, the images and the vehicle dynamics response information are stamped with timestamps referenced to the UTC time output by the GPS/INS inertial navigation system, and timestamp data updating is realized through the Visual Basic for Applications editor in Excel;
step two, obtaining a road surface type classification recognition result and a road surface adhesion coefficient estimation value
Loading a pre-trained semantic segmentation network and a pre-trained pavement type classification recognition network on a vehicle-mounted industrial personal computer, reading acquired pavement image information by the vehicle-mounted industrial personal computer according to a fixed frequency of 10Hz, sending the acquired pavement image information into the semantic segmentation network, and removing environmental objects and other vehicles in the image through the semantic segmentation network and mask processing to obtain a sample image only containing an effective pavement area; sending the sample image only containing the effective road surface area into a road surface type identification network again, and identifying to obtain a road surface type result;
ERFNet is selected as the semantic segmentation network for segmenting images to extract the effective road surface area, and the network model is pre-trained on the Cityscapes autonomous driving data set. The model has an encoder-decoder structure; the encoder comprises factorized convolution layers (Non-Bottleneck-1D) and downsampling layers (Downsampler block). The structure of the semantic segmentation model ERFNet is shown in Table 1 below:
TABLE 1 ERFNet structures
Layer | Type
1-2 | 2 × Downsampler block (downsampling)
3-7 | 5 × Non-Bottleneck-1D
8 | Downsampler block (downsampling)
9 | Non-Bottleneck-1D (dilated 2)
10 | Non-Bottleneck-1D (dilated 4)
11 | Non-Bottleneck-1D (dilated 8)
12 | Non-Bottleneck-1D (dilated 16)
13 | Non-Bottleneck-1D (dilated 2)
14 | Non-Bottleneck-1D (dilated 4)
15 | Non-Bottleneck-1D (dilated 8)
16 | Non-Bottleneck-1D (dilated 16)
17 | Deconvolution (upsampling)
18-19 | 2 × Non-Bottleneck-1D
20 | Deconvolution (upsampling)
21-22 | 2 × Non-Bottleneck-1D
23 | Deconvolution (upsampling)
ERFNet is first pre-trained on the Cityscapes data set, which covers 30 semantic categories including vehicles, pedestrians, traffic signs, buildings and road surface areas. The Cityscapes data are converted to TFRecord format to reduce storage space occupancy and ensure that the network model can read data quickly, improving processing efficiency. For data augmentation, the read picture tensors are randomly scaled between 0.5x and 2x in steps of 0.25, randomly cropped to 769 x 769 pixels, and randomly flipped left-right, improving the adaptability of the segmentation network. The Poly learning rate rule is selected during training, with the learning rate decay expression given in formula (1):
$$LR = LR_{initial} \times \left(1 - \frac{iter}{max\_iter}\right)^{power} \quad (1)$$
where the initial learning rate is $LR_{initial} = 0.001$, $iter$ is the training iteration step, the maximum number of training steps is $max\_iter = 2000\times10^3$, and $power$ is set to 0.9;
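A one-line Python rendering of the Poly rule with these values (the function name is illustrative):

```python
def poly_lr(iteration: int, lr_initial: float = 0.001,
            max_iter: int = 2_000_000, power: float = 0.9) -> float:
    """Poly learning-rate rule of formula (1) with the values given above."""
    return lr_initial * (1.0 - iteration / max_iter) ** power
```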
According to the hardware performance of the on-board industrial personal computer, the batch size is set to batch_size = 8 and the model parameters are saved every 10 minutes; at the same time the validation set is used to evaluate network performance. The mean Intersection over Union, abbreviated MIoU below, is adopted as the evaluation index: for each class the ratio of the intersection to the union of the prediction and the ground truth is computed, and the results are averaged over all classes, as in formula (2):
$$MIoU = \frac{1}{k+1}\sum_{i=0}^{k}\frac{TP}{TP + FP + FN} \quad (2)$$
wherein TP, FP, TN, FN are defined by the confusion matrix, as shown in table 2:
TABLE 2 confusion matrix
 | Predicted positive | Predicted negative
Ground truth positive | TP (true positive) | FN (false negative)
Ground truth negative | FP (false positive) | TN (true negative)
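A sketch of the MIoU computation of formula (2) from a confusion matrix, assuming rows are ground truth and columns are predictions:

```python
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """MIoU from a (k+1)x(k+1) confusion matrix; per-class
    IoU = TP / (TP + FP + FN), then averaged over classes."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp   # predicted as the class but actually other
    fn = conf.sum(axis=1) - tp   # the class but predicted as other
    with np.errstate(invalid="ignore", divide="ignore"):
        iou = tp / (tp + fp + fn)
    return float(np.nanmean(iou))  # classes absent from the data are skipped
```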
After pre-training, the MIoU index is 0.696 and the accuracy meets the requirement. A road surface image processed by the semantic segmentation network ERFNet yields the positions of the different semantic categories in the image, distinguished by different RGB values. The RGB color standard is a common image color system in which R denotes red, G green and B blue; by varying and superimposing the three color channels, almost any color perceivable by humans can be produced. Each color channel takes values 0-255 and can be represented by 8 bits in a computer language, giving a 24-bit depth for the whole image. The RGB value corresponding to the road surface area is (128, 64, 128), and the average processing time per image is 0.078 seconds;
The semantic segmentation network yields the position of the road surface area in the image and its corresponding RGB value; the RGB value of the road surface area is treated as the region of interest, and the road surface area is extracted from the image by means of a mask, a standard digital image processing technique;
The mask processing proceeds as follows. The semantic segmentation prediction image is first sliced into the two-dimensional matrices of its three color channels (the three channels correspond to RGB when the image is processed in OpenCV). Since the RGB value of the road surface area is (128, 64, 128), a mask is first made for the red channel: pixels whose R value is 128 in the prediction image receive mask value 1 and all other positions 0. A mask is then made for the green channel, with mask value 1 at pixels whose G value is 64 and 0 elsewhere, and finally for the blue channel, with mask value 1 at pixels whose B value is 128 and 0 elsewhere. Only pixels for which all three channel masks are 1 are retained; if any channel mask is 0 at a pixel, the corresponding pixels of the other channels are set to zero, ensuring that only the road surface area is extracted. Finally the masks of the three color channels are combined and a matrix dot-multiplication is performed with the corresponding channels of the original image, yielding a sample image containing only the road surface area, with non-road areas turned black; the average mask processing time is 0.0046 seconds;
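A compact OpenCV sketch of this mask extraction follows; it exploits the fact that for the road color (128, 64, 128) the RGB and BGR orderings coincide, and the function name is illustrative:

```python
import cv2
import numpy as np

def extract_road(original: np.ndarray, seg_pred: np.ndarray) -> np.ndarray:
    """Keep only pixels whose segmentation colour is the road RGB
    (128, 64, 128); everything else is set to black. Note OpenCV stores
    images as BGR, but B and R are both 128 here, so the triple is the
    same in either order."""
    road_bgr = np.array([128, 64, 128], dtype=np.uint8)
    mask = cv2.inRange(seg_pred, road_bgr, road_bgr)   # 255 where road
    return cv2.bitwise_and(original, original, mask=mask)
```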
A sample image containing only the road surface area is obtained through semantic segmentation and mask processing and sent to the road surface type recognition network; the recognition result is the road surface type. Eight road surface types common in daily driving are covered: dry asphalt, wet asphalt, dry cement, wet cement, brick paving, loose snow, compacted snow and ice film. The road surface adhesion coefficient corresponding to a road surface type is not a fixed value but a range. The ranges are determined with reference to the reference value table for the longitudinal sliding adhesion coefficient of automobiles and the corresponding table for ice and snow road surfaces in GA/T 643-2006 (technical identification of vehicle driving speed in typical traffic accident forms). Considering that the adhesion coefficient is affected by road surface wear, tire wear and air temperature and humidity, and that the range differs with driving speed, a comparison table of road surface types and adhesion coefficient ranges is set up for the two cases of high-speed and low-speed driving, as shown in Table 3:
TABLE 3 comparison table of different road surface types and range values of adhesion coefficients
Road surface type | Adhesion coefficient (below 48 km/h) | Adhesion coefficient (above 48 km/h)
Dry asphalt | 0.55-0.8 | 0.45-0.7
Wet asphalt | 0.45-0.7 | 0.4-0.6
Dry cement | 0.55-0.8 | 0.5-0.75
Wet cement | 0.45-0.75 | 0.45-0.65
Brick paving | 0.5-0.8 | 0.45-0.7
Loose snow | 0.2-0.45 | 0.2-0.35
Compacted snow | 0.1-0.25 | 0.1-0.2
Ice film | 0.1-0.2 | 0.1-0.15
The network used for type recognition on the effective road surface area sample images obtained by semantic segmentation and masking is YOLO-V3. YOLO-V3 is a fully convolutional network that uses residual skip connections and performs downsampling with convolutions of stride 2; the structure of YOLO-V3 is shown in Table 4 below:
TABLE 4 structure of YOLO-V3
[Table 4 is reproduced as images in the original document: the YOLO-V3 layer structure.]
The data set used to train YOLO-V3 consists of 8000 images collected according to the 8 road surface types defined in Table 3, 1000 images per class. The data set is shuffled, and 200 images per class (a 20% proportion) are randomly drawn as the validation set, the remaining 800 per class forming the training set. The accuracy of the final trained model is 97.8% and the average recognition time for a single image is 0.0095 seconds; the total average processing time of the semantic segmentation network, the mask and the recognition network is 0.0921 seconds, meeting the accuracy and real-time requirements;
The road surface type classification result data are stored as follows: the UTC time output by the current GPS/INS inertial navigation system is recorded, the road surface type recognition result is recorded as index, the adhesion coefficient range corresponding to that road surface type is obtained from Table 3 with lower limit a and upper limit b, and the mean is taken as the prior value $\mu_P$ of the adhesion coefficient for the identified road surface type, i.e. $\mu_P = (a+b)/2$; the road surface type classification result data are stored in the matrix form $[\mathrm{index}\ a\ b\ \mu_P]$;
Meanwhile, the vehicle dynamics response information is input to an unscented Kalman filter to estimate the road adhesion coefficient at the current position of the vehicle. A vehicle dynamics model is first established. The estimation method based on dynamics response information focuses on the longitudinal, lateral and yaw motions of the vehicle, so the vehicle model is simplified under the following assumptions: the small air resistance and rolling resistance are neglected; the vehicle is assumed to run on a horizontal road surface, so roll and pitch motions are ignored; the effect of the suspension is neglected, and a rigid connection between the wheels and the sprung mass is assumed; the tire characteristics of all tires are identical. To facilitate the study of the vehicle dynamics response, a set of standard coordinate systems is established, and the following vehicle dynamics coordinate systems are defined:
a vehicle body coordinate system: selecting an ISO standard coordinate system, taking the position of the mass center of the vehicle as a coordinate origin O, taking the driving direction of the vehicle as the positive direction of an x coordinate axis and parallel to a road surface, taking a z coordinate axis which is vertical to the road surface and faces upwards, and obtaining the positive direction of a y coordinate axis by a right-hand rule;
tire coordinate system: an ISO standard coordinate system is likewise adopted, with the tire suspension center as the origin of coordinates; each tire has its own reference coordinate system. The advancing direction of the tire is the positive x axis, parallel to the road surface; the z axis is perpendicular to the road surface and points upward, and the positive y axis follows from the right-hand rule. When the vehicle runs straight the tire axes coincide with the vehicle body coordinate system; during steering there is a tire steering angle, i.e., the tire coordinate system is rotated by a certain angle relative to the vehicle body coordinate system;
According to d'Alembert's principle, the vehicle dynamics equations are derived:
vehicle motion along x-axis:
$$a_x = \left((F_{xfl}+F_{xfr})\cos\delta_f - (F_{yfl}+F_{yfr})\sin\delta_f + F_{xrl} + F_{xrr}\right)/m \quad (3)$$
where $F_{xfl}$, $F_{xfr}$, $F_{xrl}$ and $F_{xrr}$ denote the longitudinal tire forces of the front-left, front-right, rear-left and rear-right wheels respectively, $\delta_f$ is the front wheel steering angle, and $m$ is the total vehicle mass,
vehicle motion along y-axis:
$$a_y = \left((F_{xfl}+F_{xfr})\sin\delta_f + (F_{yfl}+F_{yfr})\cos\delta_f + F_{yrl} + F_{yrr}\right)/m \quad (4)$$
vehicle motion about the z-axis:
$$\dot{\gamma} = M_z / I_z \quad (5)$$
where $I_z$ is the moment of inertia about the z-axis and $\gamma$ the yaw rate; the yaw moment $M_z$ is calculated as:
$$M_z = L_f\left[(F_{xfl}+F_{xfr})\sin\delta_f + (F_{yfl}+F_{yfr})\cos\delta_f\right] - L_r\left(F_{yrl}+F_{yrr}\right) + \frac{B_f}{2}\left[(F_{xfr}-F_{xfl})\cos\delta_f + (F_{yfl}-F_{yfr})\sin\delta_f\right] + \frac{B_r}{2}\left(F_{xrr}-F_{xrl}\right) \quad (6)$$
where $L_f$ is the distance from the vehicle center of mass to the front axle, $L_r$ the distance from the center of mass to the rear axle, $B_f$ the front track and $B_r$ the rear track; the longitudinal and lateral tire forces in equations (3), (4) and (6) are solved through the magic formula:
$$y = D\sin\left(C\arctan\left(Bx - E\left(Bx - \arctan Bx\right)\right)\right) \quad (7)$$
satisfies the following conditions:
$$x = X + S_H, \qquad Y = y + S_V \quad (8)$$
where $Y$ is the output variable, either longitudinal or lateral tire force, and $X$ is the input variable: when $Y$ is the longitudinal tire force, $X$ is the longitudinal slip ratio $\kappa$; when $Y$ is the lateral tire force, $X$ is the tire slip angle $\alpha$; $B$ is the stiffness factor, $C$ the shape factor, $D$ the peak factor, $E$ the curvature factor, $S_H$ the horizontal shift and $S_V$ the vertical shift;
To avoid unstable tire behavior at low vehicle speeds, a low-speed threshold $v_{low}$ is used when calculating the longitudinal slip ratio $\kappa_i$:
$$\kappa_i = \frac{\omega_i R_e - v_{xi}}{\max\left(\omega_i R_e,\ v_{xi},\ v_{low}\right)} \quad (9)$$
where $\omega_i$ is the wheel rotational speed, $R_e$ the effective rolling radius of the wheel, $v_{xi}$ the longitudinal velocity in the wheel-center coordinate system, and the low-speed threshold is set to $v_{low} = 0.1$ km/h;
the vertical loads of the four wheels of the vehicle are:
$$\begin{aligned} F_{zfl} &= \frac{m g L_r - m a_x h}{2(L_f+L_r)} - \frac{K_{\phi f}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_f} \\ F_{zfr} &= \frac{m g L_r - m a_x h}{2(L_f+L_r)} + \frac{K_{\phi f}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_f} \\ F_{zrl} &= \frac{m g L_f + m a_x h}{2(L_f+L_r)} - \frac{K_{\phi r}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_r} \\ F_{zrr} &= \frac{m g L_f + m a_x h}{2(L_f+L_r)} + \frac{K_{\phi r}}{K_{\phi f}+K_{\phi r}}\,\frac{m a_y h}{B_r} \end{aligned} \quad (10)$$
where $h$ is the height of the vehicle center of mass above the ground, $K_{\phi f}$ the front-axle roll stiffness and $K_{\phi r}$ the rear-axle roll stiffness;
selecting the adhesion coefficients of four wheels as state variables of the estimation system, and recording as follows:
$$x = \left[\mu_{fl}\ \ \mu_{fr}\ \ \mu_{rl}\ \ \mu_{rr}\right]^T \quad (11)$$
The front wheel steering angle is selected as the input signal of the estimation system, $u = \delta_f$; the longitudinal acceleration, lateral acceleration and yaw angular acceleration measured by the sensors are selected as the observation variables of the estimation system, denoted
$$y = \left[a_x\ \ a_y\ \ \dot{\gamma}\right]^T \quad (12)$$
the estimation system is as follows:
$$\begin{cases} \dot{x} = f(x,u) + w \\ y = h(x,u) + v \end{cases} \quad (13)$$
where the state equation of the estimation system is $f(x,u) = I_{4\times4}\cdot x$, with $I_{4\times4}$ the 4th-order identity matrix; the measurement equation $h(x,u)$ is composed of equations (3), (4) and (5); discretizing equation (13) yields the nonlinear difference equation form:
$$\begin{cases} x_{k+1} = f(x_k,u_k) + w_k \\ y_k = h(x_k,u_k) + v_k \end{cases} \quad (14)$$
where the discretized state equation is $f(x_k,u_k) = I_{4\times4}\cdot x_k$, $T$ is the sampling time, and $w_k$ and $v_k$ are the process noise and measurement noise respectively. The initial value of the state variable of the unscented Kalman filter is $x_0 = [1\ 1\ 1\ 1]^T$, the initial estimation error covariance matrix is $P_0 = 0.01\times I_{4\times4}$, the initial process noise covariance matrix is $Q_k = I_{4\times4}$, and the initial measurement noise covariance is $R_k = 0.01\times I_{4\times4}$, where $I_{4\times4}$ is the identity matrix; the sampling time is set to $T = 0.001$ seconds, and the noise mean and covariance matrices satisfy:
$$E[w_k] = 0,\quad E\left[w_k w_k^T\right] = Q_k,\quad E[v_k] = 0,\quad E\left[v_k v_k^T\right] = R_k \quad (15)$$
the flow of estimating the road adhesion coefficient by the unscented kalman filter is as follows:
(1) Initialization: the parameters of the unscented Kalman filter are set, including the initial value $x_0$ of the state variable of the estimation system and the initial estimation error covariance matrix $P_0$;

(2) Time update: Sigma sampling points $\chi_{i,k}$ are established. Let $\lambda$ be the scaling parameter controlling the distance of the Sigma points from the mean of the random variable, and let the state estimate obtained at time k-1 be $\hat{x}_{k-1}$ with estimation error covariance $P_{k-1}$. The Sigma points, with weights $W_i^m$ (for the mean) and $W_i^c$ (for the covariance), are

$$\chi_{i,k-1} = \left[\hat{x}_{k-1},\ \ \hat{x}_{k-1} \pm \left(\sqrt{(n+\lambda)P_{k-1}}\right)_i\right], \qquad i = 0, 1, \ldots, 2n \quad (16)$$

After the sampling points are propagated through the state equation, $\chi_{i,k|k-1} = f(\chi_{i,k-1}, u_{k-1})$, the prior estimate $\hat{x}_k^-$ at time k and its error covariance $P_k^-$ are obtained by weighted summation:

$$\hat{x}_k^- = \sum_{i=0}^{2n} W_i^m\, \chi_{i,k|k-1} \quad (17)$$

$$P_k^- = \sum_{i=0}^{2n} W_i^c \left(\chi_{i,k|k-1} - \hat{x}_k^-\right)\left(\chi_{i,k|k-1} - \hat{x}_k^-\right)^T + Q_k \quad (18)$$

(3) Measurement update: from the best estimate at this time, given by the mean $\hat{x}_k^-$ and covariance $P_k^-$ of the state variable x, a set of Sigma sampling points is selected again:

$$\chi_{i,k} = \left[\hat{x}_k^-,\ \ \hat{x}_k^- \pm \left(\sqrt{(n+\lambda)P_k^-}\right)_i\right] \quad (19)$$

The sampling points are then passed through the nonlinear measurement equation,

$$\gamma_{i,k} = h\left(\chi_{i,k}, u_k\right) \quad (20)$$

and the estimate $\hat{y}_k$ of the measurement at time k and the corresponding covariance matrices are obtained by weighted summation:

$$\hat{y}_k = \sum_{i=0}^{2n} W_i^m\, \gamma_{i,k} \quad (21)$$

$$P_{y_k} = \sum_{i=0}^{2n} W_i^c \left(\gamma_{i,k} - \hat{y}_k\right)\left(\gamma_{i,k} - \hat{y}_k\right)^T + R_k, \qquad P_{x_k y_k} = \sum_{i=0}^{2n} W_i^c \left(\chi_{i,k} - \hat{x}_k^-\right)\left(\gamma_{i,k} - \hat{y}_k\right)^T \quad (22)$$

Finally, the UKF filter gain matrix $K_k = P_{x_k y_k} P_{y_k}^{-1}$ is computed, and combining the sensor measurement $y_k$ obtained at time k, the posterior state estimate $\hat{x}_k^+$ of the state variable at time k and its estimation error covariance $P_k^+$ are deduced:

$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left(y_k - \hat{y}_k\right) \quad (23)$$

$$P_k^+ = P_k^- - K_k P_{y_k} K_k^T \quad (24)$$
And iterating to complete the whole estimation algorithm along with the increasing of the k value, and recording the estimated value of the road adhesion coefficient of the current position of the vehicle as muDRecording the UTC time output by the current GPS/INS inertial navigation combination system;
step three, screening the vehicle front road image type result and the road adhesion coefficient estimation value which meet the fusion condition through a space-time synchronization method
The vehicle reaches position $x_k$ at time $t_k$, i.e., at time $t_k$ the vehicle center of mass is at $x_k$, where the road adhesion coefficient estimate $\mu_D$ is obtained;

Because the road image ahead of the vehicle captured by the monocular camera at point $P_i$ is transmitted to the on-board industrial personal computer, and the road surface type result is only available after online processing through the semantic segmentation network, the mask and the road surface type recognition network, by which time the vehicle has travelled to point $P'_i$, the road surface images satisfying both of the following conditions before time $t_k$ are sought:
(1) the road surface type recognition is completed before time $t_k$:

$$P'_i \le x_k \quad (25)$$

(2) at time $t_k$, the vehicle position $x_k$ lies within the road surface region of the image captured at $P_i$:

$$P_i + 5 \le x_k \le P_i + 50 \quad (26)$$
Step four: the road surface type recognition results and the road adhesion coefficient estimate are fused through a fusion strategy to output a final road adhesion coefficient estimate that is both predictive and accurate
At time $t_k$, the 10 most recent road surface image sample points before $t_k$ are found according to the space-time synchronization method; their positions are $P_i$ $(i = 1,2,\ldots,10)$ and their road surface type recognition results are $\mathrm{Index}_i$. The larger the value of i, the shorter the time interval between the sample and $t_k$ and the closer the capture position is to $x_k$, hence the higher the confidence of the image recognition result. Weight coefficients $w_{P,i}$ are preset such that $w_{P,i}$ increases with i and formula (27) is satisfied:
$$\sum_{i=1}^{10} w_{P,i} = 1 \quad (27)$$
The weighted summation probability of each Index value over the 10 sample points is computed, where $j = 0,1,\ldots,7$ indexes the 8 road surface types:
$$p_j = \sum_{\{i:\ \mathrm{Index}_i = j\}} w_{P,i} \quad (28)$$
The maximum probability value $p_{max}$ and its corresponding Index value are found by comparison; this Index represents the most reliable road surface type recognition result. The image recognition result $[\mathrm{Index}_k\ a_k\ b_k]$ used by the fusion algorithm at time $t_k$ is thus obtained. An image recognition confidence threshold $p_{CF}$ is preset and the following fusion rules are established:
(1) If $p_{max} < p_{CF}$, the road surface features are not distinctive, or the road surface image database has not yet established such an image sample set; the final adhesion coefficient is output based on the dynamics estimate, i.e. $\mu = \mu_D$. The current group of road surface images is saved with the prior value $\mu_p = \mu_D$, and the updated image sample library is used to retrain the classification network;
(2) If $(p_{max} \ge p_{CF}) \cap (a_k \le \mu_D \le b_k)$, the road surface image features are distinctive and the road surface type recognition result has high confidence and is consistent with the dynamics estimate; the final adhesion coefficient is output as $\mu = \mu_D$ and the prior value of the image recognition result is updated, $\mu_p = \mu_D$;
(3) If $(p_{max} \ge p_{CF}) \cap ((\mu_D < a_k) \cup (\mu_D > b_k))$, the road surface type recognition result has high confidence but is inconsistent with the dynamics estimate; the probability density function truncation method is adopted, using the adhesion coefficient range $(a_k, b_k)$ as a constraint to correct the dynamics estimate, and the corrected adhesion coefficient estimate is the final result at that time, denoted $\mu = \mu_C$;
The probability density function truncation method is as follows. Assume that at time $t_k$ the posterior state estimate $\hat{x}_k^+$ of the unscented Kalman filter and its estimation error covariance $P_k^+$ are given by equations (23) and (24), together with s scalar state constraints:

$$a_{ki} \le \phi_{ki}^T x_k \le b_{ki}, \qquad i = 1, 2, \ldots, s \quad (29)$$
where $\phi_{ki}^T x_k$ is a linear function of the state variables and $a_{ki}$ and $b_{ki}$ are the minimum and maximum constraint boundary values. The problem is converted into truncating the probability density function of $\hat{x}_k^+$ at the s constraint boundaries; by solving the truncated probability density function, the mean $\tilde{x}_k$ and covariance $\tilde{P}_k$ satisfying the constraints are obtained. The s constraints are processed one by one: the state estimate satisfying the first i constraints is denoted $\tilde{x}_k^{(i)}$ with covariance $\tilde{P}_k^{(i)}$; when $i = 0$:

$$\tilde{x}_k^{(0)} = \hat{x}_k^+, \qquad \tilde{P}_k^{(0)} = P_k^+ \quad (30)$$
The probability density function truncation problem for multidimensional state variables can be solved by the linear transformation:

$$z_{ki} = \rho_{ki}\, W^{-1/2}\, T^T \left(x_k - \tilde{x}_k^{(i)}\right) \quad (31)$$

where $x_k$ is the random state variable to be estimated and $z_{ki}$ is the new random state variable after the transformation. $T$ and $W$ form a canonical decomposition of $\tilde{P}_k^{(i)}$, i.e. they satisfy $T W T^T = \tilde{P}_k^{(i)}$, with $T$ an orthogonal matrix and $W$ a diagonal matrix whose square root matrix is easy to obtain; $\rho_{ki}$ is an $n \times n$ orthogonal matrix satisfying:

$$\rho_{ki}\, W^{1/2}\, T^T \phi_{ki} = \left[\left(\phi_{ki}^T \tilde{P}_k^{(i)} \phi_{ki}\right)^{1/2},\ 0,\ \cdots,\ 0\right]^T \quad (32)$$

From the above, the general bilateral linear constraint can be converted into a normalized scalar constraint, i.e. the upper limit $d_{ki}$ and lower limit $c_{ki}$ of the constraint boundary of the random variable $z_{ki}$ are obtained:

$$c_{ki} = \frac{a_{ki} - \phi_{ki}^T \tilde{x}_k^{(i)}}{\sqrt{\phi_{ki}^T \tilde{P}_k^{(i)} \phi_{ki}}} \quad (33)$$

$$d_{ki} = \frac{b_{ki} - \phi_{ki}^T \tilde{x}_k^{(i)}}{\sqrt{\phi_{ki}^T \tilde{P}_k^{(i)} \phi_{ki}}} \quad (34)$$
Because the error covariance matrix of the random variable $z_{ki}$ is the identity matrix, its components are statistically independent of each other and only the first element is constrained, see equations (33) and (34); thus the multidimensional joint truncation of the probability density function of $x_k$ is converted into a scalar truncation of $z_{ki}$. Before being constrained, i.e. when $i = 0$, $z_{ki}$ obeys the standard normal distribution N(0,1), i.e. it satisfies:

$$\mathrm{pdf}(z_{ki}) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z_{ki}^2}{2}\right) \quad (35)$$

Knowing the new constraint boundaries, the total area of the probability density function remaining after the part of the original $\mathrm{pdf}(z_{ki})$ outside the constraint boundaries is removed is calculated:

$$\alpha = \frac{1}{2}\left[\mathrm{erf}\left(\frac{d_{ki}}{\sqrt{2}}\right) - \mathrm{erf}\left(\frac{c_{ki}}{\sqrt{2}}\right)\right] \quad (36)$$

where the error function is defined as:

$$\mathrm{erf}(t) = \frac{2}{\sqrt{\pi}}\int_0^t e^{-\tau^2}\,\mathrm{d}\tau \quad (37)$$

Normalizing the truncated probability density function gives the constrained probability density function of $z_{ki}$, denoted $\mathrm{pdf}(z_{k,i+1})$:

$$\mathrm{pdf}(z_{k,i+1}) = \begin{cases} \alpha^{-1}\dfrac{1}{\sqrt{2\pi}}\exp\left(-\dfrac{z^2}{2}\right), & c_{ki} \le z \le d_{ki} \\ 0, & \text{otherwise} \end{cases} \quad (38)$$

The mean and variance of $z_{k,i+1}$ are calculated as follows:

$$\mu_z = \frac{\alpha^{-1}}{\sqrt{2\pi}}\left[\exp\left(-\frac{c_{ki}^2}{2}\right) - \exp\left(-\frac{d_{ki}^2}{2}\right)\right] \quad (39)$$

$$\sigma_z^2 = 1 + \frac{\alpha^{-1}}{\sqrt{2\pi}}\left[c_{ki}\exp\left(-\frac{c_{ki}^2}{2}\right) - d_{ki}\exp\left(-\frac{d_{ki}^2}{2}\right)\right] - \mu_z^2 \quad (40)$$
Thus the state estimation mean and covariance of the random variable $z_{ki}$ satisfying the first constraint are obtained:

$$\tilde{z}_k^{(i+1)} = \left[\mu_z\ 0\ \cdots\ 0\right]^T, \qquad \mathrm{Cov}\left(z_k^{(i+1)}\right) = \mathrm{diag}\left(\sigma_z^2, 1, \cdots, 1\right) \quad (41)$$

Applying the inverse of transformation (31) yields the state estimation mean and covariance of the random variable $x_k$ satisfying the first constraint:

$$\tilde{x}_k^{(i+1)} = T\,W^{1/2}\rho_{ki}^T\,\tilde{z}_k^{(i+1)} + \tilde{x}_k^{(i)} \quad (42)$$

$$\tilde{P}_k^{(i+1)} = T\,W^{1/2}\rho_{ki}^T\,\mathrm{Cov}\left(z_k^{(i+1)}\right)\rho_{ki}\,W^{1/2}\,T^T \quad (43)$$

i is increased by 1 and equations (31) to (43) are repeated until all s constraints are satisfied; the probability density function truncation method finally yields the state estimate and covariance satisfying all the constraints:

$$\tilde{x}_k = \tilde{x}_k^{(s)}, \qquad \tilde{P}_k = \tilde{P}_k^{(s)} \quad (44)$$
Thus, applying the probability density function truncation method to the road adhesion coefficient output by the unscented Kalman filter yields the road adhesion coefficient $\mu_C$ satisfying the constraint $(a_k, b_k)$.
CN202110684077.5A 2021-06-21 2021-06-21 Road adhesion coefficient estimation method based on time-space synchronization and information fusion Active CN113361121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110684077.5A CN113361121B (en) 2021-06-21 2021-06-21 Road adhesion coefficient estimation method based on time-space synchronization and information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110684077.5A CN113361121B (en) 2021-06-21 2021-06-21 Road adhesion coefficient estimation method based on time-space synchronization and information fusion

Publications (2)

Publication Number Publication Date
CN113361121A CN113361121A (en) 2021-09-07
CN113361121B true CN113361121B (en) 2022-03-29

Family

ID=77535361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110684077.5A Active CN113361121B (en) 2021-06-21 2021-06-21 Road adhesion coefficient estimation method based on time-space synchronization and information fusion

Country Status (1)

Country Link
CN (1) CN113361121B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113935107B (en) * 2021-09-28 2024-05-10 吉林大学 Vehicle model modeling method suitable for ice and snow road surface
CN114092815B (en) * 2021-11-29 2022-04-15 自然资源部国土卫星遥感应用中心 Remote sensing intelligent extraction method for large-range photovoltaic power generation facility
CN114264597B (en) * 2021-12-21 2022-07-22 盐城工学院 Low-cost road adhesion coefficient determination method and system
CN114547551B (en) * 2022-02-23 2023-08-29 阿波罗智能技术(北京)有限公司 Road surface data acquisition method based on vehicle report data and cloud server
CN114549473B (en) * 2022-02-23 2024-04-19 中国民用航空总局第二研究所 Road surface detection method and system with autonomous learning rapid adaptation capability
CN114332828A (en) * 2022-03-17 2022-04-12 北京中科慧眼科技有限公司 Method and system for adjusting working mode of suspension damper based on binocular stereo camera
CN114368385B (en) * 2022-03-21 2022-07-15 北京宏景智驾科技有限公司 Cruise control method and apparatus, electronic device, and storage medium
CN116559169B (en) * 2023-07-11 2023-10-10 中南大学 Real-time pavement state detection method
CN116977650A (en) * 2023-07-31 2023-10-31 西北工业大学深圳研究院 Image denoising method, image denoising device, electronic equipment and storage medium
CN116946148B (en) * 2023-09-20 2023-12-12 广汽埃安新能源汽车股份有限公司 Vehicle state information and road surface information estimation method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110550041A (en) * 2019-07-19 2019-12-10 北京中科原动力科技有限公司 Road adhesion coefficient estimation method based on cloud data sharing
CN111688706A (en) * 2020-05-26 2020-09-22 同济大学 Road adhesion coefficient interactive estimation method based on vision and dynamics
CN111860322A (en) * 2020-07-20 2020-10-30 吉林大学 Unstructured pavement type identification method based on multi-source sensor information fusion
CN111845709A (en) * 2020-07-17 2020-10-30 燕山大学 Road adhesion coefficient estimation method and system based on multi-information fusion
CN112455443A (en) * 2020-11-12 2021-03-09 复旦大学 Vehicle active braking system based on multi-sensor fusion

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106004881B (en) * 2016-08-04 2018-05-25 清华大学 Coefficient of road adhesion method of estimation based on frequency domain fusion
CN111688707A (en) * 2020-05-26 2020-09-22 同济大学 Vision and dynamics fused road adhesion coefficient estimation method
CN111723849A (en) * 2020-05-26 2020-09-29 同济大学 Road adhesion coefficient online estimation method and system based on vehicle-mounted camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110550041A (en) * 2019-07-19 2019-12-10 北京中科原动力科技有限公司 Road adhesion coefficient estimation method based on cloud data sharing
CN111688706A (en) * 2020-05-26 2020-09-22 同济大学 Road adhesion coefficient interactive estimation method based on vision and dynamics
CN111845709A (en) * 2020-07-17 2020-10-30 燕山大学 Road adhesion coefficient estimation method and system based on multi-information fusion
CN111860322A (en) * 2020-07-20 2020-10-30 吉林大学 Unstructured pavement type identification method based on multi-source sensor information fusion
CN112455443A (en) * 2020-11-12 2021-03-09 复旦大学 Vehicle active braking system based on multi-sensor fusion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A federated filter design of electronic stability control for electric-wheel vehicle;C. Wang 等;《2015 8th International Congress on Image and Signal Processing (CISP)》;20160218;1105-1110 *
Modular scheme for four-wheel-drive electric vehicle tire-road force and velocity estimation;Guo, Hongyan 等;《IET Intelligent Transport Systems》;20190331;第13卷(第3期);551-562 *
Research on road surface condition identification methods related to active safety; Tuo Wangjie; China Master's Theses Full-text Database, Engineering Science and Technology II; 20200815 (No. 08); C035-289 *
Review and prospect of the development of road adhesion coefficient identification methods; Yuan Chaochun et al.; Machine Building & Automation; 20180420; Vol. 47 (No. 02); 1-4, 7 *

Also Published As

Publication number Publication date
CN113361121A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
CN113361121B (en) Road adhesion coefficient estimation method based on time-space synchronization and information fusion
US10037039B1 (en) Object bounding box estimation
CN106240458B (en) A kind of vehicular frontal impact method for early warning based on vehicle-mounted binocular camera
CN109635672B (en) Unmanned road characteristic parameter estimation method
CN110263844B (en) Method for online learning and real-time estimation of road surface state
GB2577485A (en) Control system for a vehicle
CN112389440B (en) Vehicle driving risk prediction method in off-road environment based on vehicle-road action mechanism
CN107615201A (en) Self-position estimation unit and self-position method of estimation
EP2372304A2 (en) Vehicle position recognition system
CN102222236A (en) Image processing system and position measurement system
CN111551957B (en) Park low-speed automatic cruise and emergency braking system based on laser radar sensing
CN114475573B (en) Fluctuating road condition identification and vehicle control method based on V2X and vision fusion
CN111860322A (en) Unstructured pavement type identification method based on multi-source sensor information fusion
CN109829365A (en) More scenes based on machine vision adapt to drive the method for early warning that deviates and turn
CN112486197B (en) Fusion positioning tracking control method based on self-adaptive power selection of multi-source image
US11270164B1 (en) Vehicle neural network
CN113569778A (en) Pavement slippery area detection and early warning method based on multi-mode data fusion
CN110550041B (en) Road adhesion coefficient estimation method based on cloud data sharing
CN114715158A (en) Device and method for measuring road adhesion coefficient based on road texture features
CN114235679A (en) Pavement adhesion coefficient estimation method and system based on laser radar
CN114821193A (en) Road surface adhesion coefficient online prediction device based on camera and vehicle speed sensor
CN112634354B (en) Road side sensor-based networking automatic driving risk assessment method and device
CN114802223A (en) Intelligent vehicle controllable capacity grade prediction method
CN115240471A (en) Intelligent factory collision avoidance early warning method and system based on image acquisition
CN112904388A (en) Fusion positioning tracking control method based on navigator strategy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant