CN111507233B - Multi-mode information fusion intelligent vehicle pavement type identification method - Google Patents

Multi-mode information fusion intelligent vehicle pavement type identification method

Info

Publication number
CN111507233B
CN111507233B (application CN202010283306.8A)
Authority
CN
China
Prior art keywords: adopting, road, vehicle, extracting, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010283306.8A
Other languages
Chinese (zh)
Other versions
CN111507233A (en)
Inventor
詹军
王战古
段春光
管欣
卢萍萍
杨凯
祝怀南
仲昭辉
董学才
刘荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202010283306.8A priority Critical patent/CN111507233B/en
Publication of CN111507233A publication Critical patent/CN111507233A/en
Application granted granted Critical
Publication of CN111507233B publication Critical patent/CN111507233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an intelligent vehicle road type recognition method based on multi-modal information fusion. First, according to the characteristics and data structure of the road perception information collected by each sensor, features are extracted from the perception information of each modality using a different modeling method; next, feature-level data fusion is performed on the feature vectors extracted from each modality; finally, an LSTM deep learning network converts the multi-modal fused features into a time-series classification problem, and road type recognition is completed through supervised learning. The invention improves the depth of information fusion across sensors and the accuracy of road surface identification; in addition, the LSTM time-series classification model effectively avoids frequent false detections caused by incidental errors, further improving the robustness and accuracy of road surface identification.

Description

Multi-mode information fusion intelligent vehicle pavement type identification method
Technical Field
The invention belongs to the field of intelligent automobile environment perception, and particularly relates to an intelligent automobile road type identification method based on multi-mode information fusion.
Background
The maximum friction coefficient and the flatness that different road surface types can provide differ considerably, which has a decisive influence on an intelligent vehicle's planned trajectory, lateral acceleration and speed, longitudinal acceleration and speed, and so on. Therefore, whether an intelligent vehicle can accurately identify the road surface type and adjust its control strategy accordingly is of great significance for improving driving safety and comfort.
The road surface type identification methods currently used at home and abroad can be roughly divided into two categories: one identifies the road surface from vehicle response parameters (Effect-Based); the other identifies it through sensor perception combined with a recognition algorithm (Cause-Based). Cause-Based methods mainly use sensors such as lidar, machine vision, acoustic waves and electromagnetic waves to detect the road surface type directly. For example, Yu Zhuang et al. in the thesis "unmanned vehicle road surface adhesion coefficient estimation based on laser radar", Gao Ze et al. in the thesis "mobile robot outdoor road surface classification research based on visual information", Li Hong in the thesis "mobile robot outdoor road surface state identification based on machine vision", Pieter L. et al. in the paper "An access Sensor system for determination of macroscopic surface roughness", and Brge J. et al. in the paper "Sensor data fusion based estimation of both-road-surface-free to-environment compatibility" all use different types of sensors to sense the road surface information directly, and then complete the identification of the road surface type with methods such as statistical analysis, pattern recognition, machine learning and deep learning.
The Effect-Based identification methods identify the road surface type by measuring and analysing the whole-vehicle response caused by different road surface structures; this response mainly consists of the tyre response and the vehicle dynamic response. For example, Holzmann F. et al. in the paper "Predictive evaluation of the road-top friction coefficient" identify the road surface type from tyre noise; Gurkan E. et al. in the paper "Estimation of tire-front reflectivity using a novel wire piezoelectric property sensor" and Tuonen A. et al. in the paper "On-board Estimation of dynamic type for from optically measured tire tires reflectors" identify the road surface by measuring the amount of tread deformation; the paper "vehicle longitudinal dynamic control based on road surface identification" uses the vehicle's longitudinal dynamic response to identify the road surface; Zheng Hongyu et al. in the paper "road surface parameter estimation based on steer-by-wire system" and Cheng Wuxi et al. in the paper "road surface adhesion coefficient estimation algorithm under vehicle steering condition" identify the road surface through the vehicle's lateral dynamic response. In addition, to improve the identification accuracy of the road surface type, many scholars and researchers have tried to fuse several kinds of perception information; for example, the paper "study on uneven road type identification strategy based on multi-source information fusion", the paper "study on a test field road identification system based on multi-source information fusion", the paper "study on vehicle road surface type identification technology based on machine learning", patent CN104392245B, and the paper "study on road surface information perception technology of unmanned vehicles" all attempt to use multi-perception-information fusion methods to improve the accuracy of road surface identification.
An analysis of the existing road surface type identification methods reveals several shortcomings. First, the perception information of a single sensor is incomplete, and simple sensor fusion schemes cannot exploit each sensor's advantages to the fullest, so the recognition accuracy of the system drops in complex environments and on complex road surfaces. Second, the perception information used for road surface type recognition is collected at a single instant or over a fixed time window; such fragmented feature extraction breaks the continuity between the contexts of the perception information, makes the result susceptible to incidental errors, and leaves the robustness of road surface identification in need of improvement.
Disclosure of Invention
To solve these problems, the invention provides an intelligent vehicle road type recognition method based on multi-modal information fusion. First, according to the characteristics and data structure of the road perception information collected by each sensor, features are extracted from the perception information of each modality using a different modeling method; next, feature-level data fusion is performed on the feature vectors extracted from each modality; finally, a Long Short-Term Memory (LSTM) neural network converts the multi-modal fused features into a time-series classification problem, and road type recognition is completed through supervised learning. The method fully exploits the advantages of both the Cause-Based and Effect-Based identification approaches, improving the depth of information fusion across sensors and the accuracy of road surface identification; in addition, the LSTM time-series classification model effectively avoids frequent false detections caused by incidental errors, further improving the robustness and accuracy of road surface identification.
The purpose of the invention is realized by the following technical scheme:
a multi-mode information fusion intelligent vehicle pavement type identification method is mainly used for improving the accuracy and robustness of an intelligent vehicle for identifying different pavement types, and mainly comprises the following steps:
step 1, collecting road surface perception information and extracting characteristics: acquiring sensor perception information under different road types by adopting an actual measurement experiment, and performing feature extraction on perception information in different modes by adopting different modeling methods, wherein the sampling frequency settings of all perception information are the same;
step 2, data preprocessing and LSTM road type identification model modeling: preprocessing the feature vectors extracted in the step 1, constructing an LSTM road type recognition model, and completing off-line training and verification of the model;
step 3, model deployment and online identification: and (3) deploying the LSTM road type recognition model trained in the step (2) to edge computing equipment, and carrying out online recognition on the road surface type by using the trained LSTM road type recognition model.
Further, the step 1 of collecting road surface perception information and extracting features respectively comprises:
step 1.1, tire noise feature extraction: collecting tire noise information and extracting a 29-dimensional tire noise feature vector using statistical analysis, power spectrum analysis and the MFCC algorithm;
step 1.2, laser radar road surface feature extraction: extracting road surface information from the point cloud data of the laser radar, and extracting 10-dimensional features from the perception information of the laser radar by adopting a statistical analysis method;
step 1.3, image pavement feature extraction: extracting the features of the images of different road surface types by adopting an Alexnet model of transfer learning, and then performing dimensionality reduction processing on the extracted features by adopting a PCA algorithm to obtain the feature vectors of the road surface images;
step 1.4, vehicle dynamics response feature extraction: and 10-dimensional vehicle response characteristics are extracted from the vehicle-mounted CAN signals, the vehicle-mounted GPS and the inertial navigation system.
Further, the step 1.1 of extracting the tire noise features specifically comprises the following steps:
step 1.1.1, collecting the raw tire noise signal from an acoustic sensor, and converting the original two-channel audio into single-channel audio;
step 1.1.2, collecting engine noise at different engine rotating speeds through a calibration experiment, and recording frequency domain characteristics of the noise at different rotating speeds;
step 1.1.3, according to the frequency domain characteristic of the current engine rotating speed noise, carrying out noise reduction processing on original tire noise information by adopting a band-pass filter;
step 1.1.4, extracting a 12-dimensional feature vector from the denoised tire noise signal by statistical analysis, comprising: mean, median, standard deviation, mean absolute deviation, quartiles, skewness coefficient, kurtosis, Shannon entropy and spectral entropy;
step 1.1.5, converting the tire noise signal into a frequency-domain signal by fast Fourier transform and extracting 3-dimensional features from its power spectrum, namely: dominant frequency value, dominant frequency amplitude, dominant frequency ratio;
step 1.1.6, extracting a 14-dimensional feature vector from the tire noise signal with the MFCC algorithm, namely: MFCC1 to MFCC14;
and step 1.1.7, combining the statistical features, the power-spectrum features and the feature vector extracted by the MFCC algorithm into a 29-dimensional tire noise feature vector, completing the extraction of the tire noise features.
Further, the step 1.2 of laser radar road surface feature extraction specifically comprises the following steps:
step 1.2.1, respectively collecting point cloud sensing information of laser radars with different road surface types by adopting a real vehicle experiment, and labeling point cloud data of different roads;
step 1.2.2, determining a segmentation threshold W_limit for the lateral position of the point cloud;
Step 1.2.3, extracting 10-dimensional characteristic vectors from the echo information of the processed point cloud by adopting a statistical analysis method, wherein the method comprises the following steps: mean feature, median feature, standard deviation feature, mean absolute deviation, quartile, range, standard score.
Further, the step 1.3 of extracting the road surface features of the image specifically comprises the following steps:
step 1.3.1, collecting road surface images of different roads by adopting a real vehicle experiment, and cutting and marking the road images;
step 1.3.2, utilizing an image processing method to augment an original image, loading a pretrained Alexnet model, and improving and retraining the model by adopting transfer learning;
step 1.3.3, extracting the characteristic vectors of all images from the last full connection layer of the model;
and step 1.3.4, reducing the dimension of the feature vectors of the full-connection layer by adopting a PCA algorithm, wherein the feature vectors after dimension reduction are 10 dimensions in total.
Further, the step 1.4 of extracting the vehicle dynamics response characteristics specifically comprises the following steps:
step 1.4.1, according to the on-board information parsing protocol, extracting 7-dimensional response characteristics of the vehicle under different road environment experiments from the vehicle CAN signals, namely: vehicle speed, left and right wheel speeds, wheel speed difference, steering wheel angle, pedal force and throttle opening;
step 1.4.2, extracting the acceleration responses of the vehicle in the X, Y and Z directions from the vehicle-mounted GPS and the inertial navigation system, namely: lateral acceleration, vertical acceleration, longitudinal acceleration;
and step 1.4.3, combining the vehicle characteristics extracted in the step 1.4.1 and the step 1.4.2 to generate 10-dimensional vehicle dynamics response characteristics, and labeling the response characteristics according to different road types.
Further, the step 2 of data preprocessing and modeling of the LSTM road type identification model comprises the following steps:
step 2.1, data preprocessing, including data normalization and time series segmentation;
step 2.2, modeling of the time-series LSTM road classification model: the input layer of the LSTM model is set to the dimension of the feature vector; the hidden layer uses bidirectional long short-term memory (BiLSTM) neurons; the fully connected layer is set to the number of road types; a softmax layer computes the confidence probability of each road type prediction; the last layer is a classification layer that outputs the road type recognition result;
step 2.3, setting model training parameters, including a model optimization algorithm, an initial learning rate, a maximum iteration cycle, a Mini-Batch Size and a model training environment;
step 2.4, modeling training and testing of a time series LSTM road type recognition model;
further, the step 3 model deployment and online identification comprises the following steps:
step 3.1, converting the trained LSTM road type recognition model into an inference engine for the edge computing device with the TensorRT optimizer, and deploying the generated inference engine to the edge computing device;
step 3.2, well defining communication interfaces of each sensor and the edge computing equipment, setting sampling frequency, and ensuring the time synchronization of the road perception information of each sensor by adopting a timestamp alignment mode;
and step 3.3, performing feature extraction and data preprocessing on the perception information of each sensor using the method of step 1, feeding every 30 sampling periods into the LSTM road type recognition model as one time sequence, and having the LSTM road type recognition model recognize the road type online from the perception information.
Through the scheme, the invention can bring the following beneficial effects:
(1) The road surface perception information of each sensor of the intelligent vehicle is fully fused, different feature extraction methods are adopted according to perception information of each mode, a feature matrix of feature level information fusion is established, and the depth and the precision of the road surface perception information fusion are effectively improved.
(2) The LSTM deep learning network is adopted to convert the multi-modal fusion characteristics into the classification problem of the time sequence, the relevance of perception information on the time sequence is fully utilized, frequent false detection caused by accidental errors can be effectively avoided, and the robustness and the accuracy of pavement identification are further improved.
Drawings
FIG. 1 is a flow chart of a method for identifying the road type of an intelligent vehicle based on multi-mode information fusion
FIG. 2 is a schematic view of laser radar and camera installation
FIG. 3 is a schematic diagram of the clipping and normalization of a road surface image
FIG. 4 is a PCA feature dimension reduction contribution rate statistical chart
FIG. 5 is a diagram of time-series perceptual information segmentation
Detailed Description
The technical scheme of the invention is further described in the following with the accompanying drawings:
as shown in fig. 1, the invention provides a method for identifying road types of an intelligent vehicle by multi-modal information fusion, which is mainly used for improving the accuracy and robustness of the intelligent vehicle in identifying different road types, and mainly comprises the following steps:
Step 1, collecting road surface perception information and extracting features. Sensor perception information is collected under different road types (asphalt, gravel, ice and snow, and water-covered pavement) through real-vehicle measurement experiments, features are extracted from the perception information of each modality with a different modeling method, and the sampling frequency of all perception information is set to 10 Hz. The specific steps are as follows:
Step 1.1, extracting tire noise features. Tire noise information is collected, and a 29-dimensional tire noise feature vector is extracted using statistical analysis, power spectrum analysis and the Mel Frequency Cepstral Coefficient (MFCC) algorithm, specifically comprising the following steps:
Step 1.1.1, the raw tire noise signal is collected from the acoustic sensor, and the original two-channel audio is converted into single-channel audio.
Step 1.1.2, collecting engine noise of the vehicle at different engine rotating speeds through a calibration experiment, and recording frequency domain characteristics of the noise at different rotating speeds.
Step 1.1.3, according to the frequency-domain characteristics of the noise at the current engine speed, a band-pass filter is used to denoise the raw tire noise signal.
Step 1.1.4, a 12-dimensional feature vector is extracted from the denoised tire noise signal by statistical analysis, comprising: mean, median, standard deviation, mean absolute deviation, quartiles (4 dimensions), skewness coefficient, kurtosis, Shannon entropy and spectral entropy.
Step 1.1.5, the tire noise signal is transformed into a frequency-domain signal by fast Fourier transform, and 3-dimensional features are extracted from its power spectrum, namely: dominant frequency value, dominant frequency amplitude, dominant frequency ratio.
Step 1.1.6, a 14-dimensional feature vector is extracted from the tire noise signal with the MFCC algorithm, namely: MFCC1 to MFCC14.
Step 1.1.7, the statistical features, the power-spectrum features and the feature vector extracted by the MFCC algorithm are combined into a 29-dimensional tire noise feature vector, completing the extraction of the tire noise features.
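For illustration only, the following Python sketch shows how the 29-dimensional tire noise feature vector of steps 1.1.4 to 1.1.7 could be assembled, assuming a mono, band-pass-filtered signal. The use of librosa for the MFCCs, the histogram settings for the Shannon entropy, and the choice of the fourth "quartile" dimension are assumptions; the patent does not specify them.

```python
import numpy as np
from scipy.stats import entropy, kurtosis, skew
import librosa  # assumed here for the MFCC step; the patent does not name a library


def tire_noise_features(x, fs=44100):
    """Assemble the 29-dim tire noise vector: 12 statistical + 3 power-spectrum + 14 MFCC."""
    x = np.asarray(x, dtype=float)

    # --- 12 statistical features (step 1.1.4) ---
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    hist, _ = np.histogram(x, bins=64, density=True)
    stat = [
        x.mean(), np.median(x), x.std(), np.mean(np.abs(x - x.mean())),
        q1, q2, q3, q3 - q1,          # "quartile (4 dims)" -- the 4th dim (IQR) is an assumption
        skew(x), kurtosis(x),
        entropy(hist[hist > 0]),      # Shannon entropy of the amplitude histogram
    ]
    power = np.abs(np.fft.rfft(x)) ** 2
    stat.append(entropy(power / power.sum()))        # spectral entropy -> 12 dims in total

    # --- 3 power-spectrum features (step 1.1.5) ---
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    k = int(np.argmax(power))
    spec = [freqs[k], power[k], power[k] / power.sum()]   # dominant frequency, amplitude, ratio

    # --- 14 MFCC features (step 1.1.6), averaged over analysis frames ---
    mfcc = librosa.feature.mfcc(y=x, sr=fs, n_mfcc=14).mean(axis=1)

    return np.concatenate([stat, spec, mfcc])         # 12 + 3 + 14 = 29 dims (step 1.1.7)
```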
Step 1.2, extracting lidar road surface features. Road surface information is extracted from the lidar point cloud data, and 10-dimensional features are extracted from the lidar perception information by statistical analysis, specifically comprising the following steps:
Step 1.2.1, lidar point cloud perception information is collected on different road surface types through real-vehicle experiments, and the point cloud data of the different roads are labeled, mainly as follows:
(1) Lidar arrangement and installation. The four-line lidar is fixed at the center of the vehicle's front bumper, with the radar emitting plane inclined at an angle θ to the horizontal, ensuring that the lidar point cloud comes from the ground, as shown in FIG. 2.
(2) Radar perception information is collected on different road types, and each experimental record is labeled with its road surface type.
Step 1.2.2, to eliminate interference from point cloud data outside the road, a segmentation threshold W_limit for the lateral position of the point cloud must be determined. Since the lane width of a first-class road in China is 3.75 m, only point cloud data whose lateral position lies within [-1.9, 1.9] m is retained.
Step 1.2.3, extracting 10-dimensional characteristic vectors from the echo information of the processed point cloud by adopting a statistical analysis method, wherein the method comprises the following steps: mean, median, standard deviation, mean absolute deviation, quartile (4 dimensions), range, standard score.
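A minimal sketch, under an assumed point cloud column ordering, of the lateral filtering (step 1.2.2) and the 10-dimensional echo-intensity statistics (step 1.2.3); the exact definition of the "standard score" feature and of the fourth quartile dimension is not given in the patent and is treated as an assumption here.

```python
import numpy as np

W_LIMIT = 1.9  # lateral segmentation threshold in metres (half of the 3.75 m lane width, rounded)


def lidar_road_features(points):
    """points: (N, 4) array assumed to hold x (forward), y (lateral), z, echo intensity."""
    # step 1.2.2: keep only returns whose lateral position lies within the lane
    echo = points[np.abs(points[:, 1]) <= W_LIMIT][:, 3]

    # step 1.2.3: 10-dimensional statistical feature vector of the echo intensity
    q1, q2, q3 = np.percentile(echo, [25, 50, 75])
    return np.array([
        echo.mean(),                                   # mean
        np.median(echo),                               # median
        echo.std(),                                    # standard deviation
        np.mean(np.abs(echo - echo.mean())),           # mean absolute deviation
        q1, q2, q3, q3 - q1,                           # quartiles (4 dims -- IQR as 4th is an assumption)
        echo.max() - echo.min(),                       # range
        (np.median(echo) - echo.mean()) / echo.std(),  # "standard score"; reference value is an assumption
    ])
```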
Step 1.3, extracting image road surface features. Features are extracted from images of different road surface types with a transfer-learned AlexNet model, and the extracted features are then reduced in dimensionality with Principal Component Analysis (PCA) to obtain the road surface image feature vector. The specific steps are as follows:
Step 1.3.1, road surface images of different roads are collected through real-vehicle experiments, and the road images are cropped and labeled, mainly as follows:
(1) Camera arrangement and installation. The camera is fixed at the center of the vehicle roof by a bracket and tilted toward the ground at a fixed angle β, so that it captures as much road information as possible while remaining spatially synchronized with the radar perception information.
(2) To avoid interference from the surrounding environment in the image perception information, a central region covering 45% of the pixels, measured up from the bottom edge of the original image, is cropped; the cropped image is then normalized to a standard 227 × 227 image, as shown in FIG. 3, and the road type of each frame is labeled.
Step 1.3.2, the original images are augmented with image processing methods, a pretrained AlexNet model is loaded, and transfer learning is used to modify and retrain the model into a road type classification model, mainly as follows:
(1) To enlarge the training data, prevent overfitting during model training and improve training accuracy, the invention augments the training samples with the following four image processing methods: mirror flipping, contrast change, brightness change, and adaptive grayscale enhancement. After augmentation, the number of training images increased from 36,800 to 128,000.
(2) The AlexNet model pretrained on the ImageNet dataset is loaded; all other parameters are kept unchanged, and the last fully connected layer is modified from 4096 neurons to 4, where 4 represents the four road type outputs.
(3) The model is retrained on dual RTX 2080 Ti GPUs with a Mini-Batch Size of 32, a maximum of 10 iteration epochs and an initial learning rate of 10⁻³; the learning rate is halved every two epochs.
(4) During training, 90% of the data is used as the training set and 10% as the test set; after training, the model is evaluated on the test set, achieving an overall test accuracy of 86.7%.
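The following PyTorch sketch illustrates the transfer-learning setup of step 1.3.2 under stated assumptions: the augmentation transforms only approximate the four methods named above, the optimizer choice is not specified in the patent, and the DataLoader (mini-batch size 32) is left to the reader.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# augmentation roughly matching the four methods named above (mirror, contrast,
# brightness, grayscale-style enhancement); the exact parameters are assumptions
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.RandomAutocontrast(),
    transforms.Resize((227, 227)),
    transforms.ToTensor(),
])

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)  # pretrained AlexNet
model.classifier[6] = nn.Linear(4096, 4)                              # last FC layer: 4096 -> 4 road types

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)              # optimizer choice is an assumption
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)  # halve lr every 2 epochs
criterion = nn.CrossEntropyLoss()


def train(loader, epochs=10, device="cuda"):
    """Retrain the modified AlexNet on the augmented road images."""
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:          # mini-batch size 32 assumed in the DataLoader
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
```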
Step 1.3.3, the feature vectors Image_feature1 to Image_feature4096 of all images are extracted from the second-to-last fully connected layer of the model.
Step 1.3.4, the PCA algorithm reduces the dimensionality of the fully-connected-layer feature vectors, retaining only the principal components whose cumulative contribution rate exceeds 95%; the reduced feature vector has 10 dimensions in total, namely Image_PCA1 to Image_PCA10, as shown in FIG. 4.
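A sketch of steps 1.3.3 and 1.3.4, reusing the fine-tuned AlexNet and image loader from the previous sketch: a forward hook collects the 4096-dimensional activations of the penultimate fully connected layer, and scikit-learn's PCA keeps the components whose cumulative explained variance exceeds 95% (10 dimensions on the patent's data). The hooked layer index is an assumption.

```python
import numpy as np
import torch
from sklearn.decomposition import PCA


def extract_image_features(model, loader, device="cuda"):
    """Collect 4096-dim activations (Image_feature1..4096) via a forward hook, then apply PCA."""
    feats = []
    # hook on the penultimate fully connected layer; for torchvision's AlexNet this is
    # classifier[4] (the second 4096-unit Linear layer) -- the index is an assumption
    handle = model.classifier[4].register_forward_hook(
        lambda _m, _i, out: feats.append(out.detach().cpu().numpy())
    )
    model.to(device).eval()
    with torch.no_grad():
        for images, _ in loader:
            model(images.to(device))
    handle.remove()

    X = np.concatenate(feats, axis=0)      # (num_images, 4096)

    # step 1.3.4: keep the components whose cumulative contribution rate exceeds 95%;
    # on the patent's data this yields 10 dimensions (Image_PCA1..Image_PCA10)
    pca = PCA(n_components=0.95)
    return pca.fit_transform(X), pca
```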
Step 1.4, extracting vehicle dynamics response features. A 10-dimensional vehicle response feature vector is extracted from the on-board CAN signals, the on-board GPS and the inertial navigation system, specifically as follows:
Step 1.4.1, according to the on-board information parsing protocol, 7-dimensional response characteristics of the vehicle under different road environment experiments are extracted from the vehicle CAN signals, namely: vehicle speed, left and right wheel speeds, wheel speed difference, steering wheel angle, pedal force and throttle opening.
Step 1.4.2, the acceleration responses of the vehicle in the X, Y and Z directions are extracted from the on-board GPS and inertial navigation system, namely: lateral acceleration, vertical acceleration, longitudinal acceleration.
Step 1.4.3, the vehicle features extracted in steps 1.4.1 and 1.4.2 are combined into the 10-dimensional vehicle dynamics response feature vector, and the response features are labeled according to the road type.
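A minimal sketch of assembling the 10-dimensional vehicle dynamics response vector of step 1.4; the signal names and the dictionary-based interface are placeholders, since the patent only lists the ten physical quantities.

```python
import numpy as np


def vehicle_response_features(can, imu):
    """Assemble the 10-dim vehicle dynamics response vector for one sampling period.

    `can` and `imu` are dicts of already-decoded signals; the key names are
    placeholders and not part of the patent.
    """
    return np.array([
        can["vehicle_speed"],                                 # vehicle speed
        can["wheel_speed_left"],                              # left wheel speed
        can["wheel_speed_right"],                             # right wheel speed
        can["wheel_speed_left"] - can["wheel_speed_right"],   # wheel speed difference
        can["steering_wheel_angle"],                          # steering wheel angle
        can["pedal_force"],                                   # pedal force
        can["throttle_opening"],                              # throttle opening
        imu["acc_lateral"],                                   # lateral acceleration (Y)
        imu["acc_vertical"],                                  # vertical acceleration (Z)
        imu["acc_longitudinal"],                              # longitudinal acceleration (X)
    ])
```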
Step 2, data preprocessing and LSTM road type recognition model modeling. The feature vectors extracted in step 1 are preprocessed, an LSTM road type recognition model is constructed, and offline training and verification of the model are completed.
Step 2.1, data preprocessing. The extracted features are time-aligned and normalized, mainly through the following steps:
and 2.1.1, performing time synchronization on the sensing information under different roads by adopting a timestamp alignment mode, and ensuring that the information acquired by the sensors is in the same time sequence.
Step 2.1.2, all perception data are normalized to [-1, 1], ensuring that the data ranges of the different sensors are of the same order of magnitude.
Step 2.1.3, the time-series data are segmented: each sequence segment covers 30 sampling periods (3 s) and forms a 30 × 59 feature matrix, and adjacent sequence segments overlap by 10 sampling periods (1 s), as shown in FIG. 5.
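A sketch of the preprocessing in steps 2.1.2 and 2.1.3, assuming the 59-dimensional fused feature stream is already time-synchronized: per-feature scaling to [-1, 1] and segmentation into 30-sample windows (3 s at 10 Hz) with 10 samples of overlap between adjacent windows.

```python
import numpy as np


def normalize(X):
    """Scale each feature column of X (num_samples, 59) to [-1, 1] (step 2.1.2)."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return 2.0 * (X - x_min) / (x_max - x_min + 1e-12) - 1.0


def segment(X, labels, win=30, overlap=10):
    """Cut the synchronized feature stream into overlapping windows (step 2.1.3).

    Each window spans 30 sampling periods (3 s at 10 Hz) and adjacent windows share
    10 periods (1 s), i.e. the stride is win - overlap = 20 samples.
    """
    step = win - overlap
    seqs, seq_labels = [], []
    for start in range(0, len(X) - win + 1, step):
        seqs.append(X[start:start + win])            # (30, 59) feature matrix
        seq_labels.append(labels[start + win - 1])   # window labelled by its last sample (assumption)
    return np.stack(seqs), np.array(seq_labels)
```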
Step 2.2, modeling the time-series LSTM road classification model, specifically: the input layer of the LSTM model is 59, the dimension of the feature vector; the hidden layer has 100 neurons using bidirectional long short-term memory (BiLSTM); the fully connected layer is set to 4, the number of road types; a softmax layer computes the confidence probability of each road type prediction; the last layer is a classification layer that outputs the road type recognition result.
Step 2.3, setting the model training parameters, specifically: the network is trained with the Adaptive Moment Estimation (Adam) algorithm; the initial learning rate is 10⁻³ and is halved every 5 iteration epochs; the maximum number of training epochs is 20, the Mini-Batch Size is 32, and the model is trained on a GPU (GPU = 1).
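The patent's layer names read like a deep-learning toolbox configuration; the sketch below rebuilds an equivalent classifier and training loop in PyTorch as an assumption: 59-dimensional input, 100 bidirectional LSTM units, a 4-way fully connected layer with softmax, Adam at an initial learning rate of 10⁻³ halved every 5 epochs, 20 epochs, mini-batch size 32. Classifying from the last time step is also an assumption.

```python
import torch
import torch.nn as nn


class RoadTypeLSTM(nn.Module):
    """BiLSTM sequence classifier: 59-dim feature sequences -> 4 road types."""

    def __init__(self, input_dim=59, hidden=100, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)   # 2x for the two directions

    def forward(self, x):                # x: (batch, 30, 59)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])    # classify from the last time step (an assumption)


model = RoadTypeLSTM()
criterion = nn.CrossEntropyLoss()        # combines log-softmax with the classification layer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)  # halve lr every 5 epochs


def train(loader, epochs=20, device="cuda"):
    """Train on the segmented (30, 59) windows; mini-batch size 32 assumed in the DataLoader."""
    model.to(device).train()
    for _ in range(epochs):
        for seqs, labels in loader:
            seqs, labels = seqs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(seqs), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
```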
Step 2.4, training and testing of the time-series LSTM road type recognition model, specifically:
(1) The experimental data are randomly split in two: 80% as training samples and 20% as test samples. The training samples are further divided equally into 4 folds for cross-validation.
(2) After training, the model is tested and evaluated: if the road type identification accuracy exceeds 95%, the model is exported and saved; otherwise the model and training parameters are readjusted until the model's accuracy meets the requirement.
Step 3, model deployment and online identification. The trained LSTM road type recognition model is deployed to a Jetson AGX Xavier edge computing device, which then recognizes the road type online; this mainly comprises the following steps:
and 3.1, converting the trained LSTM road type recognition model into an inference engine of the edge computing equipment by adopting a TensoRT optimizer, and deploying the generated inference engine to Jetson AGX Xavier.
Step 3.2, the communication interfaces between each sensor and the Jetson AGX Xavier are defined, the sampling frequency is set to 10 Hz, and timestamp alignment is used to keep the road perception information of the sensors synchronized in time.
Step 3.3, feature extraction and data preprocessing are performed on each sensor's perception information with the method of step 1; every 30 sampling periods are fed into the LSTM road type recognition model as one time sequence, and the model recognizes the road type online from the perception information.
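A sketch of the online recognition loop of step 3.3 under stated assumptions: a rolling buffer holds the latest 30 preprocessed 59-dimensional feature vectors, and the deployed model (represented here by a plain PyTorch forward pass rather than a TensorRT engine call) is queried once per sampling period; the class ordering is an assumption.

```python
import collections
import numpy as np
import torch

ROAD_TYPES = ["asphalt", "gravel", "snow_ice", "water"]   # class order is an assumption
window = collections.deque(maxlen=30)                      # rolling buffer of 30 sampling periods (3 s)


def on_new_sample(model, feat, device="cuda"):
    """Called once per 10 Hz sampling period with the fused, preprocessed 59-dim vector `feat`."""
    window.append(np.asarray(feat, dtype=np.float32))
    if len(window) < 30:
        return None                                         # wait until a full 3 s window is available
    seq = torch.from_numpy(np.stack(window)).unsqueeze(0).to(device)   # (1, 30, 59)
    with torch.no_grad():
        probs = torch.softmax(model(seq), dim=1)[0]         # per-class confidence (softmax layer)
    return ROAD_TYPES[int(probs.argmax())], float(probs.max())
```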

Claims (6)

1. A multi-mode information fusion intelligent vehicle pavement type identification method is characterized by comprising the following steps:
step 1, road surface perception information acquisition and feature extraction: acquiring sensor perception information under different road types by adopting an actual measurement experiment, and performing feature extraction on perception information in different modes by adopting different modeling methods, wherein the sampling frequency settings of all perception information are the same;
the step 1 of collecting the road surface perception information and extracting the characteristics respectively comprises the following steps:
step 1.1, extracting tire noise characteristics: collecting tire noise information of the tire from an acoustic sensor, and extracting 29-dimensional tire noise information by respectively adopting statistical analysis, power spectrum characteristic analysis and MFCC algorithm;
step 1.2, laser radar road surface feature extraction: extracting road surface information from the point cloud data of the laser radar, and extracting 10-dimensional features from the perception information of the laser radar by adopting a statistical analysis method;
step 1.3, image pavement feature extraction: carrying out feature extraction on images of different pavement types by adopting an Alexnet model of transfer learning, and then carrying out dimensionality reduction processing on the extracted features by adopting a PCA algorithm to obtain a feature vector of the pavement image;
step 1.4, vehicle dynamics response feature extraction: extracting 10-dimensional vehicle response characteristics from a vehicle-mounted CAN signal, a vehicle-mounted GPS and an inertial navigation system;
step 2, data preprocessing and LSTM road type identification model modeling: preprocessing the feature vectors extracted in the step 1, constructing an LSTM road type recognition model, and completing off-line training and verification of the model;
the step 2 of data preprocessing and LSTM road type identification model modeling comprises the following steps:
step 2.1, data preprocessing: the method comprises the steps of data normalization and time series segmentation;
step 2.2, modeling of the time-series LSTM road classification model: the input layer of the LSTM model is set to the dimension of the feature vector; the hidden layer uses bidirectional long short-term memory (BiLSTM) neurons; the fully connected layer is set to the number of road types; a softmax layer computes the confidence probability of each road type prediction; the last layer is a classification layer that outputs the road type recognition result;
step 2.3, setting model training parameters: the method comprises a model optimization algorithm, an initial learning rate, a maximum iteration cycle, a Mini-Batch Size and a model training environment;
step 2.4, modeling training and testing of a time series LSTM road type recognition model;
step 3, model deployment and online identification: and (3) deploying the LSTM road type recognition model trained in the step (2) to edge computing equipment, and carrying out online recognition on the road surface type by using the trained LSTM road type recognition model.
2. The method for recognizing the road type of the intelligent vehicle with the multi-modal information fusion as claimed in claim 1, wherein the step 1.1 of extracting the tire noise characteristics specifically comprises the following steps:
step 1.1.1, collecting the raw tire noise signal from an acoustic sensor, and converting the original two-channel audio into single-channel audio;
step 1.1.2, collecting engine noise at different engine rotating speeds through a calibration experiment, and recording frequency domain characteristics of the noise at different rotating speeds;
step 1.1.3, according to the frequency domain characteristic of the current engine rotating speed noise, carrying out noise reduction processing on original tire noise information by adopting a band-pass filter;
step 1.1.4, extracting a 12-dimensional feature vector from the denoised tire noise signal by statistical analysis, comprising: mean, median, standard deviation, mean absolute deviation, quartiles, skewness coefficient, kurtosis, Shannon entropy and spectral entropy;
step 1.1.5, converting the tire noise signal into a frequency-domain signal by fast Fourier transform and extracting 3-dimensional features from its power spectrum, namely: dominant frequency value, dominant frequency amplitude, dominant frequency ratio;
step 1.1.6, extracting a 14-dimensional feature vector from the tire noise signal with the MFCC algorithm, namely: MFCC1 to MFCC14;
and step 1.1.7, combining the statistical features, the power-spectrum features and the feature vector extracted by the MFCC algorithm into a 29-dimensional tire noise feature vector, completing the extraction of the tire noise features.
3. The method for recognizing the road type of the intelligent vehicle fused with the multi-mode information as claimed in claim 1, wherein the step 1.2 of extracting the laser radar road surface features specifically comprises the following steps:
step 1.2.1, respectively collecting point cloud sensing information of laser radars with different road surface types by adopting a real vehicle experiment, and labeling point cloud data of different roads;
step 1.2.2, determining a segmentation threshold value W of the horizontal position of the point cloud limit
Step 1.2.3, extracting 10-dimensional characteristic vectors from the echo information of the processed point cloud by adopting a statistical analysis method, wherein the method comprises the following steps: mean feature, median feature, standard deviation feature, mean absolute deviation, quartile, range, standard score.
4. The method for recognizing the road surface type of the intelligent vehicle fused with the multi-mode information as claimed in claim 1, wherein the step 1.3 of extracting the road surface features of the image specifically comprises the following steps:
step 1.3.1, collecting road surface images of different roads by adopting a real vehicle experiment, and cutting and marking the road images;
step 1.3.2, utilizing an image processing method to augment an original image, loading a pretrained Alexnet model, and improving and retraining the model by adopting transfer learning;
step 1.3.3, extracting the characteristic vectors of all images from the last full connection layer of the model;
and step 1.3.4, reducing the dimensionality of the fully-connected-layer feature vector with the PCA algorithm, the reduced feature vector having 10 dimensions in total.
5. The method for recognizing the road type of the intelligent vehicle with the multi-modal information fusion as claimed in claim 1, wherein the step 1.4 of extracting the vehicle dynamics response characteristics specifically comprises the following steps:
step 1.4.1, according to the on-board information parsing protocol, extracting 7-dimensional response characteristics of the vehicle under different road environment experiments from the vehicle CAN signals, namely: vehicle speed, left and right wheel speeds, wheel speed difference, steering wheel angle, pedal force and throttle opening;
step 1.4.2, extracting the acceleration responses of the vehicle in the X, Y and Z directions from the vehicle-mounted GPS and the inertial navigation system, namely: lateral acceleration, vertical acceleration, longitudinal acceleration;
and step 1.4.3, combining the vehicle characteristics extracted in the step 1.4.1 and the step 1.4.2 to generate 10-dimensional vehicle dynamics response characteristics, and labeling the response characteristics according to different road types.
6. The method for recognizing the road type of the intelligent vehicle through multi-modal information fusion as claimed in claim 1, wherein the step 3 model deployment and online recognition comprises the following steps:
step 3.1, converting the trained LSTM road type recognition model into an inference engine for the edge computing device with the TensorRT optimizer, and deploying the generated inference engine to the edge computing device;
step 3.2, well defining communication interfaces of each sensor and the edge computing equipment, setting sampling frequency, and ensuring the time synchronization of the road perception information of each sensor by adopting a timestamp alignment mode;
and step 3.3, performing feature extraction and data preprocessing on the perception information of each sensor using the method of step 1, feeding every 30 sampling periods into the LSTM road type recognition model as one time sequence, and having the LSTM road type recognition model recognize the road type online from the perception information.
CN202010283306.8A 2020-04-13 2020-04-13 Multi-mode information fusion intelligent vehicle pavement type identification method Active CN111507233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010283306.8A CN111507233B (en) 2020-04-13 2020-04-13 Multi-mode information fusion intelligent vehicle pavement type identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010283306.8A CN111507233B (en) 2020-04-13 2020-04-13 Multi-mode information fusion intelligent vehicle pavement type identification method

Publications (2)

Publication Number Publication Date
CN111507233A CN111507233A (en) 2020-08-07
CN111507233B true CN111507233B (en) 2022-12-13

Family

ID=71876016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010283306.8A Active CN111507233B (en) 2020-04-13 2020-04-13 Multi-mode information fusion intelligent vehicle pavement type identification method

Country Status (1)

Country Link
CN (1) CN111507233B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112099014B (en) * 2020-08-24 2023-08-22 广东工业大学 Road millimeter wave noise model detection estimation method based on deep learning
CN112163285B (en) * 2020-11-03 2024-01-23 浙江天行健智能科技有限公司 Modeling method of road surface type prediction model for simulating driving system
CN112418324B (en) * 2020-11-25 2022-06-24 武汉大学 Cross-modal data fusion method for electrical equipment state perception
CN112666553B (en) * 2020-12-16 2023-04-18 动联(山东)电子科技有限公司 Road ponding identification method and equipment based on millimeter wave radar
CN113255448A (en) * 2021-04-23 2021-08-13 长江勘测规划设计研究有限责任公司 Method for recognizing front dam surface vortex based on deep learning
CN115339458A (en) * 2021-04-28 2022-11-15 华为技术有限公司 Pavement type identification method and device and vehicle
CN113780426B (en) * 2021-09-14 2023-06-30 中国联合网络通信集团有限公司 Multi-mode information fusion method, MEC, mode information acquisition unit and system
CN113962301B (en) * 2021-10-20 2022-06-17 北京理工大学 Multi-source input signal fused pavement quality detection method and system
CN114282430B (en) * 2021-11-29 2024-05-31 北京航空航天大学 Road surface condition sensing method and system based on multi-MEMS sensor data fusion
CN114494849B (en) * 2021-12-21 2024-04-09 重庆特斯联智慧科技股份有限公司 Road surface state identification method and system for wheeled robot
CN115320608B (en) * 2022-10-17 2023-01-03 广东粤港澳大湾区黄埔材料研究院 Method, device and system for monitoring tire road surface information
CN115937818A (en) * 2022-11-18 2023-04-07 吉林大学 Road surface type surveying method and device for intelligent automobile and related equipment
CN116129553A (en) * 2023-04-04 2023-05-16 北京理工大学前沿技术研究院 Fusion sensing method and system based on multi-source vehicle-mounted equipment
CN117218375B (en) * 2023-11-08 2024-02-09 山东科技大学 Priori knowledge and data driven based environment visibility prediction method and device
CN117392396B (en) * 2023-12-08 2024-03-05 安徽蔚来智驾科技有限公司 Cross-modal target state detection method, device, intelligent device and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829386A (en) * 2019-01-04 2019-05-31 清华大学 Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method
WO2020007453A1 (en) * 2018-07-03 2020-01-09 Nokia Technologies Oy Method and apparatus for sensor orientation determination

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102011008B1 (en) * 2017-04-25 2019-08-16 만도헬라일렉트로닉스(주) System and method for detecing a road state
CN109034371B (en) * 2018-06-27 2021-06-25 北京文安智能技术股份有限公司 Deep learning model reasoning period acceleration method, device and system
US11034357B2 (en) * 2018-09-14 2021-06-15 Honda Motor Co., Ltd. Scene classification prediction
CN109800661A (en) * 2018-12-27 2019-05-24 东软睿驰汽车技术(沈阳)有限公司 A kind of road Identification model training method, roads recognition method and device
CN109870456B (en) * 2019-02-01 2022-01-28 上海智能交通有限公司 Rapid detection system and method for road surface health condition

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020007453A1 (en) * 2018-07-03 2020-01-09 Nokia Technologies Oy Method and apparatus for sensor orientation determination
CN109829386A (en) * 2019-01-04 2019-05-31 清华大学 Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method

Also Published As

Publication number Publication date
CN111507233A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111507233B (en) Multi-mode information fusion intelligent vehicle pavement type identification method
CN110992683B (en) Dynamic image perception-based intersection blind area early warning method and system
JP7090105B2 (en) Classification of rare cases
CN107591002B (en) Real-time estimation method for highway traffic parameters based on distributed optical fiber
CN105892471A (en) Automatic automobile driving method and device
EP3498559B1 (en) Method for recognizing the driving style of a driver of a land vehicle, and corresponding apparatus
KR101473957B1 (en) Apparatus and method for determining insurance premium based on driving pattern recognition
CN113378741B (en) Auxiliary sensing method and system for aircraft tractor based on multi-source sensor
CN116738211A (en) Road condition identification method based on multi-source heterogeneous data fusion
Dózsa et al. Road abnormality detection using piezoresistive force sensors and adaptive signal models
CN113642114A (en) Modeling method for humanoid random car following driving behavior capable of making mistakes
CN113701642A (en) Method and system for calculating appearance size of vehicle body
CN210822158U (en) Automatic control system for motor vehicle windshield wiper
Wang et al. Road surface recognition based on vision and tire noise
CN112230208B (en) Automobile running speed detection method based on smart phone audio perception
Chen et al. Road roughness level identification based on bigru network
CN115346514A (en) Intelligent driving evaluation-oriented audio early warning intelligent identification method
KR102356347B1 (en) Security surveillance radar systems using feature base neural network learning and security surveillance method thereof
Serttaş et al. Driver classification using K-means clustering of within-car accelerometer data
CN113147781A (en) Intelligent driving information display system for automobile
Altunkaya et al. Design and implementation of a novel algorithm to smart tachograph for detection and recognition of driving behaviour
Darwiche et al. Speed bump detection for autonomous vehicles using signal-processing techniques
CN113177536B (en) Vehicle collision detection method and device based on deep residual shrinkage network
US20230322237A1 (en) Computer-Implemented Method for Training an Articial Intelligence Module to Determine a Tire Type of a Motor Vehicle
Yang et al. Road Terrain Recognition Based on Tire Noise for Autonomous Vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant