CN115294449A - Ground identification method for crawler robot - Google Patents

Ground identification method for crawler robot

Info

Publication number
CN115294449A
Authority
CN
China
Prior art keywords
road surface
feature
robot
layer
pavement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210862532.0A
Other languages
Chinese (zh)
Inventor
曾日芽
钟必清
秦博男
高泽鹏
刘炜杭
蒋海波
侯学轶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China North Vehicle Research Institute
Original Assignee
China North Vehicle Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China North Vehicle Research Institute filed Critical China North Vehicle Research Institute
Priority to CN202210862532.0A priority Critical patent/CN115294449A/en
Publication of CN115294449A publication Critical patent/CN115294449A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a tracked-robot ground identification method based on the fusion of internal sensing type sensor information. Road surface types are first selected according to the typical outdoor working environments of the tracked robot; internal sensing type sensor data are collected while the robot runs on the different road surfaces at different speeds, and the data are divided into a training set and a test set. Statistical feature extraction and covariance-based feature selection are used to obtain the feature types in each signal component that are suitable for road surface classification, and these are fused into road surface feature vectors. A probabilistic neural network is constructed and trained with the training-set road surface feature vectors as input, yielding a probabilistic neural network recognizer for each single-source signal; the test set is then fed into the trained networks to obtain the single-source road surface recognition results. Finally, multi-source fusion is performed with a weighted voting decision method to obtain the final road surface type. The invention can identify the road surface of a tracked robot under straight-line, multi-speed operating conditions with high accuracy.

Description

Ground identification method for crawler robot
Technical Field
The invention belongs to the technical field of robot road surface identification, and in particular relates to a tracked-robot ground identification method based on the fusion of internal sensing type sensor information.
Background
As a common special-purpose working platform, the tracked robot is often used in dangerous outdoor tasks such as rescue and explosive-ordnance disposal. However, outdoor environments are complex and changeable, and weather changes, road surface types and similar factors seriously hinder the motion control of the tracked robot. A tracked robot overcomes road resistance and other obstacles mainly through the characteristics of the track-soil system; identifying the road surface it is running on, or is about to run on, can therefore effectively improve motion control accuracy and travel efficiency. Road surface identification technology is thus of great significance for improving the intelligence and autonomy of tracked robots.
Classified by sensor category, road surface identification is mainly based on either external sensing type sensors or internal sensing type sensors. External sensing type sensors are mainly optical and acoustic sensors, such as lidar and cameras; internal sensing type sensors mainly refer to the sensors used for state feedback of the robot itself. External sensing type sensors are easily affected by environmental changes such as illumination intensity and humidity, and therefore have difficulty adapting to complex and changeable field environments. Accordingly, road surface recognition based on internal sensing type sensors is attracting increasing attention. Traditional methods perform parameter identification based on complex track-road dynamics, but they depend on a high-precision model, suffer from model complexity and heavy computation, and cannot identify the road surface quickly; in addition, the uncertainty of the model itself affects recognition accuracy. Machine learning based methods avoid the dependence on a high-precision model and have therefore become a research trend for solving these problems.
Machine learning methods require preprocessing the data, extracting corresponding feature vectors, and exploring the relationship between the feature vectors and the label types. Most research extracts features with generic methods, such as taking the extreme value or mean value of the data over a certain period of time. However, because of the influence of the data source (sensor type, mounting position, signal type, and so on), a single generic feature extraction method cannot effectively represent the road surface types, while too many feature types easily form high-dimensional vectors and lead to overfitting or excessive computation. Extracting feature values of the data in a specific interval from multiple angles, and screening and fusing this feature information effectively, has therefore become a key problem to be solved urgently.
Disclosure of Invention
(I) Technical problem to be solved
The invention provides a tracked-robot ground identification method based on the fusion of internal sensing type sensor information, aiming to solve the technical problems of how to extract and select road surface data features and how to identify the road surface type based on a probabilistic neural network.
(II) Technical scheme
In order to solve the above technical problems, the invention provides a tracked-robot ground identification method based on the fusion of internal sensing type sensor information, comprising the following steps:
S1. Collecting internal sensing type sensor data on typical outdoor working road surfaces of the tracked robot
The internal sensing type sensor comprises an inertial sensor and a driving motor current sensor; collecting sensor signals of an inertial sensor and a driving motor current sensor on an asphalt pavement, a cobblestone pavement, a dry sand pavement, a sandy soil pavement and a clay pavement through a data collection terminal; the inertial sensor signals comprise three-axis acceleration and three-axis angular velocity signals; the driving motor current sensor signals comprise current signals of driving motors on two sides when the crawler robot runs in a straight line in a rear-drive mode;
S2. Feature extraction based on statistical calculation and feature selection based on covariance
S2-1, feature extraction based on statistical calculation
In the feature extraction, a sliding window with the window length of 1s and the overlapping degree of 50% is selected for signal processing, and time domain features and frequency domain features are extracted through a statistical method; for off-line training, randomly extracting 60% of an original data set as a training set, ensuring that the sample sizes of various pavements in the training set are consistent, and taking the rest 40% of data as a test set; performing time domain feature extraction on various sensor signals through dimensional statistic value calculation and dimensionless statistic value calculation; fourier transformation is respectively carried out on various sensor signals, and frequency domain feature extraction is carried out;
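As an illustration of this windowing and data-split step, the following sketch (a non-normative Python example; the function names, sampling-rate argument and random seed are assumptions and not part of the invention) segments a one-dimensional sensor signal into 1 s windows with 50% overlap and draws an equally sized training subset per road surface class:

```python
import numpy as np

def sliding_windows(signal, fs, win_s=1.0, overlap=0.5):
    """Split a 1-D signal into windows of win_s seconds with the given overlap."""
    win = int(win_s * fs)              # samples per window (1 s)
    step = int(win * (1 - overlap))    # 50% overlap -> step of half a window
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def split_train_test(labels, train_ratio=0.6, seed=0):
    """Per road-surface class, draw the same number of training windows
    (60% of the smallest class); the remaining windows form the test set."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    n_train = int(train_ratio * min(np.sum(labels == c) for c in classes))
    train_idx, test_idx = [], []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```

Capping every class at the size of the smallest one is one simple way to keep the per-class sample sizes in the training set consistent, as the step above requires.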
S2-2. Feature selection based on covariance
The correlation RelF(F_i) between each feature and the label, and the redundancy RedF(F_i) between that feature and the other features, are evaluated with a covariance function according to the following formulas:
RelF(F_i) = Cov(F_i, L)
RedF(F_i) = (1 / |F|) · Σ_{F_j ∈ F} Cov(F_i, F_j)
Cov(X, Y) = E((X - E(X))(Y - E(Y)))
where E(X) and E(Y) represent the expected values of X and Y, respectively; S and L represent the total data set and the data labels over which the covariances are computed; F_i, F_j ∈ F are feature vectors; RelF(F_i) and RedF(F_i) respectively represent the correlation of the feature under evaluation with the final result and its redundancy with the other features;
The difference Eval(F_i) between RelF(F_i) and RedF(F_i) is calculated according to the following formula:
Eval(F_i) = RelF(F_i) - RedF(F_i)
With the feature vectors normalized, a larger Eval(F_i) indicates a higher priority for the feature; finally, the 7 feature types with the highest Eval values are combined into a feature vector and used as the input of the neural network;
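The covariance-based scoring can be prototyped as below. This is a minimal sketch under stated assumptions: features are z-score normalized, RelF is taken as the magnitude of the empirical covariance between a feature and the numeric label, and RedF as the mean magnitude of its covariance with the other features; the exact normalizations used by the invention may differ.

```python
import numpy as np

def select_features(X, y, k=7):
    """Covariance-based feature selection (illustrative scoring).
    X: (n_samples, n_features) feature matrix, y: integer road-surface labels."""
    # Normalize features so covariances are comparable across feature types.
    Xn = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    yn = (y - y.mean()) / (y.std() + 1e-12)
    n, d = Xn.shape
    rel = np.abs(Xn.T @ yn) / n                  # RelF: covariance of each feature with the label
    cov_ff = np.abs(Xn.T @ Xn) / n               # pairwise feature covariances
    red = (cov_ff.sum(axis=1) - np.diag(cov_ff)) / (d - 1)  # RedF: mean redundancy with the others
    eval_score = rel - red                       # Eval(F_i) = RelF(F_i) - RedF(F_i)
    return np.argsort(eval_score)[::-1][:k]      # indices of the k highest-priority features
```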
S3. Constructing a probabilistic neural network and training it with the training set
The probabilistic neural network consists of an input layer, a pattern layer, a summation layer and an output layer; the number of neurons in the input layer is equal to the dimension of the feature vector;
The feature vectors, as the input of the neural network, are passed by the input layer to the pattern layer; the neurons of the pattern layer are set as radial basis functions; the distance Φ_pq(x) between the input vector and each center vector is calculated according to the following formula:
Φ_pq(x) = 1 / ((2π)^(k/2) · θ^k) · exp( -||x - x_pq||² / (2θ²) )
where p = 1, 2, 3, …, N (N is the total number of training samples), θ is a smoothing parameter, k is the spatial dimension of the samples, and x_pq represents the qth center of the pth sample;
The output of class p at the summation layer is:
f_p(x) = (1 / N_L) · Σ_{q=1}^{N_L} Φ_pq(x)
where N_L represents the number of neurons of the pth class;
The number of neurons in the summation layer is the same as the number of road surface types, and the probability density estimate of each road surface type is passed in proportion to the output layer; the output layer is composed of competitive neurons: the results of the summation layer are normalized by the output layer to obtain the final probability density estimates of all road surface types, and the neuron with the maximum posterior probability is selected by threshold discrimination, thereby giving the most probable road surface type on which the robot is currently running; this process finally forms 8 single-source recognizers, corresponding to the 8 signal categories;
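A compact sketch of a probabilistic neural network of the kind described (input, pattern, summation and output layers with Gaussian radial-basis pattern neurons) is shown below; the class name, the single smoothing parameter theta and the uniform kernel form are illustrative assumptions rather than the invention's exact formulation:

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network: pattern layer of Gaussian kernels centered
    on the training samples, class-wise averaging in the summation layer, arg-max output."""
    def __init__(self, theta=0.1):
        self.theta = theta

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        # Pattern layer: Gaussian kernel between each input and each training sample.
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-d2 / (2 * self.theta ** 2))
        # Summation layer: average kernel activation per road-surface class.
        dens = np.stack([phi[:, self.y == c].mean(axis=1) for c in self.classes], axis=1)
        # Output layer: competitive neurons pick the class of maximum posterior density.
        return self.classes[np.argmax(dens, axis=1)]
```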
S4. Ground category fusion based on a weighted voting decision method
Taking the recognition performance of the 8 single-source recognizers on each road surface in the test set as the reference, the reference accuracy R_mn is calculated according to the following formula:
R_mn = R_m · R_n
where R_m represents the overall accuracy of the mth single-source recognizer on the test set, with m = 1, 2, 3, …, 8 corresponding to the longitudinal acceleration, lateral acceleration, vertical acceleration, roll angular velocity, pitch angular velocity, yaw angular velocity, and the inner-side and outer-side motor currents; R_n represents the accuracy of the single-source recognizer for each road surface type on the test set, with n = 1, 2, 3, 4, 5 corresponding to the asphalt, cobblestone, dry sand, sandy soil and clay road surfaces;
The weighted voting decision proceeds as follows: the road surface type recognition results of the 8 single-source recognizers are first collected; if no two or more road surfaces tie for the highest number of votes, the road surface type with the most votes is output as the final result; if two or more road surfaces tie for the highest number of votes, the tie is resolved by comparing the reference accuracies R_mn of the single-source recognizers that voted for the tied road surfaces, and the road surface with the higher reference accuracy is taken as the final recognition result.
Furthermore, the inertial sensor is installed at the center of the mounting plate on the robot, and its data are sent directly to the data acquisition terminal.
Further, the driving motor current sensors are integrated in the motor controller, and their data are sent to the data acquisition terminal by the motor controller.
Further, in step S1, when the sensor signals are collected, the tracked robot, with the rear-drive mode as its forward mode, travels steadily on each of the five road surfaces at ten equally spaced speeds, each run covering more than 100 m or lasting more than 1 minute; the ten speeds include the highest speed of the tracked robot.
Further, in step S1, the acquired sensor signals are preprocessed: the data segments corresponding to the acceleration and stopping phases of the tracked robot are removed, and the inertial sensor signals are down-sampled to the same sampling frequency as the driving motor current signals.
Further, in step S2-1, the dimension statistic includes: absolute mean, median, mean, maximum, minimum, root mean square, square root amplitude, variance, skewness, kurtosis; dimensionless statistical values include: crest factor, form factor, pulse factor, clearance factor, skew factor.
Further, in step S2-1, the frequency domain feature extraction includes dimension statistical feature values and frequency statistical feature values; the dimensional statistical characteristic values comprise a maximum value, a median value, a root mean square amplitude value and a variance, and the frequency statistical characteristic values comprise an average frequency, a root mean square frequency, a center frequency and a root variance frequency.
(III) Advantageous effects
The invention provides a tracked-robot ground identification method based on the fusion of internal sensing type sensor information. Road surface types are first selected according to the typical outdoor working environments of the tracked robot; internal sensing type sensor data (including the inertial sensor and the driving motor currents) are collected while the robot runs on the different road surfaces at different speeds, and the data are divided into a training set and a test set. Statistical feature extraction and covariance-based feature selection are used to obtain the feature types in each signal component that are suitable for road surface classification, and these are fused into road surface feature vectors. A probabilistic neural network is constructed and trained with the training-set road surface feature vectors as input, yielding a probabilistic neural network recognizer for each single-source signal; the test set is then fed into the trained networks to obtain the single-source road surface recognition results. Finally, multi-source fusion is performed with a weighted voting decision method to obtain the final road surface type. Based on internal sensing type sensor information, the probabilistic neural network and the weighted voting decision, the method can identify the road surface of a tracked robot under straight-line, multi-speed operating conditions with high accuracy, and provides an effective reference for the motion planning and control of tracked robots.
Drawings
Fig. 1 is a flow chart of a ground identification method of a crawler robot according to an embodiment of the present invention.
Detailed Description
In order to make the objects, contents and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
This embodiment provides a tracked-robot ground identification method based on the fusion of internal sensing type sensor information, the main flow of which is shown in Fig. 1; the method specifically comprises the following steps:
S1. Collecting internal sensing type sensor data on typical outdoor working road surfaces of the tracked robot
The internal sensing type sensors comprise an inertial sensor and driving motor current sensors. The inertial sensor is mounted at the center of the mounting plate on the robot, and its data are sent directly to the data acquisition terminal; the driving motor current sensors are integrated in the motor controller, and their data are sent to the data acquisition terminal by the motor controller.
Sensor signals from the inertial sensor and the driving motor current sensors are collected by a data acquisition terminal on five kinds of road surfaces: asphalt, cobblestone, dry sand, sandy soil and clay. When the sensor signals are collected, the tracked robot, with the rear-drive mode as its forward mode, travels steadily on each of the five road surfaces at ten equally spaced speeds, each run covering more than 100 m or lasting more than 1 minute. The ten speeds include the highest speed of the tracked robot.
The inertial sensor signals mainly comprise the three-axis acceleration and three-axis angular velocity signals; the driving motor current sensor signals mainly comprise the current signals of the driving motors on both sides while the tracked robot travels in a straight line in rear-drive mode. There are therefore 8 signal types in total. The acquired sensor signals are preprocessed: the data segments corresponding to the acceleration and stopping phases of the tracked robot are removed, and the inertial sensor signals are down-sampled to the same sampling frequency as the driving motor current signals.
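A minimal sketch of this preprocessing follows, assuming the steady-speed portion of each run is already marked by a boolean mask and that the inertial-to-current sampling-rate ratio is an integer (both assumptions made only for illustration):

```python
import numpy as np
from scipy.signal import decimate

def preprocess(imu, imu_fs, current_fs, steady_mask):
    """Keep only the steady-speed portion of a run and resample the inertial
    signals to the drive-motor current sampling rate.
    imu: (n_samples, n_channels) IMU data; steady_mask: boolean mask that is False
    during the acceleration and stopping segments."""
    imu = imu[steady_mask]                       # drop acceleration / stop segments
    factor = int(round(imu_fs / current_fs))     # integer decimation factor (assumed)
    if factor <= 1:
        return imu                               # already at the current-signal rate
    # Anti-aliased downsampling of every IMU channel to the current-signal rate.
    return decimate(imu, factor, axis=0)
```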
S2, feature extraction based on statistical calculation and feature selection based on covariance
S2-1, feature extraction based on statistical calculation
In the feature extraction, a sliding window with the window length of 1s and the overlapping degree of 50% is selected for signal processing, and time domain features and frequency domain features are extracted through a statistical method. For off-line training, 60% of the original data set is randomly extracted to serve as a training set, sample sizes of various road surfaces in the training set are consistent, and the rest 40% of data serve as a testing set.
And performing time domain feature extraction on various sensor signals by dimensional statistic value calculation and dimensionless statistic value calculation. Wherein, the dimension statistic value comprises: absolute mean, median, mean, maximum, minimum, root mean square, square root amplitude, variance, skewness, kurtosis; dimensionless statistical values include: crest factor, form factor, pulse factor, clearance factor, skew factor.
Fourier transformation is respectively carried out on various sensor signals, and frequency domain feature extraction is carried out. The frequency domain feature extraction mainly comprises dimension statistical feature values (maximum value, median value, root mean square amplitude and variance) and frequency statistical feature values (average frequency, root mean square frequency, center frequency and root variance frequency).
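For illustration, the sketch below computes a subset of the listed time-domain and frequency-domain statistics for a single window; the precise definitions of some quantities (for example mean frequency and form factor) vary in the literature, so the ones used here are assumptions:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def window_features(x, fs):
    """A few of the listed time- and frequency-domain statistics for one 1-s window."""
    feats = {}
    rms = np.sqrt(np.mean(x ** 2))
    feats["mean"] = x.mean()
    feats["rms"] = rms
    feats["variance"] = x.var()
    feats["skewness"] = skew(x)
    feats["kurtosis"] = kurtosis(x)
    feats["crest_factor"] = np.max(np.abs(x)) / rms      # peak over RMS
    feats["form_factor"] = rms / np.mean(np.abs(x))      # RMS over rectified mean
    # Frequency-domain statistics from the one-sided amplitude spectrum.
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    p = spec / (spec.sum() + 1e-12)                      # normalized spectral weights
    centroid = np.sum(f * p)
    feats["mean_frequency"] = spec.mean()                # one common definition
    feats["center_frequency"] = centroid                 # spectral centroid
    feats["rms_frequency"] = np.sqrt(np.sum((f ** 2) * p))
    feats["root_variance_frequency"] = np.sqrt(np.sum(((f - centroid) ** 2) * p))
    return feats
```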
S2-2. Feature selection based on covariance
The correlation RelF(F_i) between each feature and the label, and the redundancy RedF(F_i) between that feature and the other features, are evaluated with a covariance function according to the following formulas:
RelF(F_i) = Cov(F_i, L)
RedF(F_i) = (1 / |F|) · Σ_{F_j ∈ F} Cov(F_i, F_j)
Cov(X, Y) = E((X - E(X))(Y - E(Y)))
where E(X) and E(Y) represent the expected values of X and Y, respectively; S and L represent the total data set and the data labels over which the covariances are computed; F_i, F_j ∈ F are feature vectors. RelF(F_i) and RedF(F_i) respectively represent the correlation of the feature under evaluation with the final result and its redundancy with the other features.
The difference Eval(F_i) between RelF(F_i) and RedF(F_i) is calculated as:
Eval(F_i) = RelF(F_i) - RedF(F_i)
With the feature vectors normalized, a larger Eval(F_i) indicates a higher priority for the feature. Finally, the 7 feature types with the highest Eval values are combined into a feature vector and used as the input of the neural network.
S3, constructing a probabilistic neural network and training the probabilistic neural network by using a training set
The probabilistic neural network is composed of an input layer, a pattern layer, a summation layer and an output layer. The number of neurons in the input layer is equal to the dimension of the feature vector.
The feature vectors, as the input of the neural network, are passed from the input layer to the pattern layer. The neurons of the pattern layer are set as radial basis functions. The distance Φ_pq(x) between the input vector and each center vector is calculated according to the following formula:
Φ_pq(x) = 1 / ((2π)^(k/2) · θ^k) · exp( -||x - x_pq||² / (2θ²) )
where p = 1, 2, 3, …, N (N is the total number of training samples), θ is a smoothing parameter, k is the spatial dimension of the samples, and x_pq represents the qth center of the pth sample.
The output of class p at the summation layer is:
f_p(x) = (1 / N_L) · Σ_{q=1}^{N_L} Φ_pq(x)
where N_L represents the number of neurons of the pth class.
The number of neurons in the summation layer is the same as the number of road surface types, and the probability density estimate of each road surface type is passed in proportion to the output layer. The output layer is composed of competitive neurons: the results of the summation layer are normalized by the output layer to obtain the final probability density estimates of all road surface types, and the neuron with the maximum posterior probability is selected by threshold discrimination, thereby giving the most probable road surface type on which the robot is currently running. This process finally forms 8 single-source recognizers, corresponding to the 8 signal categories.
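Building on the PNN sketch given earlier, the following illustrative snippet trains one single-source recognizer per signal category and measures the overall accuracy R_m and the per-surface accuracy R_n on the test set, which the weighted voting step then uses; the function name, argument layout and reuse of the earlier PNN class are assumptions:

```python
import numpy as np

def train_single_source_recognizers(features_train, labels_train,
                                    features_test, labels_test,
                                    n_surfaces=5, theta=0.1):
    """features_*[m]: (n_windows, 7) feature matrix of the m-th signal source.
    Returns the trained recognizers, R_m (overall accuracy) and R_n (per-surface accuracy)."""
    recognizers, R_m, R_n = [], [], []
    for Xtr, Xte in zip(features_train, features_test):
        pnn = PNN(theta=theta).fit(Xtr, labels_train)     # one PNN per signal category
        pred = pnn.predict(Xte)
        recognizers.append(pnn)
        R_m.append(np.mean(pred == labels_test))           # overall test-set accuracy
        R_n.append([np.mean(pred[labels_test == n] == n)   # accuracy on each road surface
                    for n in range(n_surfaces)])
    return recognizers, np.array(R_m), np.array(R_n)
```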
S4. Ground category fusion based on a weighted voting decision method
Taking the recognition performance of the 8 single-source recognizers on each road surface in the test set as the reference, the reference accuracy R_mn is calculated according to the following formula:
R_mn = R_m · R_n
where R_m represents the overall accuracy of the mth single-source recognizer on the test set, with m = 1, 2, 3, …, 8 corresponding to the longitudinal acceleration, lateral acceleration, vertical acceleration, roll angular velocity, pitch angular velocity, yaw angular velocity, and the inner-side and outer-side motor currents; R_n represents the accuracy of the single-source recognizer for each road surface type on the test set, with n = 1, 2, 3, 4, 5 corresponding to the asphalt, cobblestone, dry sand, sandy soil and clay road surfaces.
The weighted voting decision proceeds as follows: the road surface type recognition results of the 8 single-source recognizers are first collected; if no two or more road surfaces tie for the highest number of votes, the road surface type with the most votes is output as the final result; if two or more road surfaces tie for the highest number of votes, the tie is resolved by comparing the reference accuracies R_mn of the single-source recognizers that voted for the tied road surfaces, and the road surface with the higher reference accuracy is taken as the final recognition result.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (7)

1. A tracked robot ground identification method based on the fusion of internal sensing type sensor information, characterized by comprising the following steps:
S1. Collecting internal sensing type sensor data on typical outdoor working road surfaces of the tracked robot
The internal sensing type sensor comprises an inertial sensor and a driving motor current sensor; collecting sensor signals of an inertial sensor and a driving motor current sensor on an asphalt pavement, a cobblestone pavement, a dry sand pavement, a sandy soil pavement and a clay pavement through a data collection terminal; the inertial sensor signals comprise three-axis acceleration and three-axis angular velocity signals; the driving motor current sensor signals comprise driving motor current signals on two sides when the crawler robot runs linearly in a rear-drive mode;
S2. Feature extraction based on statistical calculation and feature selection based on covariance
S2-1, feature extraction based on statistical calculation
In the feature extraction, a sliding window with the window length of 1s and the overlapping degree of 50% is selected for signal processing, and time domain features and frequency domain features are extracted through a statistical method; for off-line training, randomly extracting 60% of an original data set as a training set, ensuring that the sample sizes of various road surfaces in the training set are consistent, and taking the rest 40% of data as a test set; performing time domain feature extraction on various sensor signals through dimensional statistic value calculation and dimensionless statistic value calculation; fourier transformation is respectively carried out on various sensor signals, and frequency domain feature extraction is carried out;
S2-2. Feature selection based on covariance
The correlation RelF(F_i) between each feature and the label, and the redundancy RedF(F_i) between that feature and the other features, are evaluated with a covariance function according to the following formulas:
RelF(F_i) = Cov(F_i, L)
RedF(F_i) = (1 / |F|) · Σ_{F_j ∈ F} Cov(F_i, F_j)
Cov(X, Y) = E((X - E(X))(Y - E(Y)))
where E(X) and E(Y) represent the expected values of X and Y, respectively; S and L represent the total data set and the data labels over which the covariances are computed; F_i, F_j ∈ F are feature vectors; RelF(F_i) and RedF(F_i) respectively represent the correlation of the feature under evaluation with the final result and its redundancy with the other features;
The difference Eval(F_i) between RelF(F_i) and RedF(F_i) is calculated according to the following formula:
Eval(F_i) = RelF(F_i) - RedF(F_i)
With the feature vectors normalized, a larger Eval(F_i) indicates a higher priority for the feature; finally, the 7 feature types with the highest Eval values are combined into a feature vector and used as the input of the neural network;
S3. Constructing a probabilistic neural network and training it with the training set
The probabilistic neural network consists of an input layer, a pattern layer, a summation layer and an output layer; the number of neurons in the input layer is equal to the dimension of the feature vector;
The feature vectors, as the input of the neural network, are passed by the input layer to the pattern layer; the neurons of the pattern layer are set as radial basis functions; the distance Φ_pq(x) between the input vector and each center vector is calculated according to the following formula:
Φ_pq(x) = 1 / ((2π)^(k/2) · θ^k) · exp( -||x - x_pq||² / (2θ²) )
where p = 1, 2, 3, …, N (N is the total number of training samples), θ is a smoothing parameter, k is the spatial dimension of the samples, and x_pq represents the qth center of the pth sample;
The output of class p at the summation layer is:
f_p(x) = (1 / N_L) · Σ_{q=1}^{N_L} Φ_pq(x)
where N_L represents the number of neurons of the pth class;
The number of neurons in the summation layer is the same as the number of road surface types, and the probability density estimate of each road surface type is passed in proportion to the output layer; the output layer is composed of competitive neurons: the results of the summation layer are normalized by the output layer to obtain the final probability density estimates of all road surface types, and the neuron with the maximum posterior probability is selected by threshold discrimination, thereby giving the most probable road surface type on which the robot is currently running; this process finally forms 8 single-source recognizers, corresponding to the 8 signal categories;
S4. Ground category fusion based on a weighted voting decision method
Taking the recognition performance of the 8 single-source recognizers on each road surface in the test set as the reference, the reference accuracy R_mn is calculated according to the following formula:
R_mn = R_m · R_n
where R_m represents the overall accuracy of the mth single-source recognizer on the test set, with m = 1, 2, 3, …, 8 corresponding to the longitudinal acceleration, lateral acceleration, vertical acceleration, roll angular velocity, pitch angular velocity, yaw angular velocity, and the inner-side and outer-side motor currents; R_n represents the accuracy of the single-source recognizer for each road surface type on the test set, with n = 1, 2, 3, 4, 5 corresponding to the asphalt, cobblestone, dry sand, sandy soil and clay road surfaces;
The weighted voting decision proceeds as follows: the road surface type recognition results of the 8 single-source recognizers are first collected; if no two or more road surfaces tie for the highest number of votes, the road surface type with the most votes is output as the final result; if two or more road surfaces tie for the highest number of votes, the tie is resolved by comparing the reference accuracies R_mn of the single-source recognizers that voted for the tied road surfaces, and the road surface with the higher reference accuracy is taken as the final recognition result.
2. The ground recognition method for the track robot according to claim 1, wherein the inertial sensor is mounted at a central position of a mounting plate on the robot, and data of the inertial sensor is directly transmitted to the data acquisition terminal.
3. The ground identification method for the crawler robot according to claim 1, wherein the driving motor current sensors are integrated into the motor controller, and their data are transmitted to the data acquisition terminal by the motor controller.
4. The ground recognition method for the crawler robot according to claim 1, wherein in step S1, when the sensor signals are collected, the crawler robot, with the rear-drive mode as its forward mode, travels steadily on each of the five road surfaces at ten equally spaced speeds, each run covering more than 100 m or lasting more than 1 minute; the ten speeds include the highest speed of the tracked robot.
5. The ground identification method for the crawler robot according to claim 1, wherein in step S1, the acquired sensor signals are preprocessed: the data segments corresponding to the acceleration and stopping phases of the crawler robot are removed, and the inertial sensor signals are down-sampled to the same sampling frequency as the driving motor current signals.
6. The ground recognition method for a track robot according to claim 1, wherein in step S2-1, the dimensional statistics include: absolute mean, median, mean, maximum, minimum, root mean square, square root amplitude, variance, skewness, kurtosis; dimensionless statistical values include: crest factor, form factor, pulse factor, clearance factor, skew factor.
7. The ground identification method for the crawler robot according to claim 1, wherein in step S2-1, the frequency domain feature extraction includes dimensional statistical feature values and frequency statistical feature values; the dimensional statistical characteristic values comprise a maximum value, a median value, a root mean square amplitude value and a variance, and the frequency statistical characteristic values comprise an average frequency, a root mean square frequency, a center frequency and a root variance frequency.
CN202210862532.0A 2022-07-21 2022-07-21 Ground identification method for crawler robot Pending CN115294449A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210862532.0A CN115294449A (en) 2022-07-21 2022-07-21 Ground identification method for crawler robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210862532.0A CN115294449A (en) 2022-07-21 2022-07-21 Ground identification method for crawler robot

Publications (1)

Publication Number Publication Date
CN115294449A true CN115294449A (en) 2022-11-04

Family

ID=83825211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210862532.0A Pending CN115294449A (en) 2022-07-21 2022-07-21 Ground identification method for crawler robot

Country Status (1)

Country Link
CN (1) CN115294449A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116842385A (en) * 2023-06-30 2023-10-03 南京理工大学 LSTM road surface unevenness identification method based on tracked vehicle vibration characteristics
CN116842385B (en) * 2023-06-30 2024-03-19 南京理工大学 LSTM road surface unevenness identification method based on tracked vehicle vibration characteristics

Similar Documents

Publication Publication Date Title
CN107492251B (en) Driver identity recognition and driving state monitoring method based on machine learning and deep learning
CN103605362B (en) Based on motor pattern study and the method for detecting abnormality of track of vehicle multiple features
CN105892471B (en) Automatic driving method and apparatus
Weiss et al. Vibration-based terrain classification using support vector machines
US9053433B2 (en) Assisting vehicle guidance over terrain
CN107133974A (en) The vehicle type classification method that Gaussian Background modeling is combined with Recognition with Recurrent Neural Network
CN111598142B (en) Outdoor terrain classification method for wheeled mobile robot
CN108877267A (en) A kind of intersection detection method based on vehicle-mounted monocular camera
CN109886304B (en) HMM-SVM double-layer improved model-based surrounding vehicle behavior recognition method under complex road conditions
Wang et al. Joint deep neural network modelling and statistical analysis on characterizing driving behaviors
CN112734808A (en) Trajectory prediction method for vulnerable road users in vehicle driving environment
Gruner et al. Spatiotemporal representation of driving scenarios and classification using neural networks
CN115294449A (en) Ground identification method for crawler robot
Harkous et al. A two-stage machine learning method for highly-accurate drunk driving detection
CN114882069A (en) Taxi track abnormity detection method based on LSTM network and attention mechanism
CN116738211A (en) Road condition identification method based on multi-source heterogeneous data fusion
Moosavi et al. Driving style representation in convolutional recurrent neural network model of driver identification
Guan et al. Vinet: Visual and inertial-based terrain classification and adaptive navigation over unknown terrain
Sadhukhan et al. Terrain estimation using internal sensors
CN116611603B (en) Vehicle path scheduling method, device, computer and storage medium
Gao et al. Rough set based unstructured road detection through feature learning
CN112319468B (en) Driverless lane keeping method for maintaining road shoulder distance
CN112428939B (en) Driveway keeping induction assembly device for maintaining road shoulder distance
CN112328651B (en) Traffic target identification method based on millimeter wave radar data statistical characteristics
CN112180913A (en) Special vehicle identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination