CN111260057B - Foot type robot terrain sensing method based on virtual sensor - Google Patents
- Publication number: CN111260057B (application CN202010070559.7A)
- Authority
- CN
- China
- Legal status: Active
Classifications
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G01B21/22 — Measuring arrangements for measuring angles or tapers; for testing the alignment of axes
- G01N33/24 — Investigating or analysing earth materials
- G01P3/00 — Measuring linear or angular speed; measuring differences of linear or angular speeds
- G01R19/00 — Arrangements for measuring currents or voltages or for indicating presence or sign thereof
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/217 — Pattern recognition; validation; performance evaluation; active pattern learning techniques
- G06N20/00 — Machine learning
Abstract
A virtual-sensor-based terrain perception method for legged robots, belonging to the field of robot perception, comprises the following steps: S1, establishing a touchdown detection neural network model and a soil classification machine learning model; S2, collecting, under different terrains and different motion states, the leg joint angles, leg joint angular velocities, motor currents, and leg-ground contact force data of the legged robot as samples; S3, training the touchdown detection neural network model and the soil classification machine learning model with the samples collected in S2; and S4, using the touchdown detection neural network model and the soil classification machine learning model trained in S3 as the terrain perception system of the legged robot. The method improves the walking stability and locomotion capability of the robot while enhancing its robustness and operational reliability; it also simplifies the robot hardware, reducing design, manufacturing, and maintenance costs.
Description
Technical Field
The invention relates to a virtual-sensor-based terrain perception method for legged robots. It concerns terrain perception for legged robots operating outdoors, is suitable for legged robots acquiring internal and external environment information, and belongs to the field of robot perception.
Background
Terrain perception capability is critical for outdoor operation, especially for deep-space exploration robots: complex unstructured geological conditions strongly affect robot mobility, can prevent the robot from completing its preset tasks, and may even endanger it. Purely vision-based terrain perception methods have difficulty obtaining the terrain structure and the physical characteristics beneath the surface soil. Most tactile terrain perception methods require a touch sensor mounted on the robot foot; such a sensor interacts with varied terrain and is easily damaged, reducing system robustness. Moreover, because on-orbit spacecraft faults are difficult to repair, a fragile tactile sensing system can seriously degrade the reliability of the whole system.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide a virtual-sensor-based terrain perception method for legged robots, comprising the following steps: S1, establishing a touchdown detection neural network model and a soil classification machine learning model; S2, collecting, under different terrains and different motion states, the leg joint angles, leg joint angular velocities, motor currents, and leg-ground contact force data of the legged robot as samples; S3, training the touchdown detection neural network model and the soil classification machine learning model with the samples collected in S2; and S4, using the models trained in S3 as the terrain perception system of the legged robot. The method improves the walking stability and locomotion capability of the robot while enhancing its robustness and operational reliability; it also simplifies the robot hardware, reducing design, manufacturing, and maintenance costs.
The purpose of the invention is realized by the following technical scheme:
A terrain perception method for a legged robot based on a virtual sensor comprises the following steps:
S1, establishing a touchdown detection neural network model and a soil classification machine learning model;
S2, collecting, under different terrains and different motion states, the leg joint angles, leg joint angular velocities, motor currents, and leg-ground contact force data of the legged robot as samples;
S3, training the touchdown detection neural network model and the soil classification machine learning model with the samples collected in S2;
and S4, using the touchdown detection neural network model and the soil classification machine learning model trained in S3 as the terrain perception system of the legged robot for terrain perception.
In the above method, when the touchdown detection neural network model is trained in S3, the network weights are corrected using both the error between the network output and the expected output and the error between the partial derivatives of the network output with respect to the input and their expected values.
In S3, discrete wavelet transform is first applied to extract features from the single-leg joint data of the legged robot, and a support vector machine classifier then classifies the extracted features, yielding the trained soil classification machine learning model.
The leg joint angles, leg joint angular velocities, and motor currents of the legged robot serve as input samples of the touchdown detection neural network model, and the leg-ground contact forces serve as its output samples.
The leg joint angles, leg joint angular velocities, motor currents, and corresponding soil types serve as the samples of the soil classification machine learning model.
A virtual-sensor-based terrain perception system for legged robots comprises a sample acquisition module, an offline learning module, and an online application module.
The sample acquisition module collects, under different terrains and different motion states, the leg joint angles, leg joint angular velocities, motor currents, and leg-ground contact force data of the legged robot as samples.
The offline learning module establishes a touchdown detection neural network model and a soil classification machine learning model, then trains both models with the samples collected by the sample acquisition module.
The online application module performs terrain perception with the touchdown detection neural network model and the soil classification machine learning model trained by the offline learning module.
In the system, when the offline learning module trains the touchdown detection neural network model, the network weights are corrected using both the error between the network output and the expected output and the error between the partial derivatives of the network output with respect to the input and their expected values.
The offline learning module first applies discrete wavelet transform to extract features from the single-leg joint data of the legged robot, then classifies the extracted features with a support vector machine classifier to obtain the trained soil classification machine learning model.
The leg joint angles, leg joint angular velocities, and motor currents serve as input samples of the touchdown detection neural network model, and the leg-ground contact forces serve as its output samples.
The leg joint angles, leg joint angular velocities, motor currents, and corresponding soil types serve as the samples of the soil classification machine learning model.
Compared with the prior art, the invention has the following beneficial effects:
(1) The proposed virtual-sensor-based terrain perception method markedly improves the following capabilities of a legged robot: it predicts the touchdown state to improve walking stability, and it identifies basic terrain features and determines the soil type, improving terrain adaptability and locomotion capability. Verification shows that the soil classification accuracy of the method exceeds 90%.
(2) The method estimates the touchdown state and terrain characteristics solely from the leg joint angles, angular velocities, and motor currents of the robot, so no foot sensor needs to be additionally designed and installed; this simplifies the robot hardware and reduces design, manufacturing, and maintenance costs. In deep-space exploration applications in particular, it reduces spacecraft payload, avoids the difficulty of repairing on-orbit faults, and enhances robot reliability.
(3) The method provides a terrain perception machine learning framework for legged robots, consisting of a touchdown detection neural network model and a soil classification machine learning model. Considering the limited number of samples available in a space environment, a neural network learning algorithm that incorporates derivative information is proposed; it alleviates the overfitting problem of small-sample learning and offers high prediction accuracy and fast dynamic response.
(4) In the touchdown detection neural network, the network weights are corrected not only by the error between the network output and the expected output but also by the error in the partial derivatives of the network output with respect to the input, reducing overfitting and improving the generalization capability of the network.
Drawings
FIG. 1 is a model of a touchdown detection neural network architecture;
FIG. 2 is a block diagram of a soil classification machine learning model modeling method;
FIG. 3 is a BP neural network learning training process;
FIG. 4 is a block diagram of a virtual sensor terrain awareness application;
FIG. 5 is a schematic diagram of a one-leg touchdown motion experimental scheme of the hexapod robot;
FIG. 6 is a model of a hexapod robot touchdown detection neural network architecture;
FIG. 7 compares the neural network training output with the sample output for a centroid height of 0.36 m, a single-step period of 4 s, and a soft-plastic contact surface;
FIG. 8 compares the neural network training output with the sample output for a centroid height of 0.42 m, a single-step period of 6 s, and an aluminum contact surface;
FIG. 9 compares the virtual sensor prediction with the experimental test result for a centroid height of 0.36 m, a single-step period of 8 s, and a soft-plastic contact surface;
FIG. 10 compares the virtual sensor prediction with the experimental test result for a centroid height of 0.42 m, a single-step period of 10 s, and an aluminum contact surface;
FIG. 11 is a classification confusion matrix of four different SVM models (ST-1: aluminum, ST-2: rubber, ST-3: rigid plastic, ST-4: soft plastic), where FIG. 11 (a) is an SVM model with a linear kernel function, 11 (b) is an SVM model with a quadratic kernel function, 11 (c) is an SVM model with a cubic kernel function, and 11 (d) is an SVM model with a Gaussian radial basis kernel function.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Example 1:
step (1) modeling of terrain awareness machine learning model
The modeling of the terrain perception machine learning model comprises two parts: touchdown detection neural network modeling and soil classification machine learning model modeling.
1) Touchdown detection neural network modeling
A neural network learning algorithm incorporating derivative information is used to build a touchdown detection neural network model for one leg of the legged robot, so as to realize touchdown detection. The model comprises three layers, an input layer, a hidden layer, and an output layer, connected by functional operation relations. The input signals of the input layer are the joint angles, joint angular velocities, and joint motor currents of a single leg; if the leg has N joints, the input layer has 3N nodes. The output signal of the output layer is the contact force between the foot end and the ground, decomposed into a normal contact force and a tangential friction force, so the output layer has 2 nodes. The number of hidden-layer nodes is generally initialized from experience and finalized by repeated trial and error. The initial value may be set with reference to the empirical formula

n = √(h + m) + a (1)

where n is the number of hidden-layer nodes, h the number of input-layer nodes, m the number of output-layer nodes, and a an adjustment constant between 1 and 10.
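The sizing rule above can be checked numerically. The helper below is an illustrative sketch, not part of the patent, assuming the common empirical form n = √(h + m) + a and the single-leg dimensions used later in the embodiment (6 inputs, 1 output):

```python
import math

def hidden_nodes(h: int, m: int, a: int) -> int:
    """Empirical initial hidden-layer size: n = sqrt(h + m) + a, rounded."""
    return round(math.sqrt(h + m) + a)

# Hexapod single-leg case: 2 joints -> 6 inputs (angle, angular velocity,
# current per joint) and 1 output (collision force); a is chosen in [1, 10].
n = hidden_nodes(h=6, m=1, a=7)
print(n)  # -> 10
```

With a = 7 this reproduces the hidden-layer size of 10 preset in the embodiment below.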
Fig. 1 shows the constructed neural network structure. The mathematical relations between layers are expressed as:

net_j = Σ_{i=1}^{3N} v_{ij} x_i, j = 1, 2, …, n (2)

y_j = f_1(net_j), j = 1, 2, …, n (3)

net_k = Σ_{j=1}^{n} w_{jk} y_j, k = 1, 2 (4)

o_k = f_2(net_k), k = 1, 2 (5)

where the input vector is X = (x_1, x_2, …, x_i, …, x_{3N})^T, with x_i the input of the i-th input-layer unit; the hidden-layer output vector is Y = (y_1, y_2, …, y_j, …, y_n)^T, with n the number of hidden units and y_j the output of the j-th hidden unit; the output vector is O = (o_1, o_2)^T and the expected output vector is d = (d_1, d_2)^T, with o_k and d_k the network output and expected output of the k-th output-layer unit; V = (V_1, V_2, …, V_j, …, V_n)^T is the weight matrix from the input layer to the hidden layer, W = (W_1, W_2)^T the weight matrix from the hidden layer to the output layer, and v_{ij} and w_{jk} the individual weights between the input and hidden layers and between the hidden and output layers; net_j and net_k are the inputs of the j-th hidden unit and the k-th output unit; f_1 and f_2 are the transfer functions of the hidden layer and the output layer. Sigmoid activation functions may be adopted in the nonlinear network, e.g. the hyperbolic tangent function and the logarithmic sigmoid function:

f(net) = (1 − e^{−net}) / (1 + e^{−net}) (6)

f(net) = 1 / (1 + e^{−net}) (7)
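For concreteness, the layer relations (2)-(5) amount to one forward pass. The snippet below is an illustrative NumPy sketch rather than the patent's implementation; tanh and the logistic sigmoid are chosen here as example transfer functions f_1 and f_2, and the weights are random placeholders:

```python
import numpy as np

def forward(X, V, W, f1=np.tanh, f2=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """Three-layer forward pass:
    net_j = sum_i v_ij x_i ; y_j = f1(net_j) ;
    net_k = sum_j w_jk y_j ; o_k = f2(net_k)."""
    net_j = V @ X      # hidden-layer inputs, shape (n,)
    Y = f1(net_j)      # hidden-layer outputs
    net_k = W @ Y      # output-layer inputs, shape (2,)
    return f2(net_k)   # two contact-force outputs, squashed into (0, 1)

rng = np.random.default_rng(0)
X = rng.standard_normal(6)        # 3N inputs for N = 2 joints
V = rng.standard_normal((10, 6))  # input -> hidden weights (n = 10)
W = rng.standard_normal((2, 10))  # hidden -> output weights
O = forward(X, V, W)
```

In practice the raw outputs would be de-normalized back to physical force units.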
2) Soil classification machine learning model modeling
First, the discrete wavelet transform (DWT) is used to extract features from the sample data of the single-leg joint signals of the legged robot. Each sample consists of the single-leg joint signals over one walking step, comprising the angles, angular velocities, and motor currents of all joints; if the leg has N joints, the single-leg joint data comprise 3N channels. Daubechies wavelet functions are used for feature extraction: for a Daubechies wavelet of order p, the support width of the wavelet function ψ(t) and the scaling function φ(t) is 2p − 1, and ψ(t) has p vanishing moments.
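A minimal sketch of the wavelet feature-extraction idea follows. For simplicity it implements one level of the Haar wavelet (db1, the lowest-order Daubechies wavelet) directly in NumPy instead of the higher-order Daubechies wavelet used in the patent, and the channel layout is hypothetical:

```python
import numpy as np

def haar_dwt(signal):
    """One DWT level with the Haar (db1) wavelet: returns approximation
    and detail coefficients of a 1-D signal."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def leg_features(joint_channels):
    """Stack the DWT approximation coefficients of every joint channel
    (3N channels for N joints: angle, angular velocity, current)."""
    return np.concatenate([haar_dwt(ch)[0] for ch in joint_channels])

# 6 synthetic channels of 1000 samples (one walking step at 1000 Hz)
channels = [np.sin(np.linspace(0, 2 * np.pi, 1000)) for _ in range(6)]
feats = leg_features(channels)  # 6 channels x 500 coefficients each
```

A library such as PyWavelets would normally supply the higher-order Daubechies filters and multi-level decomposition.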
A support vector machine (SVM) classifier is then designed to classify the single-leg joint features extracted in the previous step. Different kernel functions yield different support vector machines, e.g. the linear kernel, polynomial kernel, and Gaussian kernel:

Linear kernel:

K(x_i, x_j) = x_i · x_j (8)

Polynomial kernel (quadratic: d = 2; cubic: d = 3):

K(x_i, x_j) = [(x_i · x_j) + 1]^d (9)

Gaussian radial basis kernel:

K(x_i, x_j) = exp(−γ‖x_i − x_j‖²), γ > 0 (10)
The kernel scale is selected by a heuristic procedure. The SVM multi-class classifier is constructed by the one-vs-one method, i.e. by combining multiple binary classifiers: one SVM is trained between every pair of classes, so L(L − 1)/2 SVMs are needed for L classes. A block diagram of the soil classification machine learning model modeling method is shown in Fig. 2.
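The kernel formulas (8)-(10) and the one-vs-one pairing can be sketched as follows. This is illustrative only: γ, d, and the class names are example values, and a real classifier would be trained on top of these kernels (e.g. with scikit-learn's SVC):

```python
import numpy as np
from itertools import combinations

def linear_kernel(xi, xj):
    return float(np.dot(xi, xj))                       # formula (8)

def poly_kernel(xi, xj, d=2):
    return float((np.dot(xi, xj) + 1) ** d)            # formula (9)

def rbf_kernel(xi, xj, gamma=0.5):
    diff = np.asarray(xi, float) - np.asarray(xj, float)
    return float(np.exp(-gamma * np.dot(diff, diff)))  # formula (10): x_i - x_j

# One-vs-one: one binary SVM per unordered pair of classes.
classes = ["aluminum", "rubber", "rigid plastic", "soft plastic"]
pairs = list(combinations(classes, 2))
print(len(pairs))  # -> 6, i.e. L(L-1)/2 classifiers for L = 4 soil types
```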
Step (2): sample data collection.
Before the machine learning models can be trained, sample data must be collected. The leg joint angles, angular velocities, and motor currents of the legged robot, together with the leg-ground contact forces, are collected on different terrains and during different walking processes; each walking step constitutes one sample.
For the touchdown detection neural network model, the joint angles, joint angular velocities, and joint motor currents serve as input samples, and the contact forces serve as output samples. The sample data are then divided into a training set, a validation set, and a test set. When the data volume is small, a training:validation:test ratio of 6:2:2 is typical; when the data volume is very large, a ratio such as 98:1:1 may be used. Neural networks generally require the sample data to be normalized before training, i.e. mapped into a small interval such as [0, 1] or [−1, 1].
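A hedged sketch of the split and [0, 1] normalization described above; the 6:2:2 proportions and the per-column handling are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def minmax_normalize(X):
    """Map each column of X into [0, 1]: X' = (X - Xmin) / (Xmax - Xmin)."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def split(data, ratios=(0.6, 0.2, 0.2)):
    """Split samples into train / validation / test in the given proportions."""
    n = len(data)
    a = int(n * ratios[0])
    b = a + int(n * ratios[1])
    return data[:a], data[a:b], data[b:]

X = np.arange(20.0).reshape(10, 2)   # 10 toy samples, 2 features
train, val, test = split(minmax_normalize(X))
```

In practice the normalization bounds would be computed on the training set only and reused for validation and test data.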
For the soil classification machine learning model, the joint angles, joint angular velocities, joint motor currents, and corresponding soil types serve as samples. The sample data are standardized by the common z-score method, after which the data have a mean of 0 and a standard deviation of 1.
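The z-score standardization can be sketched as follows (per-feature-column standardization is assumed):

```python
import numpy as np

def zscore(X):
    """z-score standardization: X' = (X - mu) / sigma, per feature column."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

Z = zscore(np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]))
```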
Step (3): machine learning model algorithms.
The machine learning model algorithm comprises a touchdown detection neural network model algorithm and a soil classification machine learning model algorithm.
1) Touchdown detection neural network model algorithm
The flow of the neural network learning algorithm incorporating derivative information is shown in Fig. 3.
The error between the network output and the expected output is defined as

E_1 = (1/2) Σ_{k=1}^{2} (d_k − o_k)² (11)

Gradient descent is introduced to adjust the weights v_{ij} and w_{jk} so that E_1 decreases continuously:

v_{ij}(u+1) = v_{ij}(u) − η ∂E_1/∂v_{ij} (12)

w_{jk}(u+1) = w_{jk}(u) − η ∂E_1/∂w_{jk} (13)

where η is the learning rate, u the iteration index of the training process, and i, j, k the node indices of the input layer, hidden layer, and output layer respectively.
In addition, from equations (2)-(5), the matrix Q of partial derivatives of the network output O with respect to the network input X has entries

q_{ki} = ∂o_k/∂x_i = [f_2(net_k)]′ Σ_{j=1}^{n} w_{jk} [f_1(net_j)]′ v_{ij} (14)

where [f_2(net_k)]′ denotes the derivative of f_2(net_k) with respect to net_k, [f_1(net_j)]′ the derivative of f_1(net_j) with respect to net_j, Q = (Q_1, Q_2)^T, and Q_k = (q_{k1}, q_{k2}, …, q_{ki}, …, q_{k,3N}) is the vector of partial derivatives of the k-th output unit with respect to the network input.
Let the matrix of expected values of these partial derivatives be T = (T_1, T_2)^T, where T_k = (t_{k1}, t_{k2}, …, t_{ki}, …, t_{k,3N}) is the vector of expected partial derivatives for the k-th output unit. The error between the partial derivatives of the network output and their expected values is then defined as

E_2 = (1/2) Σ_{k=1}^{2} Σ_{i=1}^{3N} (t_{ki} − q_{ki})² (15)
also, the weight v is weighted by gradient descent ij And w jk Making an adjustment to E 2 And continuously reducing, namely:
2) Soil classification machine learning model algorithm
During training of the established soil classification model, the feature vectors of all sample data are evaluated through the kernel function to construct a feature space in which the samples are separable. Specifically: the selected classifier classifies the features of all sample data; the correlation value of each feature vector in each binary classifier is computed from the chosen kernel function; a covariance matrix is computed from these correlation values; a Householder transformation yields the corresponding hyperplane matrix; and the characteristic coefficients are computed to obtain the model parameters, according to which classification is performed. Cross-validation is used during training to prevent overfitting.
Step (4): the virtual sensor terrain perception system.
The virtual sensor computes the current touchdown state and terrain characteristics of the foot end from the current state of the robot leg joints and feeds this information into the terrain perception system. In terrain perception applications, the virtual sensor comprises three parts: sample acquisition, offline learning, and online application; its schematic diagram is shown in Fig. 4. A hexapod robot is taken as the example, but the method applies to any multi-legged robot. In the sample acquisition part, the walking trajectory of the robot leg is designed to cover as many joint motion states as possible so as to improve prediction accuracy. In the offline learning part, a terrain perception machine learning model is established for each foot, and the sample data of each foot are used to train the corresponding model. In the online application part, the trained machine learning models are deployed in the terrain perception system of the legged robot to predict online the touchdown state and terrain characteristics during walking.
Example 2:
The terrain perception method of the virtual sensor for a legged robot comprises four parts: (1) sample data collection; (2) terrain perception machine learning model modeling; (3) machine learning model algorithms; (4) the virtual sensor terrain perception system. Each part is described in detail below.
(1) Sample data collection
To verify the effectiveness of the method, a single-leg touchdown motion experiment was designed for the hexapod robot. The experimental scheme is shown in Fig. 5: a one-dimensional force sensor is mounted on the sole of one leg, with a sampling frequency of 1000 Hz. Touchdown walking tests were performed at different centroid heights, different walking speeds, and on ground of different materials. Single-step periods: 3 s, 4 s, 5 s, 6 s, 7 s, 8 s, 9 s, 10 s; centroid heights: 0.36 m, 0.42 m; ground materials: aluminum, rubber, hard plastic, soft plastic. During the experiments, the joint angles, angular velocities, motor currents, and the collision force measured by the force sensor were recorded. Joint angles were acquired from the joint encoders; joint angular velocities were obtained by differencing the joint angles; the joint motors are permanent magnet synchronous motors, from which the joint current signals were extracted. A total of 64 sample groups were collected.
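The angle-difference method for obtaining joint angular velocity can be sketched as follows; the 1000 Hz rate matches the experiment, but the signal here is synthetic:

```python
import numpy as np

def angular_velocity(theta, fs=1000.0):
    """Estimate joint angular velocity from encoder angles by first-order
    differencing: omega[i] = (theta[i+1] - theta[i]) * fs."""
    theta = np.asarray(theta, dtype=float)
    return np.diff(theta) * fs

t = np.arange(0, 1, 1e-3)       # 1 s at 1000 Hz
theta = 0.5 * t                  # constant 0.5 rad/s ramp
omega = angular_velocity(theta)  # recovers 0.5 rad/s at every step
```

A real pipeline would typically low-pass filter the differenced signal, since differencing amplifies encoder quantization noise.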
For the touchdown detection neural network model, the input samples are the joint angle, joint angular velocity and joint motor current, and the output sample is the collision force. The distribution ratio of the training set, validation set and test set is 3. All sample data are normalized before use so that the normalized data lie in the interval [0, 1]; the normalization formula is:
X' = (X - X_min)/(X_max - X_min) (18)
where X is the original data value, X' is the normalized value, X_min is the minimum of X, and X_max is the maximum of X.
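As an illustration, the min-max normalization above can be sketched as follows (a minimal example; the `min_max_normalize` helper and the sample values are hypothetical, not from the patent):

```python
import numpy as np

def min_max_normalize(x):
    """Scale a 1-D signal into [0, 1]: X' = (X - Xmin) / (Xmax - Xmin)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

# Hypothetical joint-angle trace (radians)
angles = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
normalized = min_max_normalize(angles)
print(normalized)  # first element 0.0, last element 1.0
```

Each joint signal (angle, angular velocity, current) would be normalized independently with its own minimum and maximum before being fed to the network.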
For the soil classification machine learning model, the samples are the joint angles, joint angular velocities, joint motor currents and the corresponding ground material types. The sample data are standardized using the z-score, so that the processed data have mean 0 and standard deviation 1; the standardization formula is:
X' = (X - μ)/σ (19)
where μ is the sample mean and σ is the sample standard deviation.
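A z-score standardization per equation (19) can be sketched as follows (a minimal example; the `z_score` helper and the current values are hypothetical):

```python
import numpy as np

def z_score(x):
    """Standardize per equation (19): X' = (X - mu) / sigma.
    The result has mean 0 and standard deviation 1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical motor-current samples (A)
currents = np.array([1.2, 1.5, 1.1, 1.8, 1.4])
standardized = z_score(currents)
print(standardized.mean(), standardized.std())  # ~0.0, ~1.0
```

Note that, unlike min-max normalization, the z-score is unbounded, which suits the SVM-based soil classifier better than the sigmoid-activated network.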
(2) Terrain awareness machine learning model modeling
The input-layer signals of the established touchdown detection neural network model are the angles, angular velocities and motor currents of all joints of a single leg; since a single leg has 2 joints, the input layer has 6 nodes. The output signal is the one-dimensional collision force, so the output layer has 1 node. The number of hidden-layer nodes is preset to 10 according to empirical formula (1). Fig. 6 shows the structure of the touchdown detection neural network constructed in this example; the mathematical relationships between the layers are given by formulas (2)-(7).
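The 6-10-1 structure described above can be sketched as follows. This is a sketch only: the sigmoid hidden layer and linear output are assumptions, since the actual layer relationships are given by formulas (2)-(7), which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 6 inputs (2 joints x angle/velocity/current),
# 10 hidden nodes, 1 output (collision force).
N_IN, N_HID, N_OUT = 6, 10, 1

V = rng.uniform(-1, 1, size=(N_HID, N_IN))   # input -> hidden weights
W = rng.uniform(-1, 1, size=(N_OUT, N_HID))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Forward pass; sigmoid hidden layer and linear output are assumptions."""
    h = sigmoid(V @ x)
    return W @ h

x = rng.uniform(0, 1, size=N_IN)   # one normalized input sample
force = forward(x)                 # predicted (normalized) collision force
print(force.shape)                 # (1,)
```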
The soil classification machine learning model uses the discrete wavelet transform (DWT) to extract features from the sample data of the single-leg joint information of the legged robot. Each sample is the single-leg joint information of one walking step, comprising the joint angles, angular velocities and motor currents. Since one leg has 2 joints, the single-leg joint information comprises 6 quantities. With S samples, a single-step walking time T and a sampling frequency of 1000 Hz, the extracted feature matrix has size F × S, where F ≤ 1000T × 3N. Feature extraction uses Daubechies wavelet functions; the support of the wavelet function ψ(t) and the scale function φ(t) is 2N−1, and ψ(t) has 4 vanishing moments. An SVM classifier is then designed to classify the extracted single-leg joint features. A linear kernel, a quadratic kernel, a cubic kernel and a Gaussian radial basis kernel are selected to form four different SVMs; the kernel function formulas are (8)-(10). A heuristic procedure selects the kernel scale, and the multi-class SVM classifier is constructed by the one-vs-one method; since this example contains 4 classes of samples, 6 binary SVMs need to be designed.
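The feature-extraction and one-vs-one pairing steps can be sketched as follows. Assumptions to note: a single-level Haar DWT is used here as a simplified stand-in for the Daubechies wavelet named in the text, and the signals are random placeholders for the real joint data.

```python
import numpy as np
from itertools import combinations

def haar_dwt(signal):
    """Single-level Haar DWT approximation (a simplified stand-in for the
    Daubechies wavelet used in the text). Halves the signal length."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]                 # drop an odd trailing sample
    return (s[0::2] + s[1::2]) / np.sqrt(2.0)

def extract_features(joint_signals):
    """Stack DWT coefficients of the 6 joint signals into one feature vector."""
    return np.concatenate([haar_dwt(sig) for sig in joint_signals])

# One hypothetical sample: 6 joint signals (2 joints x angle/velocity/current),
# 1000 points each (one second of walking at 1000 Hz)
rng = np.random.default_rng(1)
sample = rng.standard_normal((6, 1000))
features = extract_features(sample)
print(features.shape)   # (3000,): 500 coefficients per signal x 6 signals

# One-vs-one over the 4 ground classes needs C(4,2) = 6 binary SVMs
classes = ["aluminum", "rubber", "hard plastic", "soft plastic"]
pairs = list(combinations(classes, 2))
print(len(pairs))  # 6
```

This illustrates why 6 binary SVMs are needed: the one-vs-one scheme trains one classifier per unordered pair of classes, and predictions are combined by majority vote.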
(3) Machine learning model algorithm
1) The established touchdown detection neural network model is trained with the neural network learning algorithm that considers derivative information; the specific steps are:
(1) Initialization: set the initial weights to random numbers in (-1, 1); set the learning rate η to a value in (0, 1), here 0.2; set the learning accuracies E1 and E2 to small positive numbers, here 1 × 10^-4; set the maximum number of learning iterations to 5000.
(2) Input sample data and compute the output of each layer: assign the current sample to the input quantity x_i and compute each component using formulas (2)-(7).
(3) Compute the error E1: apply formula (11) to obtain the value of E1.
(4) Modify the weights of each layer to reduce E1: adjust the weights V and W using formulas (12) and (13).
(5) Compute the error E2: apply formula (15) to obtain the value of E2.
(6) Modify the weights of each layer to reduce E2: adjust the weights V and W using formulas (16) and (17).
(7) Check whether all samples have been trained once: if not, return to step (2); if so, go to step (8).
(8) Check whether the network error meets the accuracy requirement or the network has reached the maximum number of learning iterations: if not, return to step (2); if so, training is complete.
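The training loop above can be sketched as follows. Assumptions to note: only the standard squared-error term (E1) and plain backpropagation are shown; the derivative-information term E2 and its updates (formulas (15)-(17)) are omitted because those formulas are not reproduced here, and the training data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 6, 10, 1
ETA, EPS, MAX_EPOCHS = 0.2, 1e-4, 5000   # learning rate, accuracy, cap from step (1)

V = rng.uniform(-1, 1, (N_HID, N_IN))    # step (1): weights random in (-1, 1)
W = rng.uniform(-1, 1, (N_OUT, N_HID))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data standing in for the normalized experimental samples
X = rng.uniform(0, 1, (20, N_IN))
Y = X.mean(axis=1, keepdims=True)        # a learnable placeholder target

for epoch in range(MAX_EPOCHS):          # step (8): iteration cap
    total_err = 0.0
    for x, y in zip(X, Y):               # step (7): every sample once per epoch
        h = sigmoid(V @ x)               # step (2): forward pass
        o = W @ h
        e = o - y                        # step (3): output error (E1 term)
        total_err += 0.5 * float(e @ e)
        W -= ETA * np.outer(e, h)        # step (4): hidden -> output update
        delta_h = (W.T @ e) * h * (1.0 - h)
        V -= ETA * np.outer(delta_h, x)  # step (4): input -> hidden update
    if total_err < EPS:                  # step (8): accuracy check
        break
```

In the patent's algorithm, steps (5) and (6) would add a second pass per sample that also penalizes the error in the derivative of the network output with respect to the input.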
After the neural network training is complete, the training result can be checked. Figs. 7 and 8 show comparison curves of two randomly drawn sets of network training outputs against sample outputs; they show intuitively that the network training values match the experimental values well.
2) The SVM training method proceeds as follows: the features of all sample data are classified with the selected classifiers; in each classifier, the correlation value of each eigenvector is computed with the selected kernel function; a covariance matrix space is then computed from the correlation values, and a Householder transformation is applied to obtain the corresponding hyperplane matrix; the characteristic coefficients are computed to obtain the model parameters; and classification is carried out according to these model parameters.
(4) Virtual sensor terrain perception system
After the touchdown detection neural network model and the soil classification machine learning model have been trained, the two models can serve as virtual sensors and as the terrain perception system of the robot. This part tests the generalization ability of the two models with 16 groups of test samples, which are data the machine learning models have not been trained on.
For the touchdown detection neural network model, the normalized joint angle, angular velocity and motor current signals are used as model inputs, and the denormalized output is the predicted impact force. The impact force predicted by the model is compared with the experimentally measured impact force signal to verify the effectiveness of the virtual sensor. Figs. 9 and 10 show comparison curves of two sets of predicted and experimental results; the predicted values agree well with the experimental test values. To evaluate the prediction accuracy, the root mean square error (RMSE) and mean absolute error (MAE) of the 16 sets of predicted and experimental values are computed, defined as
RMSE = sqrt( (1/n) Σ_{i=1..n} (d_i - o_i)^2 ) (20)
MAE = (1/n) Σ_{i=1..n} |d_i - o_i| (21)
where n is the total number of test values, and d_i and o_i are the ith test value and predicted value, respectively.
From equations (20) and (21), the RMSE of this test is 0.5876 and the MAE is 0.4525.
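The two error metrics of equations (20) and (21) can be computed as follows (the force values shown are hypothetical, not the patent's 16 experimental groups):

```python
import numpy as np

def rmse(d, o):
    """Root mean square error per equation (20)."""
    d, o = np.asarray(d, float), np.asarray(o, float)
    return float(np.sqrt(np.mean((d - o) ** 2)))

def mae(d, o):
    """Mean absolute error per equation (21)."""
    d, o = np.asarray(d, float), np.asarray(o, float)
    return float(np.mean(np.abs(d - o)))

# Hypothetical measured (d_i) vs predicted (o_i) impact forces (N)
measured  = [10.2, 11.5,  9.8, 12.1]
predicted = [10.0, 11.9,  9.5, 12.4]
print(rmse(measured, predicted))  # ~0.308
print(mae(measured, predicted))   # 0.3
```

As here, MAE never exceeds RMSE for the same data, consistent with the reported values 0.4525 and 0.5876.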
For the soil classification machine learning model, the standardized joint angle, angular velocity and motor current are used as test samples for classification, so as to verify the generalization ability of the model. Fig. 11 shows the classification confusion matrices of the four different SVM models. The cubic kernel gives the highest classification accuracy, 92%. Here ST-1: aluminum; ST-2: rubber; ST-3: hard plastic; ST-4: soft plastic. Fig. 11(a) shows the SVM model with the linear kernel, fig. 11(b) the quadratic kernel, fig. 11(c) the cubic kernel, and fig. 11(d) the Gaussian radial basis kernel.
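How a confusion matrix like those in fig. 11 yields a single accuracy figure can be sketched as follows (the 16 labels and predictions are hypothetical, not the patent's test results):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """Fraction of samples on the diagonal (correctly classified)."""
    return cm.trace() / cm.sum()

# Hypothetical predictions over 16 test samples; classes 0..3 stand for ST-1..ST-4
y_true = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 2, 3, 3, 3, 3]
cm = confusion_matrix(y_true, y_pred, 4)
print(accuracy(cm))   # 15/16 = 0.9375
```

Off-diagonal entries identify which soil types the classifier confuses; here the one error mistakes ST-3 (hard plastic) for ST-4 (soft plastic).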
The results show that the established terrain perception machine learning models have good prediction accuracy and generalization ability; the virtual sensor designed from them has high accuracy and reliability and can be applied in the terrain perception system of a robot for ground contact detection and terrain identification.
Matters not described in detail in this specification are well known to those skilled in the art.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make variations and modifications of the present invention without departing from the spirit and scope of the present invention by using the methods and technical contents disclosed above.
Claims (10)
1. A terrain perception method of a foot robot based on a virtual sensor is characterized by comprising the following steps:
s1, establishing a touchdown detection neural network model and a soil classification machine learning model;
s2, collecting the angle of a leg joint, the angular velocity of the leg joint, the motor current and contact force data of the leg and the ground of the legged robot as samples under the conditions of different terrains and different states;
s3, training a touchdown detection neural network model and a soil classification machine learning model by using the samples collected in the S2;
and S4, taking the ground contact detection neural network model and the soil classification machine learning model trained in the S3 as a terrain perception system of the legged robot for terrain perception.
2. The terrain awareness method for the legged robot based on the virtual sensor as claimed in claim 1, wherein in S3, when training the touchdown detection neural network model, the error between the network output and the expected output and the error between the derivative of the network output and the expected output to the input are used to modify the network weights of the touchdown detection neural network model.
3. The terrain awareness method for the legged robot based on the virtual sensor according to claim 1 or 2, characterized in that in S3, discrete wavelet transform is first adopted to extract features from the sample data of the single-leg joint information of the legged robot, and a support vector machine classifier is then used to classify the extracted single-leg joint features, so as to obtain a trained soil classification machine learning model.
4. The terrain awareness method for the legged robot based on the virtual sensor as claimed in claim 1 or 2, wherein the angle of the leg joint, the angular velocity of the leg joint, and the motor current of the legged robot are used as input samples of the ground contact detection neural network model, and the contact force between the leg and the ground is used as an output sample of the ground contact detection neural network model.
5. The terrain awareness method for the legged robot based on the virtual sensor as claimed in claim 1 or 2, characterized in that the angle of the leg joint of the legged robot, the angular velocity of the leg joint, the motor current and the soil sample are used as the samples of the soil classification machine learning model.
6. A foot type robot terrain perception system based on a virtual sensor is characterized by comprising a sample acquisition module, an offline learning module and an online application module;
the sample acquisition module is used for acquiring the angle of the leg joint, the angular velocity of the leg joint, the motor current and the contact force data of the leg and the ground of the legged robot as samples under different terrains and different dynamic conditions;
the offline learning module is used for establishing a touchdown detection neural network model and a soil classification machine learning model, and then training the touchdown detection neural network model and the soil classification machine learning model by using the samples collected by the sample collection module;
the on-line application module utilizes the ground contact detection neural network model and the soil classification machine learning model trained in the off-line learning module to complete terrain perception.
7. The terrain awareness system of foot robot based on virtual sensor as claimed in claim 6, wherein the offline learning module is configured to train the touchdown detection neural network model while correcting the network weights of the touchdown detection neural network model by using the error of the network output from the expected output and the error of the derivative of the network output from the expected output to the input.
8. The terrain awareness system of the legged robot based on the virtual sensor as claimed in claim 6 or 7, wherein the offline learning module first adopts discrete wavelet transform to extract features from the sample data of the single-leg joint information of the legged robot, and then classifies the extracted single-leg joint features with a support vector machine classifier to obtain a trained soil classification machine learning model.
9. The terrain awareness system for the legged robot based on the virtual sensor according to claim 6 or 7, characterized in that the angle of the leg joint, the angular velocity of the leg joint, and the motor current of the legged robot are used as input samples of the ground contact detection neural network model, and the contact force between the leg and the ground is used as an output sample of the ground contact detection neural network model.
10. The terrain awareness system of the legged robot based on the virtual sensor as claimed in claim 6 or 7, characterized in that the angle of the leg joint of the legged robot, the angular velocity of the leg joint, the motor current and the soil sample are used as the sample of the soil classification machine learning model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010070559.7A CN111260057B (en) | 2020-01-21 | 2020-01-21 | Foot type robot terrain sensing method based on virtual sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111260057A CN111260057A (en) | 2020-06-09 |
CN111260057B true CN111260057B (en) | 2023-04-07 |
Family
ID=70949084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010070559.7A Active CN111260057B (en) | 2020-01-21 | 2020-01-21 | Foot type robot terrain sensing method based on virtual sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111260057B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111753924A (en) * | 2020-07-03 | 2020-10-09 | 济南傲谷智控科技有限公司 | Four-footed robot foot end touchdown detection method based on supervised training |
CN112256028B (en) * | 2020-10-15 | 2021-11-19 | 华中科技大学 | Method, system, equipment and medium for controlling compliant gait of biped robot |
CN112326552B (en) * | 2020-10-21 | 2021-09-07 | 山东大学 | Tunnel block falling disease detection method and system based on vision and force perception |
CN112504540B (en) * | 2020-11-23 | 2021-10-08 | 乐聚(深圳)机器人技术有限公司 | Method, system and device for detecting falling feet and main controller |
CN113504778B (en) * | 2021-07-26 | 2023-09-19 | 广东工业大学 | Foot-type robot control method, system and equipment based on fusion probability model |
CN113704992A (en) * | 2021-08-25 | 2021-11-26 | 哈尔滨工业大学 | Foot type robot terrain perception and terrain classification method based on foot-ground contact model |
CN116766214A (en) * | 2022-03-07 | 2023-09-19 | 广东博智林机器人有限公司 | Motion control method and system of trowelling robot and trowelling robot |
CN115145292B (en) * | 2022-06-22 | 2024-03-26 | 广东工业大学 | Terrain detection method based on wheel-foot robot joint motion analysis |
CN117409264B (en) * | 2023-12-16 | 2024-03-08 | 武汉理工大学 | Multi-sensor data fusion robot terrain sensing method based on transformer |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103217237A (en) * | 2013-03-18 | 2013-07-24 | 哈尔滨工业大学 | Hexapod robot leg omni-directional force perception system |
US9561592B1 (en) * | 2015-05-15 | 2017-02-07 | Google Inc. | Ground plane compensation for legged robots |
CN108844618B (en) * | 2018-06-12 | 2019-07-23 | 中国科学技术大学 | A kind of landform cognitive method |
CN109249429B (en) * | 2018-09-25 | 2019-10-01 | 安徽果力智能科技有限公司 | A kind of biped robot's classification of landform system |
- 2020-01-21 CN CN202010070559.7A patent/CN111260057B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111260057B (en) | Foot type robot terrain sensing method based on virtual sensor | |
CN102930302B (en) | Based on the incrementally Human bodys' response method of online sequential extreme learning machine | |
Batool et al. | Sensors technologies for human activity analysis based on SVM optimized by PSO algorithm | |
CN110321833B (en) | Human body behavior identification method based on convolutional neural network and cyclic neural network | |
Liu et al. | Tactile image based contact shape recognition using neural network | |
CN103065158B (en) | The behavior recognition methods of the ISA model based on relative gradient | |
CN105740823A (en) | Dynamic gesture trace recognition method based on depth convolution neural network | |
CN107657204A (en) | The construction method and facial expression recognizing method and system of deep layer network model | |
CN105741267A (en) | Multi-source image change detection method based on clustering guided deep neural network classification | |
CN109447128B (en) | Micro-inertia technology-based walking and stepping in-place movement classification method and system | |
CN106326843B (en) | A kind of face identification method | |
CN109979161A (en) | A kind of tumble detection method for human body based on convolution loop neural network | |
CN105320937A (en) | Kinect based traffic police gesture recognition method | |
CN107563308A (en) | SLAM closed loop detection methods based on particle swarm optimization algorithm | |
CN107679516A (en) | Lower extremity movement recognition methods based on multiple dimensioned Gauss Markov random field model | |
Wang et al. | A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu | |
CN111582361A (en) | Human behavior recognition method based on inertial sensor | |
CN113609999B (en) | Human body model building method based on gesture recognition | |
CN106485750A (en) | A kind of estimation method of human posture based on supervision Local Subspace | |
CN108520205B (en) | motion-KNN-based human body motion recognition method | |
CN112257817B (en) | Geological geology online semantic recognition method and device and electronic equipment | |
Chao et al. | Structural feature representation and fusion of human spatial cooperative motion for action recognition | |
CN111553954A (en) | Direct method monocular SLAM-based online luminosity calibration method | |
CN116206728A (en) | Rehabilitation training method and system based on sensor fusion and transfer learning | |
Endres et al. | Graph-based action models for human motion classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||