CN111260057A - Foot type robot terrain sensing method based on virtual sensor - Google Patents

Foot type robot terrain sensing method based on virtual sensor

Info

Publication number
CN111260057A
CN111260057A (application CN202010070559.7A)
Authority
CN
China
Prior art keywords
neural network
detection neural
terrain
machine learning
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010070559.7A
Other languages
Chinese (zh)
Other versions
CN111260057B (en)
Inventor
吴爽
危清清
陈磊
张沛
王储
刘宾
姜水清
李德伦
刘鑫
白美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Spacecraft System Engineering
Original Assignee
Beijing Institute of Spacecraft System Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Spacecraft System Engineering filed Critical Beijing Institute of Spacecraft System Engineering
Priority to CN202010070559.7A
Publication of CN111260057A
Application granted
Publication of CN111260057B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 21/00 — Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B 21/22 — … for measuring angles or tapers; for testing the alignment of axes
    • G01N — INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 33/00 — Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N 33/24 — Earth materials
    • G01P — MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 3/00 — Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G01R — MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 19/00 — Arrangements for measuring currents or voltages or for indicating presence or sign thereof
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/217 — Validation; Performance evaluation; Active pattern learning techniques
    • G06N 20/00 — Machine learning


Abstract

A foot type robot terrain perception method based on a virtual sensor belongs to the robot perception field and comprises the following steps: s1, establishing a touchdown detection neural network model and a soil classification machine learning model; s2, collecting the angle of the leg joint, the angular velocity of the leg joint, the motor current and the contact force data of the leg and the ground of the legged robot as samples under the conditions of different terrains and different states; s3, training a touchdown detection neural network model and a soil classification machine learning model by using the samples collected in S2; and S4, taking the ground contact detection neural network model and the soil classification machine learning model trained in the S3 as a terrain perception system of the legged robot, and using the terrain perception system for terrain perception. The method can improve the walking stability and the movement capability of the robot, and simultaneously enhance the robustness and the behavior reliability of the robot; in addition, the hardware of the robot is simplified, and the design, processing and maintenance cost is reduced.

Description

Foot type robot terrain sensing method based on virtual sensor
Technical Field
The invention relates to a terrain perception method for a legged robot based on a virtual sensor, concerns terrain perception for legged robots operating outdoors, is suitable for the legged robot to acquire internal and external environment information, and belongs to the field of robot perception.
Background
Terrain perception capability is very important for outdoor operation, particularly for deep space exploration robots: complex unstructured geological conditions greatly affect the mobility of a robot, so that it may fail to complete its preset task or even be endangered. The terrain structure and physical characteristics beneath the surface soil are difficult to obtain with a purely vision-based terrain sensing method, so most terrain sensing methods need to install a tactile sensor on the foot of the robot; such a sensor interacts with various terrains and is easily damaged, which reduces the robustness of the system. Moreover, on-orbit faults of a spacecraft are not easy to repair, so a tactile sensing system can seriously affect the reliability of the whole system.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the method overcomes the defects of the prior art, and provides a foot type robot terrain sensing method based on a virtual sensor, which comprises the following steps: s1, establishing a touchdown detection neural network model and a soil classification machine learning model; s2, collecting the angle of the leg joint, the angular velocity of the leg joint, the motor current and the contact force data of the leg and the ground of the legged robot as samples under the conditions of different terrains and different states; s3, training a touchdown detection neural network model and a soil classification machine learning model by using the samples collected in S2; and S4, taking the ground contact detection neural network model and the soil classification machine learning model trained in the S3 as a terrain perception system of the legged robot, and using the terrain perception system for terrain perception. The method can improve the walking stability and the movement capability of the robot, and simultaneously enhance the robustness and the behavior reliability of the robot; in addition, the hardware of the robot is simplified, and the design, processing and maintenance cost is reduced.
The purpose of the invention is realized by the following technical scheme:
a terrain perception method of a foot robot based on a virtual sensor comprises the following steps:
s1, establishing a touchdown detection neural network model and a soil classification machine learning model;
s2, collecting the angle of the leg joint, the angular velocity of the leg joint, the motor current and the contact force data of the leg and the ground of the legged robot as samples under the conditions of different terrains and different states;
s3, training a touchdown detection neural network model and a soil classification machine learning model by using the samples collected in S2;
and S4, taking the ground contact detection neural network model and the soil classification machine learning model trained in the S3 as a terrain perception system of the legged robot, and using the terrain perception system for terrain perception.
In the terrain sensing method for the foot robot based on the virtual sensor, when the touchdown detection neural network model is trained in step S3, the network weights of the touchdown detection neural network model are corrected using both the error between the network output and the expected output and the error between the derivative of the network output with respect to the input and its expected value.
In the terrain perception method for the legged robot based on the virtual sensor, in step S3, discrete wavelet transform is firstly adopted to extract sample data of single-leg joint information of the legged robot, and then a support vector machine classifier is used to classify the extracted single-leg joint information of the legged robot, so as to obtain a trained soil classification machine learning model.
In the terrain sensing method for the foot type robot based on the virtual sensor, the angle of the leg joint, the angular velocity of the leg joint and the motor current of the foot type robot are used as input samples of the touchdown detection neural network model, and the contact force between the leg and the ground is used as an output sample of the touchdown detection neural network model.
According to the terrain sensing method of the legged robot based on the virtual sensor, the angle of the leg joint of the legged robot, the angular velocity of the leg joint, the motor current and the soil sample are used as the samples of the soil classification machine learning model.
A foot type robot terrain perception system based on a virtual sensor comprises a sample acquisition module, an offline learning module and an online application module;
the sample acquisition module is used for acquiring the angle of the leg joint, the angular velocity of the leg joint, the motor current and contact force data of the leg and the ground of the legged robot as samples under the conditions of different terrains and different states;
the offline learning module is used for establishing a touchdown detection neural network model and a soil classification machine learning model, and then training the touchdown detection neural network model and the soil classification machine learning model by using the samples collected by the sample collection module;
the on-line application module utilizes the ground contact detection neural network model and the soil classification machine learning model trained in the off-line learning module to complete terrain perception.
According to the foot type robot terrain perception system based on the virtual sensor, when the offline learning module trains the touchdown detection neural network model, the network weights of the touchdown detection neural network model are corrected using both the error between the network output and the expected output and the error between the derivative of the network output with respect to the input and its expected value.
In the foot type robot terrain perception system based on the virtual sensor, the offline learning module firstly adopts discrete wavelet transform to extract sample data of single-leg joint information of the foot type robot, and then uses a support vector machine classifier to classify the extracted single-leg joint information of the foot type robot, so as to obtain a trained soil classification machine learning model.
In the terrain awareness system for the legged robot based on the virtual sensor, the angle of the leg joint, the angular velocity of the leg joint, and the motor current of the legged robot are used as input samples of the ground contact detection neural network model, and the contact force between the leg and the ground is used as an output sample of the ground contact detection neural network model.
The terrain sensing system of the legged robot based on the virtual sensor takes the angle of the leg joint of the legged robot, the angular velocity of the leg joint, the motor current and the soil sample as the samples of the soil classification machine learning model.
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention provides a terrain perception method of a foot type robot based on a virtual sensor, which can obviously improve the following performances of the foot type robot: the touchdown state of the foot type robot can be predicted so as to improve the walking stability of the robot; the basic features of the terrain can be identified, the soil type is determined, and therefore the terrain adaptability and the movement capability of the robot are improved, and the classification precision of the method on the soil can reach more than 90% through verification;
(2) according to the method, the touchdown state and the topographic characteristics can be estimated only through the leg joint angle, the angular speed and the motor current of the robot, a foot sensor does not need to be additionally designed and installed, the hardware of the robot is simplified, and the design, processing and maintenance costs are reduced. Particularly in deep space exploration application, the carrying pressure of a spacecraft can be effectively reduced, the defect that on-orbit faults are difficult to maintain is overcome, and the reliability of the robot is enhanced;
(3) the method provides a terrain perception machine learning model of a legged robot, establishes a touchdown detection neural network model and a soil classification machine learning model, provides a neural network learning algorithm considering derivative information in consideration of the limited number of samples in a space environment, overcomes the over-fitting problem of small sample learning, and has the advantages of high prediction precision and high dynamic response speed;
(4) the method establishes the touchdown detection neural network model with a neural network learning algorithm that considers derivative information: the network weights are corrected according to the error between the network output and the expected output, and also according to the error between the derivative of the network output with respect to the input and its expected value, thereby reducing over-fitting of the network and improving its generalization capability.
Drawings
FIG. 1 is a model of a touchdown detection neural network architecture;
FIG. 2 is a block diagram of a soil classification machine learning model modeling method;
FIG. 3 is a BP neural network learning training process;
FIG. 4 is a block diagram of a virtual sensor terrain awareness application;
FIG. 5 is a schematic diagram of a one-leg touchdown motion experimental scheme of the hexapod robot;
FIG. 6 is a model of a hexapod robot touchdown detection neural network architecture;
FIG. 7 is a graph of the neural network training output versus the sample output for a centroid height of 0.36m, single step cycle 4s, with soft plastic impact surface material;
FIG. 8 is a graph of the neural network training output versus the sample output for a centroid height of 0.42m, single step cycle 6s, with the impact surface material being aluminum;
FIG. 9 is a graph of the comparison of the predicted results of a virtual sensor with a centroid height of 0.36m, a single step cycle of 8s, and a soft plastic impact surface material with experimental test results;
FIG. 10 is a plot of the predicted results versus experimental test results for a virtual sensor with a centroid height of 0.42m, a single step cycle of 10s, and the impact surface material being aluminum;
FIG. 11 is a classification confusion matrix of four different SVM models (ST-1: aluminum, ST-2: rubber, ST-3: rigid plastic, ST-4: soft plastic), where FIG. 11(a) is an SVM model with a linear kernel function, 11(b) is an SVM model with a quadratic kernel function, 11(c) is an SVM model with a cubic kernel function, and 11(d) is an SVM model with a Gaussian radial basis kernel function.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Example 1:
step (1) modeling of terrain perception machine learning model
The modeling of the terrain awareness machine learning model is mainly divided into two parts: touchdown detection neural network modeling and soil classification machine learning model modeling
1) Touchdown detection neural network modeling
A neural network learning algorithm considering derivative information is used to build a touchdown detection neural network model for one leg of the legged robot so as to realize touchdown detection. The touchdown detection neural network model comprises three layers: an input layer, a hidden layer and an output layer, connected by functional relations. The input signals of the input layer are the joint angles, joint angular velocities and joint motor currents of a single leg. If the number of leg joints is N, the input layer of the touchdown detection neural network model includes 3N nodes. The output signal of the output layer is the contact force between the foot end of the leg and the ground, divided into a normal contact force and a tangential friction force, so the number of output-layer nodes is 2. The number of hidden-layer nodes is generally set to an initial value according to experience, and the final value is then determined by repeated trial calculation. The initial value may be set with reference to the following empirical formula:
n = √(h + m) + a    (1)
wherein n is the number of nodes of the hidden layer, h is the number of nodes of the input layer, m is the number of nodes of the output layer, and a is an adjusting constant between 1 and 10.
Fig. 1 is a built neural network structure model, and mathematical relations among layers are expressed as follows:
net_j = Σ_{i=1}^{3N} v_ij x_i,  j = 1, 2, …, n    (2)
y_j = f_1(net_j),  j = 1, 2, …, n    (3)
net_k = Σ_{j=1}^{n} w_jk y_j,  k = 1, 2    (4)
o_k = f_2(net_k),  k = 1, 2    (5)
where the input vector is X = (x_1, x_2, …, x_i, …, x_3N)^T, with x_i the input of the i-th unit of the input layer; the hidden-layer output vector is Y = (y_1, y_2, …, y_j, …, y_n)^T, with n the number of hidden-layer units and y_j the output of the j-th hidden unit; the output vector of the output layer is O = (o_1, o_2)^T and the expected output vector is d = (d_1, d_2)^T, with o_k and d_k the network output and the expected output of the k-th output unit, respectively; the input-to-hidden weight matrix is V = (V_1, V_2, …, V_j, …, V_n)^T and the hidden-to-output weight matrix is W = (W_1, W_2)^T, with v_ij and w_jk the weights between the input layer and the hidden layer and between the hidden layer and the output layer; net_j and net_k are the input data of the j-th hidden unit and the k-th output unit, respectively; f_1 and f_2 are the transfer functions of the hidden layer and the output layer. Sigmoid-type activation functions are adopted as the transfer functions of the nonlinear network, namely the hyperbolic tangent (bipolar sigmoid) function and the logarithmic (unipolar) sigmoid function:
f(x) = (1 − e^(−x)) / (1 + e^(−x))    (6)
f(x) = 1 / (1 + e^(−x))    (7)
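As an illustration of equations (2)-(7), the following sketch implements the forward pass of such a three-layer network. The assignment of the hyperbolic tangent to the hidden layer and the log-sigmoid to the output layer, the 3-joint leg size and all identifiers are assumptions made for the example, not values taken from the patent.

```python
import numpy as np

def f1(net):
    # hidden-layer transfer function, hyperbolic-tangent (bipolar sigmoid) form, cf. eq. (6)
    return np.tanh(net)

def f2(net):
    # output-layer transfer function, logarithmic (unipolar) sigmoid form, cf. eq. (7)
    return 1.0 / (1.0 + np.exp(-net))

def forward(x, V, W):
    """Forward pass of the touchdown-detection network, eqs. (2)-(5).

    x : input vector of length 3N (joint angles, angular velocities, motor currents)
    V : (3N, n) input-to-hidden weight matrix
    W : (n, 2) hidden-to-output weight matrix
    Returns the hidden outputs y and the network outputs o
    (normal contact force and tangential friction force)."""
    net_j = x @ V          # eq. (2)
    y = f1(net_j)          # eq. (3)
    net_k = y @ W          # eq. (4)
    o = f2(net_k)          # eq. (5)
    return y, o

# Toy usage for a hypothetical 3-joint leg (3N = 9 inputs) with n = 6 hidden nodes.
rng = np.random.default_rng(0)
V = rng.uniform(-1.0, 1.0, (9, 6))
W = rng.uniform(-1.0, 1.0, (6, 2))
_, o = forward(rng.standard_normal(9), V, W)
print("predicted (normalized) contact forces:", o)
```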
2) Soil classification machine learning model modeling
Firstly, Discrete Wavelet Transform (DWT) is adopted to extract features from the sample data of single-leg joint information of the legged robot. Each sample is the single-leg joint information of the legged robot walking one step, comprising the angles, angular velocities and motor currents of all joints. If the number of joints of one leg is N, the joint information of one leg includes 3N quantities. Feature extraction is performed with a Daubechies wavelet function, whose wavelet function ψ(t) and scale function φ(t) have a support region of 2N−1, and whose wavelet function ψ(t) has N vanishing moments.
And then designing a Support Vector Machine (SVM) classifier to classify the single-leg joint features extracted in the previous step. Different kernel functions can be selected to form different support vector machines, such as linear kernel functions, polynomial kernel functions and gaussian kernel functions, and the formulas are respectively as follows:
Linear kernel function:
K(x_i, x_j) = x_i · x_j    (8)
Polynomial kernel function (quadratic: d = 2; cubic: d = 3):
K(x_i, x_j) = [(x_i · x_j) + 1]^d    (9)
Gaussian radial basis kernel function:
K(x_i, x_j) = exp(−γ ‖x_i − x_j‖²),  γ > 0    (10)
A heuristic procedure is used to select the kernel scale. The SVM multi-class classifier is constructed with the one-vs-one method, i.e. multiple binary classifiers are combined to build the multi-class classifier; specifically, one SVM is designed between every pair of classes, so for L classes, L(L−1)/2 SVMs need to be designed. A block diagram of the soil classification machine learning model modeling method is shown in fig. 2.
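For illustration, a minimal scikit-learn sketch of the four kernels and the one-vs-one construction described above might look as follows; the hyper-parameter values (C, gamma, coef0) are assumptions, not values given in the patent.

```python
from sklearn.svm import SVC

# Four SVMs with the kernels named above; SVC builds the L(L-1)/2 pairwise
# (one-vs-one) binary classifiers internally. coef0=1.0 corresponds to the
# "+1" term of the polynomial kernel in eq. (9); C and gamma are illustrative.
svm_linear    = SVC(kernel="linear", C=1.0, decision_function_shape="ovo")
svm_quadratic = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0, decision_function_shape="ovo")
svm_cubic     = SVC(kernel="poly", degree=3, coef0=1.0, C=1.0, decision_function_shape="ovo")
svm_rbf       = SVC(kernel="rbf", gamma="scale", C=1.0, decision_function_shape="ovo")
```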
step (2) sample data collection
Before the machine learning models can be trained, sample data acquisition must first be completed. The method collects the leg joint angles, angular velocities and motor currents of the legged robot, together with the contact force between the leg and the ground, on different terrains and during different walking processes as samples; each walking step is taken as one sample.
For the touchdown detection neural network model, the joint angle, joint angular velocity and joint motor current are used as input samples of the neural network, and the contact force is used as the output sample. The sample data are then divided into a training set, a validation set and a test set. When the data volume is small, the ratio of training, validation and test sets is 6:2:2; when the data volume is very large, the ratio is 98:1:1. Generally, neural networks require normalization of sample data prior to training, i.e. mapping the data into a smaller interval such as [0,1] or [-1,1].
For the soil classification machine learning model, the joint angle, joint angular velocity, joint motor current and the corresponding soil type are taken as samples. The sample data are standardized by the common z-score method, so that the processed data have a mean of 0 and a standard deviation of 1.
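A small sketch of the two preprocessing conventions described above (min-max normalization for the neural network, z-score standardization for the soil classifier) and of the data split; the 6:2:2 split and the function names are only illustrative.

```python
import numpy as np

def minmax_normalize(X):
    """Map each column into [0, 1] before neural-network training."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def zscore(X):
    """Standardize each column to mean 0 and standard deviation 1 (z-score)."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def split_samples(X, Y, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle and split samples into training / validation / test sets."""
    idx = np.random.default_rng(seed).permutation(len(X))
    n_tr = int(ratios[0] * len(X))
    n_va = int(ratios[1] * len(X))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], Y[tr]), (X[va], Y[va]), (X[te], Y[te])
```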
step (3) machine learning model algorithm
The machine learning model algorithm comprises a touchdown detection neural network model algorithm and a soil classification machine learning model algorithm.
1) Touchdown detection neural network model algorithm
The neural network learning algorithm flow described herein that considers derivative information is shown in fig. 3.
Defining the error of the network output from the expected output as:
E_1 = (1/2) Σ_{k=1}^{2} (d_k − o_k)²    (11)
A gradient descent method is introduced to adjust the weights v_ij and w_jk so that E_1 decreases continuously, namely:
w_jk(u+1) = w_jk(u) − η ∂E_1/∂w_jk    (12)
v_ij(u+1) = v_ij(u) − η ∂E_1/∂v_ij    (13)
wherein η represents the learning rate, u represents the iteration number of the network learning training process, and i, j, and k are the node ordinal number of the network input layer, the node ordinal number of the hidden layer, and the node ordinal number of the output layer, respectively.
In addition, as can be seen from equations (2) to (5), the partial derivative matrix of the network output O with respect to the network input X is:
q_ki = ∂o_k/∂x_i = [f_2(net_k)]′ Σ_{j=1}^{n} w_jk [f_1(net_j)]′ v_ij,  k = 1, 2;  i = 1, 2, …, 3N    (14)
where [f_2(net_k)]′ denotes the derivative of f_2(net_k) with respect to net_k and [f_1(net_j)]′ denotes the derivative of f_1(net_j) with respect to net_j; Q = (Q_1, Q_2)^T, and Q_k is the vector of partial derivatives of the k-th output unit with respect to the network input, Q_k = (q_k1, q_k2, …, q_ki, …, q_k,3N).
Let the matrix of expected values of these partial derivatives be T = (T_1, T_2)^T, where T_k is the expected-value vector of the partial derivatives of the k-th output unit with respect to the network input, T_k = (t_k1, t_k2, …, t_ki, …, t_k,3N). The error between the partial derivatives of the network output with respect to the network input and their expected values is then defined as:
E_2 = (1/2) Σ_{k=1}^{2} Σ_{i=1}^{3N} (t_ki − q_ki)²    (15)
Likewise, the weights v_ij and w_jk are adjusted by gradient descent so that E_2 decreases continuously, namely:
w_jk(u+1) = w_jk(u) − η ∂E_2/∂w_jk    (16)
v_ij(u+1) = v_ij(u) − η ∂E_2/∂v_ij    (17)
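To make the derivative-information term concrete, the sketch below computes the network output, the Jacobian Q of eq. (14) and the two errors E_1 (eq. 11) and E_2 (eq. 15) for a single sample. The tanh/log-sigmoid choice and the array shapes are assumptions carried over from the earlier forward-pass sketch; the weights would then be moved along the negative gradients of E_1 and E_2 as in eqs. (12)-(13) and (16)-(17).

```python
import numpy as np

def output_and_jacobian(x, V, W):
    """Forward pass plus the partial-derivative matrix Q = dO/dX of eq. (14)."""
    net_j = x @ V                          # eq. (2)
    y = np.tanh(net_j)                     # eq. (3), f1 = tanh (assumed)
    net_k = y @ W                          # eq. (4)
    o = 1.0 / (1.0 + np.exp(-net_k))       # eq. (5), f2 = log-sigmoid (assumed)
    f1_prime = 1.0 - y ** 2                # [f1(net_j)]'
    f2_prime = o * (1.0 - o)               # [f2(net_k)]'
    # q_ki = f2'(net_k) * sum_j w_jk * f1'(net_j) * v_ij  -> shape (2, 3N)
    Q = (W.T * f2_prime[:, None]) @ (V.T * f1_prime[:, None])
    return o, Q

def errors(o, d, Q, T):
    """Output error E1 (eq. 11) and derivative error E2 (eq. 15)."""
    E1 = 0.5 * np.sum((d - o) ** 2)
    E2 = 0.5 * np.sum((T - Q) ** 2)
    return E1, E2
```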
2) soil classification machine learning model algorithm
During training of the established soil classification model, all feature vectors of the sample data are evaluated with the kernel function to construct a feature space in which the samples are separable. Specifically, the selected classifier is applied to the features of all sample data: the correlation value of each feature vector in each classifier is computed from the chosen kernel function, a covariance matrix space is then computed from these correlation values, a Householder transformation yields the corresponding hyperplane matrix, and the characteristic coefficients are computed to obtain the model parameters, from which classification is performed. Cross validation is adopted during training to prevent over-fitting.
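As an illustration of the soil-classification training with cross validation, a scikit-learn sketch might look as follows; the random placeholder data, the 5-fold split and the cubic kernel choice are assumptions for the example only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder data standing in for the wavelet feature matrix and soil labels.
rng = np.random.default_rng(0)
features = rng.standard_normal((64, 24))   # 64 samples x 24 features (illustrative)
labels = rng.integers(0, 4, size=64)       # four soil classes

clf = SVC(kernel="poly", degree=3, coef0=1.0, decision_function_shape="ovo")
scores = cross_val_score(clf, features, labels, cv=5)   # cross validation against over-fitting
clf.fit(features, labels)                                # final model on all training data
print("cross-validated accuracy:", scores.mean())
```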
step (4) virtual sensor terrain perception system
The virtual sensor calculates the current touchdown state and terrain characteristics of the foot end from the current state of the robot's leg joints, and gathers this information in a terrain perception system. In terrain perception applications the virtual sensor is divided into three parts: sample acquisition, offline learning and online application; a schematic diagram is shown in fig. 4, where a hexapod robot is taken as an example, although the method is suitable for any multi-legged robot. In the sample acquisition part, the designed walking path of the robot leg envelopes as many motion states of the leg joints as possible so as to improve the prediction precision. The offline learning part establishes a terrain perception machine learning model for each foot and uses the sample data of each foot to train the corresponding machine learning model. The online application part applies the trained machine learning models to the terrain perception system of the legged robot and predicts the touchdown state and terrain characteristics online while the robot walks.
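A sketch of one online perception cycle, assuming trained models with simple predict interfaces; the model objects, scaler functions and all names here are hypothetical and not part of the patent.

```python
import numpy as np

def terrain_perception_step(joint_angles, joint_velocities, motor_currents,
                            touchdown_net, soil_classifier, minmax_scale, zscore_scale):
    """One online cycle of the virtual sensor for a single leg.

    Only proprioceptive signals are used; no foot-mounted force sensor is read.
    touchdown_net and soil_classifier are the trained models (hypothetical interfaces)."""
    x = np.concatenate([joint_angles, joint_velocities, motor_currents])
    contact_force = touchdown_net.predict(minmax_scale(x))                   # touchdown detection
    soil_type = soil_classifier.predict(zscore_scale(x).reshape(1, -1))[0]   # terrain class
    return contact_force, soil_type
```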
Example 2:
a terrain perception method of a virtual sensor for a foot robot mainly comprises 4 components: (1) collecting sample data; (2) modeling a terrain perception machine learning model; (3) a machine learning model algorithm; (4) virtual sensor terrain awareness system. The composition of each part is described in detail below.
(1) Sample data collection
In order to verify the effectiveness of the method, a one-leg touchdown motion experiment of the hexapod robot is designed. The schematic diagram of the experimental scheme is shown in fig. 5, a one-dimensional force sensor is installed on the sole of one leg of the robot, and the sampling frequency is 1000 Hz. The experiments respectively carry out the ground contact walking tests of the robot on the ground with different mass center heights and different walking speeds and on the ground made of different materials. Wherein, the single step walking cycle takes: 3s, 4s, 5s, 6s, 7s, 8s, 9s, 10 s; height of center of mass: 0.36m, 0.42 m; ground material: aluminum, rubber, hard plastic, soft plastic. The angle, the angular speed, the motor current and the collision force measured by the force sensor of the joint are collected in the experimental process. The joint angle information is acquired from a joint encoder; the joint angular velocity is obtained by a joint angle difference method; the joint motor adopts a permanent magnet synchronous motor, and a joint current signal is extracted from the permanent magnet synchronous motor. The total number of samples collected in the experiment was 64 groups.
For the touchdown detection neural network model, the input samples are: joint angle, joint angular velocity and joint motor current; the output samples are: collision force. The distribution ratio of the training set, the validation set and the test set is 3:1:1. All sample data are normalized before use; the normalized data lie in the interval [0,1], and the normalization formula is as follows:
X′ = (X − X_min) / (X_max − X_min)    (18)
where X is the original value of the data, X′ is the normalized value, X_min is the minimum value of X, and X_max is the maximum value of X.
For the soil classification machine learning model, samples are joint angles, joint angular velocities, joint motor currents and corresponding ground material types. Carrying out standardization processing on the sample data by adopting z-score, wherein the mean value of the processed data is 0, the standard deviation is 1, and the standardization formula is as follows:
X′ = (X − μ) / σ    (19)
where μ is the sample mean and σ is the sample standard deviation.
(2) Terrain awareness machine learning model modeling
The input layer input signals of the established touchdown detection neural network model are the angles of all joints of a single leg, the angular velocity and the motor current, and the number of the joints of the single leg is 2, so that the number of the nodes of the input layer is 6. The output layer output signal is a one-dimensional collision force, so the number of output layer nodes is 1. The number of hidden layer nodes is preset to 10 according to empirical formula (1). Fig. 6 shows the structural model of the touchdown-detected neural network constructed by this example, and the mathematical relationships among the layers are shown in equations (2) - (7).
The built soil classification machine learning model adopts DWT to perform feature extraction on the sample data of single-leg joint information of the legged robot. Each sample is the single-leg joint information of the legged robot walking one step, comprising the joint angles, angular velocities and motor currents. Since the number of joints of one leg is 2, the joint information of one leg includes 6 quantities. Let the number of samples be S, the duration of one walking step be T and the sampling frequency be 1000 Hz; the size of the extracted feature matrix is then F × S, with F ≤ 1000T × 3N. Feature extraction is carried out with a Daubechies wavelet function, whose wavelet function ψ(t) and scale function φ(t) have a support region of 2N−1, and ψ(t) has 4 vanishing moments. An SVM classifier is then designed to classify the extracted single-leg joint features. A linear kernel function, a quadratic kernel function, a cubic kernel function and a Gaussian radial basis kernel function are selected to form four different SVMs; the kernel function formulas are (8)-(10). A heuristic procedure is used to select the kernel scale, and the SVM multi-class classifier is constructed with the one-vs-one method; since this example contains 4 classes of samples, 6 SVMs need to be designed.
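A minimal PyWavelets sketch of the db4 feature extraction for one walking step; summarizing each sub-band by its energy is an assumption made here, since the patent does not state which statistic forms the feature matrix.

```python
import numpy as np
import pywt

def dwt_features(step_signals, wavelet="db4", level=3):
    """Extract wavelet features from one walking step.

    step_signals: array of shape (samples_per_step, 6) holding the angle,
    angular velocity and motor current of the two joints of one leg."""
    feats = []
    for ch in range(step_signals.shape[1]):
        coeffs = pywt.wavedec(step_signals[:, ch], wavelet, level=level)
        # one energy value per sub-band (illustrative choice of statistic)
        feats.extend(float(np.sum(c ** 2)) for c in coeffs)
    return np.asarray(feats)

# Example: a 4 s step sampled at 1000 Hz gives 4000 samples per channel.
step = np.random.default_rng(0).standard_normal((4000, 6))
print(dwt_features(step).shape)
```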
(3) Machine learning model algorithm
1) The established touchdown detection neural network model is trained according to the neural network learning algorithm considering derivative information; the method is realized by the following steps (a runnable sketch of the whole loop is given after the steps):
① Initialization
The initial weights are set to random numbers in (−1, 1); the learning rate η is set to a fraction in (0, 1), here 0.2; the learning precisions E_1 and E_2 are set to small positive numbers, here 1 × 10⁻⁴; the maximum number of learning iterations is set to 5000.
② Input sample data and calculate the output of each layer
The current sample is used to assign values to the inputs x_i, and equations (2)-(7) are applied to calculate the output of each layer.
③ Calculate the error E_1
The value of E_1 is calculated using equation (11).
④ Modify the weights of each layer so that E_1 decreases
The weights V and W are adjusted using equations (12) and (13).
⑤ Calculate the error E_2
The value of E_2 is calculated using equation (15).
⑥ Modify the weights of each layer so that E_2 decreases
The weights V and W are adjusted using equations (16) and (17).
⑦ Check whether a full pass over all samples has been completed
If not, return to step ②; if yes, go to step ⑧.
⑧ Check whether the network error meets the accuracy requirement or the maximum number of learning iterations has been reached
If not, return to step ②; if yes, training ends.
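A compact, self-contained sketch of steps ①-⑧ with the hyper-parameters quoted above (η = 0.2, precision 1 × 10⁻⁴, at most 5000 iterations); for brevity the gradients of E_1 + E_2 are taken by finite differences instead of the analytic updates of eqs. (12)-(13) and (16)-(17), and the tanh/log-sigmoid transfer functions are assumed as before.

```python
import numpy as np

def total_error(V, W, X, D, T):
    """Sum of E1 (eq. 11) and E2 (eq. 15) over all samples."""
    E = 0.0
    for x, d, t in zip(X, D, T):
        y = np.tanh(x @ V)
        o = 1.0 / (1.0 + np.exp(-(y @ W)))
        Q = (W.T * (o * (1.0 - o))[:, None]) @ (V.T * (1.0 - y ** 2)[:, None])
        E += 0.5 * np.sum((d - o) ** 2) + 0.5 * np.sum((t - Q) ** 2)
    return E

def train(X, D, T, n_hidden=10, eta=0.2, tol=1e-4, max_iter=5000, h=1e-6, seed=0):
    """Steps 1-8: initialize weights in (-1, 1), then descend on E1 + E2."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))   # input-to-hidden weights
    W = rng.uniform(-1.0, 1.0, (n_hidden, D.shape[1]))   # hidden-to-output weights
    for _ in range(max_iter):
        for M in (V, W):                                  # steps 4 and 6: adjust V, then W
            grad = np.zeros_like(M)
            for idx in np.ndindex(M.shape):               # numerical gradient (brevity shortcut)
                old = M[idx]
                M[idx] = old + h; e_plus = total_error(V, W, X, D, T)
                M[idx] = old - h; e_minus = total_error(V, W, X, D, T)
                M[idx] = old
                grad[idx] = (e_plus - e_minus) / (2.0 * h)
            M -= eta * grad                               # in-place gradient-descent update
        if total_error(V, W, X, D, T) < tol:              # step 8: accuracy requirement
            break
    return V, W
```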
And after the neural network training is finished, the training result can be checked. Fig. 7 and 8 are comparison curves of two randomly-drawn sets of network training outputs and sample outputs, which intuitively show that the network training values are well matched with the experimental values.
2) Following the SVM training method, the features of all sample data are classified with the selected classifier: the correlation value of each feature vector in each classifier is computed from the chosen kernel function, a covariance matrix space is then computed from these correlation values, a Householder transformation yields the corresponding hyperplane matrix, and the characteristic coefficients are computed to obtain the model parameters, from which classification is performed.
(4) Virtual sensor terrain perception system
After the touchdown detection neural network model and the soil classification machine learning model have been trained, the two models can be used as virtual sensors and serve as the terrain perception system of the robot for terrain perception. This part tests the generalization ability of the two models with test samples; the 16 groups of test samples are data on which the machine learning models were not trained.
For the touchdown detection neural network model, the normalized joint angle, angular velocity and motor current signals are used as model inputs, and the inverse-normalized output is the predicted impact force. The impact force predicted by the model is compared with the experimentally measured impact force signal to verify the effectiveness of the virtual sensor. Fig. 9 and fig. 10 are comparison curves of two sets of predicted and experimental results; the predicted values agree well with the experimental test values. To evaluate the prediction accuracy, the root mean square error (RMSE) and the mean absolute error (MMAE) of the 16 predicted and experimental values are computed, defined as
RMSE = √( (1/n) Σ_{i=1}^{n} (d_i − o_i)² )    (20)
MMAE = (1/n) Σ_{i=1}^{n} |d_i − o_i|    (21)
where n is the total number of test values, and d_i and o_i are the i-th test value and predicted value, respectively.
From equations (20) and (21), the RMSE of this test is 0.5876 and the MMAE is 0.4525.
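The two error measures of eqs. (20) and (21) translate directly into code (a trivial sketch with illustrative names):

```python
import numpy as np

def rmse(d, o):
    d, o = np.asarray(d), np.asarray(o)
    return float(np.sqrt(np.mean((d - o) ** 2)))   # eq. (20)

def mmae(d, o):
    d, o = np.asarray(d), np.asarray(o)
    return float(np.mean(np.abs(d - o)))            # eq. (21)

# e.g. rmse(test_values, predicted_values), mmae(test_values, predicted_values)
```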
For the soil classification machine learning model, the standardized joint angles, angular velocities and motor currents are used as test samples for classification in order to verify the generalization ability of the model. FIG. 11 shows the classification confusion matrices of the four different SVM models; the results show that the cubic kernel function gives the highest classification accuracy, 92%. Here ST-1: aluminum, ST-2: rubber, ST-3: rigid plastic, ST-4: soft plastic; fig. 11(a) shows the SVM model with a linear kernel function, fig. 11(b) the SVM model with a quadratic kernel function, fig. 11(c) the SVM model with a cubic kernel function, and fig. 11(d) the SVM model with a Gaussian radial basis kernel function.
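A confusion matrix like the one in FIG. 11 can be produced with scikit-learn; the short label lists below are placeholders standing in for the 16 held-out test samples.

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Placeholder ground-truth and predicted soil types for a few test steps.
y_true = ["ST-1", "ST-2", "ST-3", "ST-4", "ST-3", "ST-1"]
y_pred = ["ST-1", "ST-2", "ST-4", "ST-4", "ST-3", "ST-1"]

cm = confusion_matrix(y_true, y_pred, labels=["ST-1", "ST-2", "ST-3", "ST-4"])
print(cm)
print("accuracy:", accuracy_score(y_true, y_pred))
```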
The results show that the established terrain awareness machine learning model has good prediction precision and generalization capability, and the virtual sensor designed by the terrain awareness machine learning model has high precision and reliability, and can be applied to a terrain awareness system of a robot for ground contact detection and terrain identification.
Those skilled in the art will appreciate that those matters not described in detail in the present specification are well known in the art.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make variations and modifications of the present invention without departing from the spirit and scope of the present invention by using the methods and technical contents disclosed above.

Claims (10)

1. A terrain perception method of a foot robot based on a virtual sensor is characterized by comprising the following steps:
s1, establishing a touchdown detection neural network model and a soil classification machine learning model;
s2, collecting the angle of the leg joint, the angular velocity of the leg joint, the motor current and the contact force data of the leg and the ground of the legged robot as samples under the conditions of different terrains and different states;
s3, training a touchdown detection neural network model and a soil classification machine learning model by using the samples collected in S2;
and S4, taking the ground contact detection neural network model and the soil classification machine learning model trained in the S3 as a terrain perception system of the legged robot, and using the terrain perception system for terrain perception.
2. The terrain awareness method for foot-type robots based on virtual sensors as claimed in claim 1, wherein during training of the touchdown detection neural network model in S3, the network weights of the touchdown detection neural network model are corrected using both the error between the network output and the expected output and the error between the derivative of the network output with respect to the input and its expected value.
3. The terrain awareness method for the legged robot based on the virtual sensor as claimed in claim 1 or 2, wherein in S3, discrete wavelet transform is first adopted to extract sample data of single-leg joint information of the legged robot, and then a support vector machine classifier is used to classify the extracted single-leg joint information of the legged robot, so as to obtain a trained soil classification machine learning model.
4. The terrain awareness method for the legged robot based on the virtual sensor as claimed in claim 1 or 2, wherein the angle of the leg joint, the angular velocity of the leg joint, and the motor current of the legged robot are used as input samples of the ground contact detection neural network model, and the contact force between the leg and the ground is used as an output sample of the ground contact detection neural network model.
5. The terrain awareness method for the legged robot based on the virtual sensor as claimed in claim 1 or 2, characterized in that the angle of the leg joint of the legged robot, the angular velocity of the leg joint, the motor current and the soil sample are used as the samples of the soil classification machine learning model.
6. A foot type robot terrain perception system based on a virtual sensor is characterized by comprising a sample acquisition module, an offline learning module and an online application module;
the sample acquisition module is used for acquiring the angle of the leg joint, the angular velocity of the leg joint, the motor current and contact force data of the leg and the ground of the legged robot as samples under the conditions of different terrains and different states;
the offline learning module is used for establishing a touchdown detection neural network model and a soil classification machine learning model, and then training the touchdown detection neural network model and the soil classification machine learning model by using the samples collected by the sample collection module;
the on-line application module utilizes the ground contact detection neural network model and the soil classification machine learning model trained in the off-line learning module to complete terrain perception.
7. The terrain awareness system of the foot robot based on the virtual sensor as claimed in claim 6, wherein when the offline learning module trains the touchdown detection neural network model, the network weights of the touchdown detection neural network model are corrected using both the error between the network output and the expected output and the error between the derivative of the network output with respect to the input and its expected value.
8. The terrain awareness system of the legged robot based on the virtual sensor as claimed in claim 6 or 7, wherein the offline learning module firstly extracts sample data of the single-leg joint information of the legged robot by using discrete wavelet transform, and then classifies the extracted single-leg joint information of the legged robot by using a support vector machine classifier to obtain a trained soil classification machine learning model.
9. The terrain awareness system for the legged robot based on the virtual sensor as claimed in claim 6 or 7, wherein the angle of the leg joint, the angular velocity of the leg joint, and the motor current of the legged robot are used as input samples of the ground contact detection neural network model, and the contact force between the leg and the ground is used as an output sample of the ground contact detection neural network model.
10. The terrain awareness system of the legged robot based on the virtual sensor as claimed in claim 6 or 7, characterized in that the angle of the leg joint of the legged robot, the angular velocity of the leg joint, the motor current and the soil sample are used as the sample of the soil classification machine learning model.
CN202010070559.7A 2020-01-21 2020-01-21 Foot type robot terrain sensing method based on virtual sensor Active CN111260057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010070559.7A CN111260057B (en) 2020-01-21 2020-01-21 Foot type robot terrain sensing method based on virtual sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010070559.7A CN111260057B (en) 2020-01-21 2020-01-21 Foot type robot terrain sensing method based on virtual sensor

Publications (2)

Publication Number Publication Date
CN111260057A true CN111260057A (en) 2020-06-09
CN111260057B CN111260057B (en) 2023-04-07

Family

ID=70949084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010070559.7A Active CN111260057B (en) 2020-01-21 2020-01-21 Foot type robot terrain sensing method based on virtual sensor

Country Status (1)

Country Link
CN (1) CN111260057B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103217237A (en) * 2013-03-18 2013-07-24 哈尔滨工业大学 Hexapod robot leg omni-directional force perception system
US9561592B1 (en) * 2015-05-15 2017-02-07 Google Inc. Ground plane compensation for legged robots
CN108844618A (en) * 2018-06-12 2018-11-20 中国科学技术大学 A kind of landform cognitive method
CN109249429A (en) * 2018-09-25 2019-01-22 安徽果力智能科技有限公司 A kind of biped robot's classification of landform system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FU Huiqiao et al.: "Research on environment-adaptive methods for hexapod robots based on convolutional neural networks", Modern Machinery *
WANG Gang et al.: "CPG-based gait generation method for crab-like robots on complex terrain", Journal of Central South University (Science and Technology) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753924A (en) * 2020-07-03 2020-10-09 济南傲谷智控科技有限公司 Four-footed robot foot end touchdown detection method based on supervised training
CN112256028A (en) * 2020-10-15 2021-01-22 华中科技大学 Method, system, equipment and medium for controlling compliant gait of biped robot
CN112326552A (en) * 2020-10-21 2021-02-05 山东大学 Tunnel block falling disease detection method and system based on vision and force perception
CN112326552B (en) * 2020-10-21 2021-09-07 山东大学 Tunnel block falling disease detection method and system based on vision and force perception
CN112504540A (en) * 2020-11-23 2021-03-16 乐聚(深圳)机器人技术有限公司 Method, system and device for detecting falling feet and main controller
CN113504778B (en) * 2021-07-26 2023-09-19 广东工业大学 Foot-type robot control method, system and equipment based on fusion probability model
CN113504778A (en) * 2021-07-26 2021-10-15 广东工业大学 Foot type robot control method, system and equipment based on fusion probability model
CN113704992A (en) * 2021-08-25 2021-11-26 哈尔滨工业大学 Foot type robot terrain perception and terrain classification method based on foot-ground contact model
WO2023168865A1 (en) * 2022-03-07 2023-09-14 广东博智林机器人有限公司 Motion control method and system for troweling and leveling robot, and troweling and leveling robot
CN115145292A (en) * 2022-06-22 2022-10-04 广东工业大学 Terrain detection method based on joint motion analysis of wheel-foot robot
CN115145292B (en) * 2022-06-22 2024-03-26 广东工业大学 Terrain detection method based on wheel-foot robot joint motion analysis
CN117409264A (en) * 2023-12-16 2024-01-16 武汉理工大学 Multi-sensor data fusion robot terrain sensing method based on transformer
CN117409264B (en) * 2023-12-16 2024-03-08 武汉理工大学 Multi-sensor data fusion robot terrain sensing method based on transformer

Also Published As

Publication number Publication date
CN111260057B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant