CN110852158A - Radar human motion state classification algorithm and system based on model fusion

Radar human motion state classification algorithm and system based on model fusion

Info

Publication number
CN110852158A
CN110852158A (application CN201910942378.6A)
Authority
CN
China
Prior art keywords: support vector, vector machine, model, machine model, obtaining
Prior art date
Legal status: Granted
Application number
CN201910942378.6A
Other languages: Chinese (zh)
Other versions: CN110852158B (en)
Inventor
包敏
邢汉桐
史林
邢孟道
宋源
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910942378.6A
Publication of CN110852158A
Application granted
Publication of CN110852158B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/243: Classification techniques relating to the number of classes
    • G06F 18/24323: Tree-organised classifiers
    • G06F 18/25: Fusion techniques
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of radar, and particularly relates to a radar human motion state classification algorithm and system based on model fusion, wherein the method comprises the following steps: obtaining a training set; constructing a support vector machine model according to the training set; obtaining a predicted value of the support vector machine model according to the support vector machine model; and constructing an extreme gradient lifting tree model according to the predicted value of the support vector machine model. The support vector machine model and the extreme gradient lifting tree model are fused through a stacking model fusion method: the support vector machine model is well suited to high-dimensional, small-sample data, the extreme gradient lifting tree model has strong fitting capability, and the fused model combines the advantages of both, so it achieves higher generalization capability and recognition accuracy while avoiding the long training times required by deep learning models.

Description

Radar human motion state classification algorithm and system based on model fusion
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a radar human motion state classification algorithm and system based on model fusion.
Background
Radar has clear advantages over other sensors: optical sensors are easily affected by weather and lighting conditions, whereas radar can operate around the clock in all weather. In addition, radar has a certain penetrating capability; it can detect targets behind obstacles and even determine the motion state of a human body behind an obstacle, and, combined with related technologies, can be used for counter-terrorism, military applications, post-disaster rescue and life detection. In short, radar-based classification of human motion states has broad application prospects.
Since the 1990s, researchers have studied human micro-motions based on the micro-Doppler signature of radar, initially for discriminating between different targets in warfare and later for classifying the motion states of human targets. V. C. Chen established a human body model, simulated radar echo data in software, performed time-frequency analysis on the simulated echoes, and compared the micro-Doppler differences of the model's limbs in different motion states. To address the shortcomings of traditional time-frequency analysis in processing non-stationary signals, Chiehping Lai et al. introduced the Hilbert-Huang transform to extract human micro-Doppler features from complex echo signals, but the processing time is long. Javier et al. studied the classification of various human activities based on micro-Doppler features using linear predictive coding and proposed a method for extracting micro-Doppler features mixed at different frequencies, with classification accuracy of up to 85%.
Existing machine-learning-based human motion state classification methods mostly use a single classifier, which leads to insufficient generalization capability and relatively low recognition accuracy. In deep learning, model complexity is very high, the computational cost is large, and training data sets are difficult to collect; training such models often takes a great deal of time, and the process by which the model automatically extracts image features is not interpretable.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a radar human motion state classification algorithm and system based on model fusion. The object of the invention is achieved by the following technical scheme:
a radar human motion state classification algorithm based on model fusion comprises the following steps:
obtaining a training set;
constructing a support vector machine model according to the training set;
obtaining a predicted value of the support vector machine model according to the support vector machine model;
and constructing an extreme gradient lifting tree model according to the predicted value of the support vector machine model.
In one embodiment of the invention, constructing a support vector machine model from the training set comprises:
obtaining a directional gradient support vector machine model according to the training set;
obtaining a local binary support vector machine model according to the training set;
obtaining a haar support vector machine model according to the training set;
and carrying out combined operation on the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model to obtain a support vector machine model.
In an embodiment of the present invention, the combining the directional gradient support vector machine model, the local binary support vector machine model, and the haar support vector machine model to obtain a support vector machine model includes:
respectively acquiring weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
constructing a plurality of primary support vector machine models according to a plurality of preset proportionality coefficients and weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
obtaining a prediction result according to the plurality of primary support vector machine models;
obtaining an optimal prediction result according to the prediction results of the plurality of primary support vector machine models;
and selecting a primary support vector machine model corresponding to the optimal prediction result as a support vector machine model.
In one embodiment of the present invention, constructing an extreme gradient lifting tree model according to the predicted value of the support vector machine model comprises:
presetting a plurality of tree depth values;
constructing a plurality of primary extreme gradient lifting tree models according to the plurality of tree depth values and the predicted values of the support vector machine model;
obtaining a plurality of model parameters according to the plurality of primary extreme gradient lifting tree models;
obtaining optimal model parameters according to the plurality of model parameters;
and selecting a primary extreme gradient lifting tree model corresponding to the optimal model parameter as an extreme gradient lifting tree model.
The invention also provides a radar human motion state classification system based on model fusion, which comprises the following steps:
the information acquisition module is used for acquiring a training set;
the support vector machine model building module is used for building a support vector machine model according to the training set;
the predicted value obtaining module is used for obtaining a predicted value of the support vector machine model according to the support vector machine model;
and the extreme gradient lifting tree model building module is used for building an extreme gradient lifting tree model according to the predicted value of the support vector machine model.
In one embodiment of the invention, the support vector machine model construction module comprises:
the directional gradient support vector machine model construction unit is used for obtaining a directional gradient support vector machine model according to the training set;
the local binary support vector machine model construction unit is used for obtaining a local binary support vector machine model according to the training set;
the haar support vector machine model building unit is used for obtaining a haar support vector machine model according to the training set;
and the support vector machine model construction unit is used for carrying out combined operation on the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model to obtain a support vector machine model.
In one embodiment of the present invention, the support vector machine model construction unit includes:
a weighting coefficient obtaining subunit, configured to obtain weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model, and the haar support vector machine model, respectively;
the primary support vector machine model constructing subunit is used for constructing a plurality of primary support vector machine models according to a plurality of preset proportionality coefficients and the weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
the prediction result obtaining subunit is used for obtaining a prediction result according to the plurality of primary support vector machine models;
the optimal prediction result obtaining subunit is used for obtaining an optimal prediction result according to the prediction results of the plurality of primary support vector machine models;
and the support vector machine model constructing subunit is used for selecting the primary support vector machine model corresponding to the optimal prediction result as the support vector machine model.
In one embodiment of the present invention, the extreme gradient lifting tree model building module comprises:
the primary extreme gradient lifting tree model building unit is used for building a plurality of primary extreme gradient lifting tree models according to a plurality of preset tree depth values and the predicted values of the support vector machine model;
the model parameter extraction unit is used for obtaining a plurality of model parameters according to the primary extreme gradient lifting tree models;
the optimal model parameter obtaining unit is used for obtaining optimal model parameters according to the plurality of model parameters;
and the extreme gradient lifting tree model construction unit is used for selecting the primary extreme gradient lifting tree model corresponding to the optimal model parameter as the extreme gradient lifting tree model.
The invention has the beneficial effects that:
the support vector machine model and the extreme gradient lifting tree model are fused through a stacking model fusion method: the support vector machine model is well suited to high-dimensional, small-sample data, the extreme gradient lifting tree model has strong fitting capability, and the fused model combines the advantages of both, so it achieves higher generalization capability and recognition accuracy while avoiding the long training times required by deep learning models.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a schematic flow chart of a radar human motion state classification algorithm based on model fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating specific steps of a radar human motion state classification algorithm based on model fusion according to an embodiment of the present invention;
fig. 3 is a block diagram of a radar human motion state classification system based on model fusion according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Referring to fig. 1, fig. 1 is a schematic flowchart of a radar human motion state classification algorithm based on model fusion according to an embodiment of the present invention, including:
obtaining a training set;
constructing a support vector machine model according to the training set;
obtaining a predicted value of the support vector machine model according to the support vector machine model;
and constructing an extreme gradient lifting tree model according to the predicted value of the support vector machine model.
According to the method, the support vector machine model and the extreme gradient lifting tree model are fused through a stacking model fusion method: the support vector machine model is well suited to high-dimensional, small-sample data, the extreme gradient lifting tree model has strong fitting capability, and the fused model combines the advantages of both, so it achieves higher generalization capability and recognition accuracy while avoiding the long training times required by deep learning models.
In one embodiment of the invention, constructing a support vector machine model from the training set comprises:
obtaining a directional gradient support vector machine model according to the training set;
obtaining a local binary support vector machine model according to the training set;
obtaining a haar support vector machine model according to the training set;
and carrying out combined operation on the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model to obtain a support vector machine model.
Furthermore, the directional gradient (HOG) support vector machine model and the Haar (HAAR) support vector machine model classify time-frequency graphs of human walking well, while the local binary (LBP) support vector machine model classifies time-frequency graphs of human running well. The features of the three support vector machine models are therefore combined, and the combined features are used to train the support vector machine (SVM) model, so that the resulting model integrates the strengths of all three and its generalization capability is greatly increased.
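For illustration only, the three feature types could be extracted with scikit-image as sketched below; the library choice, the image resizing, the cell sizes, the LBP radius and the single Haar template are assumptions introduced here, not parameters taken from the patent.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern, haar_like_feature
from skimage.transform import integral_image, resize

def extract_features(tf_image):
    """HOG, LBP and Haar-like feature vectors for one 2-D time-frequency image."""
    # HOG: histogram of oriented gradients over the whole image
    f_hog = hog(tf_image, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2), feature_vector=True)

    # LBP: histogram of uniform local binary patterns (P=8 neighbours, R=1)
    lbp = local_binary_pattern(tf_image, P=8, R=1, method="uniform")
    f_lbp, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # Haar-like features on a downsampled copy to keep the feature count manageable
    small = resize(tf_image, (24, 24))
    ii = integral_image(small)
    f_haar = haar_like_feature(ii, 0, 0, small.shape[1], small.shape[0],
                               feature_type="type-2-x")
    return f_hog, f_lbp, f_haar
```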
In an embodiment of the present invention, the combining the directional gradient support vector machine model, the local binary support vector machine model, and the haar support vector machine model to obtain a support vector machine model includes:
respectively acquiring weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
constructing a plurality of primary support vector machine models according to a plurality of preset proportionality coefficients and weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
obtaining a prediction result according to the plurality of primary support vector machine models;
obtaining an optimal prediction result according to the prediction results of the plurality of primary support vector machine models;
and selecting a primary support vector machine model corresponding to the optimal prediction result as a support vector machine model.
Specifically, the training set comprises 1586 time-frequency graphs of human walking and running. 80% of the data are used as training data: 645 images are time-frequency graphs of human walking and 624 are time-frequency graphs of human running. The remaining 20% are used as verification data during the training stage: 161 images are time-frequency graphs of human walking and 156 are time-frequency graphs of human running. The number of iterations is set to 1000, the penalty coefficient C is set to 0.3, and a Gaussian kernel function (RBF) is selected as the kernel function.
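A minimal scikit-learn sketch of this training setup follows; the library and the random placeholder features are assumptions, and only the split ratio, iteration count, penalty coefficient and kernel come from the description above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data standing in for the combined features of the 1586
# time-frequency graphs (0 = walking, 1 = running, coding assumed here).
rng = np.random.default_rng(0)
X = rng.normal(size=(1586, 128))
y = rng.integers(0, 2, size=1586)

# 80% training data, 20% verification data, as described above.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Penalty coefficient C = 0.3, Gaussian (RBF) kernel, at most 1000 iterations.
svm = SVC(C=0.3, kernel="rbf", max_iter=1000)
svm.fit(X_train, y_train)
print("verification accuracy:", svm.score(X_val, y_val))
```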
To facilitate comparison of performance between different models, define: TP is a walking time-frequency graph that the model predicts as walking; TN is a running time-frequency graph that the model predicts as running; FN is a walking time-frequency graph that the model predicts as running; FP is a running time-frequency graph that the model predicts as walking.
The classification accuracy A of the model is as follows:
A = (TP + TN) / (TP + TN + FP + FN)
The precision rate P and the recall rate R are:
P = TP / (TP + FP), R = TP / (TP + FN)
The harmonic mean F1 of the precision rate P and the recall rate R is:
F1 = 2PR / (P + R)
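By way of a minimal illustration, the four indices defined above can be computed directly from the confusion counts; the Python sketch below assumes nothing beyond those definitions, and the example values are the HOG counts from the table that follows.

```python
def classification_metrics(TP, TN, FP, FN):
    """Accuracy A, precision rate P, recall rate R and F1 from the confusion counts."""
    A = (TP + TN) / (TP + TN + FP + FN)
    P = TP / (TP + FP)
    R = TP / (TP + FN)
    F1 = 2 * P * R / (P + R)
    return A, P, R, F1

# HOG column of the table below: TP=149, TN=133, FP=23, FN=12
# -> accuracy of about 88.96%, as quoted in the analysis.
print(classification_metrics(149, 133, 23, 12))
```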
Training with the different features gives the following classification results on the verification set:
Prediction results on the verification set of SVM (support vector machine) models trained with different features

Predicted result    HOG    LBP    HAAR
TP                  149    138    134
TN                  133    137    126
FP                   23     19     30
FN                   12     23     27
Model performance indices obtained by training the SVM (support vector machine) with different features (computed from the prediction counts above):
Feature   Accuracy A   Precision rate P   Recall rate R   F1
HOG       88.96%       86.63%             92.55%          89.49%
LBP       86.75%       87.90%             85.71%          86.79%
HAAR      82.02%       81.71%             83.23%          82.46%
Specific analysis: the 317 walking and running time-frequency graphs in the verification data are used for verification. The directional gradient support vector machine model trained with HOG features achieves the highest overall classification accuracy, up to 88.96%, so HOG features distinguish the walking and running time-frequency graphs well, while the HAAR support vector machine model constructed with HAAR features performs poorly. The recall rate R of the directional gradient support vector machine model and the HAAR support vector machine model is higher than their precision rate P, showing that the HOG and HAAR features predict the time-frequency graphs of human walking more accurately; the precision rate P of the LBP feature is higher than its recall rate R, showing that the LBP feature predicts the time-frequency graphs of human running more accurately. F1 reflects the overall performance of the model: overall, the HOG feature performs best, the LBP feature second, and the HAAR feature worst.
According to the analysis above, different features classify the walking and running time-frequency graphs differently: the HOG and HAAR features are better for classifying time-frequency graphs of human walking, and the LBP feature is better for classifying time-frequency graphs of human running. The features of the three are therefore combined to train the support vector machine model, with the combined feature w given by w = x·HOG + y·LBP + z·HAAR, where x, y, z are the feature coefficients and x + y + z = 1. Adjusting the values of x, y and z yields different combined features and thus different support vector machine models; each is verified on the verification data to obtain the prediction results under different proportionality coefficients, as shown in the following table:
Prediction results on the verification set of support vector machine models trained with different combined features

Predicted result   (0.6,0.3,0.1)   (0.5,0.4,0.1)   (0.5,0.5,0)   (0.5,0.3,0.2)   (0.4,0.3,0.3)
TP                      143             150             149           146             145
TN                      142             141             138           140             138
FP                       14              15              18            16              18
FN                       18              11              12            15              16
Model performance indices of support vector machine models trained with different combined features

Coefficients (x,y,z)   Accuracy A   Precision rate P   Recall rate R   F1
(0.6,0.3,0.1)          89.91%       91.08%             88.82%          89.92%
(0.5,0.4,0.1)          91.80%       90.91%             93.17%          92.16%
(0.5,0.5,0)            90.54%       94.90%             92.55%          93.76%
(0.5,0.3,0.2)          90.22%       92.99%             90.68%          91.83%
(0.4,0.3,0.3)          89.27%       88.96%             90.06%          89.53%
Specific analysis: the model performance index table for the different combined features shows that combining the HOG, LBP and HAAR features to construct the support vector machine model improves the classification accuracy of the model. When x = 0.5, y = 0.4 and z = 0.1, the classification performance of the model is best, and after the features are combined the difference between precision and recall shrinks, indicating that the model predicts the different time-frequency graphs more stably. Different features extract different classes and levels of information from the time-frequency graph, and combining them lets the model learn the characteristics of the time-frequency graph more comprehensively, so the combined support vector machine model has stronger generalization capability and higher robustness.
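As an illustrative sketch only, the coefficient search described above could be realised as follows; reading w = x·HOG + y·LBP + z·HAAR as a block-wise scaling followed by concatenation, the placeholder feature arrays and the feature dimensions are all assumptions introduced here, not the patent's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def combine(f_hog, f_lbp, f_haar, x, y, z):
    # One possible reading of w = x*HOG + y*LBP + z*HAAR: scale each feature
    # block by its coefficient and concatenate into a single vector.
    return np.concatenate([x * f_hog, y * f_lbp, z * f_haar])

# Placeholder per-feature blocks standing in for the real HOG/LBP/HAAR vectors.
rng = np.random.default_rng(0)
n_train, n_val = 1269, 317
hog_tr, lbp_tr, haar_tr = (rng.normal(size=(n_train, 64)),
                           rng.normal(size=(n_train, 10)),
                           rng.normal(size=(n_train, 32)))
hog_va, lbp_va, haar_va = (rng.normal(size=(n_val, 64)),
                           rng.normal(size=(n_val, 10)),
                           rng.normal(size=(n_val, 32)))
y_tr, y_va = rng.integers(0, 2, n_train), rng.integers(0, 2, n_val)

best_acc, best_coeffs = 0.0, None
for x, y, z in [(0.6, 0.3, 0.1), (0.5, 0.4, 0.1), (0.5, 0.5, 0.0),
                (0.5, 0.3, 0.2), (0.4, 0.3, 0.3)]:
    X_tr = np.array([combine(h, l, a, x, y, z) for h, l, a in zip(hog_tr, lbp_tr, haar_tr)])
    X_va = np.array([combine(h, l, a, x, y, z) for h, l, a in zip(hog_va, lbp_va, haar_va)])
    acc = SVC(C=0.3, kernel="rbf", max_iter=1000).fit(X_tr, y_tr).score(X_va, y_va)
    if acc > best_acc:
        best_acc, best_coeffs = acc, (x, y, z)
print("best coefficients:", best_coeffs, "accuracy:", best_acc)
```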
In one embodiment of the invention, constructing an extreme gradient lifting tree (XGBOOST) model according to the predicted value of the support vector machine model comprises:
presetting a plurality of tree depth values;
constructing a plurality of primary extreme gradient lifting tree models according to the plurality of tree depth values and the predicted values of the support vector machine model;
obtaining a plurality of model parameters according to the plurality of primary extreme gradient lifting tree models;
obtaining optimal model parameters according to the plurality of model parameters;
and selecting a primary extreme gradient lifting tree model corresponding to the optimal model parameter as an extreme gradient lifting tree model.
Specifically, the parameters of the XGBOOST model are shown in the following table:
Parameter table for training the extreme gradient lifting tree

Parameter   booster   eta   lambda   silent   max_depth
Value       gbtree    0.1   1        0        4-9
As can be seen from the above table, the base classifier used to train the extreme gradient lifting tree model is a tree model, i.e. a classification and regression tree (CART), which has stronger fitting capability than a linear model. In this embodiment the learning rate eta is set to 0.1; the leaf-node scores of the XGBOOST model are multiplied by the learning rate so that later trees have more room to learn. Setting lambda to 1 adds L2 regularization and prevents the model from overfitting. Setting silent to 0 means that information such as the number of iterations and the model loss is printed to the console, which makes it easier to tune parameters during training. The tree depth max_depth is set to 4-9, and adjusting this parameter yields XGBOOST models of different depths. The base classifier type booster is the tree model gbtree.
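A hedged sketch of how these settings might look with the open-source xgboost Python package follows; the patent does not name an implementation, so the package, the parameter spellings learning_rate and reg_lambda, and the placeholder training data are assumptions, while the depth sweep mirrors the max_depth range of 4-9 in the table.

```python
import numpy as np
import xgboost as xgb

# Placeholders standing in for the SVM predicted values used as training
# data for the extreme gradient lifting tree, and for the verification data.
rng = np.random.default_rng(0)
X_meta_tr, y_meta_tr = rng.random((1269, 1)), rng.integers(0, 2, 1269)
X_meta_va, y_meta_va = rng.random((317, 1)), rng.integers(0, 2, 317)

results = {}
for depth in range(4, 10):                      # max_depth swept over 4-9
    clf = xgb.XGBClassifier(booster="gbtree",   # CART base classifier
                            learning_rate=0.1,  # eta
                            reg_lambda=1,       # L2 regularisation (lambda)
                            max_depth=depth)
    clf.fit(X_meta_tr, y_meta_tr)
    results[depth] = clf.score(X_meta_va, y_meta_va)

best_depth = max(results, key=results.get)      # reported as 6 in the experiments described above
print("best tree depth:", best_depth)
```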
Too large an XGBOOST tree depth makes the model learn the training data excessively and overfit. Different extreme gradient lifting tree models are therefore obtained by adjusting the tree depth and are verified on the walking and running time-frequency graphs in the verification data; the verification results are shown in the following table:
Prediction results on the verification set of extreme gradient lifting tree models with different tree depths

Predicted result     4     5     6     7     8     9
TP                 145   147   152   154   141   139
TN                 141   142   143   138   143   140
FP                  15    14    13    18    13    16
FN                  16    14     9     7    20    22
The performance index table of the extreme gradient lifting tree model with different tree depths is obtained as follows:
Performance indices of extreme gradient lifting tree models with different tree depths

Tree depth   Accuracy A   Precision rate P   Recall rate R   F1
4            90.22%       90.63%             90.06%          90.33%
5            91.17%       91.30%             91.30%          91.30%
6            93.06%       92.12%             94.41%          93.25%
7            92.11%       89.53%             95.65%          92.49%
8            89.59%       91.56%             87.58%          89.53%
9            88.01%       89.67%             86.34%          87.97%
The two tables show that, when the extreme gradient lifting tree models with different tree depths are verified on the verification data, the classification accuracy at first rises steadily as the tree depth increases. At a tree depth of 6 the classification accuracy is highest, reaching 93.06%; for depths of 7-9 the accuracy gradually drops and overfitting appears, so the model fits best and performs best at a tree depth of 6. For tree depths of 4-6 the deviation between precision and recall is small, so the extreme gradient lifting tree model is more stable there. In summary, the model performs best when the tree depth is 6.
Further, referring to fig. 2, fig. 2 is a schematic diagram of the specific steps of the radar human motion state classification algorithm based on model fusion according to an embodiment of the present invention. The coefficients of the optimal feature combination are found by comparing the three different features, the support vector machine model is obtained by combining the three features with this set of coefficients, and the extreme gradient lifting tree is then trained with the prediction results of the support vector machine model. In stacking-based model fusion, the training data are divided evenly into k parts, k-fold cross-validation training yields k models with different parameters, the predicted values of the k models are used as new feature values to train another model, and the output values of the different models are combined as the output of the fused model. In this embodiment, the features of the walking and running time-frequency graphs in the training data set are divided into 5 parts, and the support vector machine model is trained by 5-fold cross-validation to obtain 5 support vector machine models. The predicted values of the 5 support vector machine models are used as training data for the extreme gradient lifting tree model, yielding the extreme gradient lifting tree and its predicted value E. The support vector machine models are verified on the verification data to obtain 5 predicted values, which are averaged to give the predicted value D of the support vector machine model. The predicted value F of the final model is obtained by combining the predicted value E of the extreme gradient lifting tree with the predicted value D of the support vector machine model:
F = x1·D + y1·E,  x1 + y1 = 1,
where x1 and y1 are the proportionality coefficients of the two models. Different final models are obtained by adjusting the proportionality coefficients. After the support vector machine model and the extreme gradient lifting tree model are fused into the final model by the stacking model fusion method, the different final models are verified on the verification data set, and the verification results are shown in the following table:
Prediction results of the different final models on the verification set
The performance indices of the different final models are further obtained, as follows:
Performance indices of the different fusion models

(x1, y1)       Accuracy A   Precision rate P   Recall rate R   F1
(0.3,0.7)      92.11%       90.00%             95.03%          92.45%
(0.35,0.65)    94.32%       92.81%             96.27%          94.51%
(0.4,0.6)      93.06%       93.17%             93.17%          93.17%
(0.5,0.5)      92.43%       93.08%             91.93%          92.50%
(0.55,0.45)    92.74%       95.39%             90.06%          92.65%
Specific analysis:
As can be seen from the two tables, the fused models perform better than the single support vector machine model: the classification accuracy of the fused models improves, the difference between precision and recall is small, and the model performance is stable. When the proportionality coefficient x1 of the support vector machine model is 0.35 and the coefficient y1 of the extreme gradient lifting tree model is 0.65, the classification performance of the final model is best. As the proportionality coefficient of the support vector machine model increases further, the performance of the final model gradually degrades, so the optimal model corresponding to the optimal proportionality coefficients is obtained.
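For illustration, the stacking fusion described above can be sketched as follows. Everything specific in the sketch is an assumption made here rather than the patent's implementation: scikit-learn and xgboost stand in for unnamed tools, out-of-fold predictions serve as the new feature for the extreme gradient lifting tree (one standard reading of the 5-fold scheme), the predicted values D and E are taken as class probabilities, and the data are random placeholders.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Placeholders standing in for the combined-feature training and verification
# data described above (1269 training images, 317 verification images).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1269, 64)), rng.integers(0, 2, 1269)
X_val, y_val = rng.normal(size=(317, 64)), rng.integers(0, 2, 317)

kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
oof = np.zeros(len(X_train))           # out-of-fold SVM predictions (new feature values)
val_pred = np.zeros((5, len(X_val)))   # per-fold SVM predictions on the verification data

for i, (tr, te) in enumerate(kf.split(X_train, y_train)):
    svm = SVC(C=0.3, kernel="rbf", max_iter=1000, probability=True)
    svm.fit(X_train[tr], y_train[tr])
    oof[te] = svm.predict_proba(X_train[te])[:, 1]
    val_pred[i] = svm.predict_proba(X_val)[:, 1]

# Extreme gradient lifting tree trained on the SVM predictions (tree depth 6).
tree = xgb.XGBClassifier(booster="gbtree", learning_rate=0.1,
                         reg_lambda=1, max_depth=6)
tree.fit(oof.reshape(-1, 1), y_train)

D = val_pred.mean(axis=0)                          # averaged SVM predicted value
E = tree.predict_proba(D.reshape(-1, 1))[:, 1]     # tree predicted value
x1, y1 = 0.35, 0.65                                # best coefficients from the table above
F = x1 * D + y1 * E                                # fused predicted value
labels = (F > 0.5).astype(int)                     # 1 = running, 0 = walking (coding assumed)
```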
Referring to fig. 3, fig. 3 is a block diagram of a radar human motion state classification system based on model fusion according to an embodiment of the present invention, including:
the information acquisition module is used for acquiring a training set;
the support vector machine model building module is used for building a support vector machine model according to the training set;
the predicted value obtaining module is used for obtaining a predicted value of the support vector machine model according to the support vector machine model;
and the extreme gradient lifting tree model building module is used for building an extreme gradient lifting tree model according to the predicted value of the support vector machine model.
In one embodiment of the invention, the support vector machine model construction module comprises:
the directional gradient support vector machine model construction unit is used for obtaining a directional gradient support vector machine model according to the training set;
the local binary support vector machine model construction unit is used for obtaining a local binary support vector machine model according to the training set;
the haar support vector machine model building unit is used for obtaining a haar support vector machine model according to the training set;
and the support vector machine model construction unit is used for carrying out combined operation on the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model to obtain a support vector machine model.
In one embodiment of the present invention, the support vector machine model construction unit includes:
a weighting coefficient obtaining subunit, configured to obtain weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model, and the haar support vector machine model, respectively;
the primary support vector machine model constructing subunit is used for constructing a plurality of primary support vector machine models according to a plurality of preset proportionality coefficients and the weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
the prediction result obtaining subunit is used for obtaining a prediction result according to the plurality of primary support vector machine models;
the optimal prediction result obtaining subunit is used for obtaining an optimal prediction result according to the prediction results of the plurality of primary support vector machine models;
and the support vector machine model constructing subunit is used for selecting the primary support vector machine model corresponding to the optimal prediction result as the support vector machine model.
In one embodiment of the present invention, the extreme gradient lifting tree model building module comprises:
the primary extreme gradient lifting tree model building unit is used for building a plurality of primary extreme gradient lifting tree models according to a plurality of preset tree depth values and the predicted values of the support vector machine model;
the model parameter extraction unit is used for obtaining a plurality of model parameters according to the primary extreme gradient lifting tree models;
the optimal model parameter obtaining unit is used for obtaining optimal model parameters according to the plurality of model parameters;
and the extreme gradient lifting tree model construction unit is used for selecting the primary extreme gradient lifting tree model corresponding to the optimal model parameter as the extreme gradient lifting tree model.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (8)

1. A radar human motion state classification algorithm based on model fusion is characterized by comprising the following steps:
obtaining a training set;
constructing a support vector machine model according to the training set;
obtaining a predicted value of the support vector machine model according to the support vector machine model;
and constructing an extreme gradient lifting tree model according to the predicted value of the support vector machine model.
2. The model fusion-based radar human motion state classification algorithm according to claim 1, wherein constructing a support vector machine model according to the training set comprises:
obtaining a directional gradient support vector machine model according to the training set;
obtaining a local binary support vector machine model according to the training set;
obtaining a haar support vector machine model according to the training set;
and carrying out combined operation on the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model to obtain a support vector machine model.
3. The model fusion-based radar human motion state classification algorithm according to claim 2, wherein the combining operation of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model to obtain a support vector machine model comprises:
respectively acquiring weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
constructing a plurality of primary support vector machine models according to a plurality of preset proportionality coefficients and weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
obtaining a prediction result according to the plurality of primary support vector machine models;
obtaining an optimal prediction result according to the prediction results of the plurality of primary support vector machine models;
and selecting a primary support vector machine model corresponding to the optimal prediction result as a support vector machine model.
4. The model fusion-based radar human motion state classification algorithm according to claim 1, wherein constructing an extreme gradient lifting tree model according to the predicted values of the support vector machine model comprises:
presetting a plurality of tree depth values;
constructing a plurality of primary extreme gradient lifting tree models according to the plurality of tree depth values and the predicted values of the support vector machine model;
obtaining a plurality of model parameters according to the plurality of primary extreme gradient lifting tree models;
obtaining optimal model parameters according to the plurality of model parameters;
and selecting a primary extreme gradient lifting tree model corresponding to the optimal model parameter as an extreme gradient lifting tree model.
5. A radar human motion state classification system based on model fusion is characterized by comprising:
the information acquisition module is used for acquiring a training set;
the support vector machine model building module is used for building a support vector machine model according to the training set;
the predicted value obtaining module is used for obtaining a predicted value of the support vector machine model according to the support vector machine model;
and the extreme gradient lifting tree model building module is used for building an extreme gradient lifting tree model according to the predicted value of the support vector machine model.
6. The model fusion-based radar human motion state classification system of claim 5, wherein the support vector machine model construction module comprises:
the directional gradient support vector machine model construction unit is used for obtaining a directional gradient support vector machine model according to the training set;
the local binary support vector machine model construction unit is used for obtaining a local binary support vector machine model according to the training set;
the haar support vector machine model building unit is used for obtaining a haar support vector machine model according to the training set;
and the support vector machine model construction unit is used for carrying out combined operation on the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model to obtain a support vector machine model.
7. The model fusion-based radar human motion state classification system according to claim 5, wherein the support vector machine model construction unit comprises:
a weighting coefficient obtaining subunit, configured to obtain weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model, and the haar support vector machine model, respectively;
the primary support vector machine model constructing subunit is used for constructing a plurality of primary support vector machine models according to preset proportionality coefficients and weighting coefficients of the directional gradient support vector machine model, the local binary support vector machine model and the haar support vector machine model;
the prediction result obtaining subunit is used for obtaining a prediction result according to the plurality of primary support vector machine models;
the optimal prediction result obtaining subunit is used for obtaining an optimal prediction result according to the prediction results of the plurality of primary support vector machine models;
and the support vector machine model constructing subunit is used for selecting the primary support vector machine model corresponding to the optimal prediction result as the support vector machine model.
8. The model fusion-based radar human motion state classification system of claim 5, wherein the extreme gradient boosting tree model construction module comprises:
the primary extreme gradient lifting tree model building unit is used for building a plurality of primary extreme gradient lifting tree models according to a plurality of preset tree depth values and the predicted values of the support vector machine model;
the model parameter extraction unit is used for obtaining a plurality of model parameters according to the primary extreme gradient lifting tree models;
the optimal model parameter obtaining unit is used for obtaining optimal model parameters according to the plurality of model parameters;
and the extreme gradient lifting tree model construction unit is used for selecting the primary extreme gradient lifting tree model corresponding to the optimal model parameter as the extreme gradient lifting tree model.
CN201910942378.6A 2019-09-30 2019-09-30 Radar human motion state classification algorithm and system based on model fusion Active CN110852158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910942378.6A CN110852158B (en) 2019-09-30 2019-09-30 Radar human motion state classification algorithm and system based on model fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910942378.6A CN110852158B (en) 2019-09-30 2019-09-30 Radar human motion state classification algorithm and system based on model fusion

Publications (2)

Publication Number Publication Date
CN110852158A true CN110852158A (en) 2020-02-28
CN110852158B CN110852158B (en) 2023-09-22

Family

ID=69596209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910942378.6A Active CN110852158B (en) 2019-09-30 2019-09-30 Radar human motion state classification algorithm and system based on model fusion

Country Status (1)

Country Link
CN (1) CN110852158B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111812643A (en) * 2020-06-03 2020-10-23 中国科学院空天信息创新研究院 Radar imaging method, device, equipment and storage medium
CN112926390A (en) * 2021-01-26 2021-06-08 国家康复辅具研究中心 Gait motion mode recognition method and model establishment method
CN113159447A (en) * 2021-05-12 2021-07-23 中国人民解放军陆军工程大学 Laser radar electromagnetic environment effect prediction method and system
CN113611404A (en) * 2021-07-09 2021-11-05 哈尔滨智吾康软件开发有限公司 Plasma sample cancer early screening method based on ensemble learning
CN114463014A (en) * 2022-02-23 2022-05-10 河南科技大学 SVM-Xgboost-based mobile payment risk early warning method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409426A (en) * 2018-10-23 2019-03-01 冶金自动化研究设计院 A kind of extreme value gradient promotion logistic regression classification prediction technique

Also Published As

Publication number Publication date
CN110852158B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN110852158A (en) Radar human motion state classification algorithm and system based on model fusion
CN111401201B (en) Aerial image multi-scale target detection method based on spatial pyramid attention drive
US11402496B2 (en) Method and apparatus for enhancing semantic features of SAR image oriented small set of samples
Mosavi et al. Multi-layer perceptron neural network utilizing adaptive best-mass gravitational search algorithm to classify sonar dataset
Turner et al. State-space inference and learning with Gaussian processes
US20160224903A1 (en) Hyper-parameter selection for deep convolutional networks
US20210005067A1 (en) System and Method for Audio Event Detection in Surveillance Systems
EP3690741A2 (en) Method for automatically evaluating labeling reliability of training images for use in deep learning network to analyze images, and reliability-evaluating device using the same
CN111160268A (en) Multi-angle SAR target recognition method based on multi-task learning
CN104200814A (en) Speech emotion recognition method based on semantic cells
Wei et al. A method of underwater acoustic signal classification based on deep neural network
US20220114724A1 (en) Image processing model generation method, image processing method and device, and electronic device
CN110705600A (en) Cross-correlation entropy based multi-depth learning model fusion method, terminal device and readable storage medium
CN111216126B (en) Multi-modal perception-based foot type robot motion behavior recognition method and system
Tu et al. A theoretical investigation of several model selection criteria for dimensionality reduction
CN114202792A (en) Face dynamic expression recognition method based on end-to-end convolutional neural network
CN113723572A (en) Ship target identification method, computer system, program product and storage medium
CN111209813B (en) Remote sensing image semantic segmentation method based on transfer learning
CN111368653B (en) Low-altitude small target detection method based on R-D graph and deep neural network
CN113191996A (en) Remote sensing image change detection method and device and electronic equipment thereof
CN110414426B (en) Pedestrian gait classification method based on PC-IRNN
Lim et al. Temporal early exiting with confidence calibration for driver identification based on driving sensing data
CN114998731A (en) Intelligent terminal navigation scene perception identification method
CN116030300A (en) Progressive domain self-adaptive recognition method for zero-sample SAR target recognition
US20220391692A1 (en) Semantic understanding of dynamic imagery using brain emulation neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant