CN105335725A - Gait identification identity authentication method based on feature fusion - Google Patents

Gait identification identity authentication method based on feature fusion

Info

Publication number
CN105335725A
CN105335725A (application CN201510749195.4A)
Authority
CN
China
Prior art keywords
gait
image
pixel
background
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510749195.4A
Other languages
Chinese (zh)
Other versions
CN105335725B (en)
Inventor
黄玮
殷铭
王劲松
田永生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Technology
Original Assignee
Tianjin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Technology filed Critical Tianjin University of Technology
Priority to CN201510749195.4A priority Critical patent/CN105335725B/en
Publication of CN105335725A publication Critical patent/CN105335725A/en
Application granted granted Critical
Publication of CN105335725B publication Critical patent/CN105335725B/en
Expired - Fee Related


Classifications

    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/25: Fusion techniques


Abstract

The invention discloses a gait recognition identity authentication method based on feature fusion. The method comprises the following steps: first, a camera acquires in real time the current background image and the original gait image sequence of a detection target, and a binary gait image sequence is obtained by means of graying, a Euclidean-distance algorithm, median filtering and the like; next, figure contour pixels are extracted directly to obtain a static feature value based on the aspect ratio and dynamic feature values based on the distances to the gait contour centroid; finally, the detection target is classified by a new algorithm that fuses an SVM classifier with a Bayesian classifier, and the final identification result is output. The method can remove the background quickly and accurately and improves adaptability to different environments. Moreover, because the prior probability of a conventional Bayesian algorithm is set manually from past experience, its recognition rate is rather low; after fusion with the SVM algorithm, the prior probability of the Bayesian algorithm is improved, so the overall identification accuracy is substantially increased.

Description

A gait recognition identity authentication method based on feature fusion
Technical field
The present invention relates to the technical field of gait recognition, and in particular to a gait recognition identity authentication method based on feature fusion.
Background technology
Gait recognition uses computer technology to confirm identity from a person's walking manner and posture. Using gait as a biometric feature has several unique advantages: data can be acquired at a distance, and identification can be completed without the subject's awareness, so gait recognition has become one of the focuses of the biometric recognition field in recent years. Judging from research results at home and abroad, the difficulties of gait recognition concentrate mainly on background modeling, gait feature extraction, and recognition speed and accuracy.
First, in background modeling, removing the background is a major difficulty. In current research, most background removal methods adopt the idea of iterative elimination, but such methods require many iterations, run slowly, degrade system performance, and produce unsatisfactory gait images.
Second, in gait feature extraction, domestic and foreign research papers cover dozens of features for gait recognition, including the gait cycle, gait frequency, joint-angle variation patterns, the outermost contour, distance signals, and so on. These feature extraction approaches are all rather complex.
Third, in recognition speed and accuracy, typical gait recognition systems use inefficient algorithms, so their running environments mostly depend on high-performance computers, and ordinary PCs cannot meet practical needs. There is currently no commercial gait recognition software, and the recognition rate of existing gait recognition programs is not very satisfactory.
Summary of the invention
The object of the present invention is to overcome the above technical disadvantages and deficiencies by providing a gait recognition identity authentication method based on feature fusion.
The technical solution adopted by the present invention is as follows:
A gait recognition identity authentication method based on feature fusion comprises the following steps:
S1. A camera acquires in real time the background image of the current environment and the original gait images of the detection target; the images are grayed to obtain a grayscale image sequence;
S2. The background of the grayed gait image sequence is removed by a Euclidean-distance algorithm, preliminarily obtaining a binary gait image sequence with the background removed;
S3. The preliminarily obtained gait image sequence is median-filtered to remove isolated-point noise, finally obtaining the binary gait image sequence with the background removed;
S4. The figure contour pixels are extracted directly to obtain contour-edge feature vectors;
S5. The obtained contour-edge feature vectors are processed to obtain a static feature value based on the aspect ratio and dynamic feature values based on the distances to the gait contour centroid;
S6. Using all gait feature vectors in the gait feature vector database, an SVM classifier based on a radial basis kernel function classifies the dynamic features and the static features separately to obtain a new feature vector;
S7. The training result of the SVM is passed to a Bayesian algorithm based on the m-estimate for identification;
S8. The recognition result is output.
In the gait image graying process described in step S1, the R, G and B components are weighted and averaged with different weights according to their perceptual importance; because the human eye is most sensitive to green and least sensitive to blue, a more reasonable grayscale image is obtained by weighting the three RGB components according to the following formula:
f(i,j) = 0.30R(i,j) + 0.59G(i,j) + 0.11B(i,j).
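As a non-limiting illustration of this graying step, the weighted average above can be computed as in the following sketch; the function name, the NumPy dependency and the assumption that each frame is an H x W x 3 RGB array are choices of this sketch, not of the description.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted graying f(i,j) = 0.30*R + 0.59*G + 0.11*B.

    rgb: H x W x 3 array with channels in R, G, B order.
    Returns an H x W array of gray levels.
    """
    weights = np.array([0.30, 0.59, 0.11])
    return rgb.astype(np.float64) @ weights
```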
The Euclidean-distance background removal described in step S2 is as follows: for each color pixel of the original gait image, the Euclidean distance d to the corresponding pixel of the background image is calculated as
d = sqrt((x_r - μ_r)^2 + (x_g - μ_g)^2 + (x_b - μ_b)^2),
where x_r, x_g, x_b are the red, green and blue components of the pixel vector of the image in the original gait sequence, and μ_r, μ_g, μ_b are the red, green and blue components of the background pixel vector.
The mean value of the R, G and B components over all pixels of the image is taken as the threshold Ta. If the Euclidean distance between an image pixel and the corresponding background pixel is not less than Ta, the pixel is foreground; otherwise it is background. The threshold Ta measures the separation between foreground and background, i.e. a foreground pixel differs from the background by at least Ta. If the threshold is too large, pixels are more likely to be judged as background and foreground is more likely to be misclassified as background; if the threshold is too small, pixels are more likely to be judged as foreground and background is more likely to be misclassified as foreground. Let B(x, y) be the background image and F(x, y) the current frame; each pixel of the current frame is traversed, and if
sqrt((F_r(x, y) - B_r(x, y))^2 + (F_g(x, y) - B_g(x, y))^2 + (F_b(x, y) - B_b(x, y))^2) < Ta,
the pixel is regarded as a background point and removed, where the subscripts r, g, b denote the R, G and B color components.
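The Euclidean-distance background removal of step S2 can likewise be sketched as follows; taking the threshold as the mean of all R, G and B values of the current frame is one reading of the description, and the array names are illustrative.

```python
import numpy as np

def remove_background(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Binary silhouette from per-pixel Euclidean distance in RGB space.

    frame, background: H x W x 3 RGB arrays of the same size.
    Returns an H x W uint8 mask (255 = foreground, 0 = background).
    """
    diff = frame.astype(np.float64) - background.astype(np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=2))   # Euclidean distance d per pixel
    ta = frame.astype(np.float64).mean()      # threshold Ta: mean of all R, G, B values
    return np.where(dist >= ta, 255, 0).astype(np.uint8)
```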
The median filtering described in step S3 consists of the following steps:
S31. The filter template is moved over the image so that its center coincides with a pixel position in the image;
S32. The gray value of each pixel covered by the filter template is read;
S33. These gray values are sorted in ascending order;
S34. The middle value of the sorted sequence is assigned to the pixel at the center of the filter template; if the template contains an odd number of elements, the median is the gray value of the middle element after sorting; if it contains an even number of elements, the median is the mean of the gray values of the two middle elements.
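For illustration, steps S31-S34 correspond to the following sketch; the 3 x 3 template size and the decision to leave border pixels unchanged are assumptions of the sketch, since the description does not fix a template size. An equivalent result for odd template sizes can also be obtained with a library routine such as OpenCV's medianBlur.

```python
import numpy as np

def median_filter(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Median filtering as in S31-S34 with a size x size template.

    img: 2-D grayscale or binary array; border pixels are left unchanged.
    """
    out = img.copy()
    r = size // 2
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1]  # S31/S32: read the template pixels
            out[y, x] = np.median(window)                   # S33/S34: sort and take the middle value
    return out
```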
The contour pixel extraction described in step S4 is as follows:
S41. Starting from the first pixel of the first row of the image, scan horizontally until a coordinate (x1, y1) whose color is white is found, and mark this point; continue scanning horizontally from this point until a black pixel is found, and record it as (x2, y1);
S42. Starting from the first pixel of the next row, traverse horizontally in the same manner;
S43. Repeat step S42 until the whole image has been traversed.
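A non-limiting sketch of this row-wise scan is given below; for brevity it records the leftmost and rightmost white pixel of each row, which corresponds to the (x1, y1)/(x2, y1) pair of step S41 when each row contains a single silhouette segment.

```python
import numpy as np

def contour_edges(mask: np.ndarray):
    """Row-wise contour scan (S41-S43) on a binary silhouette.

    mask: H x W array, foreground (white) nonzero.
    Returns a list of (left_x, right_x, y) tuples for rows containing foreground.
    """
    edges = []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y] > 0)   # white pixels in this row
        if xs.size:                        # row intersects the silhouette
            edges.append((int(xs[0]), int(xs[-1]), y))
    return edges
```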
The feature extraction in step S5 is as follows:
S51. Extract the static feature value based on the aspect ratio:
Read the height y and the width x of the silhouette in a single frame, then compute the aspect ratio k = y / x;
S52. Extract the dynamic feature values based on the distances to the gait contour centroid:
The centroid of the outer contour is computed from the centroid formula x_c = (1/n) * sum_{i=1..n} x_i, y_c = (1/n) * sum_{i=1..n} y_i, where (x_c, y_c) is the centroid, n is the number of pixels on the outer contour, 1 <= i <= n, and (x_i, y_i) are the coordinates of the contour pixels. The dynamic feature values are the distances from four feature points of the figure image, the highest point (x1, y1), the lowest point (x2, y2), the leftmost point (x3, y3) and the rightmost point (x4, y4), to the centroid; each distance satisfies
d_j = sqrt((x_j - x_c)^2 + (y_j - y_c)^2), j = 1, 2, 3, 4.
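The two feature families of step S5 can be sketched as follows; for simplicity the centroid is taken over all silhouette pixels rather than only the extracted outer-contour pixels used in the description, and the function name is illustrative.

```python
import numpy as np

def gait_features(mask: np.ndarray):
    """Static aspect-ratio feature (S51) and four centroid-distance features (S52).

    mask: H x W binary silhouette (foreground nonzero).
    Returns (k, [d_top, d_bottom, d_left, d_right]).
    """
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    k = height / width                  # S51: aspect ratio k = y / x

    cx, cy = xs.mean(), ys.mean()       # S52: centroid (x_c, y_c)
    extremes = [
        (xs[ys.argmin()], ys.min()),    # highest point (x1, y1)
        (xs[ys.argmax()], ys.max()),    # lowest point (x2, y2)
        (xs.min(), ys[xs.argmin()]),    # leftmost point (x3, y3)
        (xs.max(), ys[xs.argmax()]),    # rightmost point (x4, y4)
    ]
    dists = [float(np.hypot(x - cx, y - cy)) for x, y in extremes]
    return k, dists
```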
The concrete method by which step S6 obtains the new feature vector is as follows:
S61. Take the first four columns (the dynamic features) of the original feature vectors of the original training set together with the sixth column (the labels) as the new training set, and the first four columns of the original test set as the new test set, and perform the following operations on this new training set and test set:
S611. Select the radial basis kernel function K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 γ^2)), where γ = 0.00001;
S612. Select the C-support vector machine as the individual binary classifier model, with C = 1000, and solve the dual problem
min_a (1/2) * sum_i sum_j a_i a_j y_i y_j K(x_i, x_j) - sum_i a_i, subject to sum_i a_i y_i = 0 and 0 <= a_i <= C;
S613. Use the training data to obtain the optimal parameters a*, then compute b* = y_j - sum_i a_i* y_i K(x_i, x_j) for some j with 0 < a_j* < C;
S614. Construct the decision function f(x) = sgn(sum_i a_i* y_i K(x_i, x) + b*);
S615. For the data of each sample, construct a one-versus-one multi-class model;
S616. The classification results form the first column of the new set;
S62. Take the fifth column (the static feature) of the original feature vectors of the original training set together with the sixth column (the labels) as the new training set, and the fifth column of the original test set as the new test set; performing the same operations on this new training set and test set yields the second column of the new set;
S63. The last column of labels directly forms the third column of the new set.
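A non-limiting sketch of the two SVM stages of S61/S62 is shown below using scikit-learn's SVC, an off-the-shelf RBF C-SVM with built-in one-versus-one multi-class voting; the embodiment itself mentions LibSVM, so the library choice and the array layout are assumptions of this sketch. Note that the kernel exp(-||x_i - x_j||^2 / (2 γ^2)) maps to scikit-learn's exp(-gamma * ||x_i - x_j||^2) with gamma = 1 / (2 γ^2).

```python
import numpy as np
from sklearn.svm import SVC

def svm_stage(train_X, train_y, test_X, C=1000.0, gamma_rbf=0.00001):
    """One SVM stage of S61/S62: RBF C-SVM with one-versus-one multi-class voting."""
    clf = SVC(C=C, kernel="rbf",
              gamma=1.0 / (2.0 * gamma_rbf ** 2),  # convert the kernel parameterization
              decision_function_shape="ovo")       # one-versus-one, as in S615
    clf.fit(train_X, train_y)
    return clf.predict(test_X)

def build_new_set(train, test):
    """train, test: arrays with 6 columns (4 dynamic, 1 static, 1 label).

    Returns an M x 3 array: [dynamic-SVM label, static-SVM label, true label],
    i.e. the three columns of the new set (S616, S62, S63).
    """
    col1 = svm_stage(train[:, :4], train[:, 5], test[:, :4])    # S61: dynamic features
    col2 = svm_stage(train[:, 4:5], train[:, 5], test[:, 4:5])  # S62: static feature
    return np.column_stack([col1, col2, test[:, 5]])            # S63: append the labels
```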
The flow of the fusion algorithm in step S7 is:
S71. Suppose there are k feature-characterization methods and t classes, and let Θ denote the set of classes, i.e. Θ = {O_1, O_2, ..., O_t};
S72. For an unknown sample O ∈ Θ, the maximum a posteriori probability (MAP) fusion recognition decision is written as
O_MAP = argmax_{O ∈ Θ} P(O | O_1, O_2, ..., O_k),
where O_i is the recognition decision on the unknown sample made by the SVM of the i-th characterization method, O_MAP is the multi-feature fusion recognition decision for the sample, and P(O | O_1, O_2, ..., O_k) is the joint probability of the class given the k SVM recognition results;
S73. According to Bayes' formula, the fusion recognition decision is
O_MAP = argmax_{O ∈ Θ} P(O_1, O_2, ..., O_k | O) P(O) / P(O_1, O_2, ..., O_k),
where P(O) is the prior probability of O ∈ Θ; since the denominator is the total probability and does not depend on the value of O, this simplifies to
O_MAP = argmax_{O ∈ Θ} P(O_1, O_2, ..., O_k | O) P(O);
S74. Because the various feature extraction methods are mutually independent, this can be expressed as
O_MAP = argmax_{O ∈ Θ} P(O) * prod_{i=1..k} P(O_i | O),
where P(O_i | O) is the likelihood function of the SVM recognition result of the i-th characterization method.
In step S7, the remaining half of the original recognition samples is used as the recognition samples and the result of the previous step is used as the training samples; classification is performed, the likelihood probability of each person is calculated, and the person with the maximum probability is taken as the final recognition result.
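For illustration, the fusion of steps S71-S74 amounts to a naive-Bayes classifier over the per-feature SVM decisions; the sketch below also includes the m-estimate smoothing used in the embodiment. The uniform prior p = 1/t and the value m = 1 are illustrative assumptions, as the description does not fix them.

```python
import numpy as np
from collections import Counter, defaultdict

def bayes_fuse(train_rows, test_rows, m=1.0):
    """Naive-Bayes fusion of k SVM decisions per sample, with m-estimate smoothing.

    train_rows: list of (svm_labels, true_label) pairs, svm_labels being the
                tuple of k SVM outputs for one sample (first k columns of the new set).
    test_rows:  list of svm_labels tuples to classify.
    """
    classes = sorted({c for _, c in train_rows})
    prior = Counter(c for _, c in train_rows)
    n_total = len(train_rows)
    p = 1.0 / len(classes)                 # assumed uniform m-estimate prior

    k = len(train_rows[0][0])
    counts = [defaultdict(int) for _ in range(k)]
    for svm_labels, c in train_rows:       # conditional counts per feature index
        for i, o in enumerate(svm_labels):
            counts[i][(c, o)] += 1

    preds = []
    for svm_labels in test_rows:
        best_c, best_score = None, -np.inf
        for c in classes:                  # O_MAP = argmax P(O) * prod_i P(O_i | O)
            n_c = prior[c]
            score = np.log(n_c / n_total)  # class prior P(O)
            for i, o in enumerate(svm_labels):
                # m-estimate keeps unseen (class, output) pairs from zeroing the product
                score += np.log((counts[i][(c, o)] + m * p) / (n_c + m))
            if score > best_score:
                best_c, best_score = c, score
        preds.append(best_c)
    return preds
```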
Advantages and beneficial effects of the present invention
The present invention can remove the background quickly, and uses graying and median filtering to improve its adaptability to different conditions. In addition, because the prior probability of a conventional Bayesian algorithm is set manually from past experience, its recognition rate is rather low; after fusing the SVM algorithm, the prior probability of the Bayesian algorithm is improved, so the overall recognition accuracy is significantly increased.
Brief description of the drawings
Fig. 1 is the overall flow of the gait recognition of the present invention;
Fig. 2 is a grayscale image;
Fig. 3 is a binary image after background removal;
Fig. 4 is a schematic diagram of extracting the static feature value based on the aspect ratio;
Fig. 5 is a schematic diagram of extracting the dynamic feature values based on the distances to the gait contour centroid.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and embodiments.
In this embodiment of the gait recognition identity authentication method based on feature fusion, the walking gaits of 7 people are identified, and 30 images are captured from the video of each person; as shown in Fig. 1, the specific implementation comprises the following content and steps:
Step S1. A camera acquires in real time the background image of the current environment and the original gait images of the detection target; the images are grayed to obtain a grayscale image sequence, as shown in Fig. 2. The camera is mounted at a side-view position of the walking area to be monitored, and the background image of the current environment and the original gait images of the detection target acquired in real time are saved in a designated folder.
Step S2. The background of the grayed gait image sequence is removed by the Euclidean-distance algorithm, preliminarily obtaining the binary gait image sequence with the background removed.
Step S3. The preliminarily obtained gait image sequence is median-filtered, finally obtaining the binary gait image sequence with the background removed, as shown in Fig. 3.
Step S4. The figure contour pixels are extracted directly to obtain the contour-edge feature vectors.
Step S5. The obtained contour-edge feature vectors are processed to obtain the static feature value based on the aspect ratio and the dynamic feature values based on the distances to the gait contour centroid, as shown in Fig. 4 and Fig. 5 respectively.
Step S6. Using all gait feature vectors in the gait feature vector database, the SVM classifier based on the radial basis kernel function classifies the dynamic features and the static features separately to obtain a new feature vector;
The method of obtaining the new feature vector is as follows:
S61. Take the first four columns (the dynamic features) of the original feature vectors of the original training set together with the sixth column (the labels) as the new training set, and the first four columns of the original test set as the new test set, and perform the following operations on this new training set and test set:
S611. Use LibSVM to train a one-versus-one multi-class C-support vector machine model on the data of each sample, with penalty parameter C = 1000 and kernel parameter γ = 0.00001;
S612. The classification results form the first column of the new set: 007, 002, 005, 004, 001, 006, 006, 006, 002, 003, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006;
S62. Take the fifth column (the static feature) of the original feature vectors of the original training set together with the sixth column (the labels) as the new training set, and the fifth column of the original test set as the new test set; performing the same operations yields the second column of the new set: 001, 001, 001, 007, 007, 007, 001, 001, 001, 001, 001, 004, 001, 001, 001, 005, 005, 001, 001, 004, 004, 004, 004, 004, 004, 001, 001, 005, 005, 005;
S63. The last column of labels directly forms the third column of the new set: 007, 002, 005, 004, 001, 006, 006, 006, 002, 003, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006, 006.
Step S7. Half of the obtained feature vector set is selected at random as the training set and the remaining half as the test set, and the Bayesian algorithm based on the m-estimate is used to identify the test set.
The flow of the m-estimate Bayesian algorithm is as follows:
S71. Compute, from the training set, the conditional probability of each attribute for each of the persons 001-007; the results are shown in Table 1, where NULL indicates that the sample did not appear after SVM recognition;
Table 1 Bayesian training on the SVM recognition decisions
S72. Use the SVM recognition result as the prior probability of the detection target; if some attribute value in the test set does not appear in the training set, its probability is replaced by the m-estimate (written out after step S74 below);
S73. Compute the posterior probability of the test set under each person taken as the prior, and select the maximum as the label of the test set; the results are shown in Table 2;
Table 2 Maximum a posteriori probability of the recognition samples
Photo 1: 5%
Photo 2: 0.83%
Photo 3: 0.83%
Photo 4: 40%
Photo 5: 5%
Photo 6: 5%
Photo 7: 40%
Photo 8: 6.67%
Photo 9: 6.67%
Photo 10: 40%
Photo 11: 40%
Photo 12: 33.33%
Photo 13: 33.33%
Photo 14: 6.67%
Photo 15: 6.67%
S74. Repeat step S73 to judge all photos captured from the video stream and obtain the maximum likelihood probability to determine the final person; since the probability for person 006 is 95% and the probability for person 005 is 5%, the final detection target is person 006.
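For completeness, the m-estimate referred to in step S72 is the standard smoothed relative frequency shown below; the symbols p (prior estimate of the attribute value) and m (equivalent sample size) are not fixed by the description and are left as parameters.

```latex
\hat{P}(o \mid c) \;=\; \frac{n_{o,c} + m\,p}{n_c + m}
```

Here n_c is the number of training samples of person c and n_{o,c} is the number of those samples for which the corresponding SVM output equals o; when n_{o,c} = 0 (the NULL entries of Table 1), the estimate falls back to m*p / (n_c + m) instead of zero.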
Step S8. exports recognition result.
The method provided by the present invention was tested repeatedly on DataSetA of the CASIA gait database provided by the Institute of Automation, Chinese Academy of Sciences, and compared with other common gait recognition methods. DataSetA of the CASIA gait database contains data of 20 people; each person has 12 image sequences in 3 walking directions, with 4 image sequences per direction. We train on 3 sequences of the side-view subset of DataSetA and test on 1 sequence, and compare with algorithms such as Image Self-similarity, Motion-based and Silhouette-based; the results are shown in Table 3.
Table 3 Comparison of the recognition rates of various algorithms
Algorithm: Correct recognition rate
Image Self-similarity: 73.0%
Motion-based: 82.5%
Silhouette-based: 71.0%
Gait appearance features: 87.5%
Baseline algorithm: 79.0%
Statistical shape analysis: 89.0%
Method of the present invention: 95.0%
Meanwhile, the present invention was also tested repeatedly under practical conditions and compared with the single-algorithm methods, as shown in Table 4 below.
Table 4 Comparison of recognition rates before and after algorithm fusion
Algorithm: Correct recognition rate
Single Bayesian algorithm: 60.0%
Single SVM algorithm: 83.3%
Method of the present invention: 95.0%
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of the claims of the present invention.

Claims (9)

1. A gait recognition identity authentication method based on feature fusion, characterized by comprising the following steps:
S1. A camera acquires in real time the background image of the current environment and the original gait images of a detection target, and the images are grayed to obtain a grayscale image sequence;
S2. The background of the grayed gait image sequence is removed by a Euclidean-distance algorithm, preliminarily obtaining a binary gait image sequence with the background removed;
S3. The preliminarily obtained gait image sequence is median-filtered to remove isolated-point noise, obtaining the final binary gait image sequence with the background removed;
S4. The figure contour pixels are extracted directly to obtain contour-edge feature vectors;
S5. The obtained contour-edge feature vectors are processed to obtain a static feature value based on the aspect ratio and dynamic feature values based on the distances to the gait contour centroid;
S6. Using all gait feature vectors in a gait feature vector database, an SVM classifier based on a radial basis kernel function classifies the dynamic features and the static features separately to obtain a new feature vector;
S7. The training result of the SVM is passed to a Bayesian algorithm based on the m-estimate for identification;
S8. The recognition result is output.
2. The gait recognition identity authentication method according to claim 1, characterized in that: in the gait image graying process described in step S1, the R, G and B components are weighted and averaged with different weights according to their perceptual importance; because the human eye is most sensitive to green and least sensitive to blue, a more reasonable grayscale image is obtained by weighting the three RGB components according to the following formula:
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j)
3. The gait recognition identity authentication method according to claim 1, characterized in that the Euclidean-distance background removal described in step S2 is as follows: for each color pixel of the original gait image, the Euclidean distance d to the corresponding pixel of the background image is calculated as
d = sqrt((x_r - μ_r)^2 + (x_g - μ_g)^2 + (x_b - μ_b)^2),
where x_r, x_g, x_b are the red, green and blue components of the pixel vector of the image in the original gait sequence, and μ_r, μ_g, μ_b are the red, green and blue components of the background pixel vector;
the mean value of the R, G and B components over all pixels of the image is taken as the threshold Ta; if the Euclidean distance between an image pixel and the corresponding background pixel is not less than Ta, the pixel is foreground, otherwise it is background; the threshold Ta measures the separation between foreground and background, i.e. a foreground pixel differs from the background by at least Ta; if the threshold is too large, pixels are more likely to be judged as background and foreground is more likely to be misclassified as background; if the threshold is too small, pixels are more likely to be judged as foreground and background is more likely to be misclassified as foreground; B(x, y) is the background image, F(x, y) denotes traversing each pixel of the current frame, and if
sqrt((F_r(x, y) - B_r(x, y))^2 + (F_g(x, y) - B_g(x, y))^2 + (F_b(x, y) - B_b(x, y))^2) < Ta,
the pixel is regarded as a background point and removed, where the subscripts r, g, b denote the R, G and B color components of the pixel.
4. The gait recognition identity authentication method according to claim 1, characterized in that the median filtering described in step S3 consists of the following steps:
S31. The filter template is moved over the image so that its center coincides with a pixel position in the image;
S32. The gray value of each pixel covered by the filter template is read;
S33. These gray values are sorted in ascending order;
S34. The middle value of the sorted sequence is assigned to the pixel at the center of the filter template; if the template contains an odd number of elements, the median is the gray value of the middle element after sorting; if it contains an even number of elements, the median is the mean of the gray values of the two middle elements.
5. The gait recognition identity authentication method according to claim 1, characterized in that the contour pixel extraction described in step S4 is as follows:
S41. Starting from the first pixel of the first row of the image, scan horizontally until a coordinate (x1, y1) whose color is white is found, and mark this point; continue scanning horizontally from this point until a black pixel is found, and record it as (x2, y1);
S42. Starting from the first pixel of the next row, traverse horizontally in the same manner;
S43. Repeat step S42 until the whole image has been traversed.
6. The gait recognition identity authentication method according to claim 5, characterized in that the concrete method of step S5 is as follows:
S51. Extract the static feature value based on the aspect ratio:
Read the height y and the width x of the silhouette in a single frame, then compute the aspect ratio k = y / x;
S52. Extract the dynamic feature values based on the distances to the gait contour centroid:
The centroid of the outer contour is computed from the centroid formula x_c = (1/n) * sum_{i=1..n} x_i, y_c = (1/n) * sum_{i=1..n} y_i, where (x_c, y_c) is the centroid, n is the number of pixels on the outer contour, 1 <= i <= n, and (x_i, y_i) are the coordinates of the contour pixels; the dynamic feature values are the distances from the four feature points of the figure image, the highest point (x1, y1), the lowest point (x2, y2), the leftmost point (x3, y3) and the rightmost point (x4, y4), to the centroid, and each distance satisfies d_j = sqrt((x_j - x_c)^2 + (y_j - y_c)^2), j = 1, 2, 3, 4.
7. The gait recognition identity authentication method according to claim 1, characterized in that the concrete method of step S6 is as follows:
S61. Take the first four columns (the dynamic features) of the original feature vectors of the original training set together with the sixth column (the labels) as the new training set, and the first four columns of the original test set as the new test set, and perform the following operations on this new training set and test set:
S611. Select the radial basis kernel function K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 γ^2)), where γ = 0.00001;
S612. Select the C-support vector machine as the individual binary classifier model, with C = 1000, and solve the dual problem min_a (1/2) * sum_i sum_j a_i a_j y_i y_j K(x_i, x_j) - sum_i a_i, subject to sum_i a_i y_i = 0 and 0 <= a_i <= C;
S613. Use the training data to obtain the optimal parameters a*, then compute b* = y_j - sum_i a_i* y_i K(x_i, x_j) for some j with 0 < a_j* < C;
S614. Construct the decision function f(x) = sgn(sum_i a_i* y_i K(x_i, x) + b*);
S615. For the data of each sample, construct a one-versus-one multi-class model;
S616. The classification results form the first column of the new set;
S62. Take the fifth column (the static feature) of the original feature vectors of the original training set together with the sixth column (the labels) as the new training set, and the fifth column of the original test set as the new test set; performing the same operations on this new training set and test set yields the second column of the new set;
S63. The last column of labels directly forms the third column of the new set.
8. The gait recognition identity authentication method according to claim 1, characterized in that the concrete flow of the algorithm in step S7 is:
S71. Suppose there are k feature-characterization methods and t classes, and let Θ denote the set of classes, i.e. Θ = {O_1, O_2, ..., O_t};
S72. For an unknown sample O ∈ Θ, the maximum a posteriori probability (MAP) fusion recognition decision is written as O_MAP = argmax_{O ∈ Θ} P(O | O_1, O_2, ..., O_k), where O_i is the recognition decision on the unknown sample made by the SVM of the i-th characterization method, O_MAP is the multi-feature fusion recognition decision for the sample, and P(O | O_1, O_2, ..., O_k) is the joint probability of the class given the k SVM recognition results;
S73. According to Bayes' formula, the fusion recognition decision is O_MAP = argmax_{O ∈ Θ} P(O_1, O_2, ..., O_k | O) P(O) / P(O_1, O_2, ..., O_k), where P(O) is the prior probability of O ∈ Θ; since the denominator is the total probability and does not depend on the value of O, this simplifies to O_MAP = argmax_{O ∈ Θ} P(O_1, O_2, ..., O_k | O) P(O);
S74. Because the various feature extraction methods are mutually independent, this can be expressed as O_MAP = argmax_{O ∈ Θ} P(O) * prod_{i=1..k} P(O_i | O), where P(O_i | O) is the likelihood function of the SVM recognition result of the i-th characterization method.
9. The gait recognition identity authentication method according to claim 1, characterized in that the final recognition result is obtained as follows: the remaining half of the original recognition samples is used as the recognition samples and the result of the previous step is used as the training samples; classification is performed, the likelihood probability of each person is calculated, and the person with the maximum probability is taken as the final recognition result.
CN201510749195.4A 2015-11-05 2015-11-05 Gait recognition identity authentication method based on feature fusion Expired - Fee Related CN105335725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510749195.4A CN105335725B (en) 2015-11-05 2015-11-05 Gait recognition identity authentication method based on feature fusion

Publications (2)

Publication Number Publication Date
CN105335725A true CN105335725A (en) 2016-02-17
CN105335725B CN105335725B (en) 2019-02-26

Family

ID=55286241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510749195.4A Expired - Fee Related CN105335725B (en) Gait recognition identity authentication method based on feature fusion

Country Status (1)

Country Link
CN (1) CN105335725B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101571924A (en) * 2009-05-31 2009-11-04 北京航空航天大学 Gait recognition method and system with multi-region feature integration
CN101630364A (en) * 2009-08-20 2010-01-20 天津大学 Method for gait information processing and identity identification based on fusion feature
CN104061907A (en) * 2014-07-16 2014-09-24 中南大学 Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN104200200A (en) * 2014-08-28 2014-12-10 公安部第三研究所 System and method for realizing gait recognition by virtue of fusion of depth information and gray-scale information

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295544A (en) * 2016-08-04 2017-01-04 山东师范大学 A kind of unchanged view angle gait recognition method based on Kinect
CN106295544B (en) * 2016-08-04 2019-05-28 山东师范大学 A kind of unchanged view angle gait recognition method based on Kinect
CN106339687A (en) * 2016-08-30 2017-01-18 吉林大学 Self-adaptive pedestrian street-crossing signal calculating method based on video
CN106529499A (en) * 2016-11-24 2017-03-22 武汉理工大学 Fourier descriptor and gait energy image fusion feature-based gait identification method
CN106803072A (en) * 2016-12-30 2017-06-06 中国计量大学 Variable visual angle gait recognition method based on the fusion of quiet behavioral characteristics
CN107766819B (en) * 2017-10-18 2021-06-18 陕西国际商贸学院 Video monitoring system and real-time gait recognition method thereof
CN107766819A (en) * 2017-10-18 2018-03-06 陕西国际商贸学院 A kind of video monitoring system and its real-time gait recognition methods
CN107818300A (en) * 2017-10-18 2018-03-20 河海大学 A kind of gait denoising method based on HMM
CN108563939A (en) * 2018-04-25 2018-09-21 常州大学 Human body identification based on gait geometric locus feature
CN108563939B (en) * 2018-04-25 2022-05-20 常州大学 Human body identity recognition based on gait track curve characteristics
CN109462691A (en) * 2018-10-27 2019-03-12 中国人民解放军战略支援部队信息工程大学 A kind of implicit means of defence and system based on Fusion
CN109462691B (en) * 2018-10-27 2021-01-26 中国人民解放军战略支援部队信息工程大学 Implicit protection method and system based on multi-sensor data fusion
CN109858351B (en) * 2018-12-26 2021-05-14 中南大学 Gait recognition method based on hierarchy real-time memory
CN109858351A (en) * 2018-12-26 2019-06-07 中南大学 A kind of gait recognition method remembered in real time based on level
CN113544735A (en) * 2018-12-28 2021-10-22 日本电气株式会社 Personal authentication apparatus, control method, and program
US12051273B2 (en) 2019-01-29 2024-07-30 Bigo Technology Pte. Ltd. Method for recognizing actions, device and storage medium
CN111488773A (en) * 2019-01-29 2020-08-04 广州市百果园信息技术有限公司 Action recognition method, device, equipment and storage medium
CN111488773B (en) * 2019-01-29 2021-06-11 广州市百果园信息技术有限公司 Action recognition method, device, equipment and storage medium
CN110161035A (en) * 2019-04-26 2019-08-23 浙江大学 Body structure surface crack detection method based on characteristics of image and bayesian data fusion
CN110161035B (en) * 2019-04-26 2020-04-10 浙江大学 Structural surface crack detection method based on image feature and Bayesian data fusion
US10783406B1 (en) 2019-04-26 2020-09-22 Zhejiang University Method for detecting structural surface cracks based on image features and bayesian data fusion
CN112401876A (en) * 2019-08-21 2021-02-26 斯沃奇集团研究及开发有限公司 Method and system for human gait detection
CN111168667B (en) * 2019-12-13 2022-08-05 天津大学 Robot control method based on Bayesian classifier
CN111168667A (en) * 2019-12-13 2020-05-19 天津大学 Robot control method based on Bayesian classifier
CN112131950A (en) * 2020-08-26 2020-12-25 浙江工业大学 Gait recognition method based on Android mobile phone
CN112131950B (en) * 2020-08-26 2024-05-07 浙江工业大学 Gait recognition method based on Android mobile phone
CN112926390A (en) * 2021-01-26 2021-06-08 国家康复辅具研究中心 Gait motion mode recognition method and model establishment method
CN113591552A (en) * 2021-06-18 2021-11-02 新绎健康科技有限公司 Method and system for identity recognition based on gait acceleration data
CN114840834A (en) * 2022-04-14 2022-08-02 浙江大学 Implicit identity authentication method based on gait characteristics
CN114840834B (en) * 2022-04-14 2024-06-11 浙江大学 Implicit identity authentication method based on gait characteristics
CN117523642A (en) * 2023-12-01 2024-02-06 北京理工大学 Face recognition method based on optimal-spacing Bayesian classification model

Also Published As

Publication number Publication date
CN105335725B (en) 2019-02-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190226

Termination date: 20211105