CN104299012A - Gait recognition method based on deep learning - Google Patents
- Publication number
- CN104299012A CN104299012A CN201410587758.XA CN201410587758A CN104299012A CN 104299012 A CN104299012 A CN 104299012A CN 201410587758 A CN201410587758 A CN 201410587758A CN 104299012 A CN104299012 A CN 104299012A
- Authority
- CN
- China
- Prior art keywords
- gait
- energygram
- matching
- convolutional neural
- neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses a gait recognition method based on deep learning. Exploiting the strong learning ability of convolutional neural networks, the method recognizes the identity of a person in a video from their gait by means of a weight-sharing two-channel convolutional neural network. The method is highly robust to gait changes across large view angles and effectively addresses the low accuracy of existing gait recognition technologies under cross-view conditions. It can be widely applied in scenes equipped with video surveillance, such as security monitoring in airports and supermarkets, personnel identification, and criminal detection.
Description
Technical field
The present invention relates to computer vision and pattern recognition, and in particular to a gait recognition method based on deep learning.
Background technology
In most gait recognition methods, the person's silhouette is first obtained from all frames of a video sequence and its gait energy image (Gait Energy Image, GEI) is computed; the similarities between different gait energy images are then compared, and matching is finally performed with a nearest-neighbour classifier. However, previous methods have struggled to reach practically usable accuracy when confronted with severe cross-view conditions.
Deep learning theory has achieved excellent results in fields such as speech recognition, image classification and object detection. Deep convolutional neural networks in particular possess strong autonomous learning ability and highly non-linear mappings, which makes it possible to design complex, high-precision classification models.
Summary of the invention
To solve the low accuracy of existing gait recognition technologies in cross-view gait recognition, the present invention proposes a gait recognition method based on deep learning: gait sequences are described with gait energy images, a matching model is trained with a deep convolutional neural network, and the trained model matches the identity of the person to be recognized. The method comprises a training process and a recognition process, as follows:
Training process S1: extract gait energy images from training gait video sequences with labelled identities, and repeatedly select arbitrary pairs of them to train the matching model based on the convolutional neural network until the model converges;
Recognition process S2: extract gait energy images from the single-view gait video to be recognized and from the registered gait video sequences; use the matching model based on the convolutional neural network trained in S1 to compute the similarity between the gait energy image of the single-view video to be recognized and each gait energy image of the registered gait video sequences; predict the identity according to the similarity scores; and output the recognition result.
Preferably, the matching model based on the convolutional neural network comprises a feature extraction module and a perceptron module.
Preferably, the steps of training process S1 are as follows:
Step S11: extract gait energy images from training gait video sequences covering multiple view angles;
Step S12: extract pairs of gait energy images with the same identity as positive samples, and pairs of gait energy images with different identities as negative samples;
Step S13: select a positive or negative sample and feed it into the feature extraction module of the matching model based on the convolutional neural network, extracting the feature pair corresponding to the sample's pair of gait energy images;
Step S14: feed the feature pair obtained in S13 into the perceptron module of the matching model based on the convolutional neural network, and output the matching result;
Step S15: compute the error between the matching result and the ground truth, and optimize the above matching model based on the convolutional neural network;
Step S16: repeat steps S13 to S15 until the above matching model based on the convolutional neural network converges.
Preferably, the steps of recognition process S2 are as follows:
Step S21: extract the gait energy image sequence of the registered gait video sequences;
Step S22: input the gait energy image sequence of the registered gait video sequences into the feature extraction module of the matching model based on the convolutional neural network, and compute the corresponding feature sequence;
Step S23: extract the gait energy image of the single-view gait video to be recognized;
Step S24: input the gait energy image of the single-view gait video to be recognized into the feature extraction module of the trained matching model based on the convolutional neural network, and compute the corresponding feature;
Step S25: pass the feature obtained in S24 and each feature of the sequence obtained in S22 through the perceptron module of the matching model based on the convolutional neural network to compute their similarities;
Step S26: compute the recognition result with a classifier according to the similarities obtained in S25.
Preferably, a first-run check is added in S21: on the first run of the recognition process, the gait energy images of the registered gait video sequences are extracted and steps S22 to S26 are executed in order; on subsequent runs, execution proceeds from S23 through S26;
a matching library is maintained in S22, which stores the gait energy images of the registered gait video sequences together with the corresponding features computed in S22.
Preferably, the training gait video sequences of the multiple view angles are divided into 11 view angles by viewing direction from 0 to 180 degrees.
Preferably, each registered gait video in the registered gait video sequences only needs a gait energy image extracted under a single view angle.
Preferably, in S12 the gait energy images are drawn from the gait energy images of the different view angles with equal probability.
Preferably, in S12 the ratio of the numbers of positive and negative samples equals a set value.
Preferably, in S12 the numbers of positive and negative samples are equal.
The present invention constructs a matching model based on a convolutional neural network, trains it on gait video sequences covering multiple view angles, and optimizes the relevant parameters, so that the trained matching model acquires the ability to recognize gait across view angles. In the recognition process, the matching model performs feature extraction and similarity computation on the single-view gait video to be recognized and on the registered gait video sequences, and then identifies the person in the single-view video, achieving high accuracy in cross-view gait recognition. The method can be widely applied in scenes equipped with video surveillance, such as security monitoring in airports and supermarkets, personnel identification, and criminal detection.
Brief description of the drawings
Fig. 1 is a schematic diagram of the algorithm framework of the present invention.
Fig. 2 is a schematic flow chart of the gait-based identity recognition algorithm of the present invention.
Fig. 3 shows multi-view gait energy image samples of the present invention.
Detailed description
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
For a better description, this embodiment is presented together with an actual test case: the test process corresponds to the recognition process in a practical application, and the test gait video corresponds to the single-view gait video to be recognized.
This embodiment uses a two-channel convolutional neural network with shared weights to construct the matching model, which comprises a feature extraction module and a perceptron module. The embodiment comprises a training process and a test process; its steps are described below with reference to Fig. 1 and Fig. 2:
Training process:
Step S11: extract the gait energy image sequence GEI-1, ..., GEI-i, ..., GEI-N from training gait video sequences covering multiple view angles. First, a conventional foreground segmentation method based on a Gaussian mixture model extracts the person's silhouette from each gait video sequence; the foreground region is cropped around the silhouette's centre of gravity and scaled to a common size; the average silhouette image of each sequence is then computed, and this is the gait energy image.
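The averaging step above reduces to a mean over aligned silhouettes. The following Python/NumPy sketch is illustrative rather than the patent's implementation: it assumes the silhouettes have already been segmented, centred on the centre of gravity, and scaled to a common size, and the function name is our own.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned, equal-size binary silhouettes
    (pixel values 0 or 1) into a single gait energy image (GEI)."""
    stack = np.stack([np.asarray(s, dtype=np.float64) for s in silhouettes])
    return stack.mean(axis=0)
```

A pixel's GEI value is then the fraction of frames in which that pixel belongs to the foreground, so static body parts appear bright while swinging limbs appear as grey bands.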
For example, labelled multi-view walking videos of 100 people are used as training gait video sequences. As shown in Fig. 3, with the camera roughly at the person's height, the viewing direction is divided into 11 angles: 0, 18, ..., 180 degrees. The pedestrian's identity is labelled in every sequence, human silhouettes are extracted from all 1100 gait video sequences, and the gait energy images are computed.
Step S12: extract positive and negative samples. Pairs of gait energy images with the same identity are extracted as positive samples, and pairs with different identities as negative samples; the images should be drawn from the gait energy images of the different view angles with equal probability. First, drawing the gait energy images of each view angle with equal probability ensures that all cross-view combinations are represented fairly when training the matching model based on the convolutional neural network. Second, positive and negative samples are used according to a set ratio: since same-identity pairs are far fewer than different-identity pairs, sampling at the natural rate would yield too few positive samples and cause the matching model to overfit during training. Preferably, positive and negative samples occur with equal probability.
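The sampling rules above (views drawn uniformly, positives forced to a set ratio) can be sketched as follows. The `gei_bank` layout and the function name are illustrative assumptions, not part of the patent:

```python
import random

def sample_pair(gei_bank, p_positive=0.5):
    """Draw one training pair from gei_bank, a dict mapping person id ->
    dict mapping view angle -> GEI.  Views are chosen uniformly over the
    available angles, and a positive pair is emitted with probability
    p_positive instead of the (much smaller) natural same-identity rate."""
    ids = list(gei_bank)
    if random.random() < p_positive:           # positive: same identity
        pid_a = pid_b = random.choice(ids)
    else:                                      # negative: two identities
        pid_a, pid_b = random.sample(ids, 2)
    view_a = random.choice(list(gei_bank[pid_a]))
    view_b = random.choice(list(gei_bank[pid_b]))
    label = 1 if pid_a == pid_b else 0
    return (pid_a, view_a), (pid_b, view_b), label
```

Setting `p_positive=0.5` realizes the preferred equal-probability variant; with 100 identities, sampling at the natural rate would make positives only about one percent of all pairs.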
Step S13: feed each pair of gait energy images forming the positive and negative samples of S12 into the matching model based on the convolutional neural network, and extract their corresponding features with the forward propagation algorithm. As shown in Fig. 1, the feature extraction module extracts from gait energy images GEI-a and GEI-b the corresponding features a and b. Since the same operations must be applied to both gait energy images of a sample, the module takes the form of two channels with shared weights. A typical network configuration is: the first layer has 16 convolution kernels of size 7 × 7 with stride 1, followed by a 2 × 2 spatial pooling layer with stride 2; the second layer has 64 convolution kernels of size 7 × 7 with stride 1, followed by a 2 × 2 spatial pooling layer with stride 2; the third layer has 256 convolution kernels of size 11 × 11 with stride 5.
Step S14: the perceptron module of the matching model based on the convolutional neural network compares the two features extracted in S13, produces a similarity score, makes an identity judgement, and outputs the matching result. For example, with similarity values between 0 and 1, one can decide that when the similarity exceeds 0.5 the gait video sequences corresponding to the two features share the same identity; otherwise, that they have different identities.
Step S15: using the error between the matching result and the ground truth, train the matching model based on the convolutional neural network with the error backpropagation algorithm.
Step S16: repeat steps S13 to S15 until the above matching model based on the convolutional neural network converges.
The error backpropagation algorithm above is mainly used for training multi-layer models. Its body iterates two phases, excitation propagation and weight update, until a convergence condition is reached. In the excitation propagation phase, features a and b are first fed through the perceptron module of the matching model based on the convolutional neural network to obtain the matching result; the matching result is then compared with the ground truth, yielding the error between the output layer and the supervision. In the weight update phase, the known error is first multiplied by the derivative of the layer's activation with respect to the previous layer's response, giving the gradient of the weight matrix between the two layers; the weight matrix is then adjusted by a certain proportion in the direction opposite to this gradient. The gradient is subsequently treated as the error of the previous layer, from which that layer's weight matrix is updated in turn, and so on until the whole model has been updated.
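As a concrete illustration of the two alternating phases, here is a toy NumPy version for a two-layer sigmoid perceptron trained on a single sample. The layer sizes, learning rate, input, and squared-error loss are arbitrary choices for the sketch and are not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(3, 2))      # input -> hidden weights
W2 = rng.normal(size=(1, 3))      # hidden -> output weights
x = np.array([[0.5], [-0.2]])     # toy input (stand-in for a feature pair)
y = np.array([[1.0]])             # ground-truth matching label

def step(W1, W2, lr=0.5):
    # Excitation propagation: forward pass to the matching result.
    h = sigmoid(W1 @ x)
    o = sigmoid(W2 @ h)
    err = o - y                               # output-layer error
    # Weight update: error times activation derivative gives the gradient
    # of each weight matrix; the same quantity, pushed through W2, becomes
    # the error of the previous layer.
    d_o = err * o * (1 - o)
    d_h = (W2.T @ d_o) * h * (1 - h)
    W2_new = W2 - lr * d_o @ h.T              # move against the gradient
    W1_new = W1 - lr * d_h @ x.T
    return W1_new, W2_new, float(0.5 * (err ** 2).sum())

losses = []
for _ in range(50):
    W1, W2, loss = step(W1, W2)
    losses.append(loss)
```

Each iteration repeats the two phases; the recorded loss shrinks toward the convergence condition.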
Test process: this process uses the matching model based on the convolutional neural network trained in S1 to perform feature extraction and similarity computation on the registered gait video sequences and on the test gait video, and thereby to judge identity. A set of registered gait video sequences with identity information must be prepared in advance, i.e., the gait sequences of many people (for example 1000) together with the identities of the corresponding people. Note that although providing data from multiple view angles in the registered set can strengthen recognition, the model obtained in S15 already possesses the ability to recognize gait across view angles, so each registered gait video here only needs to contain the gait video under a single angle. The test task is: given the registered gait video sequences above, predict the identity corresponding to a single-view test gait video, as follows:
Step S21: following the method described in S11, extract the gait energy image sequence of the registered gait video sequences;
Step S22: input the gait energy image sequence of the registered gait video sequences into the feature extraction module of the matching model based on the convolutional neural network, and extract for each image a feature that is robust to view changes; this representation also reduces computational complexity. To keep the feature size manageable, the sample network of step S13 has the sampling stride of its third layer increased, so that for a 128 × 128 gait energy image input the feature length is 2304 (3 × 3 × 256);
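The stated feature length can be checked with the usual valid-convolution size formula. Note that the text only says the third layer's sampling stride is increased; a stride of 8 is our assumption, chosen because it reproduces the stated 3 × 3 × 256 = 2304 under unpadded convolutions:

```python
def conv_out(size, kernel, stride):
    """Spatial output size of an unpadded ('valid') convolution or pooling layer."""
    return (size - kernel) // stride + 1

s = conv_out(128, 7, 1)     # conv1: 16 kernels 7x7, stride 1 -> 122
s = conv_out(s, 2, 2)       # pool1: 2x2, stride 2            -> 61
s = conv_out(s, 7, 1)       # conv2: 64 kernels 7x7, stride 1 -> 55
s = conv_out(s, 2, 2)       # pool2: 2x2, stride 2            -> 27
s = conv_out(s, 11, 8)      # conv3: 256 kernels 11x11, assumed stride 8 -> 3
feature_len = s * s * 256   # 3 * 3 * 256 = 2304
```

With the training configuration's original third-layer stride of 5, the output maps would instead be 4 × 4, which is why enlarging the stride shrinks the feature.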
Step S23: following the method described in S11, extract the gait energy image of the test gait video;
Step S24: for the test gait video, use the feature extraction module of the matching model based on the convolutional neural network to compute its feature robust to view changes;
Step S25: pass the feature obtained in S24 and each feature of the sequence obtained in S22 through the perceptron module of the matching model based on the convolutional neural network to compute their similarities;
Step S26: in the simplest case, a nearest-neighbour classifier determines the identity under test, i.e., the identity registered for the matching-library sequence with the highest similarity is returned.
To further speed up matching, a first-run check can be added in S21: on the first run of the test process, the gait energy images of the registered gait video sequences are extracted and steps S22 to S26 are executed in order; on subsequent runs, execution proceeds from S23 through S26. A matching library is maintained in S22, which stores the gait energy images of the registered gait video sequences together with the corresponding features computed in S22. On runs after the first, the feature extraction step for the registered sequences is therefore skipped: at S25 the feature obtained in S24 is compared for similarity directly against the features saved in the matching library, saving a large amount of time.
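The caching scheme and the nearest-neighbour decision of S25/S26 can be sketched as below. The dictionary layout and the Euclidean distance are illustrative stand-ins; in the patent the similarity is produced by the perceptron module rather than by a fixed distance:

```python
import numpy as np

match_library = {}  # identity -> cached gallery feature

def register(identity, feature):
    """Cache a registered sequence's feature so it is extracted only once."""
    match_library[identity] = np.asarray(feature, dtype=np.float64)

def identify(probe_feature):
    """Return the registered identity whose cached feature is closest
    to the probe feature (nearest-neighbour decision)."""
    probe = np.asarray(probe_feature, dtype=np.float64)
    return min(match_library,
               key=lambda ident: np.linalg.norm(match_library[ident] - probe))
```

On every run after the first, only `identify()` touches the probe; the gallery features stay cached in `match_library`, which is the time saving described above.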
This embodiment constructs a matching model based on a convolutional neural network, trains it on gait video sequences covering multiple view angles, and optimizes the relevant parameters, so that the trained matching model acquires the ability to recognize gait across view angles. In the test process, the trained matching model performs feature extraction and similarity computation on the single-view test gait video and on the registered gait video sequences, and then identifies the person in the test video, achieving high accuracy in cross-view gait recognition. The method can be widely applied in scenes equipped with video surveillance, such as security monitoring in airports and supermarkets, personnel identification, and criminal detection.
The above is only a specific embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of the present invention. The scope of protection of the present invention shall therefore be determined by the scope of protection of the appended claims.
Claims (10)
1. A gait recognition method based on deep learning, characterized in that the method comprises a training process and a recognition process, as follows:
Training process S1: extract gait energy images from training gait video sequences with labelled identities, and repeatedly select arbitrary pairs of them to train a matching model based on a convolutional neural network until the model converges;
Recognition process S2: extract gait energy images from the single-view gait video to be recognized and from the registered gait video sequences; use the matching model based on the convolutional neural network trained in S1 to compute the similarity between the gait energy image of the single-view video to be recognized and each gait energy image of the registered gait video sequences; predict the identity according to the similarity scores; and output the recognition result.
2. The method according to claim 1, characterized in that the matching model based on the convolutional neural network comprises a feature extraction module and a perceptron module.
3. The method according to claim 2, characterized in that the steps of training process S1 are as follows:
Step S11: extract gait energy images from training gait video sequences covering multiple view angles;
Step S12: extract pairs of gait energy images with the same identity as positive samples, and pairs of gait energy images with different identities as negative samples;
Step S13: select a positive or negative sample and feed it into the feature extraction module of the matching model based on the convolutional neural network, extracting the feature pair corresponding to the sample's pair of gait energy images;
Step S14: feed the feature pair obtained in S13 into the perceptron module of the matching model based on the convolutional neural network, and output the matching result;
Step S15: compute the error between the matching result and the ground truth, and optimize the above matching model based on the convolutional neural network;
Step S16: repeat steps S13 to S15 until the above matching model based on the convolutional neural network converges.
4. The method according to claim 2 or 3, characterized in that the steps of recognition process S2 are as follows:
Step S21: extract the gait energy image sequence of the registered gait video sequences;
Step S22: input the gait energy image sequence of the registered gait video sequences into the feature extraction module of the matching model based on the convolutional neural network, and compute the corresponding feature sequence;
Step S23: extract the gait energy image of the single-view gait video to be recognized;
Step S24: input the gait energy image of the single-view gait video to be recognized into the feature extraction module of the trained matching model based on the convolutional neural network, and compute the corresponding feature;
Step S25: pass the feature obtained in S24 and each feature of the sequence obtained in S22 through the perceptron module of the matching model based on the convolutional neural network to compute their similarities;
Step S26: compute the recognition result with a classifier according to the similarities obtained in S25.
5. The method according to claim 4, characterized in that a first-run check is added in S21: on the first run of the recognition process, the gait energy images of the registered gait video sequences are extracted and steps S22 to S26 are executed in order; on subsequent runs, execution proceeds from S23 through S26;
a matching library is maintained in S22, which stores the gait energy images of the registered gait video sequences together with the corresponding features computed in S22.
6. The method according to claim 5, characterized in that the training gait video sequences of the multiple view angles are divided into 11 view angles by viewing direction from 0 to 180 degrees.
7. The method according to claim 6, characterized in that each registered gait video in the registered gait video sequences only needs a gait energy image extracted under a single view angle.
8. The method according to claim 7, characterized in that in S12 the gait energy images are drawn from the gait energy images of the different view angles with equal probability.
9. The method according to claim 8, characterized in that in S12 the ratio of the numbers of positive and negative samples equals a set value.
10. The method according to claim 9, characterized in that the set value is 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410587758.XA CN104299012B (en) | 2014-10-28 | 2014-10-28 | A kind of gait recognition method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104299012A true CN104299012A (en) | 2015-01-21 |
CN104299012B CN104299012B (en) | 2017-06-30 |
Family
ID=52318733
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410587758.XA Active CN104299012B (en) | 2014-10-28 | 2014-10-28 | A kind of gait recognition method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104299012B (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105740773A (en) * | 2016-01-25 | 2016-07-06 | 重庆理工大学 | Deep learning and multi-scale information based behavior identification method |
CN105760835A (en) * | 2016-02-17 | 2016-07-13 | 天津中科智能识别产业技术研究院有限公司 | Gait segmentation and gait recognition integrated method based on deep learning |
CN106022380A (en) * | 2016-05-25 | 2016-10-12 | 中国科学院自动化研究所 | Individual identity identification method based on deep learning |
CN106919921A (en) * | 2017-03-06 | 2017-07-04 | 重庆邮电大学 | With reference to sub-space learning and the gait recognition method and system of tensor neutral net |
WO2017134554A1 (en) * | 2016-02-05 | 2017-08-10 | International Business Machines Corporation | Efficient determination of optimized learning settings of neural networks |
CN107085716A (en) * | 2017-05-24 | 2017-08-22 | 复旦大学 | Across the visual angle gait recognition method of confrontation network is generated based on multitask |
CN107292250A (en) * | 2017-05-31 | 2017-10-24 | 西安科技大学 | A kind of gait recognition method based on deep neural network |
CN107516060A (en) * | 2016-06-15 | 2017-12-26 | 阿里巴巴集团控股有限公司 | Object detection method and device |
CN108460427A (en) * | 2018-03-29 | 2018-08-28 | 国信优易数据有限公司 | A kind of disaggregated model training method, device and sorting technique and device |
CN108460340A (en) * | 2018-02-05 | 2018-08-28 | 北京工业大学 | A kind of gait recognition method based on the dense convolutional neural networks of 3D |
CN108537181A (en) * | 2018-04-13 | 2018-09-14 | 盐城师范学院 | A kind of gait recognition method based on the study of big spacing depth measure |
CN108596026A (en) * | 2018-03-16 | 2018-09-28 | 中国科学院自动化研究所 | Across the visual angle Gait Recognition device and training method of confrontation network are generated based on double fluid |
CN108921019A (en) * | 2018-05-27 | 2018-11-30 | 北京工业大学 | A kind of gait recognition method based on GEI and TripletLoss-DenseNet |
CN108965585A (en) * | 2018-06-22 | 2018-12-07 | 成都博宇科技有限公司 | A kind of method for identifying ID based on intelligent mobile phone sensor |
CN108960078A (en) * | 2018-06-12 | 2018-12-07 | 温州大学 | A method of based on monocular vision, from action recognition identity |
CN109211951A (en) * | 2018-11-16 | 2019-01-15 | 银河水滴科技(北京)有限公司 | A kind of safe examination system and safety inspection method based on image segmentation |
CN109255339A (en) * | 2018-10-19 | 2019-01-22 | 西安电子科技大学 | Classification method based on adaptive depth forest body gait energy diagram |
CN105205475B (en) * | 2015-10-20 | 2019-02-05 | 北京工业大学 | A kind of dynamic gesture identification method |
CN109344909A (en) * | 2018-10-30 | 2019-02-15 | 咪付(广西)网络技术有限公司 | A kind of personal identification method based on multichannel convolutive neural network |
CN109409297A (en) * | 2018-10-30 | 2019-03-01 | 咪付(广西)网络技术有限公司 | A kind of personal identification method based on binary channels convolutional neural networks |
CN109558834A (en) * | 2018-11-28 | 2019-04-02 | 福州大学 | A kind of multi-angle of view gait recognition method based on similarity study and kernel method |
CN109858351A (en) * | 2018-12-26 | 2019-06-07 | 中南大学 | A kind of gait recognition method remembered in real time based on level |
CN110096972A (en) * | 2019-04-12 | 2019-08-06 | 重庆科芮智能科技有限公司 | Data guard method, apparatus and system |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108121986B (en) * | 2017-12-29 | 2019-12-17 | 深圳云天励飞技术有限公司 | Object detection method and device, computer device and computer readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101241551B (en) * | 2008-03-06 | 2011-02-09 | 复旦大学 | Gait recognition method based on tangent vector |
US20120321136A1 (en) * | 2011-06-14 | 2012-12-20 | International Business Machines Corporation | Opening management through gait detection |
Non-Patent Citations (5)
Title |
---|
JU MAN et al.: "Individual recognition using gait energy image", 《IEEE》 *
SIJIN LI et al.: "Heterogeneous Multi-task Learning for Human Pose Estimation with Deep Convolutional Neural Network", 《IEEE》 *
WANG LEI: "Gait recognition research based on gait energy image and weighted mass vector", 《China Master's Theses Full-text Database, Information Science and Technology》 *
WANG KEJUN et al.: "Gait recognition method based on gait energy image and 2D principal component analysis", 《Journal of Image and Graphics》 *
XU KE: "Research on the application of convolutional neural networks in image recognition", 《China Master's Theses Full-text Database, Information Science and Technology》 *
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105205475B (en) * | 2015-10-20 | 2019-02-05 | 北京工业大学 | Dynamic gesture recognition method |
CN105740773A (en) * | 2016-01-25 | 2016-07-06 | 重庆理工大学 | Behavior recognition method based on deep learning and multi-scale information |
CN105740773B (en) * | 2016-01-25 | 2019-02-01 | 重庆理工大学 | Behavior recognition method based on deep learning and multi-scale information |
WO2017134554A1 (en) * | 2016-02-05 | 2017-08-10 | International Business Machines Corporation | Efficient determination of optimized learning settings of neural networks |
US11093826B2 (en) | 2016-02-05 | 2021-08-17 | International Business Machines Corporation | Efficient determination of optimized learning settings of neural networks |
CN105760835B (en) * | 2016-02-17 | 2018-03-06 | 银河水滴科技(北京)有限公司 | Integrated gait segmentation and gait recognition method based on deep learning |
CN105760835A (en) * | 2016-02-17 | 2016-07-13 | 天津中科智能识别产业技术研究院有限公司 | Integrated gait segmentation and gait recognition method based on deep learning |
CN106022380A (en) * | 2016-05-25 | 2016-10-12 | 中国科学院自动化研究所 | Individual identity recognition method based on deep learning |
CN107516060A (en) * | 2016-06-15 | 2017-12-26 | 阿里巴巴集团控股有限公司 | Object detection method and device |
CN106919921A (en) * | 2017-03-06 | 2017-07-04 | 重庆邮电大学 | Gait recognition method and system combining subspace learning and tensor neural networks |
CN107085716B (en) * | 2017-05-24 | 2021-06-04 | 复旦大学 | Cross-view gait recognition method based on a multi-task generative adversarial network |
CN107085716A (en) * | 2017-05-24 | 2017-08-22 | 复旦大学 | Cross-view gait recognition method based on a multi-task generative adversarial network |
CN107292250A (en) * | 2017-05-31 | 2017-10-24 | 西安科技大学 | Gait recognition method based on deep neural networks |
CN108460340A (en) * | 2018-02-05 | 2018-08-28 | 北京工业大学 | Gait recognition method based on 3D dense convolutional neural networks |
CN108596026A (en) * | 2018-03-16 | 2018-09-28 | 中国科学院自动化研究所 | Cross-view gait recognition device and training method based on a two-stream generative adversarial network |
CN108596026B (en) * | 2018-03-16 | 2020-06-30 | 中国科学院自动化研究所 | Cross-view gait recognition device and training method based on a two-stream generative adversarial network |
CN108460427A (en) * | 2018-03-29 | 2018-08-28 | 国信优易数据有限公司 | Classification model training method and apparatus, and classification method and apparatus |
CN108537181A (en) * | 2018-04-13 | 2018-09-14 | 盐城师范学院 | Gait recognition method based on large-margin deep metric learning |
CN108921019A (en) * | 2018-05-27 | 2018-11-30 | 北京工业大学 | Gait recognition method based on GEI and TripletLoss-DenseNet |
CN108921019B (en) * | 2018-05-27 | 2022-03-08 | 北京工业大学 | Gait recognition method based on GEI and TripletLoss-DenseNet |
CN108960078A (en) * | 2018-06-12 | 2018-12-07 | 温州大学 | Identity recognition method from actions based on monocular vision |
CN108965585A (en) * | 2018-06-22 | 2018-12-07 | 成都博宇科技有限公司 | Identity recognition method based on smartphone sensors |
CN109255339B (en) * | 2018-10-19 | 2021-04-06 | 西安电子科技大学 | Classification method based on adaptive deep forest and human gait energy image |
CN109255339A (en) * | 2018-10-19 | 2019-01-22 | 西安电子科技大学 | Classification method based on adaptive deep forest and human gait energy image |
CN109409297A (en) * | 2018-10-30 | 2019-03-01 | 咪付(广西)网络技术有限公司 | Identity recognition method based on a dual-channel convolutional neural network |
CN109409297B (en) * | 2018-10-30 | 2021-11-23 | 咪付(广西)网络技术有限公司 | Identity recognition method based on a dual-channel convolutional neural network |
CN109344909A (en) * | 2018-10-30 | 2019-02-15 | 咪付(广西)网络技术有限公司 | Identity recognition method based on a multi-channel convolutional neural network |
CN109211951A (en) * | 2018-11-16 | 2019-01-15 | 银河水滴科技(北京)有限公司 | Security inspection system and method based on image segmentation |
CN109558834A (en) * | 2018-11-28 | 2019-04-02 | 福州大学 | Multi-view gait recognition method based on similarity learning and kernel methods |
CN109858351B (en) * | 2018-12-26 | 2021-05-14 | 中南大学 | Gait recognition method based on hierarchical temporal memory |
CN109858351A (en) * | 2018-12-26 | 2019-06-07 | 中南大学 | Gait recognition method based on hierarchical temporal memory |
CN110096972A (en) * | 2019-04-12 | 2019-08-06 | 重庆科芮智能科技有限公司 | Data protection method, apparatus and system |
Also Published As
Publication number | Publication date |
---|---|
CN104299012B (en) | 2017-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104299012A (en) | Gait recognition method based on deep learning | |
WO2016065534A1 (en) | Deep learning-based gait recognition method | |
CN106326886B (en) | Finger vein image quality assessment method based on convolutional neural networks | |
CN107194341B (en) | Face recognition method and system based on fusion of Maxout multi-convolution neural network | |
CN104866829A (en) | Cross-age face verification method based on feature learning | |
CN103136516B (en) | Face recognition method and system fusing visible light and near-infrared information | |
CN104850825A (en) | Facial beauty score calculation method based on convolutional neural networks | |
CN101901351B (en) | Face and iris image fusion and recognition method based on hierarchical structure | |
CN102521575B (en) | Iris identification method based on multidirectional Gabor and Adaboost | |
CN103020602B (en) | Face recognition method based on neural networks | |
CN103605972A (en) | Face verification method in unconstrained environments based on block deep neural networks | |
CN105095870A (en) | Pedestrian re-identification method based on transfer learning | |
Nandini et al. | Face recognition using neural networks | |
CN105138968A (en) | Face authentication method and device | |
CN108182409A (en) | Liveness detection method, apparatus, device and storage medium | |
CN107967458A (en) | Face recognition method | |
CN108520216A (en) | Identity recognition method based on gait images | |
CN105825176A (en) | Recognition method based on multi-modal non-contact identity features | |
CN109145717A (en) | Face recognition method with online learning | |
CN104134077A (en) | View-invariant gait recognition method based on deterministic learning theory | |
Tiwari et al. | Face Recognition using morphological method | |
Kohli et al. | Face verification with disguise variations via deep disguise recognizer | |
Zhong et al. | Palmprint and dorsal hand vein dualmodal biometrics | |
CN105354468A (en) | User identification method based on multi-axis force platform gait analysis | |
CN103136540A (en) | Behavior recognition method based on hidden structure reasoning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C06 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
C10 | Entry into substantive examination | ||
TA01 | Transfer of patent application right | Effective date of registration: 2016-08-30. Address after: 3rd Floor, Block B, Horse International Hotel, No. 2A Zhongguancun South Street, Haidian District, Beijing 100090. Applicant after: Watrix Technology (Beijing) Co., Ltd. Address before: No. 95 Zhongguancun East Road, Beijing 100080. Applicant before: Institute of Automation, Chinese Academy of Sciences |
C41 | Transfer of patent application or patent right or utility model | ||
GR01 | Patent grant | ||