CN112131950B - Gait recognition method based on Android mobile phone - Google Patents
- Publication number
- CN112131950B (application CN202010866831.2A)
- Authority
- CN
- China
- Prior art keywords
- mobile phone
- android mobile
- image sequence
- registration
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/02—Constructional features of telephone sets
- H04M1/0202—Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
- H04M1/026—Details of the structure or mounting of specific components
- H04M1/0264—Details of the structure or mounting of specific components for a camera module assembly
Abstract
A gait recognition method based on an Android mobile phone comprises the following steps. Step 1: preprocess the registration data set — each person acquires image sequences from 5 different angles with an Android mobile phone and transmits them to a server, which extracts the human-body silhouette with a DeepLabv3+ deep-learning model and crops it by the center-line principle to obtain 64×64 images. Step 2: extract features from the registration-set image sequences with a trained GaitSet gait-recognition model to obtain the registration-set features. Step 3: acquire test data — each person captures an image sequence from 1 angle, the data are preprocessed as in step 1, and the GaitSet model extracts the test-set features. Step 4: compute the similarity between each test-set image-sequence feature and each registration-set feature using the Euclidean distance. Step 5: sort the resulting distance array Dis in ascending order, take the 5 smallest distances, record the labels LT of the corresponding features, compute the predicted label and confidence from LT, and return the result to the Android mobile phone. The invention combines the advantages of the Android mobile phone and a high-performance server: the phone collects data conveniently, while the server accurately extracts human-body silhouettes with a deep-learning network and performs gait registration and recognition quickly, so the system is simple to maintain and convenient to use.
Description
Technical Field
The invention belongs to the technical field of computer vision, and relates to a gait recognition method based on an Android mobile phone.
Background Art
Human biometric recognition is a classical pattern-recognition problem: a person's identity is recognized from physiological or behavioral characteristics of the human body. Fingerprints, irises, and facial images belong to the first generation of biometric features, which typically require close-range or contact sensing — a fingerprint must touch a scanner, an iris image must be captured at close range, and a facial image taken from far away lacks adequate resolution. At long distances, these biometric features therefore become unusable. Gait, by contrast, remains visible at a distance and can be perceived and measured from any angle, without contact and without the subject noticing. Gait recognition is thus an emerging sub-field of biometrics. From the viewpoint of visual surveillance, gait is the most promising biometric feature at long range, and it has attracted great interest from researchers at home and abroad.
Gait recognition identifies people by their walking posture. Compared with other biometric recognition technologies, it is contactless, works at long range, and is difficult to disguise, so it has wide application in crime prevention, forensic identification, and social security.
The input to gait recognition is a video image sequence of a person walking, so data acquisition is similar to facial recognition: non-invasive and widely acceptable. However, because sequence images carry a large amount of data, gait recognition is computationally expensive and hard to process. The mainstream carrier for gait recognition is a high-performance server, which computes quickly but collects data inflexibly. Android phones are also a common carrier, but their limited performance makes recognition relatively slow.
Disclosure of Invention
To overcome the defects of the prior art, the method combines the respective advantages of the Android mobile phone and a high-performance server: the Android mobile phone is convenient and fast, and is used for data acquisition and interaction with the server; the high-performance server has stronger computing power and runs the deep-learning models that complete gait recognition.
To solve the technical problems, the invention provides the following technical scheme:
a gait recognition method based on an Android mobile phone comprises the following steps:
Step 1, preprocessing the registration data set: the acquired data are transmitted to the server for registration by the Android mobile phone. The process is as follows:
1.1) acquire gait image sequences with the Android mobile phone camera;
1.2) on the Android client, transmit the image sequences and the corresponding labels to the high-performance server over a socket;
1.3) the server processes the received images in batch, extracts the human-body silhouette with the DeepLabv3+ deep-learning model, and crops it by the center-line principle to obtain 64×64 images;
1.4) change the shooting angle of the Android mobile phone and repeat 1.1–1.3, 5 times in total;
1.5) store the m image sequences and their labels, denoted Q = {I_i | i = 1, 2, …, m} and T = {L_i | i = 1, 2, …, m}, where I_i = {O_k | k = 1, 2, …, 5} is the i-th image sequence (5 groups of images in total) and L_i is the label of the i-th image sequence.
Step 2, extract features from the registration-set image sequences Q with a trained GaitSet gait-recognition model, 5m features in total, obtaining the registration-set features X = {F_i | i = 1, 2, …, 5m}; store X.
Step 3, test data acquisition and feature extraction. The process is as follows:
3.1) transmit and preprocess the test image sequences as in steps 1.1–1.3 to obtain n test-set image sequences P = {O_j | j = 1, 2, …, n};
3.2) extract features from the test-set image sequences P with the trained GaitSet gait-recognition model to obtain the test-set features Y = {F_j | j = 1, 2, …, n}; store Y.
Step 4, compare the similarity of the registration-set features X and the test-set features Y, determine the identity of each test image sequence, and compute its confidence. The process is as follows:
4.1) for each test image-sequence feature F_j in Y, compute the similarity to every registered image-sequence feature F_i in X using the Euclidean distance:

D_ij = ||F_j − F_i||_2 = sqrt( Σ_k (F_j(k) − F_i(k))² )

where D_ij is the Euclidean distance between the j-th feature of the test set Y and the i-th feature of the registration set X, giving the distance array Dis = {D_ij | i = 1, 2, …, 5m};
4.2) sort the array Dis in ascending order of distance;
4.3) take the 5 smallest distances in Dis and record the labels LT = {L_i | i = 1, 2, …, 5} of the corresponding features;
4.4) if LT has a mode, take the label l to be the L_i corresponding to the mode; with num the number of times the mode occurs, compute the confidence c as

c = num / 5;

4.5) if LT has no mode, let l be L_1 and c be 0.2;
4.6) return the label l and the confidence c to the mobile phone over the socket, completing the identification;
4.7) repeat 4.1–4.6 until Y is traversed.
The beneficial effects of the invention are as follows: the invention can combine the advantages of the Android mobile phone and the high-performance server, the Android mobile phone can collect data conveniently, the high-performance server can accurately extract the outline of the human body by using the deep learning network, and the registration and the identification of the gait can be rapidly carried out, so that the system is simple to maintain and convenient to use.
Drawings
FIG. 1 is the Android client interface of the method of the invention.
FIG. 2 shows the human-body silhouette extraction effect of the method of the invention.
FIG. 3 is a schematic diagram of the center-line cutting of the method of the invention.
FIG. 4 is a flow chart of the method of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The Android client interface, shown in FIG. 1, completes the functions of collecting and transmitting pictures and labels.
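The text only says that a socket is used to move image sequences and labels between phone and server (steps 1.2 and 4.6); it does not specify a wire format. As an illustration, one minimal length-prefixed framing could look as follows — the function names and the 4-byte big-endian framing are assumptions, not part of the patent:

```python
import struct

def pack_message(label: str, image_bytes: bytes) -> bytes:
    """Frame one image and its label for transmission:
    4-byte label length, label, 4-byte image length, image."""
    lb = label.encode("utf-8")
    return (struct.pack(">I", len(lb)) + lb +
            struct.pack(">I", len(image_bytes)) + image_bytes)

def unpack_message(buf: bytes):
    """Server-side inverse of pack_message: recover (label, image)."""
    n = struct.unpack(">I", buf[:4])[0]
    label = buf[4:4 + n].decode("utf-8")
    off = 4 + n
    m = struct.unpack(">I", buf[off:off + 4])[0]
    image = buf[off + 4:off + 4 + m]
    return label, image
```

On the client side, each frame would then be sent with something like `sock.sendall(pack_message(label, jpeg_bytes))`, and the server would read the stream and apply `unpack_message`.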
FIG. 2 shows the human-body silhouette extraction effect of the DeepLabv3+ deep-learning network, and FIG. 3 is a schematic diagram of the center-line cutting of the method of the invention.
Referring to fig. 4, a gait recognition method based on an Android mobile phone includes the following steps:
Step 1, preprocessing the registration data set: the acquired data are transmitted to the server for registration by the Android mobile phone. The process is as follows:
1.1) acquire gait image sequences with the Android mobile phone camera;
1.2) on the Android client, transmit the image sequences and the corresponding labels to the high-performance server over a socket;
1.3) the server processes the received images in batch, extracts the human-body silhouette with the DeepLabv3+ deep-learning model, and crops it by the center-line principle to obtain 64×64 images;
1.4) change the shooting angle of the Android mobile phone and repeat 1.1–1.3, 5 times in total;
1.5) store the m image sequences and their labels, denoted Q = {I_i | i = 1, 2, …, m} and T = {L_i | i = 1, 2, …, m}, where I_i = {O_k | k = 1, 2, …, 5} is the i-th image sequence (5 groups of images in total) and L_i is the label of the i-th image sequence.
Step 2, extract features from the registration-set image sequences Q with a trained GaitSet gait-recognition model, 5m features in total, obtaining the registration-set features X = {F_i | i = 1, 2, …, 5m}; store X.
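The exact "center-line principle" cut of step 1.3 is not defined in the text. A sketch of the common GaitSet-style preprocessing it most likely refers to — tight vertical crop of the silhouette, resize the height to 64, then take a 64-pixel-wide window centred on the silhouette's horizontal centre of mass — could look like this (the function name and the centre-of-mass rule are assumptions):

```python
import numpy as np

def centerline_crop(silhouette: np.ndarray, size: int = 64) -> np.ndarray:
    """Crop a binary silhouette (H x W, values 0/1) to size x size around
    the body's vertical centre line."""
    ys, xs = np.nonzero(silhouette)
    if ys.size == 0:                      # empty frame: return a blank image
        return np.zeros((size, size), dtype=silhouette.dtype)
    body = silhouette[ys.min():ys.max() + 1]          # tight vertical crop
    # nearest-neighbour resize of the height to `size` (avoids extra deps)
    rows = np.linspace(0, body.shape[0] - 1, size).astype(int)
    body = body[rows]
    # the horizontal centre of mass defines the centre line
    cx = int(round(np.nonzero(body)[1].mean()))
    half = size // 2
    padded = np.pad(body, ((0, 0), (half, half)))     # guard against borders
    return padded[:, cx:cx + size]                    # window on centre line
```

Applied per frame on the server after DeepLabv3+ segmentation, this yields the 64×64 images stored in Q.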
Step 3, test data acquisition and feature extraction. The process is as follows:
3.1) transmit and preprocess the test image sequences as in steps 1.1–1.3 to obtain n test-set image sequences P = {O_j | j = 1, 2, …, n};
3.2) extract features from the test-set image sequences P with the trained GaitSet gait-recognition model to obtain the test-set features Y = {F_j | j = 1, 2, …, n}; store Y.
Step 4, compare the similarity of the registration-set features X and the test-set features Y, determine the identity of each test image sequence, and compute its confidence. The process is as follows:
4.1) for each test image-sequence feature F_j in Y, compute the similarity to every registered image-sequence feature F_i in X using the Euclidean distance:

D_ij = ||F_j − F_i||_2 = sqrt( Σ_k (F_j(k) − F_i(k))² )

where D_ij is the Euclidean distance between the j-th feature of the test set Y and the i-th feature of the registration set X, giving the distance array Dis = {D_ij | i = 1, 2, …, 5m};
4.2) sort the array Dis in ascending order of distance;
4.3) take the 5 smallest distances in Dis and record the labels LT = {L_i | i = 1, 2, …, 5} of the corresponding features;
4.4) if LT has a mode, take the label l to be the L_i corresponding to the mode; with num the number of times the mode occurs, compute the confidence c as

c = num / 5;

4.5) if LT has no mode, let l be L_1 and c be 0.2;
4.6) return the label l and the confidence c to the mobile phone over the socket, completing the identification;
4.7) repeat 4.1–4.6 until Y is traversed.
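Steps 4.1–4.7 can be sketched end to end with NumPy. This is an illustration, not the patented implementation: the function name `identify` and the reading of "LT has a mode" as "some label occurs more than once among the 5 nearest neighbours" are assumptions.

```python
import numpy as np
from collections import Counter

def identify(Y, X, labels, k=5):
    """For each probe feature in Y, find the k nearest registered features
    in X by Euclidean distance, take the mode of their labels as the
    predicted identity, and use num/k as the confidence (steps 4.1-4.5)."""
    results = []
    for f in Y:
        dis = np.linalg.norm(X - f, axis=1)        # 4.1: D_ij for all i
        nearest = np.argsort(dis)[:k]              # 4.2-4.3: k smallest
        lt = [labels[i] for i in nearest]
        label, num = Counter(lt).most_common(1)[0]
        if num > 1:                                # 4.4: mode exists
            results.append((label, num / k))
        else:                                      # 4.5: no mode, fall back
            results.append((lt[0], 0.2))
        # 4.6 would send (label, c) back to the phone over the socket
    return results
```

Note that the fallback confidence 0.2 in step 4.5 equals 1/5, i.e. the same num/5 rule applied to a label seen only once.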
Further, in step 2 the GaitSet model training phase is set as follows: the training set is CASIA-B; the optimizer is Adam with a learning rate of 1e-4; the total number of iterations is 80K; the batch size is (8, 8); and the loss function is improved, raising the accuracy of the network in the two complex scenarios of the CASIA-B dataset, BG (carrying a bag) and CL (wearing a coat).
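For reference, the training setup above can be written out as a configuration sketch. The key names and the reading of batch size (8, 8) as (persons per batch, sequences per person) — the usual GaitSet sampling — are assumptions; the text itself only names the dataset, optimizer, learning rate, iteration count, and batch size:

```python
# Training-phase settings for the GaitSet model as described above.
# Key names and the (p, k) reading of batch_size are illustrative.
gaitset_train_config = {
    "dataset": "CASIA-B",
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "total_iterations": 80_000,     # "80K" in the text
    "batch_size": (8, 8),           # assumed (persons, sequences per person)
}
```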
Claims (1)
1. A gait recognition method based on an Android mobile phone, characterized by comprising the following steps:
Step 1, preprocessing the registration data set: the acquired data are transmitted to the server for registration by the Android mobile phone. The process is as follows:
1.1) acquire gait image sequences with the Android mobile phone camera;
1.2) on the Android client, transmit the image sequences and the corresponding labels to the high-performance server over a socket;
1.3) the server processes the received images in batch, extracts the human-body silhouette with the DeepLabv3+ deep-learning model, and crops it by the center-line principle to obtain 64×64 images;
1.4) change the shooting angle of the Android mobile phone and repeat 1.1–1.3, 5 times in total;
1.5) store the m image sequences and their labels, denoted Q = {I_i | i = 1, 2, …, m} and T = {L_i | i = 1, 2, …, m}, where I_i = {O_k | k = 1, 2, …, 5} is the i-th image sequence (5 groups of images in total) and L_i is the label of the i-th image sequence;
Step 2, extract features from the registration-set image sequences Q with a trained GaitSet gait-recognition model, 5m features in total, obtaining the registration-set features X = {F_i | i = 1, 2, …, 5m}; store X;
Step 3, test data acquisition and feature extraction. The process is as follows:
3.1) transmit and preprocess the test image sequences as in steps 1.1–1.3 to obtain n test-set image sequences P = {O_j | j = 1, 2, …, n};
3.2) extract features from the test-set image sequences P with the trained GaitSet gait-recognition model to obtain the test-set features Y = {F_j | j = 1, 2, …, n}; store Y;
Step 4, compare the similarity of the registration-set features X and the test-set features Y, determine the identity of each test image sequence, and compute its confidence. The process is as follows:
4.1) for each test image-sequence feature F_j in Y, compute the similarity to every registered image-sequence feature F_i in X using the Euclidean distance:

D_ij = ||F_j − F_i||_2 = sqrt( Σ_k (F_j(k) − F_i(k))² )

where D_ij is the Euclidean distance between the j-th feature of the test set Y and the i-th feature of the registration set X, giving the distance array Dis = {D_ij | i = 1, 2, …, 5m};
4.2) sort the array Dis in ascending order of distance;
4.3) take the 5 smallest distances in Dis and record the labels LT = {L_i | i = 1, 2, …, 5} of the corresponding features;
4.4) if LT has a mode, take the label l to be the L_i corresponding to the mode; with num the number of times the mode occurs, compute the confidence c as

c = num / 5;

4.5) if LT has no mode, let l be L_1 and c be 20%;
4.6) return the label l and the confidence c to the mobile phone over the socket, completing the identification;
4.7) repeat 4.1–4.6 until Y is traversed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010866831.2A CN112131950B (en) | 2020-08-26 | 2020-08-26 | Gait recognition method based on Android mobile phone |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112131950A CN112131950A (en) | 2020-12-25 |
CN112131950B true CN112131950B (en) | 2024-05-07 |
Family
ID=73848363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010866831.2A Active CN112131950B (en) | 2020-08-26 | 2020-08-26 | Gait recognition method based on Android mobile phone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112131950B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113177464B (en) * | 2021-04-27 | 2023-12-01 | 浙江工商大学 | End-to-end multi-mode gait recognition method based on deep learning |
US11544969B2 (en) | 2021-04-27 | 2023-01-03 | Zhejiang Gongshang University | End-to-end multimodal gait recognition method based on deep learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105335725A (en) * | 2015-11-05 | 2016-02-17 | 天津理工大学 | Gait identification identity authentication method based on feature fusion |
CN108520216A (en) * | 2018-03-28 | 2018-09-11 | 电子科技大学 | A kind of personal identification method based on gait image |
CN108537181A (en) * | 2018-04-13 | 2018-09-14 | 盐城师范学院 | A kind of gait recognition method based on the study of big spacing depth measure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |