CN109359578A - Weighted-fusion three-channel feature gait representation method - Google Patents
Weighted-fusion three-channel feature gait representation method
- Publication number: CN109359578A
- Application number: CN201811172826.0A
- Authority: CN (China)
- Prior art keywords: gait; channel; formula; weighted fusion; three channels
- Prior art date: 2018-10-09
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
Abstract
A weighted-fusion three-channel feature gait representation method, belonging to the field of pattern recognition. It addresses the low recognition rate of existing gait representation methods. The method: first obtain the pedestrian's binarized silhouette sequence from a gait video stream and normalize and center every frame; then, exploiting the periodicity of the separation between the pedestrian's legs during walking, detect the non-frontal gait cycle and extract the holistic gait feature within one cycle using an optical flow energy image; next, encode the per-frame step-width timing information into the R, G and B channels and project the step-width information onto the optical flow energy image to obtain a three-channel RGB gait representation image; finally, apply canonical correlation analysis to the R-channel and G-channel features and weight-fuse the two resulting vectors, then apply canonical correlation analysis to the fused result and the B-channel feature and weight-fuse the two resulting vectors. The method represents non-frontal periodic gait sequences effectively and achieves good recognition performance.
Description
Technical field
The invention belongs to the field of pattern recognition, and in particular relates to a weighted-fusion three-channel feature gait representation method.
Background art
Gait recognition is a research direction in computer vision and biometrics that has attracted much attention in recent years; it aims to identify people by the posture of their walk[1]. Compared with other biometric techniques, gait recognition is the only biometric method that can identify people at a distance. Moreover, gait is contactless, hard to disguise and observable from afar, which gives it great application prospects in intelligent video surveillance.
However, a pedestrian's walk is affected by the external environment and by personal factors, such as different walking surfaces, resolutions, viewing angles, clothing and carried items. Under these influences, differences in gait representation make gait recognition difficult. A complete gait representation that extracts as much recognition-relevant gait information as possible can address the low gait recognition rate under varying walking conditions.
To obtain a complete gait representation, many energy-image-style gait representation methods have been proposed at home and abroad. Energy-image-style methods require neither a human body model nor accurate parameters of each body part; they accumulate periodic spatio-temporal gait features according to some rule, and are widely used in gait recognition. The classical methods fall into three groups:
A. Information accumulation. In 2006, Han et al.[2] proposed the gait energy image (GEI), which superimposes the image energy of a normalized cycle: the intensity at a pixel is the energy of human motion at that pixel, but the method loses the motion features linking consecutive frames. In 2014, Lee et al.[3] proposed a statistical description of the gait motion pattern: a binomial distribution is computed for each pixel of every frame in the gait cycle, and accumulating the mean and variance of all pixels yields a gait probability image, but this method likewise lacks inter-frame temporal features.
B. Dynamic-information methods. In 2011, Lam et al.[4] introduced the optical flow field into the energy image; the proposed gait flow image (GFI) characterizes gait information well, but its computational complexity is high.
C. Information fusion. Chen et al.[5] proposed the colored gait motion history image to describe the spatio-temporal information of gait: three grayscale images, the single-step motion history image starting from the stance of one foot, the single-step motion history image starting from the stance of the opposite foot, and the gait energy image of one gait cycle, are assigned to the R, G and B channels respectively, realizing data fusion. In 2014, Hofmann et al.[6] fused the depth gradient histogram energy image and the gait energy image at the decision level, but part of the gait information is lost.
Publications relevant to the invention include:
[1] Feature representation and recognition of pedestrian gait: a survey [J]. Pattern Recognition and Artificial Intelligence, 2012, 25(1): 71-81. (in Chinese)
[2] Han J, Bhanu B. Statistical feature fusion for gait-based human recognition [C]. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Washington, DC. IEEE, 2004: II-842-II-847.
[3] Lee C P, Tan A W C, Tan S C. Gait probability image: An information-theoretic model of gait representation [J]. Journal of Visual Communication and Image Representation, 2014, 25(6): 1489-1492.
[4] Lam T H W, Cheung K H, Liu J N K. Gait flow image: A silhouette-based gait representation for human identification [J]. Pattern Recognition, 2011, 44(4): 973-987.
[5] Multilayer window image moments for gait recognition [J]. Journal of Electronics and Information Technology, 2009, 31(1): 116-119. (in Chinese)
[6] Hofmann M, Geiger J, Bachmann S, et al. The TUM gait from audio, image and depth (GAID) database: Multimodal recognition of subjects and traits [J]. Journal of Visual Communication and Image Representation, 2014, 25(1): 195-206.
Summary of the invention
The present invention aims to solve the problem that, in the prior art, incomplete gait representation under different walking conditions lowers the recognition rate, i.e., that recognition performance drops markedly when the walking condition of the test gait does not match that of the registered gait. To this end, it provides a weighted-fusion three-channel feature gait representation method.
The technical solution adopted by the present invention to solve the above technical problem is as follows.
The weighted-fusion three-channel feature gait representation method comprises the following steps:
Step 1: obtain the pedestrian's binarized silhouette sequence from the gait video stream using codebook-based foreground detection, and normalize and center every frame;
Step 2: detect the non-frontal gait cycle from the periodicity of the separation between the pedestrian's legs during walking, and extract the holistic gait feature within one cycle using the optical flow energy image;
Step 3: encode the per-frame step-width timing information into the R, G and B channels and project the step-width information onto the optical flow energy image, obtaining the three-channel RGB gait representation image;
Step 4: apply canonical correlation analysis (CCA) to the R-channel and G-channel features and weight-fuse the two resulting vectors; then apply CCA to the fused result and the B-channel feature and weight-fuse the two resulting vectors.
Further, in step 1, normalizing and centering every frame means centering the pedestrian's body in each binarized silhouette frame and resizing all frames to a common size.
Further, in step 2, the non-frontal gait cycle is detected with:
G_i = (1/(h_1 - h_2)) * Σ_{j=h_2..h_1} (R_j - L_j)   (1)
where G_i is the mean width of the leg region in the i-th gait frame, h_1 and h_2 are the anthropometric heights (row indices) of the ankle and the knee of the person in the foreground frame, and R_j and L_j are respectively the rightmost and leftmost pixel positions belonging to the foreground in row j.
Further, in step 2, the holistic gait feature is extracted within one cycle from the optical flow energy image:
F_i(x, y) = sqrt(u_{F_i}(x, y)^2 + v_{F_i}(x, y)^2),   GFI(x, y) = (1/N) * Σ_{i=1..N} F_i(x, y)   (2)
where u_{F_i}(x, y) and v_{F_i}(x, y) are the horizontal and vertical components of the optical flow field; N is the number of gait frames contained in one gait cycle; i is the time index, and (x, y) are the two-dimensional image plane coordinates.
Further, in step 3, the per-frame step-width timing information is encoded into the R, G and B channels with:
P_i = I * (G_i - G_min) / (G_max - G_min)   (3)
where P_i is the step-width information, G_i is obtained from formula (1), G_max and G_min are the maximum and minimum of the mean leg-region width, and I is the maximum intensity value of the image.
Further, I is 1 or 255.
Further, in step 3, the step-width information is projected onto the optical flow energy image by formula (4), in which the left-hand side is the three-channel RGB gait representation image over a quarter gait cycle, p is the number of frames contained in a quarter gait cycle, F_i(x, y) is the gait flow image of the i-th frame, P_i is the step-width information of the i-th frame, and B(·), G(·) and R(·) encode the timing information into the three channels of the RGB color space.
Further, in step 4, canonical correlation analysis is defined as follows: X = {x_1, x_2, ..., x_n} and Y = {y_1, y_2, ..., y_n} are two sets of zero-mean random vectors; canonical correlation analysis seeks two projection directions p_x and p_y with the objective:
max ρ = (p_x^T C_xy p_y) / (sqrt(p_x^T C_xx p_x) * sqrt(p_y^T C_yy p_y))   (5)
The optimization problem of formula (5) is solved with the method of Lagrange multipliers, leading to the generalized eigenvalue problem:
C_xy C_yy^{-1} C_yx p_x = λ^2 C_xx p_x,   C_yx C_xx^{-1} C_xy p_y = λ^2 C_yy p_y   (6)
Solving formula (6), the eigenvectors corresponding to the largest eigenvalues are the projection vectors; the first d (d ≤ min(N_x, N_y), d ≤ n) of them form the projection matrices W_x = [p_x1, p_x2, ..., p_xd]^T and W_y = [p_y1, p_y2, ..., p_yd]^T, which extract the canonical correlation features u = W_x X and v = W_y Y between X and Y.
Further, in step 4, canonical correlation analysis is applied to the R-channel and G-channel features and the two resulting vectors are weight-fused with:
R_2 = W_R^T R_1,   G_2 = W_G^T G_1,   W_1 = α R_2 + β G_2   (7)
where R_1 and G_1 are the original features of the R and G channels, R_2 and G_2 are the new feature vectors of the R and G channels, W_R and W_G are the projection matrices obtained by applying canonical correlation analysis to R_1 and G_1, and α and β are weighting coefficients satisfying α + β = 1.
Further, in step 4, canonical correlation analysis is applied to the fused result and the B-channel feature, and the two resulting vectors are weight-fused with:
W_2 = W_W^T W_1,   B_2 = W_B^T B_1,   F = γ W_2 + δ B_2   (8)
where B_1 is the original feature of the B channel, W_2 and B_2 are the two new feature vectors, W_W and W_B are the projection matrices obtained by applying canonical correlation analysis to W_1 and B_1, and γ and δ are weighting coefficients satisfying γ + δ = 1.
Compared with the prior art, the invention has the following beneficial effects:
The weighted-fusion three-channel feature gait representation method of the invention extracts class information that is more useful for recognition and effectively removes the redundancy of the original three-channel gait representation. It solves the problem of insufficient gait feature representation and the feature loss caused by existing pixel-level fusion; the gait representation is complete under different walking conditions, is effective for representing non-frontal periodic gait sequences, and achieves a high recognition rate.
Detailed description of the drawings
Fig. 1 is the flow chart of the weighted-fusion three-channel feature gait representation method provided by the invention.
Fig. 2 shows the pedestrian gait silhouettes extracted by the method of Embodiment 1.
Fig. 3 is the anthropometric height diagram of Embodiment 1.
Fig. 4 shows the gait cycle detection results of the method of Embodiment 1.
Fig. 5 shows a one-cycle binary gait sequence from the method of Embodiment 1.
Fig. 6 shows the gait flow images of one gait cycle from the method of Embodiment 1.
Specific embodiments
The present invention is further illustrated below with reference to the drawings and embodiments.
As shown in Fig. 1, the weighted-fusion three-channel feature gait representation method of the invention proceeds as follows:
Step 1: image acquisition and processing
For each person's gait video stream under different walking conditions, build a codebook model and cluster the foreground region to obtain the foreground; normalize each frame so that the human silhouette is centered and all frames share a common size, typically 64 × 64 pixels;
Step 2: extract the holistic gait feature
Step 2.1: after every frame of the gait video stream has been normalized and centered, detect the non-frontal gait cycle from the periodicity of the separation between the person's legs during walking, using:
G_i = (1/(h_1 - h_2)) * Σ_{j=h_2..h_1} (R_j - L_j)   (1)
where G_i is the mean width of the leg region in the i-th gait frame; h_1 and h_2 are the anthropometric heights (row indices) of the ankle and the knee of the person in the foreground frame; R_j and L_j are respectively the rightmost and leftmost pixel positions belonging to the foreground in row j;
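A sketch of step 2.1: `leg_region_width` follows our reading of formula (1), while the period-picking rule in `estimate_period` (autocorrelation of the width signal) is an assumption, since the patent only specifies the width statistic:

```python
import numpy as np

def leg_region_width(mask, h2, h1):
    """Formula (1): mean foreground width (rightmost minus leftmost
    foreground column) over the leg rows h2..h1 (knee to ankle)."""
    widths = []
    for j in range(h2, h1 + 1):
        cols = np.nonzero(mask[j])[0]
        widths.append(cols.max() - cols.min() if cols.size else 0)
    return float(np.mean(widths))

def estimate_period(widths):
    """Assumed detection rule: the gait period is the lag maximizing the
    autocorrelation of the mean-removed leg-width signal."""
    w = np.asarray(widths, dtype=float) - np.mean(widths)
    ac = np.correlate(w, w, mode="full")[len(w) - 1:]
    return int(np.argmax(ac[1:]) + 1)   # skip lag 0

# Width on a toy mask: foreground at columns 1 and 4 in rows 2..4.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:5, 1] = 1
mask[2:5, 4] = 1
mw = leg_region_width(mask, 2, 4)

# Synthetic leg-width signal with period 8 frames.
t = np.arange(32)
period = estimate_period(10 + 3 * np.sin(2 * np.pi * t / 8))
```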
Step 2.2: extract the holistic gait feature within one cycle using the optical flow energy image:
F_i(x, y) = sqrt(u_{F_i}(x, y)^2 + v_{F_i}(x, y)^2),   GFI(x, y) = (1/N) * Σ_{i=1..N} F_i(x, y)   (2)
where u_{F_i}(x, y) and v_{F_i}(x, y) are the horizontal and vertical components of the optical flow field; N is the number of gait frames contained in one gait cycle; i is the time index, and (x, y) are the two-dimensional image plane coordinates.
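The aggregation in step 2.2 can be sketched as below, assuming the flow components u and v have already been computed by some optical flow estimator (not shown); the cycle-level averaging follows our reading of formula (2):

```python
import numpy as np

def gait_flow_energy(u_seq, v_seq):
    """Aggregate per-frame optical flow into a cycle-level energy image.

    u_seq, v_seq: arrays of shape (N, H, W) holding the horizontal and
    vertical flow components for the N frames of one gait cycle.
    Returns (per-frame magnitudes F_i, their mean over the cycle).
    """
    F = np.sqrt(u_seq ** 2 + v_seq ** 2)   # F_i(x, y), one per frame
    return F, F.mean(axis=0)               # cycle-level energy image

rng = np.random.default_rng(0)
u = rng.normal(size=(8, 4, 4))             # stand-in flow fields
v = rng.normal(size=(8, 4, 4))
F, energy = gait_flow_energy(u, v)
```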
Step 3: obtain the three-channel RGB gait representation image
Step 3.1: encode the per-frame step-width timing information into the R, G and B channels with:
P_i = I * (G_i - G_min) / (G_max - G_min)   (3)
where P_i is the step-width information, G_i is obtained from formula (1), G_max and G_min are the maximum and minimum of the mean leg-region width, and I is the maximum intensity value of the image, e.g. 1 or 255.
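Step 3.1 is a simple min-max normalization of the width signal to the intensity range, per our reading of formula (3):

```python
import numpy as np

def encode_step_width(G, I=255):
    """Normalize the per-frame mean leg widths G_i into intensity codes
    P_i = I * (G_i - G_min) / (G_max - G_min)."""
    G = np.asarray(G, dtype=float)
    return I * (G - G.min()) / (G.max() - G.min())

# Widths 10..16 map onto 0..255.
P = encode_step_width([10.0, 12.0, 16.0, 14.0], I=255)
```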
Step 3.2: project the step-width information onto the optical flow energy image to obtain the three-channel RGB gait representation image. In the projection formula (4), the left-hand side is the three-channel RGB gait representation image over a quarter gait cycle, p is the number of frames contained in a quarter gait cycle, F_i(x, y) is the gait flow image of the i-th frame, P_i is the step-width information of the i-th frame, and B(·), G(·) and R(·) encode the timing information into the three channels of the RGB color space.
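The exact projection formula is reproduced only as an image in the original and is not recoverable here. The sketch below shows one plausible realization under our own assumptions: the p frames of a quarter cycle are split into early/middle/late thirds, and each frame's flow image F_i, weighted by its step-width code P_i, accumulates into the B, G or R channel respectively. The channel assignment is hypothetical:

```python
import numpy as np

def project_quarter_cycle(F, P):
    """Build a 3-channel representation image from the p frames of a
    quarter gait cycle (hypothetical timing-to-channel assignment:
    early third -> B, middle third -> G, late third -> R).

    F: (p, H, W) gait flow images; P: (p,) step-width codes.
    """
    p, H, W = F.shape
    img = np.zeros((H, W, 3))
    thirds = np.array_split(np.arange(p), 3)   # early, middle, late
    for c, idx in zip((2, 1, 0), thirds):      # channels B, G, R
        if idx.size:
            img[:, :, c] = np.mean(P[idx, None, None] * F[idx], axis=0)
    return img

F = np.ones((6, 4, 4))                 # stand-in flow magnitudes
P = np.linspace(0.0, 1.0, 6)           # step-width codes over the frames
img = project_quarter_cycle(F, P)
```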
Step 4: fuse the three RGB channels
Step 4.1: apply canonical correlation analysis to the R-channel and G-channel features and weight-fuse the two resulting vectors:
R_2 = W_R^T R_1,   G_2 = W_G^T G_1,   W_1 = α R_2 + β G_2   (7)
where R_1 and G_1 are the original features of the R and G channels; R_2 and G_2 are the two new feature vectors; W_R and W_G are the projection matrices obtained by applying CCA to R_1 and G_1; α and β are weighting coefficients satisfying α + β = 1.
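A plain-NumPy sketch of step 4.1, solving the generalized eigenvalue problem of formula (6) with a small ridge term for numerical stability (the `cca` helper and the regularization are our own choices, not part of the patent):

```python
import numpy as np

def cca(X, Y, d, reg=1e-6):
    """CCA via C_xy C_yy^-1 C_yx p_x = lambda^2 C_xx p_x.

    X: (dx, n), Y: (dy, n) zero-mean samples in columns.
    Returns projection matrices Wx (d, dx) and Wy (d, dy).
    """
    n = X.shape[1]
    Cxx = X @ X.T / n + reg * np.eye(X.shape[0])
    Cyy = Y @ Y.T / n + reg * np.eye(Y.shape[0])
    Cxy = X @ Y.T / n
    M = np.linalg.solve(Cxx, Cxy @ np.linalg.solve(Cyy, Cxy.T))
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:d]
    Wx = vecs[:, order].real.T                   # (d, dx)
    Wy = (np.linalg.solve(Cyy, Cxy.T) @ Wx.T).T  # paired directions
    return Wx, Wy

# Toy R/G channel features: G1 is a noisy copy of R1, so the first
# canonical pair should be strongly correlated.
rng = np.random.default_rng(1)
R1 = rng.normal(size=(5, 40)); R1 -= R1.mean(axis=1, keepdims=True)
G1 = R1 + 0.1 * rng.normal(size=(5, 40)); G1 -= G1.mean(axis=1, keepdims=True)
WR, WG = cca(R1, G1, d=2)
R2, G2 = WR @ R1, WG @ G1
alpha, beta = 0.4, 0.6          # weighting coefficients, alpha + beta = 1
W1 = alpha * R2 + beta * G2     # weighted fusion, formula (7)
```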
Step 4.2: apply canonical correlation analysis to the fused result and the B-channel feature and weight-fuse the two resulting vectors:
W_2 = W_W^T W_1,   B_2 = W_B^T B_1,   F = γ W_2 + δ B_2   (8)
where B_1 is the original feature of the B channel; W_2 and B_2 are the two new feature vectors; W_W and W_B are the projection matrices obtained by applying CCA to W_1 and B_1; γ and δ are weighting coefficients satisfying γ + δ = 1.
In step 4 of the above technical solution, canonical correlation analysis is defined as follows: X = {x_1, x_2, ..., x_n} and Y = {y_1, y_2, ..., y_n} are two sets of zero-mean random vectors; canonical correlation analysis seeks two projection directions p_x and p_y with the objective:
max ρ = (p_x^T C_xy p_y) / (sqrt(p_x^T C_xx p_x) * sqrt(p_y^T C_yy p_y))   (5)
The optimization problem of formula (5) is solved with the method of Lagrange multipliers, leading to the generalized eigenvalue problem:
C_xy C_yy^{-1} C_yx p_x = λ^2 C_xx p_x,   C_yx C_xx^{-1} C_xy p_y = λ^2 C_yy p_y   (6)
Solving formula (6), the eigenvectors corresponding to the largest eigenvalues are the projection vectors; the first d (d ≤ min(N_x, N_y), d ≤ n) of them form the projection matrices W_x = [p_x1, p_x2, ..., p_xd]^T and W_y = [p_y1, p_y2, ..., p_yd]^T, which extract the canonical correlation features u = W_x X and v = W_y Y between X and Y.
Embodiment 1
To demonstrate the ability of the weighted-fusion three-channel feature gait representation method to represent gait sequence motion information, the method was tested as follows.
Test samples: the USF HumanID gait data set was used (outdoor gait videos shot at long range against complex backgrounds; the extracted side silhouettes are of poor quality). Video data of 122 people were collected under different conditions: two cameras on the left and right with optical axes about 30° apart (R/L), walking on cement or grass (C/G), with or without a backpack (BF/NB), wearing different shoes (A/B), and at different acquisition times (T). The data are divided into several different groups, as shown in Table 1.
Table 1: composition of the USF database
The test proceeds as follows:
Step 1: image acquisition and processing
As shown in Fig. 2, for each person's video under different walking conditions in the data set, build a codebook model and cluster the foreground region to obtain the foreground, and normalize the frames so that the human silhouette is centered;
Step 2: extract the holistic gait feature
Step 2.1: after every frame of the gait video stream has been normalized and centered, detect the non-frontal gait cycle from the periodicity of the separation between the person's legs during walking, using formula (1), where G_i is the mean width of the leg region in the i-th gait frame; as shown in Fig. 3, h_1 and h_2 are the anthropometric heights of the ankle and the knee of the person in the foreground frame; R_j and L_j are respectively the rightmost and leftmost foreground pixel positions in row j. This yields the gait cycle detection waveform shown in Fig. 4; Fig. 5 shows the corresponding one-cycle gait sequence.
Taking the double-support stance as the starting point of a gait cycle, four single-cycle gaits are recorded for each gait sequence.
Step 2.2: extract the holistic gait feature within one cycle using the optical flow energy image, formula (2), where u_{F_i}(x, y) and v_{F_i}(x, y) are the horizontal and vertical components of the optical flow field, shown respectively in the first and second rows of Fig. 6; N is the number of gait frames contained in one gait cycle; i is the time index, and (x, y) are the two-dimensional image plane coordinates. The third row of Fig. 6 shows the images corresponding to F_i(x, y).
Step 3: obtain the three-channel RGB gait representation image
Step 3.1: encode the per-frame step-width timing information into the R, G and B channels using formula (3), where P_i is the step-width information, G_i is obtained from formula (1), G_max and G_min are the maximum and minimum of the mean leg-region width, and I is the maximum intensity value of the image, e.g. 1 or 255.
Step 3.2: project the step-width information onto the optical flow energy image to obtain the three-channel RGB gait representation image, shown in the fourth row of Fig. 6. In the projection formula, the left-hand side is the three-channel RGB gait representation image over a quarter gait cycle, shown in the fifth row of Fig. 6; p is the number of frames contained in a quarter gait cycle; F_i(x, y) is the gait flow image of the i-th frame; P_i is the step-width information of the i-th frame; B(·), G(·) and R(·) encode the timing information into the three channels of the RGB color space. The sixth row of Fig. 6 shows the single-cycle three-channel representation image.
The four single-cycle gait flow images of each gait sequence are collected and then averaged into one image for analysis.
Step 4: fuse the three RGB channels
Before performing canonical correlation analysis, in order to keep the generalized eigenvalue decomposition from running into the "curse of dimensionality", the feature matrices are first reduced by singular value decomposition (SVD), retaining the eigenvectors whose eigenvalues account for 99.9% of the total, and canonical correlation analysis is then applied.
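The 99.9% SVD reduction can be sketched as follows; using the singular values themselves as the retained "eigenvalue" mass is our interpretation of the criterion:

```python
import numpy as np

def svd_reduce(X, energy=0.999):
    """Reduce dimensionality before CCA by keeping the leading singular
    directions that account for `energy` of the singular-value mass.

    X: (dim, n) zero-mean feature matrix.
    Returns (projected data (k, n), basis (dim, k)).
    """
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s) / np.sum(s)
    k = int(np.searchsorted(frac, energy) + 1)
    return U[:, :k].T @ X, U[:, :k]

rng = np.random.default_rng(2)
# Rank-3 data embedded in 50 dimensions.
base = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 30))
X = base - base.mean(axis=1, keepdims=True)
Z, basis = svd_reduce(X, energy=0.999)
```

Projecting back with `basis @ Z` recovers the data up to the discarded 0.1% of the spectrum.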
Step 4.1: apply canonical correlation analysis to the R-channel and G-channel features and weight-fuse the two resulting vectors using formula (7), where R_1 and G_1 are the original features of the R and G channels; R_2 and G_2 are the two new feature vectors; W_R and W_G are the projection matrices obtained by applying CCA to R_1 and G_1; α and β are weighting coefficients satisfying α + β = 1.
Step 4.2: apply canonical correlation analysis to the fused result and the B-channel feature and weight-fuse the two resulting vectors using formula (8), where B_1 is the original feature of the B channel; W_2 and B_2 are the two new feature vectors; W_W and W_B are the projection matrices obtained by applying CCA to W_1 and B_1; γ and δ are weighting coefficients satisfying γ + δ = 1.
The gait features obtained by the fused three-channel representation method of Embodiment 1 are labeled FTP (the weighting coefficients α, β, γ, δ each range over 0.1, 0.2, 0.3, ..., 0.9). For the prior art: the gait representation obtained by pixel-level weighted fusion of the three channels (with fusion coefficients a, b, c, each ranging over 0.1, 0.2, 0.3, ..., 0.9 and satisfying a + b + c = 1) is labeled PWP; the representation obtained by the gait energy image method is labeled GEI; and the representation obtained by the gait flow image method is labeled GFI. GEI, GFI, FTP and PWP are input to a nearest-neighbor classifier for recognition. The test results are shown in Table 2.
Table 2: performance of the invention and of prior-art gait representation methods
In Table 2, Rank n means that, after sorting by similarity, recognition is counted as successful if the correct identity appears among the n most similar candidates. As noted above, FTP and PWP are controlled by parameters; within the parameter ranges and constraints of FTP and PWP, 81 and 35 groups of experiments can be run respectively, and the best experimental results are entered in the table. The corresponding parameter values are α = 0.4, β = 0.6, γ = 0.9, δ = 0.1 and a = 0.7, b = 0.1, c = 0.2. Table 2 shows that the weighted-fusion three-channel feature gait representation method provided by the invention contains more information useful for recognition and obtains the best recognition performance.
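The Rank-n evaluation with a nearest-neighbor classifier can be sketched as below; the function name and the Euclidean-distance choice are our own (the text does not specify the distance metric):

```python
import numpy as np

def rank_n_accuracy(gallery, g_labels, probe, p_labels, n=1):
    """Rank-n recognition rate: a probe counts as correct if its true
    label appears among the labels of its n nearest gallery samples
    (Euclidean distance). gallery: (G, d), probe: (P, d)."""
    dists = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)[:, :n]       # n nearest per probe
    hits = [p_labels[i] in {g_labels[j] for j in row}
            for i, row in enumerate(order)]
    return float(np.mean(hits))

# Toy gallery of three identities and two probes.
gallery = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
g_labels = ["a", "b", "c"]
probe = np.array([[0.5, 0.0], [9.0, 0.5]])
p_labels = ["a", "c"]
acc1 = rank_n_accuracy(gallery, g_labels, probe, p_labels, n=1)
```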
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A weighted-fusion three-channel feature gait representation method, characterized in that the steps are as follows:
Step 1: obtain the pedestrian's binarized silhouette sequence from the gait video stream using codebook-based foreground detection, and normalize and center every frame;
Step 2: detect the non-frontal gait cycle from the periodicity of the separation between the pedestrian's legs during walking, and extract the holistic gait feature within one cycle using the optical flow energy image;
Step 3: encode the per-frame step-width timing information into the R, G and B channels and project the step-width information onto the optical flow energy image, obtaining the three-channel RGB gait representation image;
Step 4: apply canonical correlation analysis to the R-channel and G-channel features, weight-fuse the two resulting vectors, apply canonical correlation analysis to the fused result and the B-channel feature, and weight-fuse the two resulting vectors.
2. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 1, normalizing and centering every frame means centering the pedestrian's body in each binarized silhouette frame and resizing all frames to a common size.
3. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 2, the non-frontal gait cycle is detected with:
G_i = (1/(h_1 - h_2)) * Σ_{j=h_2..h_1} (R_j - L_j)   (1)
where G_i is the mean width of the leg region in the i-th gait frame, h_1 and h_2 are the anthropometric heights of the ankle and the knee of the person in the foreground frame, and R_j and L_j are respectively the rightmost and leftmost foreground pixel positions in row j.
4. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 2, the holistic gait feature is extracted within one cycle from the optical flow energy image:
F_i(x, y) = sqrt(u_{F_i}(x, y)^2 + v_{F_i}(x, y)^2),   GFI(x, y) = (1/N) * Σ_{i=1..N} F_i(x, y)   (2)
where u_{F_i}(x, y) and v_{F_i}(x, y) are the horizontal and vertical components of the optical flow field, N is the number of gait frames contained in one gait cycle, i is the time index, and (x, y) are the two-dimensional image plane coordinates.
5. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 3, the per-frame step-width timing information is encoded into the R, G and B channels with:
P_i = I * (G_i - G_min) / (G_max - G_min)   (3)
where P_i is the step-width information, G_i is obtained from formula (1), G_max and G_min are the maximum and minimum of the mean leg-region width, and I is the maximum intensity value of the image.
6. The weighted-fusion three-channel feature gait representation method according to claim 5, characterized in that I is 1 or 255.
7. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 3, the step-width information is projected onto the optical flow energy image by formula (4), in which the left-hand side is the three-channel RGB gait representation image over a quarter gait cycle, p is the number of frames contained in a quarter gait cycle, F_i(x, y) is the gait flow image of the i-th frame, P_i is the step-width information of the i-th frame, and B(·), G(·) and R(·) encode the timing information into the three channels of the RGB color space.
8. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 4, canonical correlation analysis is defined as follows: X = {x_1, x_2, ..., x_n} and Y = {y_1, y_2, ..., y_n} are two sets of zero-mean random vectors; canonical correlation analysis seeks two projection directions p_x and p_y with the objective:
max ρ = (p_x^T C_xy p_y) / (sqrt(p_x^T C_xx p_x) * sqrt(p_y^T C_yy p_y))   (5)
The optimization problem of formula (5) is solved with the method of Lagrange multipliers, leading to the generalized eigenvalue problem:
C_xy C_yy^{-1} C_yx p_x = λ^2 C_xx p_x,   C_yx C_xx^{-1} C_xy p_y = λ^2 C_yy p_y   (6)
Solving formula (6), the eigenvectors corresponding to the largest eigenvalues are the projection vectors; the first d (d ≤ min(N_x, N_y), d ≤ n) of them form the projection matrices W_x = [p_x1, p_x2, ..., p_xd]^T and W_y = [p_y1, p_y2, ..., p_yd]^T, which extract the canonical correlation features u = W_x X and v = W_y Y between X and Y.
9. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 4, canonical correlation analysis is applied to the R-channel and G-channel features and the two resulting vectors are weight-fused with:
R_2 = W_R^T R_1,   G_2 = W_G^T G_1,   W_1 = α R_2 + β G_2   (7)
where R_1 and G_1 are the original features of the R and G channels, R_2 and G_2 are the new feature vectors of the R and G channels, W_R and W_G are the projection matrices obtained by applying canonical correlation analysis to R_1 and G_1, and α and β are weighting coefficients satisfying α + β = 1.
10. The weighted-fusion three-channel feature gait representation method according to claim 1, characterized in that, in step 4, canonical correlation analysis is applied to the fused result and the B-channel feature, and the two resulting vectors are weight-fused with:
W_2 = W_W^T W_1,   B_2 = W_B^T B_1,   F = γ W_2 + δ B_2   (8)
where B_1 is the original feature of the B channel, W_2 and B_2 are the two new feature vectors, W_W and W_B are the projection matrices obtained by applying canonical correlation analysis to W_1 and B_1, and γ and δ are weighting coefficients satisfying γ + δ = 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811172826.0A CN109359578A (en) | 2018-10-09 | 2018-10-09 | Weighted Fusion triple channel eigengait characterizing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109359578A true CN109359578A (en) | 2019-02-19 |
Family
ID=65348767
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811172826.0A Pending CN109359578A (en) | 2018-10-09 | 2018-10-09 | Weighted Fusion triple channel eigengait characterizing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109359578A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2737434A1 (en) * | 2011-07-29 | 2014-06-04 | University Of Ulster | Gait recognition methods and systems |
CN106529499A (en) * | 2016-11-24 | 2017-03-22 | 武汉理工大学 | Fourier descriptor and gait energy image fusion feature-based gait identification method |
CN106803072A (en) * | 2016-12-30 | 2017-06-06 | 中国计量大学 | Variable visual angle gait recognition method based on the fusion of quiet behavioral characteristics |
Non-Patent Citations (1)
Title |
---|
Lü Zhuowen: "Research on Gait Recognition Method Based on Class Energy Image and Coupling Metric", China Master's Theses Full-text Database, Information Science and Technology Series (Monthly) * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109798888A (en) * | 2019-03-15 | 2019-05-24 | 京东方科技集团股份有限公司 | Posture determining device, method and the visual odometry of mobile device |
CN109798888B (en) * | 2019-03-15 | 2021-09-17 | 京东方科技集团股份有限公司 | Posture determination device and method for mobile equipment and visual odometer |
CN111436926A (en) * | 2020-04-03 | 2020-07-24 | 山东省人工智能研究院 | Atrial fibrillation signal detection method based on statistical characteristics and convolution cyclic neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Makihara et al. | The OU-ISIR gait database comprising the treadmill dataset | |
Silberman et al. | Indoor segmentation and support inference from rgbd images | |
Wang et al. | Human identification using temporal information preserving gait template | |
Gao et al. | Multi-perspective and multi-modality joint representation and recognition model for 3D action recognition | |
Ran et al. | Applications of a simple characterization of human gait in surveillance | |
Avgerinakis et al. | Recognition of activities of daily living for smart home environments | |
CN109344694B (en) | Human body basic action real-time identification method based on three-dimensional human body skeleton | |
Singh et al. | Human activity recognition based on silhouette directionality | |
CN106909890B (en) | Human behavior recognition method based on part clustering characteristics | |
CN112001353B (en) | Pedestrian re-identification method based on multi-task joint supervised learning | |
CN109635754A (en) | Gait feature fusion method based on Non-linear coupling metric learning | |
CN108537181A (en) | A kind of gait recognition method based on the study of big spacing depth measure | |
CN109902565A (en) | The Human bodys' response method of multiple features fusion | |
CN109919137B (en) | Pedestrian structural feature expression method | |
CN114187665A (en) | Multi-person gait recognition method based on human body skeleton heat map | |
CN110032940A (en) | A kind of method and system that video pedestrian identifies again | |
Meng et al. | Activity recognition based on semantic spatial relation | |
Mota et al. | A tensor motion descriptor based on histograms of gradients and optical flow | |
CN111680560A (en) | Pedestrian re-identification method based on space-time characteristics | |
CN109359578A (en) | Weighted Fusion triple channel eigengait characterizing method | |
Bhargavas et al. | Human identification using gait recognition | |
Mu et al. | Resgait: The real-scene gait dataset | |
CN112307892A (en) | Hand motion recognition method based on first visual angle RGB-D data | |
Soriano et al. | Curve spreads-a biometric from front-view gait video | |
Galiyawala et al. | Visual appearance based person retrieval in unconstrained environment videos |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20190219 |