CN101630364A - Method for gait information processing and identity identification based on fusion feature - Google Patents


Info

Publication number
CN101630364A
CN101630364A (application CN200910070174A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910070174A
Other languages
Chinese (zh)
Inventor
明东
白艳茹
万柏坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN200910070174A priority Critical patent/CN101630364A/en
Publication of CN101630364A publication Critical patent/CN101630364A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for extracting and processing gait feature information while a person walks and for identifying identity from it. It aims to reduce interference from external factors such as complex backgrounds, to adapt better to real-world conditions, and to extract more accurately the effective information reflecting the walking characteristics of a moving human body, thereby improving gait recognition accuracy. The technical scheme of the invention is a method for gait information processing and identity recognition based on fused features, comprising the following steps: inputting a video sequence; segmenting the silhouette of the human body in the video images by target detection; extracting gait feature parameters using both the boundary-to-centroid distance and the Radon transform; post-processing the obtained feature parameters accordingly; classifying with a support vector machine as the classifier; and evaluating the recognition performance. The method is mainly applied to identity recognition based on gait feature information.

Description

Gait information processing and identity recognition method based on fusion characteristics
Technical Field
The invention relates to gait feature information extraction and processing when a person walks and identity recognition based on the gait feature information, in particular to a gait information processing and identity recognition method based on fusion features.
Background
Biometric identification recognizes individuals through information-detection technologies, using physiological or behavioral characteristics inherent to the human body. Biological characteristics fall into two main types: physiological characteristics are inherent physical traits of the body, such as fingerprints, irises and faces; behavioral characteristics are extracted from actions a person performs and are mostly acquired, such as gait and handwriting. In recent years biometric authentication has developed rapidly and become a hotspot of wide concern in industry, academia, research and management. In 2001, the journal MIT Technology Review listed biometric identification as one of the ten technologies most likely to change the world. Over roughly the next decade, biometric authentication is expected to permeate many aspects of daily life, with an overall influence comparable to that of the Internet.
Gait Recognition is one of the emerging fields of biometric identification. It aims to recognize personal identity, or to detect physiological, pathological and psychological characteristics, from the way a person walks, and has broad application prospects. Gait is a complex behavioral characteristic and a comprehensive manifestation of human physiology, psychology and response to the environment. Because gait results from the coordinated action of the entire musculoskeletal system (body weight, limb length, skeletal structure, etc.) and depends on hundreds of kinematic parameters, it differs between individuals. Early medical studies showed that human gait has 24 distinct components; if all of them are taken into account, gait is unique to the individual, which makes identification by gait possible. Compared with other biometric authentication technologies, gait recognition has the unique advantages of being non-invasive, usable at a distance, undemanding of image detail, and difficult to disguise.
Existing gait recognition algorithms fall broadly into two categories: model-based and model-free. Model-based methods build a model of the human body structure, or of the salient walking characteristics expressed in the gait image sequence, and extract gait features from the model parameters. Such methods describe gait features more accurately and are much less sensitive to changes in external conditions, but their heavy computational load is a serious obstacle for applications where real-time performance matters. Model-free methods extract features by directly analyzing the shape or motion of the human body during walking. Their computational load is comparatively small, which favors real-time operation in practice, but they are sensitive to changes in background and illumination, and their recognition ability degrades sharply once occlusion appears in the scene.
Gait recognition combines multiple technologies such as computer vision, pattern recognition and video/image sequence processing. With the rapid development of biometric authentication, identity recognition based on gait features increasingly shows its advantages, with broad application prospects and economic value in access control, security surveillance, human-computer interaction, medical diagnosis and other fields. It has therefore attracted great interest from researchers at home and abroad, and has become a prominent frontier of biomedical information detection in recent years.
However, no technique is perfect. Capturing images in real-world conditions is affected by many factors (changes in weather, changes in lighting, background clutter and interference, shadows of moving objects, occlusions between objects or between an object and the environment, camera motion, etc.), all of which complicate the final identification of gait. How to eliminate these influences and extract the effective gait features of the moving human body more accurately is a hard problem in the field of gait recognition.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a gait information processing and identity recognition method based on fused features, so as to reduce interference from external factors such as complex backgrounds, to adapt better to practical conditions, and to extract more accurately the effective information reflecting the walking characteristics of a moving human body, thereby improving gait recognition accuracy.
The technical scheme adopted by the invention is a gait information processing and identity recognition method based on fused features, comprising the following steps: inputting a video sequence; segmenting the silhouette of the human body in the video images by target detection; extracting gait feature parameters using both the boundary-to-centroid distance and the Radon transform; post-processing the obtained feature parameters accordingly; finally classifying with a support vector machine as the classifier; and evaluating the recognition performance.
Segmenting the silhouette of the human body in the video images by target detection, namely moving object detection and key frame extraction, comprises the following steps:
(1) modeling the background by a minimum median variance method:
Let $I_{(x,y)}^t$ denote the acquired sequence of N frames, with $t = 1, 2, \ldots, N$ the frame index; the background $B_{(x,y)}$ is

$$B_{(x,y)} = \min_{P}\left\{ \operatorname{med}_t \left( I_{(x,y)}^t - P \right)^2 \right\}$$

where P is the gray value to be determined at pixel position (x, y); the three components R, G and B are modeled separately, and a color background image in RGB format is obtained by synthesis;
(2) motion segmentation:
the difference operation is performed using an indirect difference function:
$$f(a,b) = 1 - \frac{2\sqrt{(a+1)(b+1)}}{(a+1)+(b+1)} \times \frac{2\sqrt{(256-a)(256-b)}}{(256-a)+(256-b)}$$
where a and b are the gray levels (intensities) of the current image and the background image at the same pixel (x, y), with $0 \le f(a,b) \le 1$ and $0 \le a, b \le 255$. The sensitivity of this difference function varies automatically with the background gray level; this adaptivity improves segmentation accuracy. After differencing, a binary image of the moving object is obtained by thresholding:
$$S_{(x,y)} = \begin{cases} 1, & f(a_{(x,y)}, b_{(x,y)}) \ge T \\ 0, & \text{otherwise;} \end{cases}$$
(3) morphological processing and connected component analysis:
post-processing the binary image of the moving object with a combination of morphological processing and connected component analysis, removing residual noise to obtain a better segmentation;
(4) gait cycle division and key frame extraction:
exploiting the fact that the width of the human silhouette varies synchronously and periodically with time, the gait cycle is divided according to the width signal of the silhouette, and the two maximum points within one gait cycle are extracted as key frames.
The gait feature extraction comprises the following steps:
Firstly the human body contour is extracted with a boundary tracking algorithm and, after the necessary normalization and resampling, yields the moving human body contour line. Boundary-to-centroid distance features and Radon transform features are then extracted from the contour. The boundary-to-centroid distance feature means:
the original two-dimensional contour shape is represented indirectly by a one-dimensional distance signal $D = (d_1, d_2, \ldots, d_{256})$, which has translation invariance and rotation invariance, where

$$d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}, \quad i = 1, 2, \ldots, 256,$$

with $(x_c, y_c)$ the centroid of the contour;
radon transform features refer to:
The essence of the Radon transform is the projection of the image matrix along a given direction, which may be taken at any angle. In general, the Radon transform of f(x, y) is the line integral parallel to the y'-axis of a rotated coordinate system:
$$R_\theta(x') = \int_{-\infty}^{\infty} f(x'\cos\theta - y'\sin\theta,\; x'\sin\theta + y'\cos\theta)\, dy'$$
where $$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}.$$
The corresponding post-processing of the obtained feature parameters refers to a data dimension reduction and feature fusion strategy, namely applying Principal Component Analysis (PCA) to reduce the data dimension.
The classification and identification using a support vector machine as the classifier adopts a one-versus-one strategy: each binary classifier makes one two-way decision, and the N classes of training data are combined pairwise to construct $C_N^2 = N(N-1)/2$ support vector machines. The final classification result is decided by voting: suppose there are m classes of gait to be identified, denoted $S_1, S_2, \ldots, S_m$. One sample $S_{ij}$ is randomly selected from each class for training, where i is the class and j is the sample index within the class, and the remaining samples $S_{it}$ ($t \ne j$) are used for testing. During testing, a test sample $S_{it}$ is input into the trained classifier; if the output is i, the sample is judged as class i, and if the output is some other class j, it is counted as a recognition error.
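The one-versus-one voting scheme described above can be sketched as follows (illustrative Python, not from the patent; a toy nearest-centroid rule stands in for each trained binary SVM, and the helper names `train_pairwise` and `predict` are hypothetical):

```python
from itertools import combinations

def train_pairwise(train):
    # train: {class label: list of feature vectors}.  One-vs-one strategy:
    # build C(N, 2) = N(N-1)/2 binary classifiers, one per pair of classes.
    # Here each "classifier" is a toy nearest-centroid rule standing in
    # for a real binary SVM.
    def centroid(vs):
        return [sum(v[k] for v in vs) / len(vs) for k in range(len(vs[0]))]
    return {(a, b): (centroid(train[a]), centroid(train[b]))
            for a, b in combinations(sorted(train), 2)}

def predict(models, x):
    # Each pairwise classifier casts one vote; the class with the most
    # votes wins, as in the voting rule described above.
    votes = {}
    for (a, b), (ca, cb) in models.items():
        da = sum((xi - ci) ** 2 for xi, ci in zip(x, ca))
        db = sum((xi - ci) ** 2 for xi, ci in zip(x, cb))
        winner = a if da <= db else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

With N = 3 classes this builds 3·2/2 = 3 classifiers; a real implementation would substitute SVM training and decision functions for the centroid rule while keeping the same voting structure.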
The invention can bring the following effects:
By using the boundary-to-centroid distance and the Radon transform together for gait feature extraction, the invention captures most of the energy information of the original contour, carrying both the appearance and the dynamic information of gait. It effectively reduces the influence of various interferences such as shadow, illumination, occlusion and background clutter; feature fusion compensates for the weakness of any single feature, so the effective information reflecting the walking characteristics of the moving human body is extracted more accurately, and gait recognition accuracy is improved.
By applying the idea of Principal Component Analysis (PCA) to data dimension reduction, the invention effectively reduces the data dimension while keeping most of the original information, greatly reducing the computational load, improving efficiency and lowering cost.
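As an illustration of the PCA idea (a minimal sketch, not the patent's implementation; the function names are hypothetical), the leading principal component can be found by power iteration on the covariance matrix and then used to project feature vectors onto a lower dimension:

```python
import math

def first_pc(data, iters=200):
    # Power iteration on the covariance matrix C = (1/n) X^T X of the
    # mean-centred data; the converged vector is the leading principal
    # component.  Projecting onto the top few such components is the PCA
    # dimension reduction described above.
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - mean[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # Apply C implicitly as X^T (X v) / n to avoid forming C.
        xv = [sum(r[j] * v[j] for j in range(d)) for r in x]
        w = [sum(x[i][j] * xv[i] for i in range(n)) / n for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v, mean

def project(row, v, mean):
    # 1-D PCA projection of one feature vector.
    return sum((row[j] - mean[j]) * v[j] for j in range(len(row)))
```

For gait features one would keep enough components to retain most of the variance; here only the first is computed for brevity.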
Drawings
FIG. 1 is a technical flow chart of the present invention.
FIG. 2 is a flow chart of moving object detection according to the present invention.
FIG. 3 is a graph of normalized boundary center-to-center distance according to the present invention.
FIG. 4 is a diagram of Radon transform coordinates of the present invention.
FIG. 5 shows a Radon transform feature vector diagram of the present invention.
FIG. 6 shows cumulative match score curves for different features.
Detailed Description
The invention introduces the idea of the Radon transform into gait recognition. The Radon transform is widely used to detect line segments in an image, which matches the fact that, in the image contour, a leg approximates a line segment in some direction whose angle to the horizontal axis varies greatly during walking. The feature parameters obtained by the Radon transform can therefore reflect most of the energy information of the original contour; they carry both the appearance and the dynamic information of gait, so the influence of self-occlusion and shadow can be effectively reduced. In addition, the contour-based boundary-to-centroid distance feature is extracted and fused with the Radon transform feature, effectively compensating for the weakness of a single feature. Finally the fused features are input into a Support Vector Machine (SVM) for gait classification and recognition.
The invention discloses a method for gait identification using a dual-feature fusion technique; the key technologies involved include video processing, image processing and pattern recognition. The technical process is as follows: for an input video sequence, the moving object in the video images is segmented by target detection, the moving human body contour is extracted with a boundary tracking algorithm, and the contour is resampled and normalized; then the boundary-to-centroid distance and the Radon transform are used together to extract gait feature parameters and the two features are fused; finally a support vector machine is selected as the classifier for classification and recognition. Compared with similar techniques, the method is more robust to changes in clothing, carried objects and occlusions, effectively suppresses interference from external factors such as complex backgrounds, and offers a new direction for more reliable gait recognition in the future.
The invention is further illustrated below with reference to the figures and examples.
Gait recognition mainly analyzes and processes a moving image sequence containing a person, and generally includes human body detection, gait feature extraction and identity recognition. FIG. 1 shows the technical flow of the invention: the overall scheme is to input a video sequence, segment the silhouette of the human body in the video images by target detection, use the boundary-to-centroid distance and the Radon transform together for gait feature parameter extraction so as to obtain a gait recognition method robust to changes in clothing, carried objects and occlusions, post-process the feature data accordingly, and finally select a support vector machine as the classifier for classification and recognition, evaluating the recognition performance.
1 moving object detection and Key frame extraction
Extracting gait feature information involves extracting the moving object from a complex background, which is the early preprocessing stage of gait recognition. Because the practical environment often contains various interferences such as shadow, illumination, occlusion and background clutter, high demands are placed on the real-time performance and reliability of the algorithm. The specific flow of moving object detection adopted by the invention is shown in FIG. 2.
(1) Minimum median variance background modeling
The minimum median variance method (LMedS) is an algorithm built on the theory of robust statistics.
Let $I_{(x,y)}^t$ denote the acquired sequence of N frames, with $t = 1, 2, \ldots, N$ the frame index; the background $B_{(x,y)}$ is:

$$B_{(x,y)} = \min_{P}\left\{ \operatorname{med}_t \left( I_{(x,y)}^t - P \right)^2 \right\} \qquad (1)$$

where P is the gray value to be determined at pixel position (x, y), med denotes taking the median and min taking the minimum. The three components R, G and B are modeled separately, and a color background image in RGB format is obtained by synthesis.
The specific flow of the algorithm is as follows:
step [1] select a pixel position (x, y);
step [2] set P = 0;
step [3] compute in sequence $(I_{(x,y)}^1 - P)^2, (I_{(x,y)}^2 - P)^2, \ldots, (I_{(x,y)}^N - P)^2$;
step [4] sort the results and take their median: if N is even, average the (N/2)-th and (N/2+1)-th sorted values; if N is odd, take the ((N+1)/2)-th value; store the result as $\operatorname{med}_P$ (for P = 0 this is $\operatorname{med}_0$);
step [5] set P = P + 1; if P ≤ 255, return to step [3] and repeat steps [3]-[4], saving each result as $\operatorname{med}_P$; otherwise go to step [6];
step [6] find the minimum of $\operatorname{med}_0, \operatorname{med}_1, \ldots, \operatorname{med}_{255}$; the corresponding P is the background gray level at this pixel position;
step [7] select the next pixel position, return to step [2], and repeat until all pixel positions in the image have been processed.
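Steps [1]-[7] amount to the following exhaustive search (a minimal Python sketch on nested-list grayscale frames, not from the patent; `statistics.median` already implements the even/odd median rule of step [4]):

```python
import statistics

def lmeds_background(frames):
    # frames: list of N grayscale images (lists of rows of ints in 0..255).
    # For each pixel position, the background gray level is the P in
    # [0, 255] that minimises the median over t of (I_t - P)^2, per Eq. (1).
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            series = [f[y][x] for f in frames]
            best_p, best_med = 0, float('inf')
            for p in range(256):  # steps [2]-[5]: try every candidate P
                med = statistics.median((v - p) ** 2 for v in series)
                if med < best_med:
                    best_p, best_med = p, med
            bg[y][x] = best_p  # step [6]: the minimising P
    return bg
```

The median makes the estimate robust: a pixel briefly covered by the walking person is an outlier that does not shift the background value, unlike a mean-based model.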
(2) Motion segmentation
The difference operation is performed using an indirect difference function:
$$f(a,b) = 1 - \frac{2\sqrt{(a+1)(b+1)}}{(a+1)+(b+1)} \times \frac{2\sqrt{(256-a)(256-b)}}{(256-a)+(256-b)} \qquad (2)$$
where a and b are the gray levels (intensities) of the current image and the background image at the same pixel (x, y), with $0 \le f(a,b) \le 1$ and $0 \le a, b \le 255$. The sensitivity of this difference function varies automatically with the background gray level, and this adaptivity improves segmentation accuracy.
Obtaining a moving target binary image through threshold segmentation after difference:
$$S_{(x,y)} = \begin{cases} 1, & f(a_{(x,y)}, b_{(x,y)}) \ge T \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$
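Equations (2) and (3) translate directly into code (illustrative Python; the threshold value T = 0.1 here is an arbitrary assumption for demonstration, not a value taken from the patent):

```python
import math

def diff_score(a, b):
    # Indirect difference function f(a, b) of Eq. (2), in [0, 1]: it is 0
    # when a == b and grows with the contrast between the current and
    # background gray levels, with sensitivity that adapts to the
    # background gray level.
    low = 2 * math.sqrt((a + 1) * (b + 1)) / ((a + 1) + (b + 1))
    high = 2 * math.sqrt((256 - a) * (256 - b)) / ((256 - a) + (256 - b))
    return 1 - low * high

def segment(current, background, t=0.1):
    # Eq. (3): binary foreground mask, S(x, y) = 1 where f(a, b) >= T.
    return [[1 if diff_score(a, b) >= t else 0
             for a, b in zip(crow, brow)]
            for crow, brow in zip(current, background)]
```

Identical pixels score exactly 0, so any positive threshold keeps the static background out of the mask.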
(3) morphological processing and connected component analysis
Due to weather, illumination, shadows and other external factors, noise inevitably remains in the image after motion segmentation, and a few pixels inside the moving object are misjudged as background; the image therefore needs further processing to obtain the best segmentation. The invention uses morphological filtering to remove noise from the binary image and to fill holes in the moving object. The most basic morphological operations for image filtering are dilation and erosion; combining them yields two derived operations, opening and closing. Opening smooths the convex contour of an object, breaks narrow connections and removes thin protrusions; closing smooths the concave contour and bridges narrow breaks and long, thin gulfs. These properties achieve the goals of filtering and hole filling.
After morphological filtering, some spurious noise may remain as blocks of different sizes, and the true moving object is usually the largest of them. Connected component analysis is therefore applied, and only the largest component, the moving object, is retained in the image.
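A minimal sketch of this post-processing on nested-list binary images (illustrative Python, not the patent's implementation; a 3×3 structuring element is assumed, and border pixels are handled by simply ignoring out-of-range neighbors, which is a simplification):

```python
def dilate(img):
    # 3x3 dilation: a pixel becomes 1 if any in-bounds neighbor is 1.
    h, w = len(img), len(img[0])
    return [[1 if any(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w) else 0
             for x in range(w)] for y in range(h)]

def erode(img):
    # 3x3 erosion: a pixel stays 1 only if all in-bounds neighbors are 1.
    h, w = len(img), len(img[0])
    return [[1 if all(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w) else 0
             for x in range(w)] for y in range(h)]

def close_then_largest(img):
    # Closing (dilation then erosion) fills small holes in the silhouette;
    # afterwards only the largest 4-connected component is kept.
    img = erode(dilate(img))
    h, w = len(img), len(img[0])
    seen, best = set(), []
    for y in range(h):
        for x in range(w):
            if img[y][x] and (y, x) not in seen:
                comp, stack = [], [(y, x)]
                seen.add((y, x))
                while stack:  # flood fill one component
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for cy, cx in best:
        out[cy][cx] = 1
    return out
```

A real pipeline would combine opening (to remove specks) with closing (to fill holes); only closing is shown here to keep the sketch short.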
(4) Gait cycle division and key frame extraction
Human walking is a periodic behavior. A gait cycle is defined as the time from heel strike to the next heel strike of the same foot, and contains two stance phases and two swing phases. To improve efficiency, the invention exploits the fact that the width of the human silhouette varies synchronously and periodically with time: the gait cycle is divided according to the width signal of the silhouette, and the two maximum points within one cycle are extracted as key frames, simplifying the subsequent processing.
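The width-signal key-frame rule can be sketched as follows (illustrative Python; the simple local-maximum test stands in for whatever peak detection the patent actually uses, and real width signals would first be smoothed):

```python
def silhouette_width(mask):
    # Width of the silhouette in one frame: the span of columns that
    # contain at least one foreground pixel.
    cols = [x for row in mask for x, v in enumerate(row) if v]
    return max(cols) - min(cols) + 1 if cols else 0

def key_frames(widths):
    # Local maxima of the per-frame width signal; the silhouette is widest
    # twice per gait cycle (legs maximally apart), so two maxima per cycle
    # serve as the key frames.
    return [i for i in range(1, len(widths) - 1)
            if widths[i - 1] < widths[i] >= widths[i + 1]]
```

In practice `widths` would be computed by mapping `silhouette_width` over the segmented masks of the whole sequence.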
2 gait feature extraction
Extracting effective gait feature parameters is the key to gait recognition. Since gait recognition depends to a great extent on how the human contour shape changes over time, both gait features selected in the invention are based on the contour. The extraction of the moving body contour line is essentially boundary tracking, followed by the necessary normalization and resampling.
2.1 boundary center-to-center distance feature
The boundary-to-centroid distance is defined as the distance from each boundary point to the centroid. The original two-dimensional contour shape is thereby represented indirectly by a one-dimensional distance signal $D = (d_1, d_2, \ldots, d_{256})$, which has translation invariance and rotation invariance, where

$$d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}, \quad i = 1, 2, \ldots, 256, \qquad (4)$$

with $(x_c, y_c)$ the centroid of the contour.
FIG. 3 shows a normalized boundary-to-centroid distance curve. Its peaks and valleys reflect the protrusions and depressions of the contour; their amplitudes relate to the anatomy and walking posture of the human body, and their number is closely related to the body pose. These properties make the signal useful for distinguishing different subjects.
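A minimal sketch of the distance-signal computation of Eq. (4) (illustrative Python; the nearest-index resampling and max-normalisation here are assumptions, since the patent specifies only that 256 samples are taken and the signal normalized):

```python
import math

def boundary_centroid_distance(points, n=256):
    # points: ordered (x, y) samples of the silhouette boundary.
    # Returns the 1-D signal d_i of distances from each boundary point to
    # the centroid, resampled to n points and normalised by its maximum.
    # Measuring from the centroid gives translation invariance.
    xc = sum(x for x, _ in points) / len(points)
    yc = sum(y for _, y in points) / len(points)
    d = [math.hypot(x - xc, y - yc) for x, y in points]
    resampled = [d[int(i * len(d) / n)] for i in range(n)]  # crude resample
    m = max(resampled)
    return [v / m for v in resampled] if m else resampled
```

Peaks of the returned signal correspond to protrusions of the contour (head, swinging limbs) and valleys to its depressions, as described for FIG. 3.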
2.2Radon transform characteristics
The Radon transform has superposition, linearity, scaling, shift and rotation properties, and is widely used for line segment detection in images. Its essence is the projection of the image matrix along a given direction, which may be taken at any angle. In general, the Radon transform of f(x, y) is the line integral parallel to the y'-axis of a rotated coordinate system:
$$R_\theta(x') = \int_{-\infty}^{\infty} f(x'\cos\theta - y'\sin\theta,\; x'\sin\theta + y'\cos\theta)\, dy' \qquad (5)$$
where $$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \qquad (6)$$
In the image contour, a leg approximates a line segment in some direction, so after the Radon transform the projection magnitude is largest in the direction perpendicular to the leg; during walking, the orientation of the leg relative to the horizontal axis changes considerably. In other words, the characteristic parameters obtained by the Radon transform capture most of the energy of the original contour, and they vary markedly over time, reflecting the swing of the lower limbs. By learning from and analyzing these parameters, important information about individual morphology and gait can therefore be obtained.
The features extracted by the image Radon transform carry both the appearance and the dynamic information of gait, and they effectively reduce the influence of self-occlusion and shadows. Another significant advantage of this algorithm is that each Radon transform parameter aggregates the contribution of many pixels, and is therefore not easily affected by spurious pixels in the original contour image.
Fig. 4 depicts the relationship between the angle information after the Radon transform and the lower-limb angle. When the integration direction is perpendicular to the line segment formed by the thigh, the integral value is maximal. Since the angles of the thigh and shank with respect to the vertical vary within 0-60 degrees during walking, and most pixels in other directions of the silhouette image are noise such as shadows, the Radon transform is applied only over the 0-60 degree and 120-180 degree ranges; the angle peaks over the two intervals are obtained separately and combined into the feature vector, as shown in Fig. 5. The extracted feature information is unaffected by body self-occlusion, the computational load is effectively reduced, and the method is simple and fast compared with other modeling approaches.
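A dependency-free sketch of this idea: the discrete Radon projection at one angle can be approximated by binning silhouette pixels along the rotated x′ coordinate, and the angle whose projection peaks is taken as the feature. The binning approximation and the function names are assumptions for illustration, not the patent's exact implementation:

```python
import numpy as np

def radon_projection(img, theta_deg):
    """Approximate Radon projection of a binary silhouette at one angle:
    pixel mass is binned along the rotated x' coordinate (Eq. 5/6)."""
    ys, xs = np.nonzero(img)
    x = xs - img.shape[1] / 2.0          # center the coordinate system
    y = ys - img.shape[0] / 2.0
    t = np.deg2rad(theta_deg)
    xp = x * np.cos(t) + y * np.sin(t)   # projected coordinate x'
    n = int(np.ceil(np.hypot(*img.shape)))
    hist, _ = np.histogram(xp, bins=n, range=(-n / 2, n / 2))
    return hist

def angle_peak(img, angles):
    """Angle in the searched range whose projection has the largest
    single-bin response (the 'angle peak' feature)."""
    return max(angles, key=lambda a: radon_projection(img, a).max())

# a vertical bar (a crude stand-in for a leg) projects into one narrow
# bin at theta = 0, so the response peaks at 0 degrees in the 0-60 range
img = np.zeros((64, 64), dtype=np.uint8)
img[10:50, 31] = 1
peak = angle_peak(img, range(0, 61, 5))
```

In the method described above, one such peak would be taken per interval (0-60 and 120-180 degrees) and the peaks concatenated into the feature vector.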
3 data dimension reduction and feature fusion strategy
To reduce the computational load and eliminate redundant information, the invention applies the idea of Principal Component Analysis (PCA) to data dimension reduction, which effectively lowers the data dimensionality while retaining most of the original information. The PCA dimension-reduction procedure can be summarized as follows:
(1) raw data normalization:
to eliminate the influence of different dimensions and orders of magnitude among the data, the raw data must first be standardized so that they are comparable. The standardization used in the invention is: subtract from each element of the matrix the mean of its column, then divide by the standard deviation of that column, so that each variable is normalized to zero mean and unit variance, giving the matrix X:
$$X = [X_1, X_2, \ldots, X_n]^T = [X_{ij}]_{n \times p} \qquad (7)$$
wherein,
$$X_{ij} = \left(a_{ij} - \bar{A}_j\right)/S_j, \quad i = 1,2,\ldots,n,\; j = 1,2,\ldots,p$$

$$\bar{A}_j = \frac{1}{n}\sum_{i=1}^{n} a_{ij}, \qquad S_j = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(a_{ij} - \bar{A}_j\right)^2} \qquad (8)$$
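The standardization of step (1) can be sketched in a few lines; the `ddof=1` choice matches the 1/(n−1) factor in the definition of S_j:

```python
import numpy as np

def standardize(A):
    """Column-wise z-score: subtract each column mean A_j-bar, divide
    by the sample standard deviation S_j (ddof=1, as in Eq. 8)."""
    A = np.asarray(A, dtype=float)
    return (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)

# columns with very different scales are brought to mean 0, variance 1
rng = np.random.default_rng(0)
A = rng.normal(loc=5.0, scale=3.0, size=(100, 4))
X = standardize(A)
```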
(2) calculating a correlation coefficient matrix:
$$R = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1p} \\ r_{21} & r_{22} & \cdots & r_{2p} \\ \vdots & \vdots & & \vdots \\ r_{p1} & r_{p2} & \cdots & r_{pp} \end{bmatrix} \qquad (9)$$
R is a real symmetric matrix (i.e., $r_{ij} = r_{ji}$), where $r_{ij}$ (i, j = 1, 2, …, p) is the correlation coefficient between the standardized variables $X_i$ and $X_j$, defined as their covariance divided by the product of their standard deviations:
$$r_{ij} = \frac{\sum_{k=1}^{n}\left(X_{ki} - \bar{X}_i\right)\left(X_{kj} - \bar{X}_j\right)}{\sqrt{\sum_{k=1}^{n}\left(X_{ki} - \bar{X}_i\right)^2 \sum_{k=1}^{n}\left(X_{kj} - \bar{X}_j\right)^2}} \qquad (10)$$
in the formula, $\bar{X}_i = \frac{1}{n}\sum_{k=1}^{n} X_{ki}$ and $\bar{X}_j = \frac{1}{n}\sum_{k=1}^{n} X_{kj}$ denote the column means of the vectors $X_i$ and $X_j$.
(3) Eigendecomposition, solving for eigenvalues and eigenvectors:

solve the characteristic equation $|R - \lambda E| = 0$ to obtain the eigenvalues $\lambda_i$ (i = 1, 2, …, p) of the correlation coefficient matrix R, arranged in descending order, $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_p$; then obtain the eigenvector $U_i$ (i = 1, 2, …, p) corresponding to each eigenvalue $\lambda_i$.
(4) Determining principal components by cumulative contribution ratio:
the calculation formula of the accumulated contribution rate is as follows:
$$\eta_l = \frac{\sum_{k=1}^{l}\lambda_k}{\sum_{k=1}^{p}\lambda_k}, \quad (l = 1,2,\ldots,p) \qquad (11)$$
when the cumulative contribution rate reaches a chosen threshold (85% in the invention), the first m eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_m$ (m ≤ p) and their corresponding eigenvectors are retained as the principal components, and the rest are discarded.
(5) Calculating scoring matrix
The eigenvectors $U = [U_1, U_2, \ldots, U_m]$ corresponding to the principal-component eigenvalues form a new vector space whose coordinate axes are the new variables (principal components), also called load axes. The scoring matrix is computed as:
$$F_{(n\times m)} = X_{(n\times p)} \cdot U_{(p\times m)} \qquad (12)$$
where X is the standardized data matrix and U is the principal-component loading matrix; the scoring matrix F is the final result of the PCA dimension reduction. Each column of F holds the projections of the rows of the data matrix (the vectors formed by the original variables) onto one principal-component coordinate axis (load axis); the vectors formed by these projections are the principal component score vectors.
As the above steps show, the PCA algorithm approximates the full information of the original data matrix with a few largest principal-component scores. This achieves dimension reduction and greatly decreases the correlation among the data, so that the data are optimized and recombined.
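The five steps can be sketched end-to-end as below. This is a simplified reading of the procedure, with numpy's `corrcoef` and `eigh` standing in for the hand-written correlation matrix and eigendecomposition:

```python
import numpy as np

def pca_scores(A, threshold=0.85):
    """PCA via the correlation matrix, following steps (1)-(5):
    standardize, correlate, eigendecompose, keep components whose
    cumulative contribution reaches the threshold, then project."""
    A = np.asarray(A, dtype=float)
    X = (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)   # step (1), Eq. 7-8
    R = np.corrcoef(X, rowvar=False)                   # step (2), Eq. 9-10
    lam, U = np.linalg.eigh(R)                         # step (3)
    order = np.argsort(lam)[::-1]                      # descending eigenvalues
    lam, U = lam[order], U[:, order]
    eta = np.cumsum(lam) / lam.sum()                   # step (4), Eq. 11
    m = int(np.searchsorted(eta, threshold) + 1)       # smallest m with eta >= 85%
    return X @ U[:, :m]                                # step (5), Eq. 12

# two highly correlated columns plus an independent one: the 85%
# threshold keeps two principal components and drops the redundancy
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
A = np.hstack([base, base + 0.01 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])
F = pca_scores(A)
```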
A single gait feature is often insufficiently stable and robust to provide enough information for identification. The invention therefore adopts a multi-feature fusion approach: for the same gait sequence, the boundary center distance feature and the Radon transform feature extracted by the two different algorithms are fused at the feature level. The fusion essentially splices the two extracted features into one feature vector, yielding information that is more accurate, more complete and more meaningful than either single feature; the fused feature is then fed to the classifier for classification and identification, improving the recognition performance.
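Feature-level fusion as described, splicing the two descriptors into one vector, reduces to a concatenation; the feature sizes below are illustrative placeholders, not the patent's dimensions:

```python
import numpy as np

def fuse_features(dist_feat, radon_feat):
    """Feature-level fusion: splice the boundary center distance
    signal and the Radon angle-peak features into one vector."""
    return np.concatenate([np.ravel(dist_feat), np.ravel(radon_feat)])

d = np.ones(256)          # stand-in boundary center distance signal
r = np.ones(24)           # stand-in Radon angle-peak features
fused = fuse_features(d, r)
```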
4 gait recognition based on support vector machine
Traditional statistical pattern recognition assumes a sufficiently large number of samples, and its performance is theoretically guaranteed only as the sample count tends to infinity. In gait recognition applications the number of samples is limited, so many methods have difficulty achieving the desired results. The Support Vector Machine (SVM) is built on the structural risk minimization principle, minimizing the expected risk and thereby markedly improving pattern recognition ability; it has specific advantages for small-sample, nonlinear and high-dimensional pattern recognition problems. The main idea of the support vector machine is to find an optimal classification hyperplane that satisfies the classification requirements, maximizing the margin while guaranteeing classification accuracy. In theory, the support vector machine achieves optimal classification of linearly separable data.
Gait recognition is a multi-class classification problem, whereas the support vector machine was proposed for two-class classification and cannot be applied to multi-class problems directly. For multi-class pattern recognition, the support vector machine can be realized by combining two-class problems. The invention adopts a one-versus-one strategy, in which each classifier discriminates between one pair of classes: the N classes of training data are combined pairwise, constructing C(N, 2) = N(N − 1)/2 support vector machines, and the final classification result is determined by voting. Suppose there are m gait classes to be identified, denoted $S_1, S_2, \ldots, S_m$. In each class one sample $S_{ij}$ (where i is the class and j is the sample number within the class) is randomly selected for training, and the other samples $S_{it}$ (j ≠ t) are used for testing. During testing, the test sample $S_{it}$ is input to the trained classifier; if the output is i, the sample is judged to belong to the i-th class; if the output is some j ≠ i, it is counted as a recognition error.
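The one-versus-one voting scheme can be sketched as follows. A nearest-centroid rule stands in for the pairwise SVMs so the example stays self-contained; only the N(N−1)/2 pairing and the voting logic mirror the method described above:

```python
import numpy as np
from itertools import combinations

def train_pairwise(X, y):
    """One-vs-one scheme: one binary classifier per class pair,
    N(N-1)/2 in total. A nearest-centroid rule is a stand-in
    for the per-pair SVM (an illustrative assumption)."""
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    pairs = list(combinations(sorted(centroids), 2))
    return pairs, centroids

def predict_vote(x, pairs, centroids):
    """Each pairwise classifier casts one vote; the class with the
    most votes wins, as in the voting step of the method."""
    votes = {c: 0 for c in centroids}
    for a, b in pairs:
        da = np.linalg.norm(x - centroids[a])
        db = np.linalg.norm(x - centroids[b])
        votes[a if da <= db else b] += 1
    return max(votes, key=votes.get)

# three toy classes -> C(3, 2) = 3 pairwise classifiers
X = np.array([[0.0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]])
y = np.array([0, 0, 1, 1, 2, 2])
pairs, cents = train_pairwise(X, y)
pred = predict_vote(np.array([5.0, 5.5]), pairs, cents)
```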
Advantageous effects
The experimental data of the invention come from Dataset B of the CASIA gait database. Twenty subjects were extracted from the database, only the 90-degree view angle was considered, and two states were distinguished: natural walking and walking with a backpack. Each subject forms one class containing multiple samples: 6 gait cycles of normal walking and 4 gait cycles of backpack walking.
To evaluate the recognition results more fully and effectively, two measures are used together as evaluation indexes: the correct recognition rate (PCR) and the Cumulative Match Score (CMS).
The definition of the correct recognition rate is:
PCR=T/N×100% (7)
where T is the number of correctly identified samples and N is the total number of test samples. During testing, sample $T_i$ is input to the trained classifier; if the output is i, the identification is deemed correct; if the output is j (i ≠ j), the identification is counted as an error. The correct-recognition statistics are shown in Table 1.
The cumulative match score is defined as the cumulative probability P(k) that the actual class of a test sample appears among its top k match values. Evaluation by cumulative match score yields not only the recognition rate at the best match but also the recognition rate within the first n matches, allowing a comprehensive assessment of the recognition system. Fig. 6 shows the cumulative match score curves of different dynamic features in the normal walking state, which also reflect the convergence speed of the algorithm from another angle.
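Both evaluation indexes can be sketched directly from their definitions; the score matrix below is a made-up toy example, not experimental data from the patent:

```python
import numpy as np

def pcr(predicted, actual):
    """Correct recognition rate: PCR = T / N x 100%."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return np.mean(predicted == actual) * 100.0

def cms(score_matrix, actual, k):
    """Cumulative match score P(k): fraction of test samples whose
    true class appears among the k best-scoring candidate classes.
    score_matrix[i, c] is the match score of sample i to class c."""
    ranks = np.argsort(-score_matrix, axis=1)     # best candidates first
    hits = [actual[i] in ranks[i, :k] for i in range(len(actual))]
    return float(np.mean(hits))

scores = np.array([[0.9, 0.5, 0.1],   # sample 0: true class 0 ranked 1st
                   [0.2, 0.3, 0.9],   # sample 1: true class 1 ranked 2nd
                   [0.1, 0.8, 0.6]])  # sample 2: true class 1 ranked 1st
actual = np.array([0, 1, 1])
top1 = cms(scores, actual, 1)         # P(1): 2 of 3 samples hit at rank 1
rate = pcr([0, 2, 1], actual)         # 2 of 3 predictions correct
```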
Table 1: Identification statistics for the 20 subjects
Experiments show that the boundary center distance feature and the Radon transform feature are effective gait features: they contain rich profile information, comprehensively reflect a person's gait, and yield a good recognition effect. Using the fused feature of the two for recognition markedly improves the recognition performance, opening a new avenue for exploring multi-feature-fusion gait recognition.
The invention provides a novel gait recognition method, which effectively fuses boundary center distance features and Radon transformation features based on contours so as to reduce the interference of external factors such as complex backgrounds and the like, has better self-adaptability to practical conditions, and more accurately extracts effective information capable of reflecting the walking features of a moving human body so as to improve the gait recognition accuracy.
The invention can support the effective use of surveillance systems and the reliable evaluation of surveillance performance, yielding considerable social benefit and advancing public safety services. It can also be integrated into security access-control systems, raising the physical access management of monitored areas to a higher security level and helping create a safer and more harmonious social environment. The preferred embodiment is intended for patent assignment, technology collaboration or product development.

Claims (5)

1. A gait information processing and identity recognition method based on fusion characteristics is characterized by comprising the following steps: inputting a video sequence, segmenting outline information of a human body target in a video image through target detection, simultaneously using boundary center distance and Radon transformation for gait feature parameter extraction, performing corresponding post-processing on the obtained feature parameters, finally selecting a support vector machine as a classifier for classification and identification, and evaluating the identification effect.
2. The gait information processing and identity recognition method based on the fusion features as claimed in claim 1, wherein the object detection divides the outline information of the human body object in the video image, namely the motion object detection and the key frame extraction, comprising the following steps:
(1) modeling the background by a minimum median variance method:
let $I_{(x,y)}^{t}$ denote the acquired sequence of N frames, where $(x, y) \in I_{(x,y)}^{t}$ and $t = 1, 2, \ldots, N$ is the frame index; the background $B_{(x,y)}$ is:

$$B_{(x,y)} = \min_{P}\left\{ \operatorname{med}_{t}\left( I_{(x,y)}^{t} - P \right)^{2} \right\}$$
where P is the gray value to be determined at pixel position (x, y), med denotes the median and min the minimum; the R, G and B components are modeled separately and synthesized to obtain a color background image in RGB format;
(2) motion segmentation:
the difference operation is performed using an indirect difference function:
$$f(a, b) = 1 - \frac{2\sqrt{(a+1)(b+1)}}{(a+1) + (b+1)} \times \frac{2\sqrt{(256-a)(256-b)}}{(256-a) + (256-b)}$$
where a and b denote the gray levels (intensities) of the current image and the background image at the same pixel (x, y), with 0 ≤ f(a, b) ≤ 1 and 0 ≤ a, b ≤ 255; the sensitivity of the difference function adapts automatically to the background gray level, and after differencing, threshold segmentation yields the binarized image of the moving target:
$$S_{(x,y)} = \begin{cases} 1, & f\left(a_{(x,y)}, b_{(x,y)}\right) \ge T \\ 0, & \text{otherwise} \end{cases};$$
(3) morphological treatment and connected domain analysis:
carrying out post-processing combining morphological processing and connected domain analysis on the moving target binary image to remove residual noise and obtain a better segmentation effect;
(4) gait cycle division and key frame extraction:
the gait cycle is divided by the width change signal of the human body contour by utilizing the characteristic that the contour width of the human body is changed synchronously and periodically along with time, and two maximum value points in one gait cycle are extracted to be used as key frames.
3. The gait information processing and identity recognition method based on the fusion features as claimed in claim 1, wherein the gait feature extraction comprises the following steps: the moving human body contour line is obtained by boundary tracking with the necessary normalization and resampling, and the boundary center distance feature and the Radon transform feature are then extracted simultaneously, wherein the boundary center distance feature means:
the original two-dimensional contour shape is indirectly represented by a one-dimensional distance signal $D = (d_1, d_2, \ldots, d_{256})$ and has translational invariance and rotational invariance, wherein

$$d_i = \sqrt{\left(x_i - x_c\right)^2 + \left(y_i - y_c\right)^2}, \quad i = 1, 2, \ldots, 256,$$
radon transform features refer to:
the essence of the Radon transform is the projection of the image matrix along a given direction, which can be taken at any angle; in general, the Radon transform of f(x, y) is the line integral along the direction parallel to the y′-axis of a rotated coordinate system:
$$R_\theta(x') = \int_{-\infty}^{\infty} f\left(x'\cos\theta - y'\sin\theta,\; x'\sin\theta + y'\cos\theta\right)\, dy'$$

wherein:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}.$$
4. the gait information processing and identity recognition method based on fusion characteristics as claimed in claim 1, wherein the corresponding post-processing of the obtained characteristic parameters is a data dimension reduction and characteristic fusion strategy, that is, Principal Component Analysis (PCA) is applied to the data dimension reduction.
5. The gait information processing and identity recognition method based on the fusion features as claimed in claim 1, wherein the classification and recognition using the support vector machine as the classifier adopts a one-versus-one strategy, i.e., each classifier discriminates between one pair of classes; the N classes of training data are combined pairwise to construct C(N, 2) = N(N − 1)/2 support vector machines, and the final classification result is determined by voting: suppose there are m gait classes to be identified, denoted $S_1, S_2, \ldots, S_m$; in each class one sample $S_{ij}$ (where i is the class and j is the sample number within the class) is randomly selected for training, and the other samples $S_{it}$ (j ≠ t) are used for testing; during testing, the test sample $S_{it}$ is input to the trained classifier, and if the output is i the sample is judged as the i-th class; if the output is some j ≠ i, it is counted as a recognition error.
CN200910070174A 2009-08-20 2009-08-20 Method for gait information processing and identity identification based on fusion feature Pending CN101630364A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910070174A CN101630364A (en) 2009-08-20 2009-08-20 Method for gait information processing and identity identification based on fusion feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910070174A CN101630364A (en) 2009-08-20 2009-08-20 Method for gait information processing and identity identification based on fusion feature

Publications (1)

Publication Number Publication Date
CN101630364A true CN101630364A (en) 2010-01-20

Family

ID=41575467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910070174A Pending CN101630364A (en) 2009-08-20 2009-08-20 Method for gait information processing and identity identification based on fusion feature

Country Status (1)

Country Link
CN (1) CN101630364A (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807245A (en) * 2010-03-02 2010-08-18 天津大学 Artificial neural network-based multi-source gait feature extraction and identification method
CN102289672A (en) * 2011-06-03 2011-12-21 天津大学 Infrared gait identification method adopting double-channel feature fusion
CN102298145A (en) * 2011-08-15 2011-12-28 天津职业技术师范大学 Pseudo-random code measuring device with capacity of simultaneously extracting walking features of multiple pedestrians
CN102542292A (en) * 2011-12-26 2012-07-04 湖北莲花山计算机视觉和信息科学研究院 Method for determining roles of staffs on basis of behaviors
CN102663374A (en) * 2012-04-28 2012-09-12 北京工业大学 Multi-class Bagging gait recognition method based on multi-characteristic attribute
CN101739557B (en) * 2010-01-26 2012-10-31 重庆大学 Motion cycle analysis-based method and device for identifying abnormal human behavior
CN102768732A (en) * 2012-06-13 2012-11-07 北京工业大学 Face recognition method integrating sparse preserving mapping and multi-class property Bagging
CN102831378A (en) * 2011-06-14 2012-12-19 株式会社理光 Method and system for detecting and tracking human object
CN102982323A (en) * 2012-12-19 2013-03-20 重庆信科设计有限公司 Quick gait recognition method
CN103049758A (en) * 2012-12-10 2013-04-17 北京工业大学 Method for realizing remote authentication by fusing gait flow images (GFI) and head and shoulder procrustes mean shapes (HS-PMS)
ES2432228A1 (en) * 2013-02-15 2013-12-02 Asociación Instituto De Biomecánica De Valencia Procedure and installation for characterizing the support pattern of a subject (Machine-translation by Google Translate, not legally binding)
CN104123568A (en) * 2014-07-21 2014-10-29 国家电网公司 Method for intelligent video recognition of multi-work-type workers
CN104331705A (en) * 2014-10-28 2015-02-04 中国人民公安大学 Automatic detection method for gait cycle through fusion of spatiotemporal information
CN104537356A (en) * 2015-01-12 2015-04-22 北京大学 Pedestrian re-identification method and device for carrying out gait recognition through integral scheduling
CN104598722A (en) * 2014-12-25 2015-05-06 中国科学院合肥物质科学研究院 Parkinson patient walking ability evaluation method based on gait time-space parameters and three-dimensional force characteristics
CN105023013A (en) * 2015-08-13 2015-11-04 西安电子科技大学 Target detection method based on local standard deviation and Radon transformation
CN105335725A (en) * 2015-11-05 2016-02-17 天津理工大学 Gait identification identity authentication method based on feature fusion
CN106156695A (en) * 2015-03-30 2016-11-23 日本电气株式会社 Outlet and/or entrance area recognition methods and device
CN106504267A (en) * 2016-10-19 2017-03-15 东南大学 A kind of motion of virtual human data critical frame abstracting method
CN106971059A (en) * 2017-03-01 2017-07-21 福州云开智能科技有限公司 A kind of wearable device based on the adaptive health monitoring of neutral net
CN107403084A (en) * 2017-07-21 2017-11-28 中国计量大学 A kind of personal identification method based on gait data
Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739557B (en) * 2010-01-26 2012-10-31 重庆大学 Motion cycle analysis-based method and device for identifying abnormal human behavior
CN101807245A (en) * 2010-03-02 2010-08-18 天津大学 Artificial neural network-based multi-source gait feature extraction and identification method
CN101807245B (en) * 2010-03-02 2013-01-02 天津大学 Artificial neural network-based multi-source gait feature extraction and identification method
CN102289672A (en) * 2011-06-03 2011-12-21 天津大学 Infrared gait identification method adopting double-channel feature fusion
CN102831378A (en) * 2011-06-14 2012-12-19 株式会社理光 Method and system for detecting and tracking human object
CN102831378B (en) * 2011-06-14 2015-10-21 株式会社理光 Method and system for detecting and tracking human objects
CN102298145A (en) * 2011-08-15 2011-12-28 天津职业技术师范大学 Pseudo-random code measuring device capable of simultaneously extracting walking features of multiple pedestrians
CN102542292A (en) * 2011-12-26 2012-07-04 湖北莲花山计算机视觉和信息科学研究院 Method for determining staff roles based on behaviors
CN102542292B (en) * 2011-12-26 2014-03-26 湖北莲花山计算机视觉和信息科学研究院 Method for determining staff roles based on behaviors
CN102663374A (en) * 2012-04-28 2012-09-12 北京工业大学 Multi-class Bagging gait recognition method based on multi-characteristic attribute
CN102663374B (en) * 2012-04-28 2014-06-04 北京工业大学 Multi-class Bagging gait recognition method based on multi-characteristic attribute
CN102768732A (en) * 2012-06-13 2012-11-07 北京工业大学 Face recognition method integrating sparse preserving mapping and multi-class property Bagging
CN102768732B (en) * 2012-06-13 2015-04-29 北京工业大学 Face recognition method integrating sparse preserving mapping and multi-class property Bagging
CN103049758A (en) * 2012-12-10 2013-04-17 北京工业大学 Method for realizing remote authentication by fusing gait flow images (GFI) and head and shoulder procrustes mean shapes (HS-PMS)
CN103049758B (en) * 2012-12-10 2015-09-09 北京工业大学 Remote authentication method fusing gait optical flow images and head-shoulder mean shapes
CN102982323B (en) * 2012-12-19 2016-05-11 重庆信科设计有限公司 Fast gait recognition method
CN102982323A (en) * 2012-12-19 2013-03-20 重庆信科设计有限公司 Quick gait recognition method
ES2432228A1 (en) * 2013-02-15 2013-12-02 Asociación Instituto De Biomecánica De Valencia Procedure and installation for characterizing the support pattern of a subject (Machine-translation by Google Translate, not legally binding)
CN104123568A (en) * 2014-07-21 2014-10-29 国家电网公司 Method for intelligent video recognition of multi-work-type workers
CN104331705A (en) * 2014-10-28 2015-02-04 中国人民公安大学 Automatic detection method for gait cycle through fusion of spatiotemporal information
CN104331705B (en) * 2014-10-28 2017-05-10 中国人民公安大学 Automatic detection method for gait cycle through fusion of spatiotemporal information
CN104598722A (en) * 2014-12-25 2015-05-06 中国科学院合肥物质科学研究院 Parkinson patient walking ability evaluation method based on gait time-space parameters and three-dimensional force characteristics
CN104598722B (en) * 2014-12-25 2017-04-19 中国科学院合肥物质科学研究院 Parkinson patient walking ability evaluation method based on gait time-space parameters and three-dimensional force characteristics
CN104537356A (en) * 2015-01-12 2015-04-22 北京大学 Pedestrian re-identification method and device for carrying out gait recognition through integral scheduling
CN106156695A (en) * 2015-03-30 2016-11-23 日本电气株式会社 Exit and/or entrance area recognition method and device
CN106156695B (en) * 2015-03-30 2019-09-20 日本电气株式会社 Exit and/or entrance area recognition method and device
CN105023013A (en) * 2015-08-13 2015-11-04 西安电子科技大学 Target detection method based on local standard deviation and Radon transformation
CN105023013B (en) * 2015-08-13 2018-03-06 西安电子科技大学 Object detection method based on local standard deviation and Radon transformation
CN105335725A (en) * 2015-11-05 2016-02-17 天津理工大学 Gait identification identity authentication method based on feature fusion
CN105335725B (en) * 2015-11-05 2019-02-26 天津理工大学 Gait recognition identity authentication method based on feature fusion
CN106504267A (en) * 2016-10-19 2017-03-15 东南大学 Key frame extraction method for virtual human motion data
CN106504267B (en) * 2016-10-19 2019-05-17 东南大学 Key frame extraction method for virtual human motion data
CN106971059A (en) * 2017-03-01 2017-07-21 福州云开智能科技有限公司 Wearable device for neural-network-based adaptive health monitoring
CN106971059B (en) * 2017-03-01 2020-08-11 福州云开智能科技有限公司 Wearable equipment based on neural network self-adaptation health monitoring
CN107403084B (en) * 2017-07-21 2019-12-20 中国计量大学 Gait data-based identity recognition method
CN107403084A (en) * 2017-07-21 2017-11-28 中国计量大学 Identity recognition method based on gait data
CN107480604A (en) * 2017-07-27 2017-12-15 西安科技大学 Gait recognition method based on fusion of multiple contour features
CN108224691A (en) * 2017-12-22 2018-06-29 银河水滴科技(北京)有限公司 Air conditioning system control method and device
CN108182716A (en) * 2017-12-28 2018-06-19 厦门大学 3D printing-oriented vector field-based image line depiction generation method
CN108182716B (en) * 2017-12-28 2020-12-15 厦门大学 3D printing-oriented vector field-based image line depiction generation method
CN108491769A (en) * 2018-03-08 2018-09-04 四川大学 Atrial fibrillation classification method based on RR intervals and multiple feature values
JP7089045B2 (en) 2018-04-12 2022-06-21 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Media processing methods, related equipment and computer programs
JP2021515321A (en) * 2018-04-12 2021-06-17 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Media processing methods, related equipment and computer programs
CN108830834A (en) * 2018-05-23 2018-11-16 重庆交通大学 Automated extraction method for video defect information from cable-climbing robots
CN109345551B (en) * 2018-09-18 2020-11-20 清华大学 Method and system for detecting concave envelope in image outer contour and computer storage medium
CN109345551A (en) * 2018-09-18 2019-02-15 清华大学 Method, system and computer storage medium for detecting concave envelopes in image outer contours
CN109460727A (en) * 2018-10-31 2019-03-12 中国矿业大学 Examination hall monitoring system and method based on human behavior recognition
CN109460727B (en) * 2018-10-31 2021-04-06 中国矿业大学 Examination room monitoring system and method based on human body behavior recognition
CN109493555A (en) * 2018-12-05 2019-03-19 郑州升达经贸管理学院 Campus dormitory security monitoring system based on intelligent monitoring technology
CN109919999A (en) * 2019-01-31 2019-06-21 深兰科技(上海)有限公司 Method and device for target position detection
CN111476078A (en) * 2019-02-28 2020-07-31 杭州芯影科技有限公司 Identity recognition method and system based on millimeter wave gait biological characteristics
CN111476078B (en) * 2019-02-28 2024-06-25 杭州芯影科技有限公司 Identity recognition method and system based on millimeter wave gait biological characteristics
CN110456320A (en) * 2019-07-29 2019-11-15 浙江大学 Ultra-wideband radar identity recognition method based on free-space gait temporal features
CN110456320B (en) * 2019-07-29 2021-08-03 浙江大学 Ultra-wideband radar identity recognition method based on free space gait time sequence characteristics
CN111882006A (en) * 2020-09-28 2020-11-03 大唐环境产业集团股份有限公司 Data preprocessing method of desulfurization system based on PCA and correlation function
CN117421605A (en) * 2023-10-27 2024-01-19 绍兴清研微科技有限公司 Gait recognition method and system based on blockchain technology
CN117421605B (en) * 2023-10-27 2024-04-30 绍兴清研微科技有限公司 Gait recognition method and system based on blockchain technology
CN118134840A (en) * 2024-01-12 2024-06-04 东北农业大学 Method and system for detecting rice paraffin pollution based on hyperspectral and deep learning

Similar Documents

Publication Publication Date Title
CN101630364A (en) Method for gait information processing and identity identification based on fusion feature
CN106529468B (en) Finger vein identification method and system based on convolutional neural networks
CN101807245A (en) Artificial neural network-based multi-source gait feature extraction and identification method
Keskin et al. Hand pose estimation and hand shape classification using multi-layered randomized decision forests
CN100395770C (en) Hand-feature fusion identification method based on feature relation measure
Bekhouche et al. Pyramid multi-level features for facial demographic estimation
CN101751555B (en) Deformation fingerprint identification method and system
CN101251894A (en) Gait recognition method and gait feature extraction method based on infrared thermal imaging
CN101571924B (en) Gait recognition method and system with multi-region feature integration
CN103942577A (en) Identity identification method based on a self-built sample library and composite features in video surveillance
CN105574510A (en) Gait identification method and device
Kobayashi et al. Three-way auto-correlation approach to motion recognition
Arigbabu et al. Integration of multiple soft biometrics for human identification
CN101661554A (en) Automatic frontal human identity recognition method for long-distance video
CN106529504B (en) Bimodal video emotion recognition method based on composite spatiotemporal features
CN106709508A (en) Typical weight correlation analysis method utilizing characteristic information
Sun et al. [Retracted] Research on Face Recognition Algorithm Based on Image Processing
CN104217211B (en) Multi-visual-angle gait recognition method based on optimal discrimination coupling projection
Ngxande et al. Detecting inter-sectional accuracy differences in driver drowsiness detection algorithms
CN103593651B (en) Coal mine underground personnel identity authentication method based on gait and two-dimensional discriminant analysis
CN112966649B (en) Occlusion face recognition method based on sparse representation of kernel extension dictionary
CN112396014B (en) Visual-touch fusion gait recognition method based on feature fusion
Alvarez et al. Gait recognition based on modified Gait energy image
Ozkaya et al. Discriminative common vector based finger knuckle recognition
Liu et al. Palm-dorsa vein recognition based on independent principle component analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100120