CN114519899A - Identity recognition method and system based on multi-biological-feature self-adaptive fusion - Google Patents

Identity recognition method and system based on adaptive multi-biometric fusion

Info

Publication number
CN114519899A
Authority
CN
China
Prior art keywords
face, gait, matrix, person, recognized
Prior art date
Legal status
Pending
Application number
CN202210166085.5A
Other languages
Chinese (zh)
Inventor
张驰
易小西
王涵
虞贵财
尹发根
张志伟
柳向娥
Current Assignee
Yichun University
Original Assignee
Yichun University
Priority date
Filing date
Publication date
Application filed by Yichun University
Priority to CN202210166085.5A
Publication of CN114519899A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an identity recognition method and system based on adaptive multi-biometric fusion. The method comprises the following steps: extracting the video sequence within one gait cycle from a video sequence to be recognized to obtain a periodic video; acquiring the part affinity field (PAF) matrix corresponding to each frame of the periodic video of the person to be recognized to obtain a walking feature vector diagram; inputting the walking feature vector diagram into a first convolutional neural network and a long short-term memory (LSTM) network respectively to obtain a spatial feature matrix and a temporal feature matrix; fusing the spatial feature matrix and the temporal feature matrix to obtain the gait feature matrix of the person to be recognized; processing the face image to be recognized with a face recognition algorithm based on adaptively weighted HOG features to obtain a face feature matrix; and processing the face feature matrix, the gait feature matrix and the standard images with an SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized. The invention is highly adaptable and improves the accuracy of identity recognition.

Description

Identity recognition method and system based on adaptive multi-biometric fusion
Technical Field
The invention relates to the technical field of identity recognition, and in particular to an identity recognition method and system based on adaptive multi-biometric fusion.
Background
Face recognition and gait recognition are currently among the most closely watched and widely applied biometric identification technologies, and they are similar in their application scenarios and operating conditions, which is a precondition for fusing them. Regarding influencing factors, face recognition is sensitive to illumination, distance, expression and pose, while gait recognition is sensitive to age, walking conditions, physical condition, psychological factors and carried items such as backpacks; the factors affecting the two modalities do not interfere with each other when the external environment changes. Moreover, gait and face can be captured simultaneously from a single gait video sequence, so the two biometrics apply to the same scenarios and are complementary to a certain degree. Multi-biometric identification combining face features and gait features has therefore become a new research direction in computer vision and pattern recognition.
In the prior art, most multi-biometric identification research fusing gait and face uses the two viewing angles from which the corresponding features are easiest to extract, and only a few works use side-face and frontal-gait images, or biometric images from some other fixed angle. For the fusion step, four families of algorithms exist: data-level, feature-level, matching-level and decision-level fusion. All of the above methods operate under ordinary, single-background conditions, so these identity recognition methods adapt poorly to complex backgrounds and produce inaccurate recognition results. Further research is needed to improve the adaptability of identity recognition methods under complex backgrounds and thereby improve recognition accuracy.
Disclosure of Invention
The invention aims to provide an identity recognition method and system based on adaptive multi-biometric fusion that are highly adaptable and improve the accuracy of identity recognition under complex background conditions.
To achieve this aim, the invention provides the following scheme:
an identity recognition method based on adaptive multi-biometric fusion, comprising the following steps:
acquiring a video sequence to be recognized, and extracting the video sequence within one gait cycle to obtain a periodic video; the video sequence to be recognized contains the person to be recognized;
acquiring the part affinity field (PAF) matrix corresponding to each frame of the periodic video for the person to be recognized to obtain a walking feature vector diagram; the PAF matrix comprises the PAFs of the limbs and the PAF of the torso;
inputting the walking feature vector diagram into a first convolutional neural network and a long short-term memory (LSTM) network respectively to obtain a spatial feature matrix and a temporal feature matrix;
fusing the spatial feature matrix and the temporal feature matrix to obtain the gait feature matrix of the person to be recognized;
acquiring a face image of the person to be recognized from the video sequence to be recognized, and processing it with a face recognition algorithm based on adaptively weighted histogram-of-oriented-gradients (HOG) features to obtain a face feature matrix;
and processing the face feature matrix, the gait feature matrix and the standard images in an image database with an SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized.
Optionally, the gait cycle is determined as follows:
inputting the video sequence to be recognized into a trained second convolutional neural network to obtain the gait cycle of the person to be recognized.
Optionally, before processing the face feature matrix, the gait feature matrix and the standard images in the image database with the adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized, the method further comprises:
performing dimension reduction on the face feature matrix to obtain a reduced face feature matrix.
Optionally, acquiring the face image of the person to be recognized from the video sequence to be recognized and processing it with the adaptively weighted HOG face recognition algorithm to obtain the face feature matrix specifically comprises:
acquiring the face image of the person to be recognized from the video sequence to be recognized;
dividing the face image of the person to be recognized into cells of equal size;
computing the gradient histogram of each cell from the direction and magnitude of the pixel gradients within it;
taking the gradient histograms of all cells together as the face feature matrix.
Optionally, processing the face feature matrix, the gait feature matrix and the standard images in the image database with the SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized specifically comprises:
projecting the gait feature matrix and the face feature matrix onto the coordinate system formed by the base images to obtain a gait projection matrix and a face projection matrix;
computing, from the gait projection matrix, the gait Euclidean distance between the image in the video sequence to be recognized and each standard image in the image database to obtain a gait Euclidean distance array; computing, from the face projection matrix, the face Euclidean distance between the image in the video sequence to be recognized and each standard image to obtain a face Euclidean distance array;
computing a gait confidence and a face confidence from the gait Euclidean distance array, the face Euclidean distance array, and the rejection rates of the gait and face feature matrices;
computing the gait fusion weight from the gait confidence, and the face fusion weight from the face confidence;
obtaining the identity recognition result of the person to be recognized from the gait fusion weight and the face fusion weight.
An identity recognition system based on adaptive multi-biometric fusion, comprising:
a video sequence acquisition module, configured to acquire a video sequence to be recognized and extract the video sequence within one gait cycle to obtain a periodic video; the video sequence to be recognized contains the person to be recognized;
a PAF matrix acquisition module, configured to acquire the PAF matrix corresponding to each frame of the periodic video for the person to be recognized to obtain a walking feature vector diagram; the PAF matrix comprises the PAFs of the limbs and the PAF of the torso;
a spatio-temporal feature determination module, configured to input the walking feature vector diagram into a first convolutional neural network and a long short-term memory network respectively to obtain a spatial feature matrix and a temporal feature matrix;
a gait feature matrix determination module, configured to fuse the spatial feature matrix and the temporal feature matrix to obtain the gait feature matrix of the person to be recognized;
a face feature matrix determination module, configured to acquire a face image of the person to be recognized from the video sequence to be recognized and process it with a face recognition algorithm based on adaptively weighted HOG features to obtain a face feature matrix;
an identity recognition module, configured to process the face feature matrix, the gait feature matrix and the standard images in the image database with an SVM (support vector machine)-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized.
Optionally, the video sequence acquisition module comprises:
a gait cycle determination submodule, configured to input the video sequence to be recognized into the trained second convolutional neural network to obtain the gait cycle of the person to be recognized.
Optionally, the identity recognition system based on adaptive multi-biometric fusion further comprises:
a dimension reduction module, configured to perform dimension reduction on the face feature matrix to obtain a reduced face feature matrix.
Optionally, the face feature matrix determination module specifically comprises:
an acquisition submodule, configured to acquire the face image of the person to be recognized from the video sequence to be recognized;
a cell determination submodule, configured to divide the face image of the person to be recognized into cells of equal size;
a gradient histogram determination submodule, configured to compute the gradient histogram of each cell from the direction and magnitude of the pixel gradients within it;
a face feature matrix determination submodule, configured to take the gradient histograms of all cells together as the face feature matrix.
Optionally, the identity recognition module specifically comprises:
a projection matrix determination submodule, configured to project the gait feature matrix and the face feature matrix onto the coordinate system formed by the base images to obtain a gait projection matrix and a face projection matrix;
a Euclidean distance array determination submodule, configured to compute, from the gait projection matrix, the gait Euclidean distance between the image in the video sequence to be recognized and each standard image in the image database to obtain a gait Euclidean distance array, and to compute, from the face projection matrix, the face Euclidean distance to each standard image to obtain a face Euclidean distance array;
a confidence determination submodule, configured to compute a gait confidence and a face confidence from the gait Euclidean distance array, the face Euclidean distance array, and the rejection rates of the gait and face feature matrices;
a fusion weight determination submodule, configured to compute the gait fusion weight from the gait confidence and the face fusion weight from the face confidence;
an identity recognition submodule, configured to obtain the identity recognition result of the person to be recognized from the gait fusion weight and the face fusion weight.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the video sequence within one gait cycle is extracted from the video sequence to be recognized to obtain a periodic video; the PAF matrix corresponding to each frame of the periodic video is acquired to obtain a walking feature vector diagram; the walking feature vector diagram is input into a first convolutional neural network and a long short-term memory network respectively to obtain a spatial feature matrix and a temporal feature matrix; the two matrices are fused into the gait feature matrix of the person to be recognized; the face image of the person to be recognized is processed with a face recognition algorithm based on adaptively weighted HOG features to obtain a face feature matrix; and the face feature matrix, the gait feature matrix and the standard images in the image database are processed with an SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized. By combining the HOG algorithm with the SVM, the invention is highly adaptable and improves the accuracy of identity recognition under complex background conditions.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an identity recognition method based on adaptive fusion of multiple biological features according to an embodiment of the present invention;
FIG. 2 is a flow chart of gait recognition provided by an embodiment of the invention;
FIG. 3 is a key frame diagram of a gait cycle in a human gait sequence;
FIG. 4 is a waveform diagram of an output for determining a gait cycle according to an embodiment of the invention;
FIG. 5 is a flow chart of human gait cycle determination provided by an embodiment of the invention;
FIG. 6 is a graph of selected PAFs provided in accordance with an embodiment of the present invention;
fig. 7 is a feature vector diagram for one gait cycle according to an embodiment of the invention;
fig. 8 is a flowchart of face recognition under video surveillance according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the HOG algorithm;
FIG. 10 is a basic flow diagram of multi-feature fusion.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The identity recognition method based on adaptive multi-biometric fusion provided by the embodiment of the invention is mainly divided into the following parts: (1) a gait recognition method combining a convolutional neural network with a long short-term memory network; (2) a face recognition method based on an HOG-NMF feature extraction algorithm; (3) feature extraction for gait and face, followed by fusion and recognition of the extracted features using a decision-level adaptive weighted fusion method. Target detection combines deep-learning-based HOG (histogram of oriented gradients) features with an SVM (support vector machine) algorithm, and the fused gait and face recognition, using decision-level adaptive dynamic weighted fusion, achieves strong robustness, higher recognition accuracy and higher recognition speed under complex backgrounds. The specific steps, shown in figure 1, are as follows:
Step 101: acquiring a video sequence to be recognized, and extracting the video sequence within one gait cycle to obtain a periodic video; the video sequence to be recognized contains the person to be recognized.
Step 102: acquiring the PAF matrix corresponding to each frame of the periodic video for the person to be recognized to obtain a walking feature vector diagram; the PAF matrix comprises the PAFs of the limbs and the PAF of the torso.
Step 103: inputting the walking feature vector diagram into a first convolutional neural network and a long short-term memory network respectively to obtain a spatial feature matrix and a temporal feature matrix.
Step 104: fusing the spatial feature matrix and the temporal feature matrix to obtain the gait feature matrix of the person to be recognized.
Step 105: acquiring a face image of the person to be recognized from the video sequence to be recognized, and processing it with a face recognition algorithm based on adaptively weighted HOG features to obtain a face feature matrix.
Step 106: processing the face feature matrix, the gait feature matrix and the standard images in the image database with an SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized.
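Steps 101 to 104 above form a pipeline from raw video to gait feature matrix. The following is a minimal structural sketch of that pipeline; every function body, array shape and the random PAF stand-in are illustrative placeholders assumed for demonstration, not taken from the patent.

```python
import numpy as np

# Structural sketch of steps 101-104; shapes and stand-in feature
# extractors are hypothetical, only the data flow follows the text.

def extract_periodic_video(video, gait_cycle):          # step 101
    return video[:gait_cycle]

def paf_maps(periodic_video):                           # step 102
    # One PAF matrix (limbs + torso) per frame, stacked into the
    # walking feature vector diagram: (T, H, W, channels).
    return np.stack([np.random.rand(32, 32, 10) for _ in periodic_video])

def spatial_features(walking_map):                      # step 103, CNN branch
    return walking_map.mean(axis=(1, 2))                # stand-in for conv features

def temporal_features(walking_map):                     # step 103, LSTM branch
    return walking_map.reshape(len(walking_map), -1)[:, :10]  # stand-in

def gait_features(spatial, temporal):                   # step 104, fusion
    return np.concatenate([spatial, temporal], axis=1)

video = [object()] * 30                                 # 30 dummy frames
cycle = extract_periodic_video(video, gait_cycle=20)
wmap = paf_maps(cycle)
g = gait_features(spatial_features(wmap), temporal_features(wmap))
print(g.shape)  # (20, 20)
```

The point of the sketch is the two-branch structure: one spatial and one temporal feature matrix per periodic video, concatenated into the gait feature matrix consumed by step 106.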
In practical application, the gait cycle is determined as follows:
the video sequence to be recognized is input into a trained second convolutional neural network to obtain the gait cycle of the person to be recognized.
In practical application, inputting the video sequence to be recognized into the trained second convolutional neural network to obtain the gait cycle specifically comprises:
extracting the pedestrian region in each frame of the video sequence to be recognized;
cropping the pedestrian regions in all frames to images of the same size;
inputting all the cropped images into the trained second convolutional neural network to obtain a gait waveform;
determining the gait cycle from the gait waveform.
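The text does not specify how the cycle is read off the gait waveform (fig. 4). One common choice, sketched below as an assumption rather than the patent's method, is to take the dominant lag of the waveform's autocorrelation as the period in frames.

```python
import numpy as np

def estimate_gait_cycle(waveform, min_lag=10):
    """Estimate the gait cycle (in frames) from a periodic gait waveform.

    The patent derives the waveform from a CNN over cropped pedestrian
    regions; here only the period-estimation step is sketched, using
    autocorrelation. min_lag guards against picking the zero-lag peak.
    """
    x = np.asarray(waveform, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    return int(np.argmax(ac[min_lag:]) + min_lag)      # dominant period

# Synthetic waveform with a known 25-frame period
t = np.arange(200)
wave = np.sin(2 * np.pi * t / 25)
print(estimate_gait_cycle(wave))  # 25
```

In practice `min_lag` would be set from the frame rate and a lower bound on plausible human gait cycles.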
In practical application, before processing the face feature matrix, the gait feature matrix and the standard images in the image database with the adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized, the method further comprises:
performing dimension reduction on the face feature matrix to obtain a reduced face feature matrix.
In practical application, the dimension reduction on the face feature matrix specifically comprises:
taking the absolute value of the face feature matrix to obtain an absolute-value matrix;
performing an NMF decomposition of the chosen rank on the absolute-value matrix to obtain a base matrix and a coefficient matrix;
normalizing each column vector of the base matrix to obtain normalized column vectors;
concatenating all the normalized column vectors to obtain the reduced face feature matrix.
In practical application, acquiring the face image of the person to be recognized from the video sequence to be recognized and processing it with the adaptively weighted HOG face recognition algorithm to obtain the face feature matrix specifically comprises:
acquiring the face image of the person to be recognized from the video sequence to be recognized;
dividing the face image of the person to be recognized into cells of equal size;
computing the gradient histogram of each cell from the direction and magnitude of the pixel gradients within it;
taking the gradient histograms of all cells together as the face feature matrix.
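The cell-division and gradient-histogram steps above are the core of HOG (fig. 9). A minimal sketch follows; the cell size and bin count are common HOG defaults assumed for illustration, not values from the patent.

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Per-cell gradient histograms, as in the described HOG step.

    img: 2-D grayscale array whose sides are multiples of `cell`.
    Gradients are taken with central differences; each pixel votes its
    gradient magnitude into an unsigned-orientation bin (0-180 degrees).
    cell=8 and bins=9 are conventional defaults, assumed here.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(h):
        for j in range(w):
            hist[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return hist  # "face feature matrix" = all cell histograms

img = np.random.default_rng(2).random((32, 32))  # toy face crop
H = hog_cell_histograms(img)
print(H.shape)  # (4, 4, 9)
```

A production HOG would additionally normalize histograms over overlapping blocks; the patent's adaptive weighting of the HOG features is applied on top of these per-cell histograms.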
In practical application, processing the face feature matrix, the gait feature matrix and the standard images in the image database with the SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized specifically comprises:
projecting the gait feature matrix and the face feature matrix onto the coordinate system formed by the base images to obtain a gait projection matrix and a face projection matrix;
computing, from the gait projection matrix, the gait Euclidean distance between the image in the video sequence to be recognized and each standard image in the image database to obtain a gait Euclidean distance array, and computing, from the face projection matrix, the face Euclidean distance to each standard image to obtain a face Euclidean distance array;
computing a gait confidence and a face confidence from the gait Euclidean distance array, the face Euclidean distance array, and the rejection rates of the gait and face feature matrices;
computing the gait fusion weight from the gait confidence, and the face fusion weight from the face confidence;
obtaining the identity recognition result of the person to be recognized from the gait fusion weight and the face fusion weight.
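The distance, confidence and weighting steps above can be sketched end to end. The text names the inputs to the confidence computation but not the formula, so the separation-based confidence below is an illustrative assumption, as are all array sizes.

```python
import numpy as np

def euclid_dists(probe, gallery):
    """Euclidean distance array: one probe projection vs. every standard image."""
    return np.linalg.norm(gallery - probe, axis=1)

def confidence(dists, rejection_rate):
    """Hypothetical confidence: separation of best vs. second-best match,
    discounted by the modality's rejection rate. The patent only lists
    the inputs; this concrete formula is an assumption for illustration."""
    d = np.sort(dists)
    sep = (d[1] - d[0]) / (d[1] + 1e-9)
    return sep * (1.0 - rejection_rate)

def fuse(gait_d, face_d, gait_rej, face_rej):
    cg, cf = confidence(gait_d, gait_rej), confidence(face_d, face_rej)
    wg, wf = cg / (cg + cf), cf / (cg + cf)    # adaptive fusion weights
    score = wg * gait_d + wf * face_d          # lower = better match
    return int(np.argmin(score))               # index of matched identity

rng = np.random.default_rng(3)
gallery_g, gallery_f = rng.random((5, 8)), rng.random((5, 8))  # 5 identities
probe_g, probe_f = gallery_g[2] + 0.01, gallery_f[2] + 0.01    # near identity 2
gd, fd = euclid_dists(probe_g, gallery_g), euclid_dists(probe_f, gallery_f)
print(fuse(gd, fd, gait_rej=0.1, face_rej=0.05))
```

Because the weights track each modality's confidence, the modality whose distances discriminate more sharply in the current conditions dominates the fused score, which is the "adaptive" part of the fusion.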
Corresponding to the method above, the embodiment of the invention also provides an identity recognition system based on adaptive multi-biometric fusion, comprising:
a video sequence acquisition module, configured to acquire a video sequence to be recognized and extract the video sequence within one gait cycle to obtain a periodic video; the video sequence to be recognized contains the person to be recognized;
a PAF matrix acquisition module, configured to acquire the PAF matrix corresponding to each frame of the periodic video for the person to be recognized to obtain a walking feature vector diagram; the PAF matrix comprises the PAFs of the limbs and the PAF of the torso;
a spatio-temporal feature determination module, configured to input the walking feature vector diagram into the first convolutional neural network and the long short-term memory network respectively to obtain a spatial feature matrix and a temporal feature matrix;
a gait feature matrix determination module, configured to fuse the spatial feature matrix and the temporal feature matrix to obtain the gait feature matrix of the person to be recognized;
a face feature matrix determination module, configured to acquire a face image of the person to be recognized from the video sequence to be recognized and process it with a face recognition algorithm based on adaptively weighted HOG features to obtain a face feature matrix;
an identity recognition module, configured to process the face feature matrix, the gait feature matrix and the standard images in the image database with an SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized.
As an optional implementation, the video sequence acquisition module comprises:
a gait cycle determination submodule, configured to input the video sequence to be recognized into the trained second convolutional neural network to obtain the gait cycle of the person to be recognized.
As an optional implementation, the identity recognition system based on adaptive multi-biometric fusion further comprises:
a dimension reduction module, configured to perform dimension reduction on the face feature matrix to obtain a reduced face feature matrix.
As an optional implementation, the face feature matrix determination module specifically comprises:
an acquisition submodule, configured to acquire the face image of the person to be recognized from the video sequence to be recognized;
a cell determination submodule, configured to divide the face image of the person to be recognized into cells of equal size;
a gradient histogram determination submodule, configured to compute the gradient histogram of each cell from the direction and magnitude of the pixel gradients within it;
a face feature matrix determination submodule, configured to take the gradient histograms of all cells together as the face feature matrix.
As an optional implementation, the identity recognition module specifically comprises:
a projection matrix determination submodule, configured to project the gait feature matrix and the face feature matrix onto the coordinate system formed by the base images to obtain a gait projection matrix and a face projection matrix;
a Euclidean distance array determination submodule, configured to compute, from the gait projection matrix, the gait Euclidean distance between the image in the video sequence to be recognized and each standard image in the image database to obtain a gait Euclidean distance array, and to compute, from the face projection matrix, the face Euclidean distance to each standard image to obtain a face Euclidean distance array;
a confidence determination submodule, configured to compute a gait confidence and a face confidence from the gait Euclidean distance array, the face Euclidean distance array, and the rejection rates of the gait and face feature matrices;
a fusion weight determination submodule, configured to compute the gait fusion weight from the gait confidence and the face fusion weight from the face confidence;
an identity recognition submodule, configured to obtain the identity recognition result of the person to be recognized from the gait fusion weight and the face fusion weight.
The embodiment of the invention also provides a more specific identity recognition method based on the self-adaptive fusion of the multiple biological characteristics, which comprises the following steps:
the method comprises the following steps: gait recognition
In order to improve gait recognition accuracy under multi-view interference such as clothing changes and carried objects, this embodiment adopts a gait recognition method combining a convolutional neural network and a long short-term memory network. The method retains both the spatial and temporal gait information while avoiding redundant information, which benefits the learning and training of gait features and improves the recognition rate, robustness and real-time performance of the detection method. The corresponding process is shown in fig. 2: gait cycle detection is performed on the pedestrian video sequence, a human-body walking feature vector diagram is obtained, features are extracted (spatial features by the convolutional neural network, temporal features by the long short-term memory network), gait matching results are produced, and the result is finally fed into the fusion stage. The specific steps are as follows:
1) Gait cycle detection
The gait cycle detection is completed by the convolutional neural network, the convolutional neural network algorithm has superior characteristic learning capability, the cycle characteristics of the gait contour sequence can be automatically extracted, and compared with the single artificial characteristics in the traditional method, the method can achieve better effect and robustness, so that the identification precision of subsequent gait identification in a complex scene is improved.
A gait cycle is defined as the time between one foot contacting the ground and the same foot contacting the ground again. Within it there are 4 key frames: in the first, both feet are together and the left foot is about to step forward; in the second, both feet are together and the right foot is about to step forward; in the third, the feet are apart with the right foot behind; in the fourth, the feet are apart with the left foot behind. Extracting these key frames greatly reduces the computation required for the subsequent gait features, as shown in fig. 3.
The video sequence is first cropped and adjusted in a standardized way to remove the effect of changes in the distance and angle between the person and the camera on the contour. The pedestrian region is then extracted from the topmost and bottommost foreground pixels of each processed image, and its center of gravity is calculated. Using the center of gravity, contour height and aspect ratio, each frame is cropped into a 72 × 96 pedestrian contour map, forming a gait sequence. The gait sequence is then fed into the trained second convolutional neural network, and the output waveform, as shown in fig. 4, is finally filtered to obtain a complete gait cycle. The specific process is shown in fig. 5.
The input gait sequence {D_1, D_2, ···, D_n} is normalized to obtain D = {D_norm1, D_norm2, ···, D_normn}, where D_n denotes the n-th pedestrian contour map and D_normn denotes the n-th pedestrian contour map after normalization. The standard gait sequence is convolved by the trained second convolutional neural network S to obtain the corresponding output classification results B = S × D = {B_1, B_2, ···, B_n}, in which the gait cycle is modeled by the 4 key-frame classes. When B_1 and B_i belong to the same class and other classification results are included in {B_2, B_3, ···, B_(i-1)}, the sub-sequence {B_1, B_2, ···, B_i} constitutes one gait cycle. Since a video contains from dozens to hundreds of frames, only one gait cycle of human motion needs to be detected for subsequent feature extraction; at the same time, each person is identified by the features of a single gait-cycle sequence, which reduces feature diversity and improves the recognition rate.
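The cycle-extraction rule above can be sketched as a small helper, assuming the classifier emits one of the 4 key-frame labels per frame (the function name and details are hypothetical, not from the patent):

```python
def find_gait_cycle(labels):
    """Return (start, end) indices of the first complete gait cycle in a
    key-frame label sequence, or None if no cycle is found.

    A cycle runs from the first frame B_1 to the next frame B_i with the
    same key-frame class, provided other classes occur in between."""
    if not labels:
        return None
    first = labels[0]
    for i in range(1, len(labels)):
        if labels[i] == first and any(l != first for l in labels[1:i]):
            return (0, i)
    return None
```

With a label stream such as 1, 2, 3, 4, 1, 2 the sub-sequence from index 0 to index 4 is reported as one cycle, while a constant stream yields no cycle.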
2) Data selection and PAF extraction
A human body Part Affinity Field (PAF) is a matrix providing the position and orientation information of the various parts of the human body. PAFs occur in pairs: for each part there is one PAF in the x-direction, denoted x-PAF, and one in the y-direction, denoted y-PAF. The PAF output is a 3-dimensional matrix of size W × H × C, where W is the width, H is the height and C is the number of layers.
In this example, the width and height are both 46 and the number of layers is 57. The first 18 layers encode the positions of 18 human-body key points, the 19th layer is the background, and these first 19 layers form a heat map. The remaining 38 layers are PAFs, where the odd layers are x-PAFs in the x-direction and the even layers are y-PAFs in the y-direction; these 38 layers form a PAF matrix of size 46 × 46 × 38.
In the gait video, face detection is subsequently performed on the head region, and the head often shakes irregularly, so PAF extraction for the head is both difficult and of little significance. This embodiment therefore ignores the gait feature information of the head and mainly considers the limbs and the trunk, as shown in fig. 6. After removing the head-related PAF layers from the original PAF, 24 layers remain, and the PAF matrix is reduced to 46 × 46 × 24.
3) Formation of human body walking characteristic vector diagram
After reduction, the PAF matrix of each frame has size 46 × 46 × 24. Taking a video sequence of length T_c along the time axis, the walking feature vector diagram has size T_c × 46 × 46 × 24, as shown in fig. 7.
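The reduction and stacking steps can be sketched as follows. Which of the 38 PAF layers are head-related depends on the pose model; the indices passed in below are illustrative assumptions, not the patent's mapping:

```python
import numpy as np

def reduced_paf(frame_out, head_layers):
    """Drop the 19 heat-map layers from a 46x46x57 network output, then
    remove head-related PAF layers. `head_layers` lists indices (0-37)
    into the 38 PAF layers to discard (illustrative, model-dependent)."""
    paf = frame_out[:, :, 19:]                      # 46x46x38 PAF stack
    keep = [i for i in range(38) if i not in head_layers]
    return paf[:, :, keep]                          # 46x46x24 after removal

def walking_feature_map(frames, head_layers):
    """Stack the reduced PAFs of T_c frames into a T_c x 46 x 46 x 24 tensor."""
    return np.stack([reduced_paf(f, head_layers) for f in frames], axis=0)
```

With 14 hypothetical head-related layers removed, each frame keeps 24 layers and a clip of T_c frames yields the T_c × 46 × 46 × 24 walking feature vector diagram.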
4) Feature extraction and gait matching
A spatio-temporal network is designed to perform feature learning and gait matching on the walking feature vector diagram. The first convolutional neural network branch transforms the action feature matrix M_p, after 4 rounds of convolution and pooling, into a one-dimensional motion feature vector at a Flatten layer. The long short-term memory network branch converts the motion constraint matrix M_q, after two long short-term memory layers, into a one-dimensional motion constraint vector at a Flatten layer. Finally, the two one-dimensional vectors are combined to obtain the gait matching output sequence.
Input layer: the input matrix size is Tc×46×46×24。
First convolutional neural network (spatial feature extraction): the walking feature vector diagram is input into the first convolutional neural network to obtain the spatial feature matrix. Convolution is applied to the input, with the positions of batch normalization and the linear rectification function adjusted; a convolution operation then adjusts the number of channels, and two dimension reductions are performed with 3 × 3 convolutional layers, so that the output spatial feature shape is T_c × 12 × 12 × 512.
Long short-term memory network (temporal feature extraction): the walking feature vector diagram is input into the long short-term memory network to obtain the temporal feature matrix. The output of the 1st fully-connected layer is reshaped into the input form of a single-layer unidirectional long short-term memory network; with n training samples and 512 hidden-layer nodes, the structure is (n, T_c, 512). Finally, the spatial feature matrix and the temporal feature matrix are fused to obtain the gait feature matrix, i.e. the gait matching sequence, output with shape (n, 512).
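As a small sanity check on these shapes, the spatial branch's output size can be traced under the assumption of two stride-2, 'same'-padded convolutions (the exact layer hyper-parameters are not given in the text and are assumed here):

```python
def spatial_branch_shape(tc, h=46, w=46):
    """Trace the spatial-branch output shape: two stride-2 dimension
    reductions take 46x46 down to 12x12 (assuming 'same' padding), and
    the channel-adjusting convolution gives 512 feature maps."""
    for _ in range(2):                 # two dimension reductions
        h = (h + 1) // 2               # stride-2 conv with 'same' padding
        w = (w + 1) // 2
    return (tc, h, w, 512)
```

For a clip of T_c = 30 frames this reproduces the stated T_c × 12 × 12 × 512 output.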
Step two: face recognition
A complete face recognition system based on surveillance video can be divided into three modules: face detection, feature extraction, and classification and recognition. The flow chart is shown in fig. 8: video is first captured, face detection is then performed and the presence of a face is judged; if no face is present, the process returns to the face detection step, otherwise feature extraction and dimension reduction are performed in turn, and the optimal features are selected and classified to obtain the face features.
The embodiment provides a face recognition algorithm with good robustness and adaptive weighting HOG characteristics through research and analysis on different recognition contribution rates of HOG characteristics and face local characteristics.
The main purpose of the HOG algorithm is to perform gradient calculation on the image and count its gradient directions and gradient magnitudes. The extracted edge and gradient features capture local shape well, and because Gamma correction and cell-level normalization are applied to the image, the features are largely invariant to geometric and photometric changes, while translation or rotation has little influence over sufficiently small regions. The HOG algorithm therefore has good robustness to expression changes, illumination, scene changes and other linear transformations.
1. The process of extracting the face features by the HOG algorithm is as follows:
1) and the whole captured human face image is regarded as a characteristic acquisition window.
2) A sliding region block is arranged in the collection window, and the block is divided into several cell units of uniform size, as shown in fig. 9(a).
3) The gradient histogram of each cell is calculated from the direction and magnitude of the gradient of each of its pixels, and the histograms of all cells are then combined as the gradient histogram of the current block. To further reduce the influence of illumination, background, expression, pose and the like, the histogram is normalized; this example uses L2-norm normalization.
4) The block is then translated along the horizontal and vertical directions of the current window with a certain step, as shown in fig. 9(b); after each translation the gradient histogram of the current block is computed, and finally the gradient histograms of all blocks are concatenated as the final face features.
The step of extracting the HOG features of a face image with the dimension of 80 × 120 is as follows:
the size of the sliding region block is set to 20 × 20.
Secondly, each block is uniformly divided into 16 cell units, each cell being 5 × 5. Let H(x, y) be the gray value of any pixel point (x, y) in a cell; its horizontal gradient H_x(x, y) and vertical gradient H_y(x, y) are calculated as

H_x(x, y) = H(x + 1, y) − H(x − 1, y) and H_y(x, y) = H(x, y + 1) − H(x, y − 1),

and the gradient magnitude I(x, y) and gradient direction θ(x, y) of the pixel point (x, y) are respectively

I(x, y) = sqrt(H_x(x, y)^2 + H_y(x, y)^2)

and

θ(x, y) = arctan(H_y(x, y) / H_x(x, y)).
and thirdly, equally dividing the gradient direction of each cell into 10 signed directions, wherein the size of each direction is 18 degrees, and the size of the histogram of each cell is 10 dimensions.
Fourthly, the gradient histograms of the 16 cells in the current block are concatenated to obtain a 160-dimensional histogram, which represents the gradient histogram feature of the current block and is normalized at the same time.
And fifthly, the block is slid over the face image with horizontal and vertical step values of 20, the above steps are repeated to obtain the gradient histogram of each slid block, and finally the gradient histograms of all blocks are concatenated as the final face gradient-histogram feature. The final gradient histogram dimension is 4 × 6 × 160 = 3840.
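The five steps above can be sketched in NumPy as follows. This is a simplified version of the described procedure, not the patent's exact implementation: central-difference gradients and magnitude-weighted voting are assumed, and the direction is folded into [0°, 180°) to stand in for the 10 bins of 18°:

```python
import numpy as np

def hog_features(img):
    """HOG features for a 120x80 gray image: 20x20 blocks slid with step
    20, each split into 16 cells of 5x5, 10 orientation bins of 18 degrees,
    L2-normalised per block (a sketch of the described procedure)."""
    h, w = img.shape
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # horizontal gradient H_x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]        # vertical gradient H_y
    mag = np.hypot(gx, gy)                        # gradient magnitude I
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # direction in [0, 180)
    bins = (ang / 18.0).astype(int).clip(0, 9)    # 10 bins of 18 degrees
    feats = []
    for by in range(0, h, 20):                    # slide the 20x20 block
        for bx in range(0, w, 20):
            block = []
            for cy in range(by, by + 20, 5):      # 16 cells of 5x5
                for cx in range(bx, bx + 20, 5):
                    hist = np.zeros(10)
                    b = bins[cy:cy+5, cx:cx+5].ravel()
                    m = mag[cy:cy+5, cx:cx+5].ravel()
                    np.add.at(hist, b, m)         # magnitude-weighted votes
                    block.append(hist)
            block = np.concatenate(block)         # 160-dim block histogram
            norm = np.linalg.norm(block)          # L2-norm normalisation
            feats.append(block / norm if norm > 0 else block)
    return np.concatenate(feats)                  # 24 blocks x 160 = 3840
```

For a 120 × 80 input this produces the 4 × 6 × 160 = 3840-dimensional feature vector stated above.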
2. Feature extraction dimension reduction processing
The dimension of the extracted HOG feature is large: for example, an image of 120 × 80 resolution yields a 3840-dimensional feature vector. This embodiment therefore uses Non-negative Matrix Factorization (NMF) to reduce its dimensionality, forming a HOG-NMF feature sequence that meets the system's real-time requirement. NMF is an optimization process under a cost-function constraint, a matrix decomposition method in which all matrix elements are constrained to be non-negative, whose approximate solution can be obtained by iterative operation. The specific flow is as follows:
1) Let F be a HOG feature vector of length l (the face feature matrix before dimension reduction); take its absolute value and convert it into an m × n matrix G, where l = m × n and m > n.
2) Perform a rank-r NMF decomposition of the matrix G according to G = ZY^T, r < m, where Z and Y are non-negative basis and coefficient matrices of size m × r and n × r respectively.
3) Normalize each column vector h_i of the Z and Y matrices, i.e. h_i = h_i / ||h_i||.
4) Finally, concatenate all the normalized column vectors h_i into the HOG-NMF feature, i.e. the final face feature.
In this embodiment, taking 120 × 80 resolution as an example, HOG extraction outputs 3840 dimensions, i.e. l = 3840, m = 480, n = 8 and r = 2; after NMF dimension reduction the feature length becomes (m + n) × r = 976, a reduction of about 75%. The basis and coefficient matrices obtained by the low-rank decomposition contain the main characteristics of the original matrix, so the HOG-NMF feature inherits the excellent properties of HOG well while greatly improving the real-time performance of the subsequent fusion algorithm.
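The four-step flow can be sketched with plain multiplicative-update NMF (Lee–Seung rules); the iteration count and random seed below are illustrative, not from the patent:

```python
import numpy as np

def hog_nmf(f, m, n, r, iters=200, seed=0):
    """Reduce a HOG vector f (length l = m*n) via rank-r NMF, G ~ Z @ Y.T,
    then L2-normalise the columns of Z and Y and concatenate them into
    the HOG-NMF feature of length (m + n) * r."""
    G = np.abs(f).reshape(m, n)                   # non-negative m x n matrix
    rng = np.random.default_rng(seed)
    Z = rng.random((m, r)) + 1e-6                 # non-negative basis matrix
    Y = rng.random((n, r)) + 1e-6                 # non-negative coefficients
    for _ in range(iters):                        # multiplicative updates
        Y *= (G.T @ Z) / (Y @ Z.T @ Z + 1e-12)
        Z *= (G @ Y) / (Z @ Y.T @ Y + 1e-12)
    cols = [Z[:, i] / np.linalg.norm(Z[:, i]) for i in range(r)]
    cols += [Y[:, i] / np.linalg.norm(Y[:, i]) for i in range(r)]
    return np.concatenate(cols)                   # HOG-NMF feature
```

With m = 480, n = 8 and r = 2 the concatenated feature has (480 + 8) × 2 = 976 entries.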
Step three: identity recognition
The information fusion methods for multiple biological features can be divided into four main types: data-layer fusion, feature-layer fusion, matching-layer fusion and decision-layer fusion. Fig. 10 shows the process of fusing multiple features at these different layers; this embodiment adopts an adaptive weighted fusion method based on the decision layer. By adaptively assigning weights to the distance matching values of each biological feature at the decision layer, an optimal joint matching score, and thus the best information fusion effect, can be obtained.
Decision-level image fusion is a higher-level information fusion, and the result provides a basis for various controls or decisions. The method comprises the steps of firstly processing each source data to respectively obtain judgment and identification results, and coordinating the results with the reliability of each data source decision according to a certain criterion by a fusion center to obtain an optimal decision result. The decision-level fusion method is mainly a cognitive model-based method, and needs a large-scale database and an expert decision system for analysis, reasoning, identification and judgment. The fusion is good in real-time performance and has certain fault-tolerant capability.
The decision-level fusion is a high-level fusion, and comprises the steps of independently processing different characteristics to obtain recognition results, and then integrating the obtained multiple recognition results by adopting a fusion algorithm to finally obtain a final result. The fusion of decision levels has the advantages of high flexibility, strong anti-interference capability, low processing cost, little dependence on sensors and the like.
The support vector machine (SVM) based algorithm is one of the best methods applied in current intelligent fusion classification: it constructs the model with the best generalization ability from limited sample information, i.e. it seeks the optimal balance between model complexity and learning ability, and shows specific advantages in both linear and nonlinear classification problems.
On the basis of an SVM algorithm, in order to improve the robustness of a fusion algorithm of a decision layer, the decision layer fusion is guided in an auxiliary mode by using discrimination results (distance information) of feature layers of different modes, the weight judgment of confidence is carried out by introducing the rejection rate, and a self-adaptive weighting fusion algorithm is formed, so that a recognition result is obtained.
A decision layer fusion algorithm step:
1) The gait feature obtained in step one and the face feature obtained in step two, each as a one-dimensional feature column vector c, are projected onto the basis images. First the training image database of faces and gaits is read in, giving the training image matrix V = (V_1, V_2, ···, V_n), where n is the number of images in the training library and each column vector V_i represents a training image. Through the NMF conversion formula V = AH (H is the projection coefficient matrix and each column of the A matrix is a basis image), the projection r = c × A of the feature vector on the coordinate system formed by the A matrix is obtained, giving the projection matrices (the face projection matrix and the gait projection matrix). The Euclidean distances between the image to be recognized and all n existing training images (images or videos entered into the created training library by the user, or an existing finished library) are then obtained from the projection matrix, yielding the Euclidean distance arrays (the gait Euclidean distance array and the face Euclidean distance array). The gait Euclidean distance array is denoted Ob = {Ob_1, Ob_2, ···, Ob_n} and the face Euclidean distance array Of = {Of_1, Of_2, ···, Of_n}. The obtained gait and face distance-matching-value arrays are then each normalized with the linear normalization method according to

r_norm = (r_o − min(r_o)) / (max(r_o) − min(r_o))

where r_o is a Euclidean distance array and r_norm is the Euclidean distance array after linear normalization, giving the normalized gait Euclidean distance array r_b = {r_b1, r_b2, ···, r_bn} and the normalized face Euclidean distance array r_f = {r_f1, r_f2, ···, r_fn}. The linear normalization method maps a Euclidean distance array into [0, 1] without changing the distribution of the original matching data.
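A minimal sketch of this step, with the projections treated as plain NumPy vectors (the NMF projection itself is omitted): Euclidean distances from the probe projection to each training projection are collected into an array and then linearly normalized into [0, 1]:

```python
import numpy as np

def distance_array(probe, train_projs):
    """Euclidean distance from the probe projection to each of the n
    training projections (the rows of train_projs)."""
    return np.linalg.norm(train_projs - probe, axis=1)

def normalize_distances(d):
    """Linear normalisation r_norm = (r_o - min) / (max - min): maps the
    distance array into [0, 1] without changing its distribution."""
    d = np.asarray(d, dtype=float)
    return (d - d.min()) / (d.max() - d.min())
```

The same pair of helpers serves both modalities, producing the gait and face Euclidean distance arrays and their normalized counterparts.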
2) The normalized gait and face distance-matching-value arrays serve as one of the parameters of the subsequent confidence calculation. How to assign the weight of each modality so as to achieve the best fusion effect is, however, the core problem of the whole decision-layer fusion. This embodiment therefore introduces the rejection rate into the decision-layer algorithm. The false rejection rate (FRR) is an important parameter of classification performance in recognition systems; especially in multi-modal recognition, the difference in classification performance between modalities can reduce the recognition accuracy of the system. Including the rejection rate in the confidence calculation therefore improves the capability of information fusion, extraction and classification, and enhances the robustness of the system. In this embodiment, from the normalized feature distance information (the normalized Euclidean distance arrays of gait and face) and the rejection rate, the confidence of each modality (gait and face) is calculated as

C_{b,f} = (1 − FRR_{b,f}) × (1/n) Σ_{i=1}^{n} (1 − r_{b,f}(i))

where r_{b,f} are the normalized Euclidean distance arrays, n is the total number of values in a normalized Euclidean distance array, and FRR_{b,f} are the modal rejection rates. When the gait confidence is calculated, the normalized gait Euclidean distance array and the rejection rate of the gait features are used; when the face confidence is calculated, the normalized face Euclidean distance array and the rejection rate of the face features are used. According to the confidence formula, the confidence of each modality is determined by its normalized Euclidean distance array and the performance parameter of its classifier. Accordingly, the weight w_{b,f} of each modality fused in the decision layer can be calculated as
w_{b,f} = e^{C_{b,f}} / Σ_{i=1}^{m} e^{C_i}

where 0 ≤ w_{b,f} ≤ 1 and m is the number of recognition modalities; this embodiment uses the two modalities of gait and face, so m = 2 and w_b + w_f = 1.
Converting the confidence into the gait and face weight factors through this exponential relationship strengthens the link between high confidence and high weight, making the resulting fusion system more robust to noise. The modal weight value reflects the status of the two modalities' sub-classification results in the decision and thus directly influences the quality and effect of the final decision: the higher the weight factor, the stronger the classification ability of the corresponding modality and the greater its influence on the final decision. In this way the advantages of the two recognition modalities complement each other well, improving the recognition rate and anti-interference performance of the whole system.
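This exponential mapping can be illustrated with a small sketch, assuming a softmax-style form (the patent's exact equation image is not reproduced in the text):

```python
import numpy as np

def fusion_weights(confidences):
    """Map per-modality confidences to fusion weights via an exponential
    (softmax-style) relationship, so each weight lies in [0, 1] and the
    weights sum to 1."""
    c = np.asarray(confidences, dtype=float)
    e = np.exp(c - c.max())            # subtract max for numerical stability
    return e / e.sum()
```

A higher-confidence modality receives a larger weight, and with two modalities (gait, face) the two weights sum to 1.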
3) Finally, in the decision fusion, the weight factors are used as the weights of the sub-decisions according to

T = argmax_{O_i} Σ_{j ∈ {b,f}} w_j · δ(T_j, O_i)

to obtain the final identity recognition result T, where

δ(T_j, O_i) = 1 if T_j = O_i, and 0 otherwise

represents the configuration identification coefficient, T_b and T_f are the recognition results of the two modalities respectively, and O_i is the target category, here drawn from the two collected classes of gait and face. In the decision fusion method guided by feature information and rejection rate proposed in this embodiment, the normalized feature distances (normalized Euclidean distance arrays) come from different modalities, and their matching distances carry different information, so the feature distances are non-standardized and change dynamically. The confidence of a modality is obtained by extracting the effective distributional information of the normalized feature distances of the different modalities and adding the rejection rates of the different modal systems. The confidence is mathematically mapped to a corresponding weight factor, and decision-level fusion is performed with the different weights, making this a dynamic, adaptive weighted fusion recognition algorithm.
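The weighted sub-decision rule can be sketched as a weighted vote: each modality's recognition result contributes its fusion weight to the class it predicts, and the class with the highest total score wins (the indicator form of the rule is an assumption, since the patent's equation images are not reproduced here):

```python
def fuse_decisions(sub_results, weights, classes):
    """Weighted decision-level fusion: each modality's result adds its
    fusion weight to the score of the class it predicts; the class with
    the highest accumulated score is the final identity result."""
    scores = {c: 0.0 for c in classes}
    for result, w in zip(sub_results, weights):
        scores[result] += w            # delta(T_j, O_i) = 1 when T_j == O_i
    return max(scores, key=scores.get)
```

With gait weight 0.7 predicting "alice" and face weight 0.3 predicting "bob", the fused decision is "alice".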
The invention has the following technical effects:
1. the invention has strong robustness under the condition of complex background.
2. Since most research does not consider interference factors such as clothing changes and carried articles, and only recognizes ordinary pedestrians walking, the recognition rate drops under such interference. The invention proposes a method fusing a convolutional neural network and a long short-term memory network, which can extract both the dynamic and static characteristics of the target and reduce the influence of interference factors such as clothing changes and carried articles.
3. On the basis of the SVM algorithm, in order to improve the robustness of the fusion algorithm of the decision layer, the decision layer fusion is guided in an auxiliary mode by using the discrimination results (distance information) of the feature layers of different modes, the weight judgment of confidence is carried out by introducing the rejection rate, and a self-adaptive weighting fusion algorithm is formed, so that the recognition result is obtained. The method well plays the complementary advantages of the two recognition modes under different angles and different scenes, and improves the recognition rate, the anti-interference performance and the application range of the whole system. In practical application, the performance of the identity recognition system can be effectively improved under the condition of slightly increasing the system cost and the user burden.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. An identity recognition method based on multi-biological characteristic self-adaptive fusion is characterized by comprising the following steps:
acquiring a video sequence to be identified, and acquiring a video sequence in a gait cycle from the video sequence to be identified to obtain a cycle video; the video sequence to be recognized comprises a person to be recognized;
acquiring a PAF matrix corresponding to each frame of the periodic video of the person to be identified to obtain a walking characteristic vector diagram; the PAF matrix comprises PAFs of limbs and PAFs of the trunk;
inputting the walking characteristic vector diagram into a first convolution neural network and a long-time and short-time memory neural network respectively to obtain a spatial characteristic matrix and a time characteristic matrix;
performing feature fusion on the space feature matrix and the time feature matrix to obtain a gait feature matrix of the person to be identified;
acquiring a face image of a person to be identified in the video sequence to be identified, and processing the face image of the person to be identified by using a face identification algorithm with a self-adaptive weighted HOG (histogram of oriented gradient) feature to obtain a face feature matrix;
and processing the face feature matrix, the gait feature matrix and the standard image in the image database by adopting an adaptive weighting fusion algorithm based on an SVM to obtain an identity recognition result of the person to be recognized.
2. The identity recognition method based on the multi-biometric feature adaptive fusion as claimed in claim 1, wherein the determination method of the gait cycle is:
and inputting the video sequence to be recognized into a trained second convolutional neural network to obtain the gait cycle of the person to be recognized.
3. The identity recognition method based on the adaptive multi-biometric fusion as claimed in claim 1, further comprising, before the processing the standard images in the face feature matrix, the gait feature matrix and the image database by the adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized:
and performing dimension reduction processing on the face feature matrix to obtain a face feature matrix after dimension reduction.
4. The identity recognition method based on the adaptive fusion of multiple biological features according to claim 1, wherein the obtaining of the face image of the person to be recognized in the video sequence to be recognized and the processing of the face image of the person to be recognized using a face recognition algorithm with an adaptive weighted HOG feature to obtain a face feature matrix specifically comprises:
acquiring a face image of the person to be recognized in the video sequence to be recognized;
Dividing the face image of the person to be identified into cell units with the same size;
calculating a gradient histogram of each cell unit according to the direction and the amplitude of the pixel gradient in each cell unit;
and determining the gradient histograms of all cell units as a human face feature matrix.
5. The identity recognition method based on the multi-biometric feature adaptive fusion as claimed in claim 1, wherein the processing of the standard images in the face feature matrix, the gait feature matrix and the image database by the SVM-based adaptive weighted fusion algorithm to obtain the identity recognition result of the person to be recognized specifically comprises:
respectively projecting the gait feature matrix and the face feature matrix to a coordinate system formed by a base image to obtain a gait projection matrix and a face projection matrix;
calculating gait Euclidean distances between the images in the video sequence to be recognized and each standard image in an image database according to the gait projection matrix to obtain a gait Euclidean distance array; calculating the Euclidean distance between the image in the video sequence to be recognized and the face of each standard image in an image database according to the face projection matrix to obtain a face Euclidean distance array;
Calculating a gait confidence coefficient and a face confidence coefficient according to the gait Euclidean distance array, the face Euclidean distance array, the rejection rate of the gait feature matrix and the rejection rate of the face feature matrix;
calculating a gait fusion weight according to the gait confidence coefficient, and calculating a face fusion weight according to the face confidence coefficient;
and obtaining the identity recognition result of the person to be recognized according to the gait fusion weight and the face fusion weight.
6. An identity recognition system based on adaptive fusion of multiple biological features, comprising:
the video sequence acquisition module is used for acquiring a video sequence to be recognized and extracting a video sequence within one gait cycle from the video sequence to be recognized to obtain a periodic video; the video sequence to be recognized comprises a person to be recognized;
the PAF matrix acquisition module is used for acquiring a PAF (part affinity field) matrix corresponding to each frame of the person to be recognized in the periodic video to obtain a walking feature vector diagram; the PAF matrix comprises the PAFs of the limbs and the PAF of the torso;
the spatio-temporal feature determining module is used for respectively inputting the walking feature vector diagram into a first convolutional neural network and a long short-term memory (LSTM) neural network to obtain a spatial feature matrix and a temporal feature matrix;
the gait feature matrix determining module is used for performing feature fusion on the spatial feature matrix and the temporal feature matrix to obtain a gait feature matrix of the person to be recognized;
the face feature matrix determining module is used for acquiring a face image of the person to be recognized in the video sequence to be recognized, and processing the face image of the person to be recognized by using a face recognition algorithm based on adaptively weighted HOG features to obtain a face feature matrix;
and the identity recognition module is used for processing the face feature matrix, the gait feature matrix, and the standard images in the image database by adopting an adaptive weighted fusion algorithm based on a support vector machine (SVM) to obtain the identity recognition result of the person to be recognized.
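The gait branch of claim 6 (per-frame PAF diagrams, a spatial branch, a temporal branch, then feature fusion) can be illustrated as shape plumbing. The trained CNN and LSTM are replaced here by trivial NumPy reductions purely to show how the two feature matrices are fused by concatenation; the real system would use the trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# One PAF "walking feature vector diagram" per frame: limb and torso
# part affinity fields, random stand-ins here (T frames, H x W x C).
T, H, W, C = 30, 32, 32, 2
paf_frames = rng.standard_normal((T, H, W, C))

# Stand-ins for the two branches: the patent feeds the diagrams to a
# trained CNN (spatial) and an LSTM (temporal); simple reductions are
# used here only to show the shapes being fused.
spatial = paf_frames.mean(axis=0).reshape(-1)                 # (H*W*C,)
temporal = np.diff(paf_frames, axis=0).mean(axis=(1, 2, 3))   # (T-1,)

# Feature fusion: concatenate into a single gait feature vector
gait_feature = np.concatenate([spatial, temporal])
```

The concatenated vector is what the later claims treat as the gait feature matrix to be projected and matched.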
7. The system according to claim 6, wherein the video sequence acquisition module comprises:
the gait cycle determining submodule is used for inputting the video sequence to be recognized into the trained second convolutional neural network to obtain the gait cycle of the person to be recognized.
8. The identity recognition system based on the adaptive fusion of multiple biometric features, characterized by further comprising:
the dimension reduction module is used for performing dimension reduction processing on the face feature matrix to obtain a dimension-reduced face feature matrix.
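The claim does not name a dimension-reduction technique; PCA is a common choice for HOG face features and is sketched here under that assumption.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto their k leading principal components.
    (PCA is an assumed choice -- the claim does not name a method.)"""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)             # feature covariance
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k largest eigenvectors
    return Xc @ top

rng = np.random.default_rng(1)
face_features = rng.standard_normal((20, 64))  # 20 samples, 64-dim HOG
reduced = pca_reduce(face_features, 8)
```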
9. The identity recognition system based on the adaptive fusion of multiple biometric features according to claim 6, wherein the face feature matrix determining module specifically comprises:
the acquisition submodule is used for acquiring the face image of the person to be recognized in the video sequence to be recognized;
the cell unit determining submodule is used for dividing the face image of the person to be recognized into cell units of the same size;
the gradient histogram determining submodule is used for calculating a gradient histogram for each cell unit according to the direction and magnitude of the pixel gradients in that cell unit;
and the face feature matrix determining submodule is used for determining the gradient histograms of all the cell units as the face feature matrix.
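The per-cell gradient histogram computation of claim 9 can be sketched directly. Unsigned 9-bin orientation histograms are assumed, as in standard HOG; the adaptive weighting of the cells is omitted here.

```python
import numpy as np

def cell_histograms(img, cell=8, bins=9):
    """Split the image into equal cells and, for each cell, bin the
    pixel gradient magnitudes by orientation (unsigned, 0-180 deg)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (ang[sl] / (180.0 / bins)).astype(int) % bins
            # accumulate magnitude into the orientation bins of this cell
            np.add.at(hists[i, j], idx.ravel(), mag[sl].ravel())
    return hists

rng = np.random.default_rng(2)
face = rng.integers(0, 256, size=(32, 32))   # stand-in face image
H = cell_histograms(face)                    # 4 x 4 cells, 9 bins each
```

Flattening and concatenating these histograms yields the face feature matrix the later claims project and match.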
10. The identity recognition system based on the adaptive fusion of multiple biometric features according to claim 6, wherein the identity recognition module specifically comprises:
the projection matrix determining submodule is used for respectively projecting the gait feature matrix and the face feature matrix onto a coordinate system formed by base images to obtain a gait projection matrix and a face projection matrix;
the Euclidean distance array determining submodule is used for calculating a gait Euclidean distance between the image in the video sequence to be recognized and each standard image in the image database according to the gait projection matrix to obtain a gait Euclidean distance array, and calculating a face Euclidean distance between the image in the video sequence to be recognized and each standard image in the image database according to the face projection matrix to obtain a face Euclidean distance array;
the confidence coefficient determining submodule is used for calculating a gait confidence coefficient and a face confidence coefficient according to the gait Euclidean distance array, the face Euclidean distance array, the rejection rate of the gait feature matrix, and the rejection rate of the face feature matrix;
the fusion weight determining submodule is used for calculating the gait fusion weight according to the gait confidence coefficient and calculating the face fusion weight according to the face confidence coefficient;
and the identity recognition submodule is used for obtaining the identity recognition result of the person to be recognized according to the gait fusion weight and the face fusion weight.
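The projection and distance-array steps at the start of claim 10 can be sketched as follows, assuming an orthonormal base-image basis (the publication does not specify how the basis is constructed; a QR-orthonormalized random basis stands in here).

```python
import numpy as np

def project(feature, basis):
    """Coordinates of a feature vector in the base-image coordinate
    system (the basis columns are assumed orthonormal)."""
    return basis.T @ feature

def distance_array(probe_proj, gallery_projs):
    """Euclidean distance from the probe to each standard image."""
    return np.linalg.norm(gallery_projs - probe_proj, axis=1)

rng = np.random.default_rng(3)
basis, _ = np.linalg.qr(rng.standard_normal((50, 5)))  # 5 base images
probe = project(rng.standard_normal(50), basis)
gallery = np.stack([project(rng.standard_normal(50), basis)
                    for _ in range(4)])                # 4 standard images
dists = distance_array(probe, gallery)
```

One such array per modality feeds the confidence and fusion-weight computations of the remaining submodules.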
CN202210166085.5A 2022-02-23 2022-02-23 Identity recognition method and system based on multi-biological-feature self-adaptive fusion Pending CN114519899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210166085.5A CN114519899A (en) 2022-02-23 2022-02-23 Identity recognition method and system based on multi-biological-feature self-adaptive fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210166085.5A CN114519899A (en) 2022-02-23 2022-02-23 Identity recognition method and system based on multi-biological-feature self-adaptive fusion

Publications (1)

Publication Number Publication Date
CN114519899A true CN114519899A (en) 2022-05-20

Family

ID=81598322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210166085.5A Pending CN114519899A (en) 2022-02-23 2022-02-23 Identity recognition method and system based on multi-biological-feature self-adaptive fusion

Country Status (1)

Country Link
CN (1) CN114519899A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456556A (en) * 2023-11-03 2024-01-26 中船凌久高科(武汉)有限公司 Nursed outdoor personnel re-identification method based on various fusion characteristics


Similar Documents

Publication Publication Date Title
CN111797716B (en) Single target tracking method based on Siamese network
CN108460356B (en) Face image automatic processing system based on monitoring system
CN109902590B (en) Pedestrian re-identification method for deep multi-view characteristic distance learning
CN111898736B (en) Efficient pedestrian re-identification method based on attribute perception
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN104598883B (en) Target knows method for distinguishing again in a kind of multiple-camera monitoring network
CN113592911B (en) Apparent enhanced depth target tracking method
CN109086659B (en) Human behavior recognition method and device based on multi-channel feature fusion
CN113963032A (en) Twin network structure target tracking method fusing target re-identification
CN112884742A (en) Multi-algorithm fusion-based multi-target real-time detection, identification and tracking method
CN111914643A (en) Human body action recognition method based on skeleton key point detection
CN111476077A (en) Multi-view gait recognition method based on deep learning
CN108985375B (en) Multi-feature fusion tracking method considering particle weight spatial distribution
CN112329784A (en) Correlation filtering tracking method based on space-time perception and multimodal response
Yi et al. Mining human movement evolution for complex action recognition
CN114519899A (en) Identity recognition method and system based on multi-biological-feature self-adaptive fusion
CN114708615A (en) Human body detection method based on image enhancement in low-illumination environment, electronic equipment and storage medium
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN113763417B (en) Target tracking method based on twin network and residual error structure
US11854306B1 (en) Fitness action recognition model, method of training model, and method of recognizing fitness action
CN117541994A (en) Abnormal behavior detection model and detection method in dense multi-person scene
CN115311327A (en) Target tracking method and system integrating co-occurrence statistics and fhog gradient features
Chen et al. Reference set based appearance model for tracking across non-overlapping cameras
Reddy et al. Facial Recognition Enhancement Using Deep Learning Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination