CN108446601B - Face recognition method based on dynamic and static feature fusion - Google Patents

Face recognition method based on dynamic and static feature fusion Download PDF

Info

Publication number
CN108446601B
CN108446601B (application CN201810163721.2A)
Authority
CN
China
Prior art keywords
dynamic
features
static
face
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810163721.2A
Other languages
Chinese (zh)
Other versions
CN108446601A (en)
Inventor
帅立国
秦博豪
陈慧玲
王旭
张志胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201810163721.2A priority Critical patent/CN108446601B/en
Publication of CN108446601A publication Critical patent/CN108446601A/en
Application granted granted Critical
Publication of CN108446601B publication Critical patent/CN108446601B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method based on dynamic and static feature fusion, that is, a face recognition method combining static features with dynamic features. The static features emphasize the global contour: the face image features are treated as high-dimensional features, mapped into a low-dimensional subspace through linear and nonlinear transformations, and the features of the original sample are obtained in the low-dimensional space for classification. The dynamic features emphasize local change: from facial expressions such as smiling or sadness, a set of functions of muscle change over time is obtained by extracting the dynamic features of facial muscle movement. Accurate identification and classification are then performed, improving face recognition accuracy.

Description

Face recognition method based on dynamic and static feature fusion
Technical Field
The invention relates to a face recognition method based on dynamic and static feature fusion, and belongs to the technical field of face recognition.
Background
At present, traditional identity recognition methods carry the risks of being inconvenient to carry, easy to lose, easy to damage, and easy to crack or steal. Face recognition has therefore attracted wide attention: its strong stability, concealment, and inter-individual differences guarantee security, and its application fields, such as security, civil, and military uses, are increasingly wide. As a typical application of biometric recognition, face recognition has broad prospects in national defense, finance, justice, commerce, and other fields, and has received close attention and approval from society. At the same time, recognition accuracy has become an important factor restricting the development of face recognition.
Face recognition usually encounters the small-sample problem: the number of training samples is far smaller than the dimensionality of the face samples to be recognized, which makes it difficult for traditional feature extraction and classification methods to achieve strong robustness and a good recognition rate. A face recognition method that combines static and dynamic features can greatly improve recognition accuracy.
Among similar patents, CN201010522281.9, a sparse-representation face recognition method based on multi-level classification, is a static feature recognition method, which differs from the combination of static and dynamic features emphasized in this patent. CN201510102708.2, a dynamic face recognition method and system, proposes a dynamic face recognition method, but its "dynamic" refers to capturing and tracking a person in motion; the recognition is essentially static feature recognition, the dynamics being the movement of the person as a whole. In contrast, human muscle movement characteristics are directly related to long-term accumulated movement and show obvious individual differences; muscle features are formed by long-term habit, are not easily imitated, and are highly distinctive. By providing a method that combines static and dynamic features, the invention effectively improves face recognition accuracy without affecting speed.
Disclosure of Invention
The invention provides a face recognition method based on dynamic and static feature fusion. By combining the global contour (the static feature part) with local dynamic features (the dynamic feature part), it can greatly improve face recognition accuracy without affecting recognition speed, solving the current problem of low face recognition accuracy.
The technical scheme adopted by the invention for solving the technical problems is as follows:
A face recognition method based on dynamic and static feature fusion improves face recognition accuracy by combining static features with dynamic features;
as a further preferred aspect of the invention, the static features are the overall contour features extracted from the face, and the dynamic features are the muscle features extracted when the facial expression changes;
as a further preferred aspect of the invention, the method comprises the following steps:
step A, static feature extraction, specifically comprising the following substeps:
step A1, obtain a video stream from a camera or from a pre-stored video file,
step A2, intercept key frames from the acquired video stream,
step A3, obtain the face contour features from the key-frame image information by combining principal component analysis, independent component analysis, and linear discriminant analysis,
step A4, process the face contour features with a gradient-image algorithm to obtain high-dimensional feature data, and transform the face contour features with linear or nonlinear processing such as binarization or histograms to obtain low-dimensional feature data,
step A5, perform a similarity measurement, i.e. feature matching, on the high-dimensional and low-dimensional feature data to obtain one or more similar results of static feature matching (a sketch of one such measurement follows this step list);
step B, dynamic feature extraction, specifically comprising the following substeps:
step B1, obtain a video stream from a camera or from a pre-stored video file,
step B2, extract the dynamic features in the video stream using optical flow and frame-difference methods and determine the target region,
step B3, select the required face window from the target region and establish a local window,
step B4, binarize the local-window image, extract the dynamic contour features, and transform the obtained contour feature information into action sequences using a pyramid match kernel or a sliding-window algorithm, thereby constructing expression action sequences,
step B5, generate motion vector information for matching the expression action sequences: the dynamic features capture the facial expression change, the muscle dynamics corresponding to the face are extracted for a specified expression, a motion model is established, and the motion vectors are matched against the motion model to obtain the final result;
step C, fuse the result sets: verify the one or more similar results obtained by static feature matching against the motion vectors obtained by dynamic matching, remove erroneous results, obtain the final recognition result, give the recognition confidence, and finish the recognition operation;
step D, if the confidence does not meet the requirement, restart the whole recognition process;
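For illustration only, the following is a minimal sketch of the similarity measurement of step A5, assuming the stored gallery is a mapping from identity to a low-dimensional feature vector; the function names, the cosine measure, and the 0.8 threshold are assumptions of this sketch, not details fixed by the patent.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two feature vectors; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_static(probe: np.ndarray, gallery: dict, threshold: float = 0.8) -> list:
    # Compare the probe feature against every stored identity and keep all
    # identities whose similarity clears the threshold -- the "one or more
    # similar results" of step A5 -- ranked best-first.
    scores = {name: cosine_similarity(probe, feat) for name, feat in gallery.items()}
    return sorted((n for n, s in scores.items() if s >= threshold),
                  key=lambda n: -scores[n])
```

In use, match_static(probe_vector, gallery) would return the ranked candidate identities that step C then verifies against the dynamic result.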
as a further preferred embodiment of the invention, the dynamic features in the video stream are extracted using optical flow and frame-difference methods, where the optical flow method refers to the optical flow information method; dynamic feature extraction methods further include the spatio-temporal feature point method and local descriptors;
as a further preferred aspect of the invention, static feature extraction methods include, but are not limited to, principal component analysis, independent component analysis, and linear discriminant analysis.
Through the above technical scheme, compared with the prior art, the invention has the following beneficial effects:
face recognition accuracy is improved; combining the global contour with local dynamic features greatly increases the reliability of face recognition, allows face recognition to be introduced into the industrial field, and improves production efficiency.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flow chart of the face recognition algorithm of the invention;
FIG. 2 is a schematic diagram of dynamic feature capture for face recognition according to the invention;
FIG. 3 is a schematic diagram of the expression sequence constructed by the invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. The drawings are simplified schematic views that illustrate only the basic structure of the invention, and therefore show only the components related to the invention.
Human muscle movement characteristics are directly related to long-term accumulated movement and show obvious individual differences; muscle features are formed by long-term habit, are not easily imitated, and are highly distinctive.
The invention provides a face recognition method based on dynamic and static feature fusion, which improves face recognition accuracy by combining static features with dynamic features;
the static features are the overall contour features extracted from the face, and the dynamic features are the muscle features extracted when the facial expression changes;
when applied, the face recognition method based on dynamic and static feature fusion is divided into a learning process and a recognition process;
in order to improve the real-time performance and accuracy of the face recognition system, an ELM algorithm can be adopted in the learning process, and the learning process is divided into a static feature part and a dynamic feature part; the static part comprises the steps of intercepting key frames from the obtained video stream, then obtaining the outline characteristics of the human face from the obtained image information by using static analysis methods such as a principal component analysis method, an independent component analysis method, a linear discrimination method and the like, obtaining high-dimensional characteristic data through algorithms such as a gradient image and the like, and then obtaining low-dimensional characteristic data through linear or nonlinear transformation such as binary, histogram and the like and storing the low-dimensional characteristic data;
the dynamic feature learning comprises the steps of extracting dynamic features of a video from an obtained video stream by using methods such as optical flow and difference, selecting a required face window from the dynamic features, carrying out binarization on an image, extracting contour features, constructing a motion sequence by using algorithms such as a pyramid matching kernel or a sliding window and the like to obtain dynamic contour information, and finally converting the dynamic contour information into motion vectors and storing the motion vectors; in the learning process of converting into the motion vector, methods such as RBM (limit boltzmann machine), DBN (deep belief network) and the like can be used to accelerate the convergence rate.
As shown in fig. 1, the recognition process includes the following steps:
step A, static feature extraction, specifically comprising the following substeps:
step A1, obtain a video stream from a camera or from a pre-stored video file,
step A2, intercept key frames from the acquired video stream,
step A3, obtain the face contour features from the key-frame image information by combining principal component analysis, independent component analysis, and linear discriminant analysis,
step A4, process the face contour features with a gradient-image algorithm to obtain high-dimensional feature data, and transform the face contour features with linear or nonlinear processing such as binarization or histograms to obtain low-dimensional feature data,
step A5, perform a similarity measurement, i.e. feature matching, on the high-dimensional and low-dimensional feature data to obtain one or more similar results of static feature matching;
step B, dynamic feature extraction, specifically comprising the following substeps:
step B1, obtain a video stream from a camera or from a pre-stored video file,
step B2, extract the dynamic features in the video stream using optical flow and frame-difference methods and determine the target region,
step B3, select the required face window from the target region and establish a local window,
step B4, binarize the local-window image, extract the dynamic contour features, and transform the obtained contour feature information into action sequences using a pyramid match kernel or a sliding-window algorithm, thereby constructing expression action sequences,
step B5, generate motion vector information for matching the expression action sequences, as shown in fig. 3: the dynamic features capture the facial expression change, the muscle dynamics corresponding to the face are extracted for a specified expression, a motion model is established, and the motion vectors are matched against the motion model to obtain the final result;
step C, fuse the result sets: verify the one or more similar results obtained by static feature matching against the motion vectors obtained by dynamic matching, remove erroneous results, obtain the final recognition result, give the recognition confidence, and finish the recognition operation (a sketch of this fusion and confidence check follows the step list);
step D, if the confidence does not meet the requirement, restart the whole recognition process;
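For steps C and D, the sketch below shows one way the static candidate set could be verified against the dynamic result, with a restart when the confidence requirement is not met; the injected match_static and match_dynamic helpers, the 0.9 confidence requirement, and the attempt limit are all illustrative assumptions of the sketch.

```python
def recognize(probe_static, probe_dynamic, gallery, motion_models,
              match_static, match_dynamic,
              required_confidence=0.9, max_attempts=3):
    # Steps C and D: fuse the static candidate set with the dynamic result.
    for _ in range(max_attempts):
        candidates = match_static(probe_static, gallery)                     # step A5 output
        identity, confidence = match_dynamic(probe_dynamic, motion_models)  # step B5 output
        # Verify the static result set with the dynamic result set: any static
        # candidate not confirmed by the motion model is treated as an error.
        verified = [c for c in candidates if c == identity]
        if verified and confidence >= required_confidence:
            return verified[0], confidence   # final recognition result + confidence
        # Step D: confidence requirement not met -- restart the whole process.
    return None, 0.0
```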
the static features mainly comprise the overall outline of the face, and the dynamic features mainly comprise muscle features when the expression of the face changes. As shown in fig. 2, the static features mainly extract the whole contour of the face, and perform face matching through the contour features, the dynamic features in the video stream are extracted by using the methods of optical flow and difference, the optical flow is an optical flow information method, and the extraction method of the dynamic features includes, but is not limited to, methods of optical flow information method, spatio-temporal feature point method, local description operator, and the like;
as a further preferred aspect of the invention, static feature extraction methods include, but are not limited to, principal component analysis, independent component analysis, and linear discriminant analysis.
The invention thus provides a face recognition method based on dynamic and static feature fusion, a new face recognition method that combines spatial and temporal information.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, "and/or" includes each of the listed items individually as well as any combination of them.
The term "connected" as used herein may mean either a direct connection between components or an indirect connection between components via other components.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (3)

1. A face recognition method based on dynamic and static feature fusion, characterized in that: face recognition accuracy is improved by combining static features with dynamic features;
the static features are the overall contour features extracted from the face, and the dynamic features are the muscle features extracted when the facial expression changes;
the method comprises the following steps:
step A, static feature extraction, specifically comprising the following substeps:
step A1, obtain a video stream from a camera or from a pre-stored video file,
step A2, intercept key frames from the acquired video stream,
step A3, obtain the face contour features from the key-frame image information by combining principal component analysis, independent component analysis, and linear discriminant analysis,
step A4, process the face contour features with a gradient-image algorithm to obtain high-dimensional feature data, and transform the face contour features with linear or nonlinear processing such as binarization or histograms to obtain low-dimensional feature data,
step A5, perform a similarity measurement, i.e. feature matching, on the high-dimensional and low-dimensional feature data to obtain one or more similar results of static feature matching;
step B, dynamic feature extraction, specifically comprising the following substeps:
step B1, obtain a video stream from a camera or from a pre-stored video file,
step B2, extract the dynamic features in the video stream using optical flow and frame-difference methods and determine the target region,
step B3, select the required face window from the target region and establish a local window,
step B4, binarize the local-window image, extract the dynamic contour features, and transform the obtained contour feature information into action sequences using a pyramid match kernel or a sliding-window algorithm, thereby constructing expression action sequences,
step B5, generate motion vector information for matching the expression action sequences: the dynamic features capture the facial expression change, the muscle dynamics corresponding to the face are extracted for a specified expression, a motion model is established, and the motion vectors are matched against the motion model to obtain the final result;
step C, fuse the result sets: verify the one or more similar results obtained by static feature matching against the motion vectors obtained by dynamic matching, remove erroneous results, obtain the final recognition result, give the recognition confidence, and finish the recognition operation;
step D, if the confidence does not meet the requirement, restart the whole recognition process.
2. The face recognition method based on dynamic and static feature fusion of claim 1, characterized in that: the dynamic features in the video stream are extracted using optical flow and frame-difference methods, where the optical flow method refers to the optical flow information method, and the dynamic feature extraction methods further include the spatio-temporal feature point method or local descriptors.
3. The face recognition method based on dynamic and static feature fusion of claim 1, characterized in that: the static feature extraction methods include, but are not limited to, principal component analysis, independent component analysis, or linear discriminant analysis.
CN201810163721.2A 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion Active CN108446601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810163721.2A CN108446601B (en) 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810163721.2A CN108446601B (en) 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion

Publications (2)

Publication Number Publication Date
CN108446601A CN108446601A (en) 2018-08-24
CN108446601B true CN108446601B (en) 2021-07-13

Family

ID=63192521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810163721.2A Active CN108446601B (en) 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion

Country Status (1)

Country Link
CN (1) CN108446601B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162664B (en) * 2018-12-17 2021-05-25 腾讯科技(深圳)有限公司 Video recommendation method and device, computer equipment and storage medium
CN111488773B (en) * 2019-01-29 2021-06-11 广州市百果园信息技术有限公司 Action recognition method, device, equipment and storage medium
CN110245593B (en) * 2019-06-03 2021-08-03 浙江理工大学 Gesture image key frame extraction method based on image similarity
CN110427825B (en) * 2019-07-01 2023-05-12 上海宝钢工业技术服务有限公司 Video flame identification method based on fusion of key frame and fast support vector machine
CN110874570A (en) * 2019-10-12 2020-03-10 深圳壹账通智能科技有限公司 Face recognition method, device, equipment and computer readable storage medium
CN111508105A (en) * 2019-12-25 2020-08-07 南通市海王电气有限公司 Access control system of intelligent power distribution cabinet
CN111652064B (en) * 2020-04-30 2024-06-07 平安科技(深圳)有限公司 Face image generation method, electronic device and readable storage medium
CN111680639B (en) * 2020-06-11 2022-08-30 支付宝(杭州)信息技术有限公司 Face recognition verification method and device and electronic equipment
CN111860400B (en) * 2020-07-28 2024-06-07 平安科技(深圳)有限公司 Face enhancement recognition method, device, equipment and storage medium
CN112749657A (en) * 2021-01-07 2021-05-04 北京码牛科技有限公司 House renting management method and system
CN113642446A (en) * 2021-08-06 2021-11-12 湖南检信智能科技有限公司 Detection method and device based on face dynamic emotion recognition
CN114299602A (en) * 2021-11-09 2022-04-08 北京九州安华信息安全技术有限公司 Micro-amplitude motion image processing method
CN115249393A (en) * 2022-05-09 2022-10-28 深圳市麦驰物联股份有限公司 Identity authentication access control system and method
CN115171176A (en) * 2022-05-24 2022-10-11 网易(杭州)网络有限公司 Object emotion analysis method and device and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216889A (en) * 2008-01-14 2008-07-09 浙江大学 A face image super-resolution method with the amalgamation of global characteristics and local details information
CN101388075B (en) * 2008-10-11 2011-11-16 大连大学 Human face identification method based on independent characteristic fusion
CN102024141A (en) * 2010-06-29 2011-04-20 上海大学 Face recognition method based on Gabor wavelet transform and local binary pattern (LBP) optimization
CN103136730B (en) * 2013-01-25 2015-06-03 西安理工大学 Fusion method of light stream of content in video image and dynamic structure of contour feature
CN103279745B (en) * 2013-05-28 2016-07-06 东南大学 A kind of face identification method based on half face multiple features fusion
CN104200804A (en) * 2014-09-19 2014-12-10 合肥工业大学 Various-information coupling emotion recognition method for human-computer interaction
CN104408440A (en) * 2014-12-10 2015-03-11 重庆邮电大学 Identification method for human facial expression based on two-step dimensionality reduction and parallel feature fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A coordinated real-time optimal dispatch method for unbundled electricity markets; Wang X et al.; IEEE Transactions on Power Systems; 2002-05-30; full text *
Research on expression recognition methods based on the fusion of geometric and appearance features; 宫玉娇; China Master's Theses Full-text Database, Information Science and Technology; 2014-09-15; full text *
A sparse object tracking method using locally linear embedding; 孙锐 et al.; Journal of Electronic Measurement and Instrumentation; 2017-08-15; full text *

Also Published As

Publication number Publication date
CN108446601A (en) 2018-08-24

Similar Documents

Publication Publication Date Title
CN108446601B (en) Face recognition method based on dynamic and static feature fusion
Adjabi et al. Past, present, and future of face recognition: A review
He et al. Dynamic feature matching for partial face recognition
Hong et al. Multimodal deep autoencoder for human pose recovery
Eidinger et al. Age and gender estimation of unfiltered faces
JP6411510B2 (en) System and method for identifying faces in unconstrained media
US8170280B2 (en) Integrated systems and methods for video-based object modeling, recognition, and tracking
Sheng et al. Siamese denoising autoencoders for joints trajectories reconstruction and robust gait recognition
WO2019006835A1 (en) Target recognition method based on compressed sensing
CN111126307B (en) Small sample face recognition method combining sparse representation neural network
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
CN110458235B (en) Motion posture similarity comparison method in video
Xu et al. Action recognition by saliency-based dense sampling
Wu et al. Convolutional LSTM networks for video-based person re-identification
Lai et al. Visual speaker identification and authentication by joint spatiotemporal sparse coding and hierarchical pooling
An Pedestrian Re‐Recognition Algorithm Based on Optimization Deep Learning‐Sequence Memory Model
Zhou et al. Face recognition using dense sift feature alignment
Ou et al. Gan-based inter-class sample generation for contrastive learning of vein image representations
Nalty et al. A brief survey on person recognition at a distance
Liu et al. Lip event detection using oriented histograms of regional optical flow and low rank affinity pursuit
Zheng et al. A normalized light CNN for face recognition
CN116883900A (en) Video authenticity identification method and system based on multidimensional biological characteristics
CN111950452A (en) Face recognition method
Wibowo et al. Feature extraction using histogram of oriented gradient and hu invariant moment for face recognition
CN110046608B (en) Leaf-shielded pedestrian re-recognition method and system based on semi-coupling identification dictionary learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant