KR101802061B1 - Method and system for automatic biometric authentication based on facial spatio-temporal features - Google Patents

Method and system for automatic biometric authentication based on facial spatio-temporal features

Info

Publication number
KR101802061B1
Authority
KR
South Korea
Prior art keywords
face
users
space
time
user
Prior art date
Application number
KR1020160029793A
Other languages
Korean (ko)
Other versions
KR20170045093A (en)
Inventor
노용만
김성태
김형일
Original Assignee
한국과학기술원 (Korea Advanced Institute of Science and Technology, KAIST)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국과학기술원 (KAIST)
Publication of KR20170045093A publication Critical patent/KR20170045093A/en
Application granted granted Critical
Publication of KR101802061B1 publication Critical patent/KR101802061B1/en

Classifications

    • G06K9/00221
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06K9/00268
    • G06K9/00288
    • G06K9/00302

Abstract

An automatic biometric authentication method based on face space-time features includes: obtaining a face gesture image of a user; extracting a face space-time feature from the face gesture image; and matching the face space-time feature of each of a plurality of users stored in a face feature database with the face space-time feature of the user to authenticate the user.

Description

TECHNICAL FIELD [0001] The present invention relates to an automatic biometric authentication method and system based on facial spatio-temporal features.

The following embodiments relate to an automatic biometric authentication system and method thereof, and more particularly to a technique for authenticating a user through automatic biometric authentication based on facial spatio-temporal features.

In recent years, various kinds of biometric information have been utilized in smart devices such as smartphones and smart TVs. In particular, biometric technologies that are non-contact and allow easy data acquisition are in demand. Existing biometric authentication systems have relied on classical biometric recognition technologies such as fingerprint and iris recognition; however, because these acquire data through physical contact, they require deliberate action from the user.

As an alternative, face recognition based biometric technology has emerged. Unlike conventional biometric technologies, face recognition based biometrics requires no physical contact, which makes it attractive in terms of user convenience; it is also expected to be widely adopted in smart devices and applicable to various fields, and has therefore been drawing attention.

However, although biometric authentication using facial information makes data acquisition easier than iris or fingerprint recognition, its use for user authentication has been limited by a relatively high false acceptance rate, and there is therefore a strong demand for research addressing this problem.

Recently, various studies have been conducted on feature learning and classification algorithms for high-performance face authentication. However, because they depend only on the appearance information of a still facial image, they cannot exploit all of the facial information present in an image sequence (for example, a video). In particular, considering that dynamic information related to facial motion provides important cues for recognizing a person in addition to the face shape information, it is necessary to study methods that use the temporal dynamic information of the face together with the shape information of the facial images present in the video.

Therefore, the following embodiments propose an automatic biometric authentication technique based on face space-time characteristics in an image including a user's face.

The following embodiments provide a method and system for automatic biometric authentication using facial spatio-temporal features.

Specifically, the embodiments described below provide an automatic biometric authentication method and system that use a face space-time feature comprising static face information extracted through spatial analysis and dynamic face information extracted through temporal analysis.

According to one embodiment, an automatic biometric authentication method based on face space-time features includes: obtaining a face gesture image of a user; extracting a face space-time feature from the face gesture image; and matching the face space-time feature of each of a plurality of users stored in a face feature database with the face space-time feature of the user to authenticate the user.

The face space-time feature may include static face information and dynamic face information of the user.

The extracting of the face space-time feature from the face gesture image may include: performing spatial analysis on the face gesture image to extract static face information of the user; performing temporal analysis on the face gesture image to extract dynamic face information of the user; and extracting the face space-time feature by fusing the static face information and the dynamic face information of the user.
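As an illustration of the fusion step, the sketch below combines a static (spatial) descriptor and a dynamic (temporal) descriptor into a single face space-time feature. The patent does not fix a fusion operator, so concatenation of L2-normalized vectors is an assumed, minimal choice; the function name and dimensions are likewise illustrative.

```python
import numpy as np

def fuse_face_spatiotemporal_feature(static_feat, dynamic_feat):
    """Fuse static (spatial) and dynamic (temporal) face descriptors into one
    face space-time feature vector. Concatenating L2-normalized vectors is an
    assumed fusion choice; the patent does not specify the operator."""
    static_feat = np.asarray(static_feat, dtype=np.float64)
    dynamic_feat = np.asarray(dynamic_feat, dtype=np.float64)
    static_feat /= (np.linalg.norm(static_feat) + 1e-12)   # avoid division by zero
    dynamic_feat /= (np.linalg.norm(dynamic_feat) + 1e-12)
    return np.concatenate([static_feat, dynamic_feat])
```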

The step of extracting the face space-time feature from the face gesture image may include performing face analysis on the face gesture image based on a deep learning technique.

The matching of the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user may include matching the features using at least one of a nearest neighbor classifier, a sparse representation based classifier, and a support vector machine (SVM).
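The sketch below shows one of the listed matchers, a nearest neighbor classifier over enrolled feature vectors; the helper name and the use of Euclidean distance are assumptions for illustration. A sparse representation based classifier or an SVM (for example scikit-learn's SVC) could be substituted without changing the surrounding flow.

```python
import numpy as np

def nearest_neighbor_match(db_features, db_user_ids, query_feature):
    """Nearest neighbor matching of the query face space-time feature against
    the features stored in the face feature database; returns the id of the
    closest enrolled user and the corresponding distance."""
    db = np.asarray(db_features, dtype=np.float64)   # shape: (num_users, feat_dim)
    q = np.asarray(query_feature, dtype=np.float64)  # shape: (feat_dim,)
    distances = np.linalg.norm(db - q, axis=1)
    best = int(np.argmin(distances))
    return db_user_ids[best], float(distances[best])

# An SVM matcher could be trained on the same enrolled features instead, e.g.
#   from sklearn.svm import SVC
#   clf = SVC(kernel="linear").fit(db_features, db_user_ids)
#   predicted_user = clf.predict([query_feature])[0]
```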

The matching of the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user may include identifying, based on the matching result, the user among the plurality of users who has a face space-time feature matching that of the user.

The automatic biometric authentication method based on the face space-time feature may further include building the face feature database in advance.

The building of the face feature database in advance may include: acquiring a face gesture image of each of the plurality of users; extracting a face space-time feature of each of the plurality of users from the face gesture image of each of the plurality of users; and storing the face space-time feature of each of the plurality of users in the face feature database.

The extracting of the face space-time feature of each of the plurality of users from the face gesture images of the plurality of users may include: performing spatial analysis on the face gesture image of each of the plurality of users to extract the static face information included in each user's face space-time feature; performing temporal analysis on the face gesture image of each of the plurality of users to extract the dynamic face information included in each user's face space-time feature; and extracting the face space-time feature of each of the plurality of users by fusing that user's static face information and dynamic face information.

The extracting of the face space-time feature of each of the plurality of users from the face gesture images of the plurality of users may include: measuring a quality of the face gesture image of each of the plurality of users; and extracting the face space-time feature of each of the plurality of users from the face gesture images only when the measured quality is equal to or greater than a reference value.

The measuring of the quality of the face gesture image of each of the plurality of users may further include notifying the corresponding user when the quality of that user's face gesture image is less than the reference value.

According to an embodiment, an automatic biometric authentication system based on face space-time features includes: an acquisition unit that acquires a face gesture image of a user; an extraction unit that extracts a face space-time feature from the face gesture image; a face feature database that stores a face space-time feature of each of a plurality of users; and a matching unit that matches the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user to authenticate the user.

The extraction unit may perform spatial analysis on the face gesture image to extract the static face information of the user, perform temporal analysis on the face gesture image to extract the dynamic face information of the user, and extract the face space-time feature by fusing the static face information and the dynamic face information of the user.

The face feature database may be built in advance by acquiring a face gesture image of each of the plurality of users in the acquisition unit, extracting the face space-time feature of each of the plurality of users from the face gesture images in the extraction unit, and storing the extracted face space-time features.

The extraction unit may measure the quality of the face gesture image of each of the plurality of users in a measurement unit further included in the automatic biometric authentication system, and may extract the face space-time feature of each of the plurality of users from the face gesture images only when the quality of each face gesture image is equal to or greater than a reference value.

The embodiments described below can provide an automatic biometric authentication method and system using facial spatio-temporal features.

Specifically, the embodiments described below can provide an automatic biometric authentication method and system that use a face space-time feature comprising static face information extracted through spatial analysis and dynamic face information extracted through temporal analysis.

Therefore, because the following embodiments utilize the dynamic information of the face as well as its static information, they can provide an automatic biometric authentication method and system that performs face authentication effectively in terms of performance.

In addition, because the following embodiments are based on non-contact data acquisition, they can provide an automatic biometric authentication method and system that is convenient for the user and widely applicable.

FIG. 1 is a diagram for explaining a user registration process in an automatic biometric authentication system according to an embodiment.
FIG. 2 is a diagram for explaining a user authentication process in an automatic biometric authentication system according to an embodiment.
FIG. 3 is a diagram for explaining a process of extracting face space-time features in an automatic biometric authentication system according to an embodiment.
FIG. 4 is a diagram for explaining the temporal analysis performed in the process of extracting face space-time features according to an embodiment.
FIG. 5 is a flowchart illustrating an automatic biometric authentication method according to an embodiment.
FIG. 6 is a block diagram illustrating an automatic biometric authentication system according to an embodiment.

Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is neither limited to nor restricted by these embodiments. The same reference numerals in the drawings denote the same members.

Also, the terms used herein are chosen to properly describe the preferred embodiments of the present invention and may vary depending on the intention of the user or operator, or on custom in the field to which the present invention belongs. Therefore, the definitions of these terms should be interpreted based on the contents of this specification as a whole.

FIG. 1 is a diagram for explaining a user registration process in an automatic biometric authentication system according to an embodiment.

Referring to FIG. 1, an automatic biometric authentication system according to an embodiment may perform a user registration process of building the face feature database 110 in advance, so that it can later perform the user authentication process described with reference to FIG. 2. The automatic biometric authentication system may perform this user registration process using the acquisition unit 120 and the extraction unit 130.

Specifically, the automatic biometric authentication system can acquire a face gesture image of each of a plurality of users through the acquisition unit 120. Hereinafter, a face gesture image means a face image captured while the user makes a specific gesture (facial expression). The acquisition unit 120 may include a camera device and directly capture the face gesture image of each of the plurality of users, or it may receive the face gesture image of each of the plurality of users.
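For illustration, a minimal sketch of how the acquisition unit might capture a short face gesture clip with a camera using OpenCV; the device index, frame count, and function name are assumptions, not part of the patent.

```python
import cv2

def capture_face_gesture_clip(num_frames=60, device_index=0):
    """Capture a short clip (a list of BGR frames) while the user performs a
    specific facial gesture; this stands in for the acquisition unit."""
    cap = cv2.VideoCapture(device_index)
    frames = []
    try:
        while len(frames) < num_frames:
            ok, frame = cap.read()
            if not ok:           # camera unavailable or stream ended
                break
            frames.append(frame)
    finally:
        cap.release()
    return frames
```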

Then, the automatic biometric authentication system may extract the face space-time feature of each of the plurality of users from the face gesture images through the extraction unit 130. Here, the face space-time feature of each user includes that user's static face information and dynamic face information. To this end, the extraction unit 130 may perform spatial analysis and temporal analysis on the face gesture image of each of the plurality of users. A detailed description thereof is given with reference to FIG. 3.

In addition, the automatic biometric authentication system may measure the quality of the face gesture image of each of the plurality of users through the measurement unit 140 before the extraction unit 130 extracts the face space-time features. For example, the measurement unit 140 measures the quality of each face gesture image and, when the measured quality is equal to or greater than a reference value, passes the face gesture image to the extraction unit 130. If the measured quality is less than the reference value, the measurement unit 140 notifies the corresponding user, and the user can have the acquisition unit 120 recapture the face gesture image.
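A minimal sketch of the quality gate performed by the measurement unit, assuming a simple sharpness score (variance of the Laplacian) as the quality measure and an illustrative reference value; the patent specifies neither the metric nor the threshold.

```python
import cv2
import numpy as np

REFERENCE_QUALITY = 100.0  # illustrative reference value, not taken from the patent

def frame_quality(frame):
    """Sharpness proxy: variance of the Laplacian of the grayscale frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def clip_meets_quality(frames, reference=REFERENCE_QUALITY):
    """Return True when the mean frame quality is at or above the reference
    value; otherwise the caller should notify the user and recapture."""
    return float(np.mean([frame_quality(f) for f in frames])) >= reference
```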

The face space-time features of each of the plurality of users extracted in this way are associated with the corresponding users and stored and maintained in the face feature database 110. The face feature database 110 can then be used in the user authentication process described below.

FIG. 2 is a diagram for explaining a user authentication process in an automatic biometric authentication system according to an embodiment.

Referring to FIG. 2, an automatic biometric authentication system according to an embodiment can perform the user authentication process using an acquisition unit 210, an extraction unit 220, a matching unit 230, and the face feature database 240 described with reference to FIG. 1.

Specifically, the automatic biometric authentication system acquires a face gesture image of the user through the acquisition unit 210.

Then, the automatic biometric authentication system extracts facial spatio-temporal features from the face gesture image through the extraction unit 220. Here, the face space-time feature includes the user's static face information and the dynamic face information.

For example, the extraction unit 220 may extract the face space-time feature, including the static face information and the dynamic face information, by performing spatial analysis and temporal analysis on the face gesture image of the user. A detailed description thereof is given with reference to FIG. 3.

Thereafter, the automatic biometric authentication system authenticates the user through the matching unit 230 based on the face feature database 240. For example, the matching unit 230 may authenticate the user by matching the face space-time feature of each of the plurality of users stored in the face feature database 240 with the face space-time feature of the user. More specifically, by matching the face space-time features of the plurality of users with that of the user, the matching unit 230 can determine whether any of the registered users has a face space-time feature matching the user's and, if so, which user it is.

At this time, the matching unit 230 may match the face space-time feature of each of the plurality of users stored in the face feature database 240 with the face space-time feature of the user using at least one of a nearest neighbor classifier, a sparse representation based classifier, and a support vector machine. However, the present invention is not limited thereto, and the matching unit 230 may use various other matching algorithms.

FIG. 3 is a view for explaining a process of extracting face space-time features in the automatic biometric authentication system according to an embodiment.

Referring to FIG. 3, the extracting unit included in the automatic biometric authentication system according to an exemplary embodiment may perform a spatial analysis 310 and a time analysis 320 to extract facial spatiotemporal characteristics from a facial gesture image.

For example, the extraction unit extracts static face information through the spatial analysis 310 of the face gesture image, extracts dynamic face information through the temporal analysis 320 of the face gesture image, and fuses (330) the two kinds of information to generate the face space-time feature.

In particular, the extraction unit may use deep learning techniques in the spatial analysis 310 for extracting static face information, in the temporal analysis 320 for extracting dynamic face information, and in the process 330 of fusing the static and dynamic face information. A detailed description thereof is given with reference to FIG. 4.
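As a sketch of what a deep-learning spatial analysis might look like, the toy convolutional encoder below maps one aligned face frame to a static face descriptor. The architecture, input size, and feature dimension are assumptions chosen for brevity; the patent does not prescribe a particular network.

```python
import torch
import torch.nn as nn

class SpatialFaceEncoder(nn.Module):
    """Toy CNN mapping one aligned 64x64 grayscale face frame to a static face
    descriptor; layer sizes are illustrative only."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 16 * 16, feat_dim)

    def forward(self, x):             # x: (batch, 1, 64, 64)
        h = self.conv(x)              # -> (batch, 32, 16, 16)
        return self.fc(h.flatten(1))  # static face descriptor
```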

FIG. 4 is a diagram for explaining time analysis performed in the process of extracting face space-time features according to an exemplary embodiment.

Referring to FIG. 4, the extracting unit included in the automatic biometric authentication system according to an exemplary embodiment may perform time analysis on a face gesture image using a deep learning technique.

For example, by performing temporal analysis based on a deep learning technique, the extraction unit may extract dynamic face information describing how the user's facial landmarks change over time across the frames of the face gesture image.
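A sketch of one way such a temporal analysis could be realized: a recurrent network that encodes the per-frame facial landmark coordinates of the gesture clip into a dynamic face descriptor. The use of an LSTM, 68 landmarks, and the feature dimension are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LandmarkTemporalEncoder(nn.Module):
    """Toy LSTM encoding how facial landmark coordinates change across the
    frames of a face gesture clip into a dynamic face descriptor."""
    def __init__(self, num_landmarks=68, feat_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_landmarks * 2,  # (x, y) per landmark
                            hidden_size=feat_dim, batch_first=True)

    def forward(self, landmark_seq):   # (batch, frames, num_landmarks * 2)
        _, (h_n, _) = self.lstm(landmark_seq)
        return h_n[-1]                 # last hidden state as the dynamic face feature
```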

The deep learning technique has been described here as being used by the extraction unit in performing the temporal analysis, but it is not limited thereto and may also be used in performing the spatial analysis.

FIG. 5 is a flowchart illustrating an automatic biometric authentication method according to an embodiment.

Referring to FIG. 5, the automatic biometric authentication method according to one embodiment is performed by the automatic biometric authentication system. Here, the automatic biometric authentication system may be implemented in the form of a separate server, or in the form of program code for executing an automatic biometric authentication method in combination with a computer, such as an application installed in a user terminal.

First, the automatic biometric authentication system obtains a user's face gesture image (510).

Next, the automatic biometric authentication system extracts a face space-time feature from the face gesture image (520). Here, the face space-time feature may include the user's static face information and dynamic face information.

Specifically, in step 520, the automatic biometric authentication system performs spatial analysis on the face gesture image to extract the user's static face information, performs temporal analysis on the face gesture image to extract the user's dynamic face information, and then extracts the face space-time feature by fusing the static face information and the dynamic face information.

At this time, the automatic biometric authentication system can perform facial analysis on the face gesture image based on the deep learning technique in the process of extracting the face space-time feature.

Then, in order to authenticate the user, the automatic biometric authentication system matches the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user (530).

For example, in step 530, the automatic biometric authentication system may match the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user using at least one of a nearest neighbor classifier, a sparse representation based classifier, and a support vector machine (SVM). However, the present invention is not limited thereto, and various other matching algorithms may be used in step 530.

Accordingly, in step 530, the automatic biometric authentication system can identify, based on the matching result, the user among the plurality of users who has a face space-time feature matching the face space-time feature of the user.
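A self-contained sketch of the matching side of steps 520 to 530: the query feature is compared against every enrolled feature, and the closest user is accepted only when the distance falls below a threshold, which is how a system would keep the false acceptance rate in check. The distance measure and threshold value are assumptions, not specified by the patent.

```python
import numpy as np

ACCEPT_THRESHOLD = 0.8  # illustrative distance threshold, not from the patent

def authenticate(db_features, db_user_ids, query_feature, threshold=ACCEPT_THRESHOLD):
    """Match the query face space-time feature against every enrolled feature
    and return the identified user id, or None when no enrolled user is close
    enough (authentication fails)."""
    db = np.asarray(db_features, dtype=np.float64)
    q = np.asarray(query_feature, dtype=np.float64)
    distances = np.linalg.norm(db - q, axis=1)
    best = int(np.argmin(distances))
    if distances[best] <= threshold:
        return db_user_ids[best]
    return None
```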

Although not shown in the drawing, the automatic biometric authentication system may perform a user registration process of building a face feature database in advance before performing the user authentication process in steps 510 to 530 as described above. The user registration process is as follows.

First, the automatic biometric authentication system acquires a face gesture image of each of a plurality of users.

Then, the automatic biometric authentication system extracts the face space-time characteristics of each of the plurality of users from the face gesture images of each of the plurality of users.

Here, the process of acquiring the face gesture image of each of the plurality of users and the process of extracting the face space-time feature of each of the plurality of users may be performed for each of the plurality of users in the same manner as steps 510 and 520.

For example, in the process of extracting the face space-time feature of each of the plurality of users from the face gesture images, the automatic biometric authentication system performs spatial analysis on the face gesture image of each of the plurality of users to extract the static face information included in each user's face space-time feature, performs temporal analysis on the face gesture image of each user to extract the dynamic face information included in each user's face space-time feature, and extracts the face space-time feature of each user by fusing that user's static face information and dynamic face information.

In particular, the automatic biometric authentication system measures the quality of the face gesture image of each of the plurality of users before extracting the face space-time features, and extracts the face space-time feature of each user from that user's face gesture image only when the quality of the image is equal to or greater than the reference value.

If the quality of the face gesture image of each of the plurality of users is less than the reference value, the automatic biometric authentication system can notify each of the plurality of users that the quality of the face gesture image is less than the reference value.

Then, the automatic biometric authentication system stores the face space-time characteristics of each of the plurality of users in the face feature database.
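For completeness, a minimal sketch of this registration (enrollment) step that stores each user's extracted face space-time feature; an in-memory dictionary stands in for the face feature database, whose actual storage technology the patent leaves open.

```python
face_feature_db = {}  # user_id -> face space-time feature; stand-in for the face feature database

def enroll_user(user_id, face_spatiotemporal_feature):
    """Registration step: associate the extracted face space-time feature with
    the user and keep it for later matching."""
    face_feature_db[user_id] = face_spatiotemporal_feature
```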

FIG. 6 is a block diagram illustrating an automatic biometric authentication system according to an embodiment.

Referring to FIG. 6, an automatic biometric authentication system according to an embodiment includes an acquisition unit 610, an extraction unit 620, a face feature database 630, and a matching unit 640.

The acquiring unit 610 acquires a face gesture image of the user.

The extraction unit 620 extracts face space-time features from the face gesture image. Here, the face space-time feature may include static face information of the user and dynamic face information.

Specifically, the extraction unit 620 performs spatial analysis on the face gesture image to extract the user's static face information, performs temporal analysis on the face gesture image to extract the user's dynamic face information, and extracts the face space-time feature by fusing the static face information and the dynamic face information.

At this time, the extracting unit 620 can perform facial analysis on the face gesture image based on the deep learning technique in the process of extracting the face space-time feature.

The facial feature database 630 stores facial spatio-temporal features of each of a plurality of users.

Specifically, the face feature database 630 may be built in advance by acquiring a face gesture image of each of the plurality of users in the acquisition unit 610, extracting the face space-time feature of each of the plurality of users from the face gesture images in the extraction unit 620, and storing the extracted features. Here, the process of storing the face space-time feature of each of the plurality of users in the face feature database 630 may be performed by the extraction unit 620.

In the process of building the face feature database 630 in advance, the extraction unit 620 performs spatial analysis on the face gesture image of each of the plurality of users to extract the static face information included in each user's face space-time feature, performs temporal analysis on the face gesture image of each user to extract the dynamic face information included in each user's face space-time feature, and extracts the face space-time feature of each user by fusing that user's static face information and dynamic face information.

In the process of building the face feature database 630 in advance, the extraction unit 620 may have a measurement unit (not shown) included in the automatic biometric authentication system measure the quality of the face gesture image of each of the plurality of users, and may extract the face space-time feature of each user from that user's face gesture image only when the quality of the image is equal to or greater than the reference value.

If the quality of the face gesture image of each of the plurality of users is less than the reference value, the measuring unit may notify each of the plurality of users that the quality of the face gesture image is less than the reference value.

The matching unit 640 matches the face space-time characteristics of each of the plurality of users stored in the face feature database 630 with the face space-time characteristics of the user to authenticate the user.

For example, the matching unit 640 may match the face space-time feature of each of the plurality of users stored in the face feature database 630 with the face space-time feature of the user using at least one of a nearest neighbor classifier, a sparse representation based classifier, and a support vector machine. However, the present invention is not limited thereto, and the matching unit 640 may use various other matching algorithms.

Accordingly, the matching unit 640 can identify the user having the face space-time feature matching the face space-time feature of the user among the plurality of users based on the matching result.

The apparatus described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing device is sometimes described as being used singly, but those skilled in the art will recognize that it may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may instruct the processing device independently or collectively. The software and/or data may be embodied in any type of machine, component, physical device, virtual equipment, computer storage medium, or device, or permanently or temporarily in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the embodiments, or they may be known and available to those skilled in computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments. For example, appropriate results may be achieved even if the described techniques are performed in a different order from the described method, and/or components of the described systems, structures, devices, and circuits are combined or coupled in a form different from the described method, or are replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (15)

1. An automatic biometric authentication method based on facial spatio-temporal features, the method comprising:
building a face feature database in advance;
obtaining a face gesture image of a user;
extracting a face space-time feature from the face gesture image; and
matching the face space-time feature of each of a plurality of users stored in the face feature database with the face space-time feature of the user to authenticate the user,
wherein the face space-time feature includes static face information and dynamic face information of the user, and
wherein the building of the face feature database in advance comprises:
obtaining a face gesture image of each of the plurality of users;
measuring a quality of the face gesture image of each of the plurality of users;
extracting a face space-time feature of each of the plurality of users from the face gesture image of each of the plurality of users when the quality of the face gesture image of each of the plurality of users is equal to or greater than a reference value; and
storing the face space-time feature of each of the plurality of users in the face feature database.
2. (deleted)

3. The method according to claim 1, wherein the extracting of the face space-time feature from the face gesture image comprises:
performing spatial analysis on the face gesture image to extract the static face information of the user;
performing temporal analysis on the face gesture image to extract the dynamic face information of the user; and
extracting the face space-time feature by fusing the static face information and the dynamic face information of the user.
4. The method according to claim 1, wherein the extracting of the face space-time feature from the face gesture image comprises performing face analysis on the face gesture image based on a deep learning technique.
5. The method according to claim 1, wherein the matching of the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user comprises matching the face space-time feature of each of the plurality of users with the face space-time feature of the user using at least one of a nearest neighbor classifier, a sparse representation based classifier, and a support vector machine (SVM).
6. The method according to claim 1, wherein the matching of the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user comprises identifying, based on a result of the matching, the user among the plurality of users who has a face space-time feature matching the face space-time feature of the user.
7. (deleted)

8. (deleted)

9. The method according to claim 1, wherein the extracting of the face space-time feature of each of the plurality of users from the face gesture images of the plurality of users comprises:
performing spatial analysis on the face gesture image of each of the plurality of users to extract the static face information of each of the plurality of users included in the face space-time feature of each of the plurality of users;
performing temporal analysis on the face gesture image of each of the plurality of users to extract the dynamic face information of each of the plurality of users included in the face space-time feature of each of the plurality of users; and
extracting the face space-time feature of each of the plurality of users by fusing the static face information and the dynamic face information of each of the plurality of users.
delete The method according to claim 1,
Wherein measuring the quality of the face gesture image of each of the plurality of users comprises:
When the quality of the face gesture image of each of the plurality of users is less than a reference value, notifying each of the plurality of users
Further comprising the steps of:
12. An automatic biometric authentication system based on face space-time features, the system comprising:
an acquisition unit that acquires a face gesture image of a user;
an extraction unit that extracts a face space-time feature from the face gesture image;
a face feature database that stores a face space-time feature of each of a plurality of users; and
a matching unit that matches the face space-time feature of each of the plurality of users stored in the face feature database with the face space-time feature of the user to authenticate the user,
wherein the face space-time feature includes static face information and dynamic face information of the user, and
wherein the face feature database is built in advance by acquiring a face gesture image of each of the plurality of users in the acquisition unit, measuring a quality of the face gesture image of each of the plurality of users in a measurement unit further included in the automatic biometric authentication system, extracting, in the extraction unit, the face space-time feature of each of the plurality of users from the face gesture image of each of the plurality of users when the quality of each face gesture image is equal to or greater than a reference value, and storing the extracted face space-time feature of each of the plurality of users.
13. The system of claim 12, wherein the extraction unit performs spatial analysis on the face gesture image to extract the static face information of the user, performs temporal analysis on the face gesture image to extract the dynamic face information of the user, and extracts the face space-time feature by fusing the static face information and the dynamic face information of the user.
14. (deleted)

15. (deleted)
KR1020160029793A 2015-10-16 2016-03-11 Method and system for automatic biometric authentication based on facial spatio-temporal features KR101802061B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020150144766 2015-10-16
KR20150144766 2015-10-16

Publications (2)

Publication Number Publication Date
KR20170045093A KR20170045093A (en) 2017-04-26
KR101802061B1 (en) 2017-11-27

Family

ID=58705086

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160029793A KR101802061B1 (en) 2015-10-16 2016-03-11 Method and system for automatic biometric authentication based on facial spatio-temporal features

Country Status (1)

Country Link
KR (1) KR101802061B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10635893B2 (en) * 2017-10-31 2020-04-28 Baidu Usa Llc Identity authentication method, terminal device, and computer-readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101297736B1 (en) * 2013-05-07 2013-08-23 주식회사 파이브지티 Face recognition method and system


Also Published As

Publication number Publication date
KR20170045093A (en) 2017-04-26

Similar Documents

Publication Publication Date Title
KR102299847B1 (en) Face verifying method and apparatus
KR102415509B1 (en) Face verifying method and apparatus
KR102466998B1 (en) Method and apparatus for image fusion
KR102486699B1 (en) Method and apparatus for recognizing and verifying image, and method and apparatus for learning image recognizing and verifying
US11908238B2 (en) Methods and systems for facial point-of-recognition (POR) provisioning
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
US11200405B2 (en) Facial verification method and apparatus based on three-dimensional (3D) image
KR102288302B1 (en) Authentication method and authentication apparatus using infrared ray(ir) image
US20140169663A1 (en) System and Method for Video Detection and Tracking
US20160217198A1 (en) User management method and apparatus
JP2019109709A (en) Image processing apparatus, image processing method and program
KR102459852B1 (en) Method and device to select candidate fingerprint image for recognizing fingerprint
US10764563B2 (en) 3D enhanced image correction
JP5087037B2 (en) Image processing apparatus, method, and program
US20130083965A1 (en) Apparatus and method for detecting object in image
KR20220076398A (en) Object recognition processing apparatus and method for ar device
Bedagkar-Gala et al. Gait-assisted person re-identification in wide area surveillance
KR102434574B1 (en) Method and apparatus for recognizing a subject existed in an image based on temporal movement or spatial movement of a feature point of the image
CN109711287B (en) Face acquisition method and related product
KR101802061B1 (en) Method and system for automatic biometric authentication based on facial spatio-temporal features
Zhang et al. A retrieval algorithm for specific face images in airport surveillance multimedia videos on cloud computing platform
US20240037995A1 (en) Detecting wrapped attacks on face recognition
US10580145B1 (en) Motion-based feature correspondence
KR102301785B1 (en) Method and appauatus for face continuous authentication
KR102380426B1 (en) Method and apparatus for verifying face

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant