AU2020203692A1 - System and method for facial recognition

Info

Publication number
AU2020203692A1
AU2020203692A1
Authority
AU
Australia
Prior art keywords
video
person
image processing
face
processing system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
AU2020203692A
Inventor
Desire Armand Pierre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Identity Systems Australia Pty Ltd
Original Assignee
Identity Systems Australia Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2019901928A0
Application filed by Identity Systems Australia Pty Ltd
Publication of AU2020203692A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/178 Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A facial recognition system comprising a video capture device, and a processing server having an image processing system. The image processing system is configured to receive a video from the video capture device, the video showing a person, detect a face of the person in the video, execute facial recognition algorithms comparing the face in the video with a face of a person in an identifying document, calculate a similarity score for said faces, and confirm an identity of the person in the video matches an identity of the person in the identifying document if the similarity score is at least equal to a threshold value. [FIG. 1 (drawing sheet 1/5): block diagram of system 100, showing a processing server 120 with an image processing system 121 connected by a network 125 (cellular, WAN, etc.) to user devices 110 and 112, each with a video capture device 111, 113.]

Description

[Drawing sheet 1/5: FIG. 1, a block diagram of the facial recognition system 100 described below.]
SYSTEM AND METHOD FOR FACIAL RECOGNITION
FIELD OF THE INVENTION
[0001] The present invention relates to systems and methods for facial
recognition. More particularly, embodiments of the invention reside in systems
and methods for identifying a person's face, and confirming and verifying a
person's identity to approve that person to access certain services or perform
certain tasks.
BACKGROUND
[0002] Any references to methods, apparatus or documents of the prior art are
not to be taken as constituting any evidence or admission that they formed, or
form, part of the common general knowledge.
[0003] Some existing systems provide components and functionality that
capture an image of a person and then perform image processing, namely
digital image processing, by comparing the captured image with a stored or
archived image to determine whether the person in the captured image is the
person in the stored image. The person in the captured image is typically
confirmed to be the person in the stored image by confirming that the features
of the face in the captured image substantially match those of the face in the
stored image.
[0004] However, these systems can sometimes be deceived through the use of
photographs or similar pictures of a person.
[0005] In other systems, the verification process can fail if the captured image
is not centred on the person or the person's face has atypical or non-uniform
lighting, e.g. due to positioning of the person under lights.
[0006] Thus, there is a need for improved systems and methods for facial
recognition.
OBJECT OF THE INVENTION
[0007] It is an aim of this invention to provide a system or method for identifying
a person, and confirming and verifying a person's identity to approve that
person to access certain services or perform certain tasks which overcomes or
ameliorates one or more of the disadvantages or problems described above, or
which at least provides a useful commercial alternative.
[0008] Other preferred objects of the present invention will become apparent
from the following description.
SUMMARY OF THE INVENTION
[0009] In a first form, there is provided a facial recognition system comprising:
a video capture device; and
a processing server having an image processing system configured to:
receive a video from the video capture device, the video showing a
person;
detect a face of the person in the video;
execute facial recognition algorithms comparing the face in the video with
a face of a person in an identifying document;
calculate a similarity score for said faces;
confirm an identity of the person in the video matches an identity of the
person in the identifying document if the similarity score is at least equal to a
threshold value.
[0010] In another form, there is provided a method of facial recognition, the
method comprising the steps of:
receiving a video from a video capture device, the video showing a person;
detecting a face of the person in the video;
executing facial recognition algorithms comparing the face in the video with a
face of a person in an identifying document;
calculating a similarity score for said faces;
confirming an identity of the person in the video matches an identity of the
person in the identifying document if the similarity score is at least equal to a threshold
value.
[0011] In another form, the invention resides in a non-transitory computer
readable storage medium containing instructions executable by a processor,
the non-transitory computer-readable storage medium storing instructions for:
detecting a face of a person in a video;
executing facial recognition algorithms comparing the face in the video with a
face of a person in an identifying document;
calculating a similarity score for said faces;
confirming an identity of the person in the video matches an identity of the
person in the identifying document if the similarity score is at least equal to a
threshold value.
[0012] Preferably, the system or method receives a query comprising image
data. Preferably, the system or method receives a query comprising video data.
[0013] Preferably, the system or method extracts metadata associated with the
video and/or the identifying document.
[0014] Preferably, the system or method stores the metadata associated with
the video and/or the identifying document in a database.
[0015] Preferably, the image processing system is configured to identify objects in
the video. More preferably, the image processing system is configured to detect
and read text from the video. In some embodiments, the image processing
system detects and reads street names, captions, product names and license
plates or vehicle number plates.
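The specification does not name an OCR engine, so the sketch below is only one illustrative way to detect and read text in a video frame, assuming the open-source pytesseract wrapper and OpenCV; the input file name is hypothetical.

```python
import cv2
import pytesseract  # assumed OCR engine; the specification does not name one


def read_text_from_frame(frame):
    """Detect and read text (e.g. street names, captions, number plates) in a video frame."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding often improves OCR on signage and plates.
    _, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary).strip()


cap = cv2.VideoCapture("client_video.mp4")  # hypothetical input file
ok, frame = cap.read()
if ok:
    print(read_text_from_frame(frame))
cap.release()
```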
[0016] Preferably, the image processing system detects attributes of the
detected face. For example, the image processing system may detect glasses
or facial hair, detect whether the person's eyes are open or closed, and determine an
age range of the person in the video.
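As a rough illustration of attribute detection, the sketch below uses OpenCV's bundled Haar cascades to flag open eyes and possible eyewear on a detected face region. The cascade file names and thresholds are assumptions rather than part of the disclosure, and age estimation is only indicated in a comment because it would require a trained model the specification does not identify.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
glasses_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")


def face_attributes(frame):
    """Return coarse attributes (eyes open, possible glasses) for the first detected face."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = grey[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    eyes_with_glasses = glasses_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    return {
        "eyes_open": len(eyes) >= 2,                 # heuristic: open eyes detected by the eye cascade
        "glasses_possible": len(eyes_with_glasses) >= 2,
        # An age-range estimate would require a trained classification/regression model (not shown).
    }
```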
[0017] Preferably, the image processing system analyses the video and
determines a quality score. Preferably, the quality score is associated with
image quality of the video. Preferably, the image quality is determined by
analysing lighting and resolution.
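The quality metric itself is not defined in the specification; as an assumption, the sketch below combines lighting, resolution and sharpness cues with OpenCV and NumPy, and the weightings are purely illustrative.

```python
import cv2
import numpy as np


def video_quality_score(path, min_width=640, min_height=480):
    """Crude per-video quality score in [0, 1] based on lighting, resolution and sharpness."""
    cap = cv2.VideoCapture(path)
    scores = []
    ok, frame = cap.read()
    while ok:
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness = grey.mean() / 255.0                # 0 = black frame, 1 = saturated frame
        lighting = 1.0 - abs(brightness - 0.5) * 2.0    # best around mid-grey exposure
        h, w = grey.shape
        resolution = min(1.0, (w * h) / float(min_width * min_height))
        sharpness = min(1.0, cv2.Laplacian(grey, cv2.CV_64F).var() / 100.0)
        scores.append(0.4 * lighting + 0.3 * resolution + 0.3 * sharpness)
        ok, frame = cap.read()
    cap.release()
    return float(np.mean(scores)) if scores else 0.0
```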
[0018] Preferably, the image processing system extracts one or more frames
from the video. In some embodiments, the system creates a three-dimensional
model of the face from multiple angles, using the one or more frames of video.
Preferably, the image processing system algorithmically rotates the three-
dimensional model to one or more frontal views of the face of the person in the
video. Suitably, data derived from the one or more frontal views can be
averaged for comparison to a base set of facial data in order to determine the
similarity score.
[0019] Preferably, the system further comprises a speech analysis system.
[0020] Preferably, the system and/or method comprises recording speech of
the person in the video. Preferably, the person recites one or more phrases that
are recorded. Preferably, the phrases are predetermined or randomly
generated. Preferably, the speech recording is converted from speech to text.
Preferably, the text and/or speech recording are analysed by the speech
analysis system for natural language and sentiment analysis to improve the overall
confidence level of the identification process.
[0021] Further features and advantages of the present invention will become
apparent from the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] By way of example only, preferred embodiments of the invention will be
described more fully hereinafter with reference to the accompanying figures,
wherein:
Figure 1 illustrates a block diagram of a system for facial recognition according
to an embodiment of the present invention;
Figure 2 illustrates a flow diagram of a method for facial recognition according
to an embodiment of the present invention;
Figure 3 illustrates a block diagram of the user device and processing server of
the system of Figure 1;
Figure 4 illustrates another flow diagram of a method for facial recognition and
identity verification according to an embodiment of the present invention; and
Figure 5 illustrates an example of a user application for facial recognition.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0023] The present invention described herein provides a facial recognition
system and method for acquiring, processing and comparing a video with a
stored image to identify a person, and confirm and verify a person's identity to
approve that person to access certain services or perform certain tasks. The
facial recognition system and method identifies a person and confirms a
person's identity by determining if a person in the video is the person in the
stored image.
[0024] The systems and methods described herein include a video capture
stage, a video analysis stage, a photo capture stage and a verification stage.
[0025] Referring to Figure 1, there is shown a system 100 including two user
devices 110, 112, each user device 110, 112 having a video capture device
111, 113. The system 100 also includes a processing server 120 having an
image processing system 121 connected to each of the user devices 110,
112 by network 125.
[0026] The user devices 110, 112 may include any suitable computer device
such as a mobile phone or smartphone, or a PC or laptop computer. The video
capture devices 111, 113 may include any device capable of capturing and transferring video to a computer device, such as a camera and camera application on a smartphone, a video camera or a webcam. Accordingly, the video capture devices 111, 113 may be either integrated with the user device
110, 112 or connected as an external hardware device.
[0027] The network can be any suitable network, such as a cellular network or
a WAN, for example.
[0028] Turning to Figure 2, there is illustrated a flow chart of a method 200 for
facial recognition that is executed by the system 100.
[0029] In use, the system 100 of Figure 1 functions by capturing a video on the
video capture device 111 of user device 110 (of course it will be understood
that video capture device 113 and user device 112 could also be used). This is
step 205 of method 200 shown in Figure 2.
[0030] The video must show a person's face and may be of any length. In
some embodiments, a minimum video length time (for example, 60 seconds)
may be implemented to allow sufficient angles and other conditions to be met
within the video recording. Other conditions for the video may also be
implemented. For example, the person may be required to recite a phrase (that
may be predetermined or randomly generated) or answer a series of questions.
[0031] The video is transferred to the processing server 120 over network 125
for analysis. In some preferred embodiments, the video is a live video that is
transferred to the processing server in real time as the video is captured.
[0032] At step 210, the video is received by the processing server 120 and the
image processing system 121 of the processing server 120 detects the face of
the person in the video. Of course, if a face is not present the video will be
rejected and a new video will be required.
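A minimal sketch of this accept/reject step is shown below, assuming an OpenCV Haar cascade as a stand-in for whatever face detector the processing server actually uses; the video path is hypothetical.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def video_contains_face(path, min_frames_with_face=1):
    """Return True once at least `min_frames_with_face` frames contain a detectable face."""
    cap = cv2.VideoCapture(path)
    hits = 0
    ok, frame = cap.read()
    while ok and hits < min_frames_with_face:
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(face_cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)) > 0:
            hits += 1
        ok, frame = cap.read()
    cap.release()
    return hits >= min_frames_with_face


if not video_contains_face("client_video.mp4"):  # hypothetical uploaded video
    print("No face detected - video rejected, a new recording is required.")
```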
[0033] The image processing system 121, at step 215 of the method 200, then
executes facial recognition algorithms that compare the face in the video with
a face in one or more identifying documents associated with the person in the
video.
[0034] The identifying document takes the form of a photographic personal
identification document, such as a driver's license, passport or the like.
[0035] The facial recognition algorithms compare and analyse the face in the
video with the faces in the identifying documents and detect patterns based on
the contours of the faces in the video and the identifying documents.
[0036] The facial recognition algorithms extract metadata which is indexed and
stored in a database or memory 355.
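The database schema is not disclosed. Purely as an assumption about how indexing might look, the sketch below stores a few illustrative metadata fields with Python's built-in sqlite3 module; the field names and file paths are hypothetical.

```python
import sqlite3
import time


def store_face_metadata(db_path, video_id, document_id, metadata):
    """Persist extracted metadata (illustrative fields only) for later lookup."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS face_metadata (
               video_id TEXT, document_id TEXT, key TEXT, value TEXT, created REAL)"""
    )
    rows = [(video_id, document_id, k, str(v), time.time()) for k, v in metadata.items()]
    conn.executemany("INSERT INTO face_metadata VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()
    conn.close()


store_face_metadata("metadata.db", "video-001", "passport-123",
                    {"eyes_open": True, "glasses_possible": False, "quality_score": 0.82})
```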
[0037] The image processing system 121 then calculates a similarity score for
the face in the video and the face in the identifying document at step 220 based
on the comparison in step 215.
[0038] Finally, at step 225, if the similarity score equals or exceeds a threshold
value (for example, a 75% match between the two faces), the identity of the
person in the video is confirmed as matching the identity of the person in the
identifying document.
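Putting the comparison and thresholding steps together, the sketch below uses the open-source face_recognition library as one possible stand-in for the unspecified facial recognition algorithms. The distance-to-percentage mapping and the file names are assumptions; only the 75% example threshold comes from the description above.

```python
import face_recognition  # assumed stand-in for the patent's unspecified algorithms


def similarity_score(video_frame_path, id_document_path):
    """Return a 0-100 similarity score between the face in a video frame and an ID photo."""
    frame = face_recognition.load_image_file(video_frame_path)
    id_image = face_recognition.load_image_file(id_document_path)
    frame_encodings = face_recognition.face_encodings(frame)
    id_encodings = face_recognition.face_encodings(id_image)
    if not frame_encodings or not id_encodings:
        return 0.0  # no face found in one of the images
    distance = face_recognition.face_distance([id_encodings[0]], frame_encodings[0])[0]
    # Naive distance-to-percentage mapping; a calibrated mapping would be used in practice.
    return max(0.0, 1.0 - distance) * 100.0


THRESHOLD = 75.0  # example threshold from the description
score = similarity_score("frame_0001.jpg", "drivers_licence.jpg")  # hypothetical files
print("identity confirmed" if score >= THRESHOLD else "identity not confirmed")
```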
[0039] This identity confirmation can then be used to grant that person access
to a variety of services or systems. In some embodiments, these documents
may be provided by the person or may be accessed from a secure database.
[0040] FIG. 3 is a block diagram of the processing server 120 and user device
110 which can be used to implement various exemplary embodiments of the
invention.
[0041] User device 110 includes a bus 315 coupling a memory 320 (storing
firmware and/or software 321), a display 325, a communication device in the
form of a data transfer unit 330, an image capture device 112 and a processor
335. The display 325, such as a liquid crystal display adapted for touchscreen
interface, displays information to a user of the user device and may also operate
as an input device. The display 325 communicates inputs and command
requests to the processor 335.
[0042] According to some embodiments of the invention, the processes and
methods described herein are performed by the electronic device 110, in
response to the processor 335 executing an instruction contained in
memory 320 (which may be volatile memory, non-volatile memory or a
combination of the two). Execution of the instructions contained in
memory 320 causes the processor 335 to perform one or more of the process or
method steps described herein (such as those described above in relation to
FIGS. 1 and 2).
[0043] One or more processors in a multiprocessing arrangement may also be
employed to execute the instructions contained in memory 320. In alternative
embodiments, hard-wired circuitry may be used in place of or in combination
with software instructions to implement embodiments of the invention. Thus,
embodiments of the invention are not limited to any specific combination of
hardware circuitry and software.
[0044] As mentioned, the user device 110 also includes a communication
device in the form of a data transfer unit 330 coupled to bus 315.
[0045] The data transfer unit 330 provides a two-way data communication
coupling to a network 340 using wireless radio signals. It will be appreciated that the communication device could also take the form of a digital subscriber line (DSL) card or modem, an integrated services digital network (ISDN) card, a cable modem, a telephone modem, or any other communication interface to provide a data communication connection to a corresponding type of communication line. As another example, the communication device may be a local area network (LAN) card (e.g. for Ethernet™ or an Asynchronous Transfer
Mode (ATM) network) to provide a data communication connection to a
compatible LAN. Wireless links can also be implemented. In any such
implementation, the communication device sends and receives electrical,
electromagnetic, or optical signals that carry digital data streams representing
various types of information. Further, the communication device can include
peripheral interface devices, such as a Universal Serial Bus (USB) interface, a
PCMCIA (Personal Computer Memory Card International Association)
interface, etc. Although a single communication interface is depicted in FIG. 3,
multiple communication interfaces can also be employed.
[0046] The network link 345 typically provides data communication through one
or more networks to other electronic devices. For example, the network link 345
may provide a connection through cellular network 340 to processing server
120, which also has connectivity to network 340 (e.g. a wide area network
(WAN) or the Internet) or to data equipment operated by a service provider. The
local network and the network both use electrical, electromagnetic, or optical
signals to convey information and instructions. The signals through the various
networks and the signals on the network link and through the communication
device, which communicate digital data with the electronic devices, are
exemplary forms of carrier waves bearing the information and instructions.
[0047] The processing server 120 includes a bus 350 coupling a memory 355
(storing firmware or software, including the image processing system and facial
recognition algorithms 121), a communication device in the form of a data transfer
unit 360 and a processor 365.
[0048] Turning to Figure 4, there is another embodiment of the facial recognition
system of the present invention. The embodiment shown relates to a method
400 that may be executed by system 100.
[0049] In Process 1 of the method 400, a Client creates a login ID to a
verification platform or service (herein referred to as the "MYID Platform"). An email
confirmation is sent to the client for account activation. Once account activation
has been successfully completed, the client is then re-directed to the MYID
platform to begin the identification verification process.
[0050] At Processes 2 & 3, the video capture is performed. The client is
instructed to commence video capture for verification. An example of this
instruction is shown in Figure 5 at item 501.
[0051] The captured footage (indicated in Figure 5 as item 502) is analysed to
identify objects, people, text, scenes and activities. This video capture process
supports highly accurate facial analysis and facial recognition on images and
video. The algorithm analyses the attributes of faces in images and videos and
provides additional determinations such as an age range, whether the person's
eyes are open, and whether glasses or facial hair are present.
[0052] The algorithm can also detect and recognise text from images, such as
street names, captions, product names and license plates.
[0053] Furthermore, during the video capture stage, the client may be required
to recite a phrase (that is predetermined or randomly generated). This facilitates biometric testing whereby the recorded audio from the recited phrase is converted to text for analysis. In some embodiments, the audio or text analysis may include analysing speech patterns for irregularities.
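The speech pipeline is left open by the specification. As an assumption, the sketch below uses the speech_recognition package for transcription and a simple word-overlap check against the recited phrase; file names and the phrase are hypothetical, and sentiment or speech-pattern analysis would sit on top of the transcript as indicated in the comment.

```python
import speech_recognition as sr  # assumed transcription library; not named in the specification


def phrase_match_confidence(audio_path, expected_phrase):
    """Transcribe recorded audio and score how much of the expected phrase was recited."""
    recogniser = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recogniser.record(source)
    try:
        transcript = recogniser.recognize_google(audio).lower()
    except (sr.UnknownValueError, sr.RequestError):
        return 0.0, ""
    expected = expected_phrase.lower().split()
    if not expected:
        return 0.0, transcript
    matched = sum(1 for word in expected if word in transcript.split())
    # Natural-language or sentiment analysis of `transcript` could further adjust confidence.
    return matched / len(expected), transcript


confidence, text = phrase_match_confidence("recited_phrase.wav", "my voice confirms my identity")
print(f"phrase match confidence: {confidence:.0%} ({text})")
```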
[0054] Moving to Processes 4-6, the capture of photo identification documents
occurs. In these processes, a fast and accurate search capability is provided
by allowing identification of a person within a photo or video using a private
repository of face images (pre-loaded by the client). As an example, referring
to Figure 5, there is shown a user interface 500 where the client is directed to
provide three documents, as shown at items 503-505. The documents provided are
shown at the bottom of the illustration by items 506-508.
[0055] The system 100 can quickly identify people in the video footage provided
in Process 1 and/or image libraries provided in Processes 4-6 to catalogue
clients for identification verification for numerous industry sectors, such as
financial services, telecommunications and government.
[0056] Turning to Processes 7 & 8, the client accesses an online form and/or
document upload to capture additional verification of the client's details. The details
may vary based on the service being offered or requested.
[0057] The client will be able to see a list of required documents to
confirm address, previous address history, government reference material,
Australian travel documents, etc. An example of this online form 500 is shown
in Figure 5 as mentioned earlier.
[0058] Moving to Process 9, an image processing system and facial recognition
system (such as those described above) analyses the video and documents to
confirm the identity of the client by computing a similarity score (such as score
509 shown in Figure 5) that must equal or exceed a minimum threshold of similarity. If the minimum threshold is met, the identity of the client is confirmed and access is granted to the requested service. The system then produces a
Certification Document, which may be in the form of an encrypted digital
certificate that is recognised by one or more agencies or service providers
(e.g. banks or other credit providers).
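The form of the Certification Document is not specified beyond being an encrypted or otherwise protected digital artefact. As one illustrative possibility only, the sketch below signs (rather than encrypts) a small JSON payload with an HMAC; the key, field names and issuer string are assumptions.

```python
import hashlib
import hmac
import json
import time


def issue_certification_document(client_id, similarity_score, secret_key):
    """Produce a signed JSON certificate as one possible realisation of the Certification Document."""
    payload = {
        "client_id": client_id,
        "similarity_score": similarity_score,
        "issued_at": int(time.time()),
        "issuer": "MYID Platform",
    }
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    signature = hmac.new(secret_key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


certificate = issue_certification_document("client-001", 82.5, b"demo-secret-key")
print(certificate["signature"])
```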
[0059] In some embodiments, the video is analysed on a frame-by-frame basis.
In some further embodiments, this frame-by-frame analysis may be used to
construct a three-dimensional model of a face in the video.
[0060] Embodiments of the present invention provide real-time feedback to
ensure that prospective users are guided to provide quality facial images to the
system for comparison to stored facial data in identifying documents (such as
a driver's license or passport, for example).
[0061] In some embodiments, the present invention advantageously utilizes
multiple frames of video in order to derive a relatively high confidence of a given
user's identity rather than relying upon a single frame, as in some existing
systems.
[0062] Further, the present system may construct a three-dimensional model of
a face from multiple angles, using one or more frames of video, and then
algorithmically rotate the three-dimensional model to form one or more frontal
views of the face. Advantageously, data derived from the one or more frontal
views can be averaged for comparison to a base set of facial data in order to
determine the similarity score.
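Full 3D reconstruction and frontalization require a dedicated model and are beyond a short sketch; the snippet below only illustrates the averaging idea by pooling face encodings across several frames before comparison, again using the face_recognition library as an assumed stand-in and hypothetical frame file names.

```python
import face_recognition  # assumed stand-in for the patent's unspecified algorithms
import numpy as np


def averaged_similarity(frame_paths, id_document_path):
    """Average face encodings over multiple frames, then compare to the ID document face."""
    encodings = []
    for path in frame_paths:
        found = face_recognition.face_encodings(face_recognition.load_image_file(path))
        if found:
            encodings.append(found[0])
    id_found = face_recognition.face_encodings(face_recognition.load_image_file(id_document_path))
    if not encodings or not id_found:
        return 0.0
    mean_encoding = np.mean(encodings, axis=0)  # stands in for data averaged from frontal views
    distance = np.linalg.norm(mean_encoding - id_found[0])
    return max(0.0, 1.0 - distance) * 100.0


frames = [f"frame_{i:04d}.jpg" for i in range(0, 50, 10)]  # hypothetical extracted frames
print(f"averaged similarity: {averaged_similarity(frames, 'passport_photo.jpg'):.1f}%")
```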
[0063] In some embodiments, the invention can significantly increase the
accuracy of correct facial recognition by eliminating the reliance upon a single image, ensuring that the frames of video provided by a prospective user are of adequate quality, and by confirming the facial recognition results over multiple frames of video.
[0064] In this specification, adjectives such as first and second, left and right,
top and bottom, and the like may be used solely to distinguish one element or
action from another element or action without necessarily requiring or implying
any actual such relationship or order. Where the context permits, reference to
an integer or a component or step (or the like) is not to be interpreted as being
limited to only one of that integer, component, or step, but rather could be one
or more of that integer, component, or step, etc.
[0065] The above detailed description of various embodiments of the present
invention is provided for purposes of description to one of ordinary skill in the
related art. It is not intended to be exhaustive or to limit the invention to a single
disclosed embodiment. As mentioned above, numerous alternatives and
variations to the present invention will be apparent to those skilled in the art in light of
the above teaching. Accordingly, while some alternative embodiments have
been discussed specifically, other embodiments will be apparent or relatively
easily developed by those of ordinary skill in the art. The invention is intended
to embrace all alternatives, modifications, and variations of the present
invention that have been discussed herein, and other embodiments that fall
within the spirit and scope of the above described invention.
[0066] In this specification, the terms 'comprises', 'comprising', 'includes',
'including', or similar terms are intended to mean a non-exclusive inclusion,
such that a method, system or apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed.
[0067] Throughout the specification and claims (if present), unless the context
requires otherwise, the term "substantially" or "about" will be understood to not
be limited to the specific value or range qualified by the terms.

Claims (26)

1. A facial recognition system comprising:
a video capture device; and
a processing server having an image processing system configured to:
receive a video from the video capture device, the video showing a
person;
detect a face of the person in the video;
execute facial recognition algorithms comparing the face in the video
with a face of a person in an identifying document;
calculate a similarity score for said faces;
confirm an identity of the person in the video matches an identity of the
person in the identifying document if the similarity score is at least equal to a
threshold value.
2. The system of claim 1, wherein the image processing system is further
configured to receive a query comprising image data.
3. The system of claim 1 or claim 2, wherein the image processing system is
further configured to receive a query comprising video data.
4. The system of any one of claims 1-3, wherein the image processing system is
further configured to extract metadata associated with the video and/or the
identifying document.
5. The system of claim 4, wherein the image processing system is further
configured to store the metadata associated with the video and/or the identifying document in a database in electrical communication with the processing server.
6. The system of any one of the preceding claims, wherein the image processing
system is further configured to identify objects in the video.
7. The system of any one of the preceding claims, wherein the image processing
system is further configured to detect and read text from the video.
8. The system of claim 7, wherein the image processing system is further
configured to detect and read text comprising street names, captions, product
names, licence plates and vehicle number plates.
9. The system of any one of the preceding claims, wherein the image processing
system is further configured to detect attributes of the detected face.
10. The system of claim 9, wherein the attributes comprise the presence of glasses
or facial hair, open or closed eyes and/or an age range of the person in the
video.
11. The system of any one of the preceding claims, wherein the image processing
system is further configured to analyse the video and determine a quality score.
12. The system of claim 11, wherein the quality score is associated with image
quality of the video.
13. The system of claim 12, wherein the quality score is determined by analysing
lighting and resolution of the video.
14. The system of any one of the preceding claims, wherein the image processing
system is further configured to extract one or more frames from the video.
15. The system of claim 14, wherein the image processing system is further
configured to create a three-dimensional model of the face from multiple angles
using the one or more frames of the video.
16. The system of claim 15, wherein the image processing system is further
configured to algorithmically rotate the three-dimensional model to one or more
frontal views of the face of the person in the video.
17. The system of claim 16, wherein the image processing system is further
configured to derive data from the one or more frontal views and average the
derived data for comparison to a base set of facial data to determine the
similarity score.
18. The system of any one of the preceding claims, wherein the processing server
further comprises a speech analysis system.
19. The system of claim 18, wherein the video capture device is configured to
record audio associated with the video.
20. The system of claim 19, wherein the audio comprises speech of the person in
the video, and the speech analysis system is configured to convert speech to
text.
21. The system of claim 20, wherein the speech analysis system is further
configured to analyse the text and/or speech for natural language and
sentiment to improve an overall confidence level of the identification process.
22. The system of any one of the preceding claims, wherein the image processing
system receives a live video in real time from the video capture device.
23. A method of facial recognition, the method comprising the steps of:
receiving a video from a video capture device, the video showing a
person;
detecting a face of the person in the video;
executing facial recognition algorithms comparing the face in the video
with a face of a person in an identifying document;
calculating a similarity score for said faces;
confirming an identity of the person in the video matches an identity of
the person in the identifying document if the similarity score is at least equal
to a threshold value.
24. The method of claim 23, further comprising the step of:
recording the video on the video capture device, wherein the video is a
live video recorded in real time.
25. A non-transitory computer-readable storage medium containing instructions
executable by a processor, the non-transitory computer-readable storage
medium storing instructions for:
detecting a face of a person in a video;
executing facial recognition algorithms comparing the face in the video
with a face of a person in an identifying document;
calculating a similarity score for said faces;
confirming an identity of the person in the video matches an identity of
the person in the identifying document if the similarity score is at least equal to
a threshold value.
26. The non-transitory computer-readable storage medium of claim 25 storing
further instructions for:
activating a video capture device; and
recording a video using the video capture device.
AU2020203692A 2019-06-04 2020-06-04 System and method for facial recognition Pending AU2020203692A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2019901928A AU2019901928A0 (en) 2019-06-04 System and method for facial recognition
AU2019901928 2019-06-04

Publications (1)

Publication Number Publication Date
AU2020203692A1 true AU2020203692A1 (en) 2020-12-24

Family

ID=73838732

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020203692A Pending AU2020203692A1 (en) 2019-06-04 2020-06-04 System and method for facial recognition

Country Status (1)

Country Link
AU (1) AU2020203692A1 (en)

Similar Documents

Publication Publication Date Title
EP3477519B1 (en) Identity authentication method, terminal device, and computer-readable storage medium
US10839061B2 (en) Method and apparatus for identity authentication
CN111886842B (en) Remote user authentication using threshold-based matching
US6810480B1 (en) Verification of identity and continued presence of computer users
US20200065460A1 (en) Method and computer readable storage medium for remote interview signature
US20170103397A1 (en) Video identification method and computer program product thereof
KR101534808B1 (en) Method and System for managing Electronic Album using the Facial Recognition
JP2002251380A (en) User collation system
WO2020051643A1 (en) Remotely verifying an identity of a person
CN112908325B (en) Voice interaction method and device, electronic equipment and storage medium
US20180107865A1 (en) Biometric Facial Recognition for Accessing Device and Authorizing Event Processing
US20200302715A1 (en) Face authentication based smart access control system
US10558886B2 (en) Template fusion system and method
CN111241873A (en) Image reproduction detection method, training method of model thereof, payment method and payment device
WO2020007191A1 (en) Method and apparatus for living body recognition and detection, and medium and electronic device
JP2006085289A (en) Facial authentication system and facial authentication method
WO2021049234A1 (en) Image analysis device, control method, and program
KR102215535B1 (en) Partial face image based identity authentication method using neural network and system for the method
AU2020203692A1 (en) System and method for facial recognition
CN110516426A (en) Identity identifying method, certification terminal, device and readable storage medium storing program for executing
CN112367314B (en) Identity authentication method, device, computing equipment and medium
CN111259698A (en) Method and device for acquiring image
CN114707163A (en) Method for creating table to obtain access authority, terminal equipment and storage medium
JP2022100522A (en) Person identifying method, program and information system
CN110891049A (en) Video-based account login method, device, medium and electronic equipment