WO2022084824A1 - System and method for face anti-counterfeiting - Google Patents

System and method for face anti-counterfeiting

Info

Publication number
WO2022084824A1
WO2022084824A1 (PCT/IB2021/059565)
Authority
WO
WIPO (PCT)
Prior art keywords
visual media
subsystem
detection subsystem
face
capturing unit
Application number
PCT/IB2021/059565
Other languages
French (fr)
Inventor
Yadavaprasath GOPI
Original Assignee
Gopi Yadavaprasath
Application filed by Gopi Yadavaprasath
Publication of WO2022084824A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Definitions

  • Embodiments of the present invention relate to face anti-counterfeiting, and more particularly, to a system and method for face anti-counterfeiting.
  • Face anti-counterfeiting is defined as an act of detecting and preventing one or more fraudster activities while authenticating an authorized user by performing a face recognition operation.
  • Traditional face recognition systems recognize the face of the authorized user based on one or more features extracted from the one or more images captured to perform the face recognition operation.
  • a fraudster can obtain a face image of the authorized user by some means, create a photo, a video, a mask, or the like, and present it to the system in order to obtain illegal rights; the traditional face recognition systems fail to identify such a photo, video, or mask.
  • a depth camera and a three-dimensional camera are used to detect face counterfeiting activity.
  • implementation of such an approach is expensive, increasing the cost of the end product that can be sold in the market.
  • people belonging to small sectors cannot afford the cost of such a product, and this makes the technology reach only multinational companies.
  • a system deployed in a device for face anti-counterfeiting includes one or more processors.
  • the system also includes a visual media receiving subsystem operable by the one or more processors.
  • the visual media receiving subsystem is configured to receive one or more visual media captured via a visual media capturing unit in real-time.
  • the one or more visual media captured include one or more images or one or more videos of an environment in front of the visual media capturing unit.
  • the system also includes a face detection subsystem operable by the one or more processors.
  • the face detection subsystem is configured to detect for a presence of one or more faces in the one or more visual media received using one or more image processing techniques.
  • the face detection subsystem is also configured to count the one or more faces detected in the one or more visual media received.
  • the face detection subsystem is also configured to generate one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity.
  • the system also includes a first level counterfeit detection subsystem operable by the one or more processors.
  • the first level counterfeit detection subsystem is configured to measure a distance between an entity in front of the visual media capturing unit and the visual media capturing unit using a distance measuring unit upon generation of the one or more rectangular frames by the face detection subsystem.
  • the entity includes one of the one or more faces detected by the face detection subsystem.
  • the first level counterfeit detection subsystem is also configured to calculate one or more dimensions of the one or more rectangular frames generated by the face detection subsystem.
  • the first level counterfeit detection subsystem is also configured to perform a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit.
  • the first level counterfeit detection subsystem is also configured to compare the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value.
  • the system also includes a second level counterfeit detection subsystem operable by the one or more processors.
  • the second level counterfeit detection subsystem is configured to compare temperature of the entity in front of the visual media capturing unit with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem, wherein the temperature is sensed via a temperature sensing unit.
  • the system also includes a third level counterfeit detection subsystem operable by the one or more processors.
  • the third level counterfeit detection subsystem is configured to detect for a presence of one or more counterfeiting entities in front of the visual media capturing unit using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem.
  • the system also includes an alert generation subsystem operable by the one or more processors.
  • the alert generation subsystem is configured to generate an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the predefined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
  • a method for face anti-counterfeiting includes receiving one or more visual media captured via a visual media capturing unit in real-time.
  • the method also includes detecting for a presence of one or more faces in the one or more visual media received using one or more image processing techniques.
  • the method also includes counting the one or more faces detected in the one or more visual media received.
  • the method also includes generating one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity.
  • the method also includes measuring a distance between an entity in front of the visual media capturing unit and the visual media capturing unit using a distance measuring unit upon generation of the one or more rectangular frames by the face detection subsystem, wherein the entity comprises one of the one or more faces detected by the face detection subsystem.
  • the method also includes calculating one or more dimensions of the one or more rectangular frames generated by the face detection subsystem.
  • the method also includes performing, by the first level counterfeit detection subsystem, a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit.
  • the method also includes comparing the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value.
  • the method also includes comparing a temperature of the entity in front of the visual media capturing unit with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem, wherein the temperature is sensed via a temperature sensing unit.
  • the method also includes detecting for a presence of one or more counterfeiting entities in front of the visual media capturing unit using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem.
  • the method also includes generating an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
  • FIG. 1 is a block diagram representation of a system deployed in a device for face anti-counterfeiting in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a block diagram representation of an exemplary embodiment of the system for the face anti-counterfeiting of FIG. 1 in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a block diagram of a face anti-counterfeiting computer or a face anti-counterfeiting server in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a flow chart representing steps involved in a method for face anti-counterfeiting in accordance with an embodiment of the present disclosure.
  • Embodiments of the present disclosure relate to a system deployed in a device for face anti-counterfeiting.
  • Counterfeiting is defined as an act of imitating something authentic, with the intent to steal, destroy, or replace the original, for use in illegal transactions, or otherwise to deceive individuals into believing that the fake is of equal or greater value than the real thing.
  • face of an authorized user may be used for an authentication of the corresponding authorized user.
  • a fraudster can obtain a face image of the authorized user by some means and create a photo, a video, a mask, or the like, which is presented to a system in order to obtain illegal rights.
  • face anti-counterfeiting is defined as an act of detecting and preventing such fraudster activities during a face recognition operation.
  • the system as described hereafter in FIG. 1 is the system for face anti-counterfeiting.
  • FIG. 1 is a block diagram representation of a system (10) deployed in a device (20) for face anti-counterfeiting in accordance with an embodiment of the present disclosure.
  • the device (20) includes an integrated unit of one or more Raspberry Pi-based modules.
  • the device (20) is used in integration with a user device via which an authorized user is performing one or more operations using face of the authorized user as an authentication means.
  • the user device includes a laptop, a desktop computer, a standalone biometric device, and the like.
  • the one or more operations include an access control to the corresponding user device, attendance recording, wireless gate opening operation, monetary-related operation at a bank or a locker, or the like.
  • the system (10) includes one or more processors (30).
  • the one or more processors (30) include Raspberry Pi, Nvidia Jetson Nano processor, or the like.
  • the authorized user may have to be registered on a centralized platform.
  • the system (10) includes a registration subsystem (as shown in FIG. 2) operable by the one or more processors (30).
  • the registration subsystem registers the authorized user on the centralized platform upon receiving a plurality of authorized user details via an authorized user interface associated to the device (20).
  • the plurality of authorized user details comprises an authorized username, a contact number, e-mail ID, one or more authorized user images, an authorized user voice recording, and the like.
  • the system (10) needs to check for face counterfeiting activity, when an entity tries to log in on the centralized platform to perform at least one of the one or more operations.
  • the system (10) may have to receive a visual media of the entity and an environment surrounding the entity, and hence to perform such an operation, the system (10) also includes a visual media receiving subsystem (40) operable by the one or more processors (30).
  • the visual media receiving subsystem (40) is operatively coupled to the registration subsystem.
  • the visual media receiving subsystem (40) receives one or more visual media captured via a visual media capturing unit (50) in real-time.
  • the one or more visual media captured include one or more images or one or more videos of the environment in front of the visual media capturing unit (50).
  • the environment may also include the entity trying to login on the centralized platform.
  • the visual media capturing unit (50) includes a normal camera, a Raspberry Pi camera, a webcam, or the like.
  • the system (10) also includes a face detection subsystem (60) operable by the one or more processors (30).
  • the face detection subsystem (60) is operatively coupled to the visual media receiving subsystem (40).
  • the face detection subsystem (60) detects for a presence of one or more faces in the one or more visual media received using one or more image processing techniques.
  • the face detection subsystem (60) also counts the one or more faces detected in the one or more visual media received.
  • the face detection subsystem (60) also generates one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity.
  • the entity includes an identity card, a photo, an image in a mobile phone, or the like of the authorized user which a fraudster is holding very close to the visual media capturing unit (50).
  • the entity includes a face mask resembling the face of the authorized user worn by the fraudster, a print of the face of the authorized user having a standard size of the face held by the fraudster, an image poster of the face of the authorized user with the standard size of the face held by the fraudster, a model having the face of the authorized user, one or more counterfeiting entities, or the like.
  • the one or more counterfeiting entities include one or more two-dimensional (2-D) entities, one or more three-dimensional (3-D) entities, or the like.
  • the system (10) needs to check for the face counterfeiting activity, when the fraudster might use the identity card, the photo, the image in the mobile phone, or the like of the authorized user to login on the centralized platform.
  • the system (10) includes a first level counterfeit detection subsystem (70) operable by the one or more processors (30).
  • the first level counterfeit detection subsystem (70) is operatively coupled to the face detection subsystem (60).
  • the first level counterfeit detection subsystem (70) measures a distance between the entity in front of the visual media capturing unit (50) and the visual media capturing unit (50) using a distance measuring unit (80) upon generation of the one or more rectangular frames by the face detection subsystem (60).
  • the distance measuring unit (80) includes one of an Ultrasonic distance measuring sensor, a laser ranging sensor, and the like.
  • the term “Ultrasonic distance measuring sensor” is defined as a sensor which includes a head that emits an ultrasonic wave and receives the wave reflected back from the target.
  • the term “laser ranging sensor” is defined as a sensor which works by measuring the time it takes for a pulse of laser light to be reflected off a target and returned to the sender.
  • the laser ranging sensor includes VL53L0 laser ranging sensor.
  • the entity includes one of the one or more faces detected by the face detection subsystem (60).
  • the first level counterfeit detection subsystem (70) also calculates one or more dimensions of the one or more rectangular frames generated by the face detection subsystem (60).
  • the first level counterfeit detection subsystem (70) also performs a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit (50).
  • the one or more dimensions include a length, a breadth, a perimeter, an area, or the like of the corresponding one or more rectangular frames.
  • the dimension calibration includes an addition or subtraction of a pre-defined value to the one or more dimensions calculated in order to match with the distance measured between the corresponding entity and the visual media capturing unit (50).
  • the first level counterfeit detection subsystem (70) also compares the dimension calibration performed on the corresponding one or more dimensions with a predefined calibration threshold value. In one embodiment, the first level counterfeit detection subsystem (70) compares the pre-defined value added to or subtracted from the one or more dimensions calculated with the pre-defined calibration threshold value.
  • the system (10) needs to check for the face counterfeiting activity, when the fraudster might wear the face mask resembling the face of the authorized user, hold the print of the face of the authorized user having the standard size of the face, or the like.
  • the first level counterfeit detection subsystem (70) might successfully identify the corresponding face detected to be the face of the authorized user.
  • the system (10) further includes a second level counterfeit detection subsystem (90) operable by the one or more processors (30).
  • the second level counterfeit detection subsystem (90) is operatively coupled to the first level counterfeit detection subsystem (70).
  • the second level counterfeit detection subsystem (90) compares temperature of the entity in front of the visual media capturing unit (50) with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem (70).
  • the temperature sensing unit (100) includes an Infrared (IR) thermal sensor.
  • an IR thermal sensor is defined as a sensor used to measure the temperature of an object placed at a distance without touching the corresponding object.
  • a typical IR thermal sensor may use a lens to focus the light reflected from an object onto a detector called a thermopile, which absorbs the reflected IR radiation and turns it into heat, from which the temperature of the object is measured.
  • in one scenario, the fraudster is wearing the face mask resembling the face of the authorized user, the fraudster has successfully incorporated the preset human temperature into the entity having the face of the authorized user placed in front of the visual media capturing unit, or the like.
  • the entity may be the model having the face of the authorized user.
  • the second level counterfeit detection subsystem (90) might successfully identify the corresponding face detected to be the face of the authorized user.
  • the system (10) also includes a third level counterfeit detection subsystem (110) operable by the one or more processors (30).
  • the third level counterfeit detection subsystem (110) is operatively coupled to the second level counterfeit detection subsystem (90).
  • the third level counterfeit detection subsystem (110) detects for a presence of the one or more counterfeiting entities in front of the visual media capturing unit (50) using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem (90).
  • the one or more counterfeiting entities include the face mask, a secondary entity holding the model having the face of the authorized user, the secondary entity holding the image poster of the face of the authorized user with the standard size of the face, one or more spoofing gadgets, or the like.
  • the third level counterfeit detection subsystem (110) generates the deep-learning model, which is trained to differentiate between a real face and a fake face (an illustrative sketch of this check, together with the second level temperature comparison, is given after this list).
  • the deep-learning model may be associated with a database (120).
  • the database (120) may store data associated to one or more features of the real face as real face data and data associated to one or more features of the fake face as fake face data.
  • the third level counterfeit detection subsystem (110) may detect the presence of the fake face in front of the visual media capturing unit (50) using the deep-learning model based on the data stored in the corresponding database (120). In another exemplary embodiment, the third level counterfeit detection subsystem (110) generates the deep-learning model which detects for the presence of more than one object in the one or more visual media received by the visual media receiving subsystem (40). In such embodiment, the third level counterfeit detection subsystem (110) successfully rejects the face detected when an additional object is detected.
  • the system (10) includes an emotion recognition subsystem (as shown in FIG. 2) operable by the one or more processors (30).
  • the emotion recognition subsystem is operatively coupled to the third level counterfeit detection subsystem (110).
  • the emotion recognition subsystem may generate a notification for the entity to perform at least one emotional gesture when the third level counterfeit detection subsystem (110) fails to detect the presence of the one or more counterfeiting entities.
  • the at least one emotional gesture may be generated randomly by the emotion recognition subsystem.
  • the at least one emotional gesture includes blinking, smiling, frowning, or the like.
  • the emotion recognition subsystem may capture an emotional gesture performed by the entity via the visual media capturing unit (50). Later, the emotion recognition subsystem may compare the emotional gesture captured with the at least one emotional gesture notified by the emotion recognition subsystem to detect for face counterfeiting.
  • the system (10) also includes a voice recognition subsystem (as shown in FIG. 2) operable by the one or more processors (30).
  • the voice recognition subsystem may generate a notification for the entity to pronounce at least one word when the emotional gesture performed matches with the at least one emotional gesture notified by the emotion recognition subsystem, wherein the at least one word is randomly generated by the voice recognition subsystem.
  • the voice recognition subsystem may record a voice of the entity uttering a word via a voice recording unit. Later, the voice recognition subsystem may compare the voice recorded, and the word uttered by the entity with a pre-stored authorized user voice recording, and the at least one word requested by the voice recognition subsystem respectively to detect for the face counterfeiting.
  • the notification generated includes the notification in a form of one of a text message, a voice message, a visual media message, or the like via the authorized user interface associated to the device (20).
  • the system (10) also includes an alert generation subsystem (130) operable by the one or more processors (30).
  • the alert generation subsystem (130) generates an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
  • the alert generation subsystem (130) also generates the alert when the emotional gesture performed mismatches with the at least one emotional gesture requested by the emotion recognition subsystem. In another embodiment, the alert generation subsystem (130) also generates the alert when one of the voice recorded uttering the word mismatches with the pre-stored authorized user voice recording, the word uttered by the entity mismatches with the at least one word requested by the voice recognition subsystem, or a combination thereof. In one exemplary embodiment, the alert generated includes an audio alert, a visual alert, a text message, and the like.
  • the system (10) includes a power supply unit, wherein the power supply unit supplies power to the device (20), the one or more processors (30), the visual media capturing unit (50), the distance measuring unit (80), the temperature sensing unit (100), and the like for proper functioning.
  • the power supply unit may include a battery.
  • FIG. 2 is a block diagram representation of an exemplary embodiment of the system (10) for the face anti-counterfeiting of FIG. 1 in accordance with an embodiment of the present disclosure.
  • the authorized user (140) has a locker (150) at his residence (160) to safely store precious accessories.
  • the authorized user (140) uses the system (10) deployed in a Raspberry Pi based device (170).
  • the system (10) includes a Raspberry Pi based processor (180).
  • the authorized user (140) registers to the system (10) via the registration subsystem (190) of the system (10) upon providing the plurality of authorized user details via the authorized user interface associated to the Raspberry Pi based device (170).
  • a fraudster might have made a robotic model having the face resembling the authorized user (140). So, the fraudster might break into the corresponding residence (160) and somehow find the locker (150).
  • the Raspberry Pi camera (200) captures the one or more visual media of the environment in front of the Raspberry Pi camera (200) and the visual media receiving subsystem (40) of the system (10) receives the one or more visual media captured. Further, the system (10) detects the presence of one or more faces in the one or more visual media received by the face detection subsystem (60) of the system (10). The face detection subsystem (60) detects a single face and hence generates the one or more rectangular frames of the face detected.
  • the system (10) measures the distance between the entity and the Raspberry Pi camera (200) via the first level counterfeit detection subsystem (70) of the system (10) using the laser ranging sensor (210). Later, the system (10) calculates the one or more dimensions of the one or more rectangular frames, performs the dimension calibration and compares with the pre-defined calibration threshold value via the first level counterfeit detection subsystem (70). However, since the robotic model is used, the dimension calibration matches with the pre-defined calibration threshold value.
  • the system (10) measures the temperature of the entity in front of the Raspberry Pi camera (200) using the IR thermal sensor (220) via the second level counterfeit detection subsystem (90) of the system (10). Later, the temperature measured is compared with the preset human temperature. However, since, the fraudster had incorporated the preset human temperature into the robotic model, the temperature measured matches with the preset human temperature.
  • the system (10) detects the presence of the one or more counterfeiting entities using the deep-learning model via the third level counterfeit detection subsystem (110) of the system. However, since the fraudster is controlling the robotic model from a distance, the one or more counterfeiting entities are not detected.
  • the system (10) generates the notification for the entity to perform the blinking gesture via the emotion recognition subsystem (230) of the system. Later, the robotic model performs a gesture upon receiving the corresponding notification, which is recorded and compared with the blinking gesture requested by the emotion recognition subsystem (230). However, since the robotic model was advanced enough to perform the blinking gesture, the gestures match.
  • the system (10) generates the notification for the entity to pronounce a random word via the voice recognition subsystem (235) of the system (10).
  • the robotic model has a memory with limited storage, as it is not advanced enough to have a huge memory to store a huge amount of data.
  • if the random word which the robotic model is supposed to pronounce is absent from the memory of the robotic model, the robotic model may not utter any word, and after a pre-defined time period the unlocking process might re-start.
  • even if the robotic model pronounces the corresponding random word, the voice texture might not match with the voice texture of the pre-stored authorized user voice recording, as it is difficult to generate highly accurate audio.
  • the fraudster fails to unlock the locker (150) and the alert is sent to the authorized user (140) regarding the theft activity taking place at his residence (160) and the authorized user (140) can take immediate actions to catch the fraudster.
  • the Raspberry Pi based device (170), the Raspberry Pi based processor (180), the Raspberry Pi camera (200), the laser ranging sensor (210), and the IR thermal sensor (220), are substantially similar to a device (20), one or more processors (30), a visual media capturing unit (50), a distance measuring unit (80), and a temperature sensing unit (100) of FIG. 1.
  • FIG. 3 is a block diagram of a face anti-counterfeiting computer or a face anti-counterfeiting server (240) in accordance with an embodiment of the present disclosure.
  • the face anti-counterfeiting server (240) includes processor(s) (250), and a memory (260) coupled to a bus (270).
  • the processor(s) (250) and the memory (260) are substantially similar to the system (10) of FIG. 1.
  • the memory (260) is located in a local storage device.
  • the processor(s) (250), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
  • Computer memory elements may include any suitable memory device(s) for storing data and executable program, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards and the like.
  • Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts.
  • Executable program stored on any of the above-mentioned storage media may be executable by the processor(s) (250).
  • the memory (260) includes a plurality of subsystems stored in the form of an executable program which instructs the processor(s) (250) to perform the method steps illustrated in FIG. 4.
  • the memory (260) has following subsystems: a visual media receiving subsystem (40), a face detection subsystem (60), a first level counterfeit detection subsystem (70), a second level counterfeit detection subsystem (90), a third level counterfeit detection subsystem (110), and an alert generation subsystem (130).
  • the visual media receiving subsystem (40) is configured to receive one or more visual media captured via a visual media capturing unit (50) in real-time, wherein the one or more visual media captured comprise one or more images or one or more videos of an environment in front of the visual media capturing unit (50).
  • the face detection subsystem (60) is configured to detect for a presence of one or more faces in the one or more visual media received using one or more image processing techniques.
  • the face detection subsystem (60) is also configured to count the one or more faces detected in the one or more visual media received.
  • the face detection subsystem (60) is also configured to generate one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity.
  • the first level counterfeit detection subsystem (70) is configured to measure a distance between an entity in front of the visual media capturing unit (50) and the visual media capturing unit (50) using a distance measuring unit (80) upon generation of the one or more rectangular frames by the face detection subsystem (60), wherein the entity comprises one of the one or more faces detected by the face detection subsystem (60).
  • the first level counterfeit detection subsystem (70) is also configured to calculate one or more dimensions of the one or more rectangular frames generated by the face detection subsystem (60).
  • the first level counterfeit detection subsystem (70) is also configured to perform a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit (50).
  • the first level counterfeit detection subsystem (70) is also configured to compare the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value.
  • the second level counterfeit detection subsystem (90) is configured to compare temperature of the entity in front of the visual media capturing unit (50) with a preset human temperature when the dimension calibration performed matches with the predefined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem (70), wherein the temperature is sensed via a temperature sensing unit (100).
  • the third level counterfeit detection subsystem (110) is configured to detect for a presence of one or more counterfeiting entities in front of the visual media capturing unit (50) using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem (90).
  • the alert generation subsystem (130) is configured to generate an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
  • FIG. 4 is a flow chart representing steps involved in a method (280) for face anti-counterfeiting in accordance with an embodiment of the present disclosure.
  • the method (280) includes receiving one or more visual media captured via a visual media capturing unit in real-time in step 290.
  • receiving the one or more visual media captured via the visual media capturing unit in real-time includes receiving the one or more visual media captured via the visual media capturing unit in real-time by a visual media receiving subsystem (40).
  • the method (280) also includes registering an authorized user on a centralized platform upon receiving a plurality of authorized user details via an authorized user interface associated with the device (20).
  • registering the authorized user on the centralized platform includes registering the authorized user on the centralized platform by a registration subsystem.
  • the method (280) also includes detecting for a presence of one or more faces in the one or more visual media received using one or more image processing techniques in step 300.
  • detecting for the presence of the one or more faces in the one or more visual media received includes detecting for the presence of the one or more faces in the one or more visual media received by a face detection subsystem (60).
  • receiving the one or more visual media captured includes receiving one or more images or one or more videos of an environment in front of the visual media capturing unit.
  • the method (280) includes counting the one or more faces detected in the one or more visual media received in step 310.
  • counting the one or more faces detected in the one or more visual media received includes counting the one or more faces detected in the one or more visual media received by the face detection subsystem (60).
  • the method (280) also includes generating one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity in step 320.
  • generating the one or more rectangular frames of one of the one or more faces detected includes generating the one or more rectangular frames of one of the one or more faces by the face detection subsystem (60).
  • the method (280) also includes measuring a distance between an entity in front of the visual media capturing unit and the visual media capturing unit using a distance measuring unit upon generation of the one or more rectangular frames by the face detection subsystem, wherein the entity comprises one of the one or more faces detected by the face detection subsystem in step 330.
  • measuring the distance between the entity in front of the visual media capturing unit and the visual media capturing unit includes measuring the distance between the entity in front of the visual media capturing unit and the visual media capturing unit by a first level counterfeit detection subsystem (70).
  • the method (280) also includes calculating one or more dimensions of the one or more rectangular frames generated by the face detection subsystem in step 340.
  • calculating the one or more dimensions of the one or more rectangular frames includes calculating the one or more dimensions of the one or more rectangular frames by the first level counterfeit detection subsystem (70).
  • the method (280) also includes performing a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit in step 350.
  • performing the dimension calibration on the one or more dimensions includes performing the dimension calibration on the one or more dimensions by the first level counterfeit detection subsystem (70).
  • the method (280) also includes comparing the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value in step 360.
  • comparing the dimension calibration performed with the pre-defined calibration threshold value includes comparing the dimension calibration performed with the pre-defined calibration threshold value by the first level counterfeit detection subsystem (70).
  • the method (280) also includes comparing a temperature of the entity in front of the visual media capturing unit with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem, wherein the temperature is sensed via a temperature sensing unit in step 370.
  • comparing the temperature of the entity with the preset human temperature includes comparing the temperature of the entity with the preset human temperature by a second level counterfeit detection subsystem (90).
  • the method (280) also includes detecting for a presence of one or more counterfeiting entities in front of the visual media capturing unit using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem in step 380.
  • detecting for the presence of the one or more counterfeiting entities includes detecting for the presence of the one or more counterfeiting entities by a third level counterfeit detection subsystem (110).
  • the method (280) also includes generating an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof in step 390.
  • generating the alert includes generating an alert by an alert generation subsystem (130).
  • Various embodiments of the present disclosure enable an easy and cost-effective implementation of the system for face anti-counterfeiting, thereby enabling people from small sectors to afford the system. Further, the system provides better accuracy because of the usage of the ultrasonic distance measuring sensor. Also, the system finds application in all kinds of access control zones, attendance systems, wireless gate opening systems, high-security operations at banks and lockers, and the like.
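The second and third level checks listed above can be pictured with a short, purely illustrative sketch that is not part of the disclosure. It assumes a read_object_temperature() helper wrapping the temperature sensing unit (the IR thermal sensor), a tf.keras model with a single sigmoid output previously trained on the stored real face data and fake face data, an illustrative temperature band standing in for the preset human temperature, and a 0.5 decision threshold; all of these specifics are assumptions, not values taken from the disclosure.

```python
import numpy as np
import tensorflow as tf

# Illustrative band standing in for the "preset human temperature".
HUMAN_TEMP_RANGE_C = (35.0, 38.5)

def second_level_check(read_object_temperature) -> bool:
    """Compare the temperature sensed via the temperature sensing unit with the preset human temperature."""
    temp_c = read_object_temperature()   # hypothetical wrapper around the IR thermal sensor
    return HUMAN_TEMP_RANGE_C[0] <= temp_c <= HUMAN_TEMP_RANGE_C[1]

def third_level_check(model: tf.keras.Model, face_crop: np.ndarray) -> bool:
    """Run a deep-learning model trained on real face data and fake face data;
    return True when no counterfeiting entity is detected."""
    x = tf.image.resize(face_crop.astype("float32") / 255.0, (128, 128))
    score = float(model.predict(x[tf.newaxis, ...], verbose=0)[0, 0])
    return score < 0.5                   # scores at or above 0.5 treated as a fake face
```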

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A system for face anti-counterfeiting is provided. The system includes a visual media receiving subsystem which receives visual media. The system includes a face detection subsystem which detects for face(s), counts, and generates rectangular frame(s) of one of the face(s). The system includes a first level counterfeit detection subsystem which measures a distance between an entity and a visual media capturing unit, calculates dimension(s) of the rectangular frame(s), performs a dimension calibration and compares with a pre-defined calibration threshold value. The system includes a second level counterfeit detection subsystem which compares temperature of the entity with a preset human temperature when the dimension calibration matches with the pre-defined calibration threshold value. The system also includes a third level counterfeit detection subsystem which detects for presence of counterfeiting entities when the temperature matches with the preset human temperature. The system also includes an alert generation subsystem which generates an alert.

Description

SYSTEM AND METHOD FOR FACE ANTI-COUNTERFEITING
EARLIEST PRIORITY DATE:
This Application claims priority from a Complete patent application filed in India having Patent Application No. 202041045873, filed on October 21, 2020 and titled “SYSTEM AND METHOD FOR FACE ANTI-COUNTERFEITING”.
FIELD OF INVENTION
Embodiments of the present invention relate to face anti-counterfeiting, and more particularly, to a system and method for face anti-counterfeiting.
BACKGROUND
Face anti-counterfeiting is defined as an act of detecting and preventing one or more fraudster activities while authenticating an authorized user by performing a face recognition operation. Traditional face recognition systems recognize the face of the authorized user based on one or more features extracted from the one or more images captured to perform the face recognition operation. However, there is a possibility that a fraudster can obtain a face image of the authorized user by some means, create a photo, a video, a mask, or the like, and present it to the system in order to obtain illegal rights; the traditional face recognition systems fail to identify such a photo, video, or mask.
In one approach, a depth camera and a three-dimensional camera are used to detect face counterfeiting activity. However, implementation of such an approach is expensive, increasing the cost of the end product that can be sold in the market. Thus, people belonging to small sectors cannot afford the cost of such a product, and this makes the technology reach only multinational companies.
Hence, there is a need for an improved system and method for face anti-counterfeiting which addresses the aforementioned issues.
BRIEF DESCRIPTION
In accordance with one embodiment of the disclosure, a system deployed in a device for face anti-counterfeiting is provided. The system includes one or more processors. The system also includes a visual media receiving subsystem operable by the one or more processors. The visual media receiving subsystem is configured to receive one or more visual media captured via a visual media capturing unit in real-time. The one or more visual media captured include one or more images or one or more videos of an environment in front of the visual media capturing unit. The system also includes a face detection subsystem operable by the one or more processors. The face detection subsystem is configured to detect for a presence of one or more faces in the one or more visual media received using one or more image processing techniques. The face detection subsystem is also configured to count the one or more faces detected in the one or more visual media received. The face detection subsystem is also configured to generate one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity. Further, the system also includes a first level counterfeit detection subsystem operable by the one or more processors. The first level counterfeit detection subsystem is configured to measure a distance between an entity in front of the visual media capturing unit and the visual media capturing unit using a distance measuring unit upon generation of the one or more rectangular frames by the face detection subsystem. The entity includes one of the one or more faces detected by the face detection subsystem. The first level counterfeit detection subsystem is also configured to calculate one or more dimensions of the one or more rectangular frames generated by the face detection subsystem. The first level counterfeit detection subsystem is also configured to perform a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit. The first level counterfeit detection subsystem is also configured to compare the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value. Further, the system also includes a second level counterfeit detection subsystem operable by the one or more processors. The second level counterfeit detection subsystem is configured to compare temperature of the entity in front of the visual media capturing unit with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem, wherein the temperature is sensed via a temperature sensing unit. The system also includes a third level counterfeit detection subsystem operable by the one or more processors. The third level counterfeit detection subsystem is configured to detect for a presence of one or more counterfeiting entities in front of the visual media capturing unit using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem. The system also includes an alert generation subsystem operable by the one or more processors.
The alert generation subsystem is configured to generate an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the predefined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
In accordance with another embodiment, a method for face anti-counterfeiting is provided. The method includes receiving one or more visual media captured via a visual media capturing unit in real-time. The method also includes detecting for a presence of one or more faces in the one or more visual media received using one or more image processing techniques. The method also includes counting the one or more faces detected in the one or more visual media received. The method also includes generating one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity. The method also includes measuring a distance between an entity in front of the visual media capturing unit and the visual media capturing unit using a distance measuring unit upon generation of the one or more rectangular frames by the face detection subsystem, wherein the entity comprises one of the one or more faces detected by the face detection subsystem. The method also includes calculating one or more dimensions of the one or more rectangular frames generated by the face detection subsystem. The method also includes performing, by the first level counterfeit detection subsystem, a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit. The method also includes comparing the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value. The method also includes comparing a temperature of the entity in front of the visual media capturing unit with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem, wherein the temperature is sensed via a temperature sensing unit. The method also includes detecting for a presence of one or more counterfeiting entities in front of the visual media capturing unit using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem. The method also includes generating an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
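Read procedurally, the method amounts to a chain of gates in which a failure at any level generates the alert and stops the flow. The sketch below is illustrative only; the helper callables (detect_faces, make_frame, first_level_check, second_level_check, third_level_check, generate_alert) are hypothetical stand-ins for the subsystems described above, not interfaces defined by the disclosure.

```python
def face_anti_counterfeiting(detect_faces, make_frame, first_level_check,
                             second_level_check, third_level_check, generate_alert) -> bool:
    """Chain the checks of the method; a failure at any level generates an alert and stops."""
    faces = detect_faces()                           # faces present in the received visual media
    if len(faces) != 1:
        if len(faces) > 1:                           # alert only when the count exceeds unity
            generate_alert("count of detected faces is greater than unity")
        return False
    rect = make_frame(faces[0])                      # rectangular frame of the detected face
    if not first_level_check(rect):
        generate_alert("dimension calibration mismatches the calibration threshold value")
        return False
    if not second_level_check():
        generate_alert("sensed temperature mismatches the preset human temperature")
        return False
    if not third_level_check(rect):
        generate_alert("counterfeiting entity detected")
        return False
    return True                                      # the entity passed every counterfeit check
```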
To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
FIG. 1 is a block diagram representation of a system deployed in a device for face anti-counterfeiting in accordance with an embodiment of the present disclosure;
FIG. 2 is a block diagram representation of an exemplary embodiment of the system for the face anti-counterfeiting of FIG. 1 in accordance with an embodiment of the present disclosure;
FIG. 3 is a block diagram of a face anti-counterfeiting computer or a face anti-counterfeiting server in accordance with an embodiment of the present disclosure; and
FIG. 4 is a flow chart representing steps involved in a method for face anti-counterfeiting in accordance with an embodiment of the present disclosure.
Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
Embodiments of the present disclosure relate to a system deployed in a device for face anti-counterfeiting. Counterfeiting is defined as an act of imitating something authentic, with the intent to steal, destroy, or replace the original, for use in illegal transactions, or otherwise to deceive individuals into believing that the fake is of equal or greater value than the real thing. There is a plurality of applications where the face of an authorized user may be used for an authentication of the corresponding authorized user. There is a possibility that a fraudster can obtain a face image of the authorized user by some means and create a photo, a video, a mask, or the like, which is presented to a system in order to obtain illegal rights. Thus, face anti-counterfeiting is defined as an act of detecting and preventing such fraudster activities during a face recognition operation. The system as described hereafter in FIG. 1 is the system for face anti-counterfeiting.
FIG. 1 is a block diagram representation of a system (10) deployed in a device (20) for face anti-counterfeiting in accordance with an embodiment of the present disclosure. In one embodiment, the device (20) includes an integrated unit of one or more Raspberry Pi-based modules. In such embodiment, the device (20) is used in integration with a user device via which an authorized user is performing one or more operations using the face of the authorized user as an authentication means. In one exemplary embodiment, the user device includes a laptop, a desktop computer, a standalone biometric device, and the like. In one exemplary embodiment, the one or more operations include an access control to the corresponding user device, attendance recording, wireless gate opening operation, monetary-related operation at a bank or a locker, or the like.
Further, the system (10) includes one or more processors (30). In one exemplary embodiment, the one or more processors (30) include a Raspberry Pi, an Nvidia Jetson Nano processor, or the like. The authorized user may have to be registered on a centralized platform. Thus, in one embodiment, the system (10) includes a registration subsystem (as shown in FIG. 2) operable by the one or more processors (30). The registration subsystem registers the authorized user on the centralized platform upon receiving a plurality of authorized user details via an authorized user interface associated with the device (20). In one embodiment, the plurality of authorized user details comprises an authorized username, a contact number, an e-mail ID, one or more authorized user images, an authorized user voice recording, and the like.
Further, the system (10) needs to check for face counterfeiting activity when an entity tries to log in on the centralized platform to perform at least one of the one or more operations. Thus, initially, the system (10) may have to receive a visual media of the entity and an environment surrounding the entity, and hence, to perform such an operation, the system (10) also includes a visual media receiving subsystem (40) operable by the one or more processors (30). The visual media receiving subsystem (40) is operatively coupled to the registration subsystem. The visual media receiving subsystem (40) receives one or more visual media captured via a visual media capturing unit (50) in real-time. The one or more visual media captured include one or more images or one or more videos of the environment in front of the visual media capturing unit (50). The environment may also include the entity trying to log in on the centralized platform. In one embodiment, the visual media capturing unit (50) includes a normal camera, a Raspberry Pi camera, a webcam, or the like.
Further, the system (10) also includes a face detection subsystem (60) operable by the one or more processors (30). The face detection subsystem (60) is operatively coupled to the visual media receiving subsystem (40). The face detection subsystem (60) detects for a presence of one or more faces in the one or more visual media received using one or more image processing techniques.
Further, the face detection subsystem (60) also counts the one or more faces detected in the one or more visual media received. The face detection subsystem (60) also generates one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity. In one embodiment, the entity includes an identity card, a photo, an image in a mobile phone, or the like of the authorized user which a fraudster is holding very close to the visual media capturing unit (50). In another embodiment, the entity includes a face mask resembling the face of the authorized user worn by the fraudster, a print of the face of the authorized user having a standard size of the face held by the fraudster, an image poster of the face of the authorized user with the standard size of the face held by the fraudster, a model having the face of the authorized user, one or more counterfeiting entities, or the like. In such embodiment, the one or more counterfeiting entities include one or more two-dimensional (2-D) entities, one or more three-dimensional (3-D) entities, or the like.
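By way of a non-limiting illustration only, the following Python sketch shows one way the face detection subsystem (60) could detect faces, count them, and generate a rectangular frame only when the count is unity; the use of OpenCV and its bundled Haar cascade is an assumption made for illustration and is not mandated by the present disclosure.

```python
import cv2

# Illustrative sketch of the face detection subsystem (60): detect faces,
# count them, and return a single rectangular frame only when exactly one
# face is present in the captured visual media.
_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_single_face(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return None   # count of faces detected is not unity; no frame generated
    x, y, w, h = faces[0]
    return (int(x), int(y), int(w), int(h))   # the rectangular frame of the face
```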
Further, the system (10) needs to check for the face counterfeiting activity when the fraudster might use the identity card, the photo, the image in the mobile phone, or the like of the authorized user to log in on the centralized platform. Thus, to detect such face counterfeiting activity, the system (10) includes a first level counterfeit detection subsystem (70) operable by the one or more processors (30). The first level counterfeit detection subsystem (70) is operatively coupled to the face detection subsystem (60). The first level counterfeit detection subsystem (70) measures a distance between the entity in front of the visual media capturing unit (50) and the visual media capturing unit (50) using a distance measuring unit (80) upon generation of the one or more rectangular frames by the face detection subsystem (60).
In one embodiment, the distance measuring unit (80) includes one of an Ultrasonic distance measuring sensor, a laser ranging sensor, and the like. As used herein, the term “Ultrasonic distance measuring sensor” is defined as a sensor which includes a head that emits an ultrasonic wave and receives the wave reflected back from the target. Further, as used herein, the term “laser ranging sensor” is defined as a sensor which works by measuring the time it takes a pulse of laser light to be reflected off a target and returned to the sender. In one exemplary embodiment, the laser ranging sensor includes a VL53L0 laser ranging sensor.
The entity includes one of the one or more faces detected by the face detection subsystem (60). The first level counterfeit detection subsystem (70) also calculates one or more dimensions of the one or more rectangular frames generated by the face detection subsystem (60). The first level counterfeit detection subsystem (70) also performs a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit (50).
In one exemplary embodiment, the one or more dimensions include a length, a breadth, a perimeter, an area, or the like of the corresponding one or more rectangular frames. In one embodiment, the dimension calibration includes an addition or subtraction of a pre-defined value to the one or more dimensions calculated in order to match with the distance measured between the corresponding entity and the visual media capturing unit (50).
The first level counterfeit detection subsystem (70) also compares the dimension calibration performed on the corresponding one or more dimensions with a predefined calibration threshold value. In one embodiment, the first level counterfeit detection subsystem (70) compares the pre-defined value added to or subtracted from the one or more dimensions calculated with the pre-defined calibration threshold value.
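As a non-limiting illustration of the first level counterfeit detection described above, the following Python sketch calibrates the width of the rectangular frame against the measured distance and compares the resulting correction with a pre-defined calibration threshold value; the reference width, the threshold, and the simple inverse-distance scaling rule are assumptions chosen only for illustration.

```python
# Illustrative first level counterfeit check (70): the apparent face width is
# expected to shrink roughly in proportion to the measured distance, so a
# frame that is far too large or too small for its distance is suspicious.
EXPECTED_WIDTH_AT_1M_PX = 120        # assumed reference face width at 1 metre
CALIBRATION_THRESHOLD_PX = 25        # assumed pre-defined calibration threshold

def first_level_check(rect, distance_m):
    _, _, w, _ = rect
    expected_w = EXPECTED_WIDTH_AT_1M_PX / max(distance_m, 0.1)
    correction = abs(w - expected_w)  # value added/subtracted during calibration
    return correction <= CALIBRATION_THRESHOLD_PX
```

In this sketch, a photo held very close to the visual media capturing unit (50) would produce a frame far wider than the width expected at the measured distance, so the correction exceeds the threshold and the check fails.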
Further, the system (10) needs to check for the face counterfeiting activity, when the fraudster might wear the face mask resembling the face of the authorized user, hold the print of the face of the authorized user having the standard size of the face, or the like. In such embodiment, the first level counterfeit detection subsystem (70) might successfully identify the corresponding face detected to be the face of the authorized user. Thus, the system (10) further includes a second level counterfeit detection subsystem (90) operable by the one or more processors (30). The second level counterfeit detection subsystem (90) is operatively coupled to the first level counterfeit detection subsystem (70). The second level counterfeit detection subsystem (90) compares temperature of the entity in front of the visual media capturing unit (50) with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem (70).
The temperature is sensed via a temperature sensing unit (100). In one embodiment, the temperature sensing unit (100) includes an Infrared (IR) thermal sensor. As used herein, the term “IR thermal sensor” is defined as a sensor used to measure the temperature of an object placed at a distance without touching the corresponding object. A typical IR thermal sensor may use a lens to focus light on an object, which reflects back and falls onto a detector, wherein the detector may be called a thermopile, which absorbs the reflected IR radiation and turns it into heat, and hence the temperature of the object is measured.
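Purely as an illustrative sketch, the second level comparison could be expressed as the short Python function below; the preset human temperature, the tolerance band, and the read_object_temp_c callable (standing in for whichever IR thermal sensor driver is actually used) are assumptions and not part of the disclosure.

```python
# Illustrative second level counterfeit check (90): compare the temperature
# sensed by the temperature sensing unit (100) with a preset human temperature.
PRESET_HUMAN_TEMP_C = 36.5   # assumed preset human temperature
TOLERANCE_C = 1.5            # assumed acceptable deviation

def second_level_check(read_object_temp_c):
    temperature = read_object_temp_c()   # reading supplied by the sensor driver
    return abs(temperature - PRESET_HUMAN_TEMP_C) <= TOLERANCE_C
```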
Further, in one embodiment, the fraudster may be wearing the face mask resembling the face of the authorized user, the fraudster may have successfully incorporated the preset human temperature into the entity having the face of the authorized user and placed it in front of the visual media capturing unit (50), or the like. The entity may be the model having the face of the authorized user. In such embodiment, the second level counterfeit detection subsystem (90) might successfully identify the corresponding face detected to be the face of the authorized user. Thus, the system (10) also includes a third level counterfeit detection subsystem (110) operable by the one or more processors (30). The third level counterfeit detection subsystem (110) is operatively coupled to the second level counterfeit detection subsystem (90). The third level counterfeit detection subsystem (110) detects for a presence of the one or more counterfeiting entities in front of the visual media capturing unit (50) using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem (90).
In one embodiment, the one or more counterfeiting entities include the face mask, a secondary entity holding the model having the face of the authorized user, the secondary entity holding the image poster of the face of the authorized user with the standard size of the face, one or more spoofing gadgets, or the like. Further, in one exemplary embodiment, the third level counterfeit detection subsystem (110) generates the deep-learning model which is trained to differentiate between a real face and a fake face. In one embodiment, the deep-learning model may be associated with a database (120). The database (120) may store data associated to one or more features of the real face as real face data and data associated to one or more features of the fake face as fake face data. Further, the third level counterfeit detection subsystem (110) may detect the presence of the fake face in front of the visual media capturing unit (50) using the deep-learning model based on the data stored in the corresponding database (120). In another exemplary embodiment, the third level counterfeit detection subsystem (110) generates the deep-learning model which detects for the presence of more than one object in the one or more visual media received by the visual media receiving subsystem (40). In such embodiment, the third level counterfeit detection subsystem (110) successfully rejects the face detected when an additional object is detected.
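As one possible, non-limiting sketch of the third level check, the Python snippet below runs a binary real/fake face classifier on the cropped face region; the choice of TensorFlow/Keras, the hypothetical model file "liveness_model.h5", the 96x96 input size, and the decision threshold are assumptions made solely for illustration.

```python
import cv2
import numpy as np
import tensorflow as tf  # assumed deep-learning framework, not mandated by the disclosure

# Illustrative third level counterfeit check (110): a binary classifier
# assumed to have been trained on the real face data and fake face data
# held in the database (120).
_model = tf.keras.models.load_model("liveness_model.h5")  # hypothetical model file

def third_level_check(face_crop_bgr, threshold=0.5):
    img = cv2.resize(face_crop_bgr, (96, 96)).astype(np.float32) / 255.0
    prob_real = float(_model.predict(img[np.newaxis, ...])[0][0])
    return prob_real >= threshold   # True -> no counterfeiting entity detected
```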
Further, a robotic model may be used as the entity which is placed in front of the visual media capturing unit (50). In such embodiment, the third level counterfeit detection subsystem (110) might not detect any additional object, the face mask might be mistaken to be the real face, or the like, thereby leading to a successful identification of the face of the authorized user. Thus, in one embodiment, the system (10) includes an emotion recognition subsystem (as shown in FIG. 2) operable by the one or more processors (30). The emotion recognition subsystem is operatively coupled to the third level counterfeit detection subsystem (110). The emotion recognition subsystem may generate a notification for the entity to perform at least one emotional gesture when the third level counterfeit detection subsystem (110) fails to detect the presence of the one or more counterfeiting entities. The at least one emotional gesture may be generated randomly by the emotion recognition subsystem. In one embodiment, the at least one emotional gesture includes blinking, smiling, frowning, or the like.
Further, the emotion recognition subsystem may capture an emotional gesture performed by the entity via the visual media capturing unit (50). Later, the emotion recognition subsystem may compare the emotional gesture captured with the at least one emotional gesture notified by the emotion recognition subsystem to detect for face counterfeiting.
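A minimal, non-limiting sketch of this challenge-and-compare flow is given below; the gesture pool, the notify_entity placeholder, and the recognise_gesture callable (standing in for whatever gesture classifier operates on the captured visual media) are assumptions for illustration only.

```python
import random

# Illustrative emotion recognition flow: request a randomly chosen gesture
# and compare it with the gesture recognised from the visual media captured
# while the entity responds.
GESTURES = ["blink", "smile", "frown"]   # assumed gesture pool

def notify_entity(message):
    print(message)   # placeholder for a text/voice/visual media notification

def emotion_check(recognise_gesture):
    requested = random.choice(GESTURES)
    notify_entity(f"Please perform the following gesture: {requested}")
    performed = recognise_gesture()      # label inferred from the captured media
    return performed == requested
```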
Further, in one embodiment, the system (10) also includes a voice recognition subsystem (as shown in FIG. 2) operable by the one or more processors (30). The voice recognition subsystem may generate a notification for the entity to pronounce at least one word when the emotional gesture performed matches with the at least one emotional gesture notified by the emotion recognition subsystem, wherein the at least one word is randomly generated by the voice recognition subsystem.
Further, the voice recognition subsystem may record a voice of the entity uttering a word via a voice recording unit. Later, the voice recognition subsystem may compare the voice recorded, and the word uttered by the entity with a pre-stored authorized user voice recording, and the at least one word requested by the voice recognition subsystem respectively to detect for the face counterfeiting. In one embodiment, the notification generated includes the notification in a form of one of a text message, a voice message, a visual media message, or the like via the authorized user interface associated to the device (20).
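Again purely for illustration, the voice recognition step could be sketched as follows; the word pool, the record_voice, transcribe, and voice_similarity callables, and the similarity threshold are assumptions standing in for whichever recording, speech-to-text, and speaker-verification components are actually used.

```python
import random

# Illustrative voice recognition flow: request a randomly generated word, then
# compare both the uttered word and the voice itself with the enrolment data.
WORDS = ["mango", "river", "copper", "lantern"]   # assumed random-word pool

def voice_check(record_voice, transcribe, voice_similarity,
                enrolled_recording, similarity_threshold=0.8):
    requested = random.choice(WORDS)
    print(f"Please pronounce the word: {requested}")   # notification placeholder
    audio = record_voice()
    word_ok = transcribe(audio).strip().lower() == requested
    voice_ok = voice_similarity(audio, enrolled_recording) >= similarity_threshold
    return word_ok and voice_ok
```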
Further, the authorized user may have to be notified about the face counterfeiting activity. Thus, the system (10) also includes an alert generation subsystem (130) operable by the one or more processors (30). The alert generation subsystem (130) generates an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
In one embodiment, the alert generation subsystem (130) also generates the alert when the emotional gesture performed mismatches with the at least one emotional gesture requested by the emotion recognition subsystem. In another embodiment, the alert generation subsystem (130) also generates the alert when one of the voice recorded uttering the word mismatches with the pre-stored authorized user voice recording, the word uttered by the entity mismatches with the at least one word requested by the voice recognition subsystem, or a combination thereof. In one exemplary embodiment, the alert generated includes an audio alert, a visual alert, a text message, and the like.
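Tying the illustrative sketches above together, a hypothetical top-level routine could invoke the checks in order and raise an alert on the first mismatch; the send_alert placeholder stands in for the audio, visual, or text alert generated by the alert generation subsystem (130), and all other functions are the assumption-laden sketches introduced earlier in this description.

```python
# Illustrative orchestration of the checks sketched above together with the
# alert generation subsystem (130); not a definitive implementation.
def send_alert(reason):
    print(f"ALERT: possible face counterfeiting - {reason}")

def authenticate(frame_bgr, distance_m, read_object_temp_c):
    rect = detect_single_face(frame_bgr)
    if rect is None:
        send_alert("count of faces detected is not unity")
        return False
    if not first_level_check(rect, distance_m):
        send_alert("dimension calibration mismatches the pre-defined threshold")
        return False
    if not second_level_check(read_object_temp_c):
        send_alert("temperature mismatches the preset human temperature")
        return False
    x, y, w, h = rect
    if not third_level_check(frame_bgr[y:y + h, x:x + w]):
        send_alert("presence of a counterfeiting entity detected")
        return False
    return True
```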
In one exemplary embodiment, the system (10) includes a power supply unit, wherein the power supply unit supplies power to the device (20), the one or more processors (30), the visual media capturing unit (50), the distance measuring unit (80), the temperature sensing unit (100), and the like for proper functioning. The power supply unit may include a battery.
FIG. 2 is a block diagram representation of an exemplary embodiment of the system (10) for the face anti-counterfeiting of FIG. 1 in accordance with an embodiment of the present disclosure. Suppose the authorized user (140) has a locker (150) at his residence (160) to safely store precious accessories. The authorized user (140) uses the system (10) deployed in a Raspberry Pi based device (170). The system (10) includes a Raspberry Pi based processor (180). Thus, the authorized user (140) registers to the system (10) via the registration subsystem (190) of the system (10) upon providing the plurality of authorized user details via the authorized user interface associated with the Raspberry Pi based device (170). A fraudster might have made a robotic model having the face resembling the authorized user (140). So, the fraudster might break into the corresponding residence (160) and somehow find the locker (150).
Initially, the Raspberry Pi camera (200) captures the one or more visual media of the environment in front of the Raspberry Pi camera (200) and the visual media receiving subsystem (40) of the system (10) receives the one or more visual media captured. Further, the system (10) detects the presence of one or more faces in the one or more visual media received by the face detection subsystem (60) of the system (10). The face detection subsystem (60) detects a single face and hence generates the one or more rectangular frames of the face detected.
Further, the system (10) measures the distance between the entity and the Raspberry Pi camera (200) via the first level counterfeit detection subsystem (70) of the system (10) using the laser ranging sensor (210). Later, the system (10) calculates the one or more dimensions of the one or more rectangular frames, performs the dimension calibration and compares with the pre-defined calibration threshold value via the first level counterfeit detection subsystem (70). However, since the robotic model is used, the dimension calibration matches with the pre-defined calibration threshold value.
Further, the system (10) measures the temperature of the entity in front of the Raspberry Pi camera (200) using the IR thermal sensor (220) via the second level counterfeit detection subsystem (90) of the system (10). Later, the temperature measured is compared with the preset human temperature. However, since the fraudster had incorporated the preset human temperature into the robotic model, the temperature measured matches with the preset human temperature.
Further, the system (10) detects the presence of the one or more counterfeiting entities using the deep-learning model via the third level counterfeit detection subsystem (110) of the system. However, since the fraudster is controlling the robotic model from a distance, the one or more counterfeiting entities are not detected.
Further, the system (10) generates the notification for the entity to perform the blinking gesture via the emotion recognition subsystem (230) of the system. Later, the robotic model performs a gesture upon receiving the corresponding notification, which is recorded and compared with the blinking gesture requested by the emotion recognition subsystem (230). However, since the robotic model was advanced enough to perform the blinking gesture, the gestures match.
Lastly, the system (10) generates the notification for the entity to pronounce a random word via the voice recognition subsystem (235) of the system (10). The robotic model has a memory with limited storage, as it is not advanced enough to have a huge memory for storing a huge amount of data. Thus, when the random word which the robotic model is supposed to pronounce is absent from the memory of the robotic model, the robotic model may not utter any word, and after a pre-defined time period the unlocking process might re-start.
Suppose the robotic model pronounces the corresponding random word; however, the voice texture might not match the voice texture of the pre-stored authorized user voice recording, as it is difficult to generate highly accurate audio. Thus, at this stage the fraudster fails to unlock the locker (150), the alert is sent to the authorized user (140) regarding the theft activity taking place at his residence (160), and the authorized user (140) can take immediate actions to catch the fraudster.
Furthermore, the Raspberry Pi based device (170), the Raspberry Pi based processor (180), the Raspberry Pi camera (200), the laser ranging sensor (210), and the IR thermal sensor (220) are substantially similar to the device (20), the one or more processors (30), the visual media capturing unit (50), the distance measuring unit (80), and the temperature sensing unit (100) of FIG. 1.
FIG. 3 is a block diagram of a face anti-counterfeiting computer or a face anti-counterfeiting server (240) in accordance with an embodiment of the present disclosure. The face anti-counterfeiting server (240) includes processor(s) (250), and a memory (260) coupled to a bus (270). As used herein, the processor(s) (250) and the memory (260) are substantially similar to the system (10) of FIG. 1. Here, the memory (260) is located in a local storage device.
The processor(s) (250), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
Computer memory elements may include any suitable memory device(s) for storing data and executable program, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards and the like. Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. Executable program stored on any of the above-mentioned storage media may be executable by the processor(s) (250).
The memory (260) includes a plurality of subsystems stored in the form of executable program which instructs the processor(s) (250) to perform the method steps illustrated in FIG. 4. The memory (260) has the following subsystems: a visual media receiving subsystem (40), a face detection subsystem (60), a first level counterfeit detection subsystem (70), a second level counterfeit detection subsystem (90), a third level counterfeit detection subsystem (110), and an alert generation subsystem (130).
The visual media receiving subsystem (40) is configured to receive one or more visual media captured via a visual media capturing unit (50) in real-time, wherein the one or more visual media captured comprise one or more images or one or more videos of an environment in front of the visual media capturing unit (50).
The face detection subsystem (60) is configured to detect for a presence of one or more faces in the one or more visual media received using one or more image processing techniques. The face detection subsystem (60) is also configured to count the one or more faces detected in the one or more visual media received. The face detection subsystem (60) is also configured to generate one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity.
The first level counterfeit detection subsystem (70) is configured to measure a distance between an entity in front of the visual media capturing unit (50) and the visual media capturing unit (50) using a distance measuring unit (80) upon generation of the one or more rectangular frames by the face detection subsystem (60), wherein the entity comprises one of the one or more faces detected by the face detection subsystem (60). The first level counterfeit detection subsystem (70) is also configured to calculate one or more dimensions of the one or more rectangular frames generated by the face detection subsystem (60). The first level counterfeit detection subsystem (70) is also configured to perform a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit (50). The first level counterfeit detection subsystem (70) is also configured to compare the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value.
The second level counterfeit detection subsystem (90) is configured to compare temperature of the entity in front of the visual media capturing unit (50) with a preset human temperature when the dimension calibration performed matches with the predefined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem (70), wherein the temperature is sensed via a temperature sensing unit (100).
The third level counterfeit detection subsystem (110) is configured to detect for a presence of one or more counterfeiting entities in front of the visual media capturing unit (50) using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem (90).
The alert generation subsystem (130) is configured to generate an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
FIG. 4 is a flow chart representing steps involved in a method (280) for face anti-counterfeiting in accordance with an embodiment of the present disclosure. The method (280) includes receiving one or more visual media captured via a visual media capturing unit in real-time in step 290. In one embodiment, receiving the one or more visual media captured via the visual media capturing unit in real-time includes receiving the one or more visual media captured via the visual media capturing unit in real-time by a visual media receiving subsystem (40).
In one exemplary embodiment, the method (280) also includes registering an authorized user on a centralized platform upon receiving a plurality of authorized user details via an authorized user interface associated with the device (20). In such embodiment, registering the authorized user on the centralized platform includes registering the authorized user on the centralized platform by a registration subsystem.
The method (280) also includes detecting for a presence of one or more faces in the one or more visual media received using one or more image processing techniques in step 300. In one embodiment, detecting for the presence of the one or more faces in the one or more visual media received includes detecting for the presence of the one or more faces in the one or more visual media received by a face detection subsystem (60). In such embodiment, receiving the one or more visual media captured includes receiving one or more images or one or more videos of an environment in front of the visual media capturing unit.
Furthermore, the method (280) includes counting the one or more faces detected in the one or more visual media received in step 310. In one embodiment, counting the one or more faces detected in the one or more visual media received includes counting the one or more faces detected in the one or more visual media received by the face detection subsystem (60).
Furthermore, the method (280) also includes generating one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity in step 320. In one embodiment, generating the one or more rectangular frames of one of the one or more faces detected includes generating the one or more rectangular frames of one of the one or more faces by the face detection subsystem (60).
Furthermore, the method (280) also includes measuring a distance between an entity in front of the visual media capturing unit and the visual media capturing unit using a distance measuring unit upon generation of the one or more rectangular frames by the face detection subsystem, wherein the entity comprises one of the one or more faces detected by the face detection subsystem in step 330. In one embodiment, measuring the distance between the entity in front of the visual media capturing unit and the visual media capturing unit includes measuring the distance between the entity in front of the visual media capturing unit and the visual media capturing unit by a first level counterfeit detection subsystem (70).
Furthermore, the method (280) also includes calculating one or more dimensions of the one or more rectangular frames generated by the face detection subsystem in step 340. In one embodiment, calculating the one or more dimensions of the one or more rectangular frames includes calculating the one or more dimensions of the one or more rectangular frames by the first level counterfeit detection subsystem (70).
Furthermore, the method (280) also includes performing a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit in step 350. In one embodiment, performing the dimension calibration on the one or more dimensions includes performing the dimension calibration on the one or more dimensions by the first level counterfeit detection subsystem (70).
Furthermore, the method (280) also includes comparing the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value in step 360. In one embodiment, comparing the dimension calibration performed with the pre-defined calibration threshold value includes comparing the dimension calibration performed with the pre-defined calibration threshold value by the first level counterfeit detection subsystem (70).
Furthermore, the method (280) also includes comparing a temperature of the entity in front of the visual media capturing unit with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem, wherein the temperature is sensed via a temperature sensing unit in step 370. In one embodiment, comparing the temperature of the entity with the preset human temperature includes comparing the temperature of the entity with the preset human temperature by a second level counterfeit detection subsystem (90).
Furthermore, the method (280) also includes detecting for a presence of one or more counterfeiting entities in front of the visual media capturing unit using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem in step 380. In one embodiment, detecting for the presence of the one or more counterfeiting entities includes detecting for the presence of the one or more counterfeiting entities by a third level counterfeit detection subsystem (110).
Furthermore, the method (280) also includes generating an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof in step 390. In one embodiment, generating the alert includes generating the alert by an alert generation subsystem (130).
Various embodiments of the present disclosure enable an easy and cost-effective implementation of the system for face anti-counterfeiting, thereby enabling people from small sectors to afford the system. Further, the system provides better accuracy because of the usage of the Ultrasonic distance measuring sensor. Also, the system finds application in all kinds of access control zones, attendance systems, wireless gate opening systems, high security at banks and lockers, and the like.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.

Claims

I/WE CLAIM:
1. A system (10) deployed in a device (20) for face anti-counterfeiting, wherein the system (10) comprises: one or more processors (30); a visual media receiving subsystem (40) operable by the one or more processors (30), wherein the visual media receiving subsystem (40) is configured to receive one or more visual media captured via a visual media capturing unit (50) in real-time, wherein the one or more visual media captured comprise one or more images or one or more videos of an environment in front of the visual media capturing unit (50); a face detection subsystem (60) operable by the one or more processors (30), wherein the face detection subsystem (60) is configured to: detect for a presence of one or more faces in the one or more visual media received using one or more image processing techniques; count the one or more faces detected in the one or more visual media received; and generate one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity; a first level counterfeit detection subsystem (70) operable by the one or more processors (30), wherein the first level counterfeit detection subsystem (70) is configured to: measure a distance between an entity in front of the visual media capturing unit (50) and the visual media capturing unit (50) using a distance measuring unit (80) upon generation of the one or more rectangular frames by the face detection subsystem (60), wherein the entity comprises one of the one or more faces detected by the face detection subsystem (60); calculate one or more dimensions of the one or more rectangular frames generated by the face detection subsystem (60); perform a dimension calibration on the one or more dimensions calculated in accordance with the distance measured between the corresponding entity and the visual media capturing unit (50); and compare the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value; a second level counterfeit detection subsystem (90) operable by the one or more processors (30), wherein the second level counterfeit detection subsystem (90) is configured to compare temperature of the entity in front of the visual media capturing unit (50) with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem (70), wherein the temperature is sensed via a temperature sensing unit (100); a third level counterfeit detection subsystem (110) operable by the one or more processors (30), wherein the third level counterfeit detection subsystem (110) is configured to detect for a presence of one or more counterfeiting entities in front of the visual media capturing unit (50) using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem (90); and an alert generation subsystem (130) operable by the one or more processors (30), wherein the alert generation subsystem (130) is configured to generate an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof.
2. The system (10) as claimed in claim 1, wherein the visual media capturing unit (50) comprises a normal camera, a Raspberry Pi camera, or a webcam.
3. The system (10) as claimed in claim 1, comprises a registration subsystem (190) operable by the one or more processors (30), the registration subsystem (190) configured to register an authorized user (140) on a centralized platform upon receiving a plurality of authorized user details.
4. The system (10) as claimed in claim 2, wherein the plurality of authorized user details comprises an authorized username, a contact number, e-mail ID, one or more authorized user images, and an authorized user voice recording.
5. The system (10) as claimed in claim 1, comprises an emotion recognition subsystem (230) operable by the one or more processors (30), the emotion recognition subsystem (230) is configured to: generate a notification for the entity to perform at least one emotional gesture when the third level counterfeit detection subsystem (110) fails to detect the presence of the one or more counterfeiting entities, wherein the at least one emotional gesture is generated randomly by the emotion recognition subsystem (230); capture an emotional gesture performed by the entity via the visual media capturing unit (50); and compare the emotional gesture captured with the at least one emotional gesture notified by the emotion recognition subsystem (230) to detect for face counterfeiting.
6. The system (10) as claimed in claim 5, comprises a voice recognition subsystem (235) operable by the one or more processors (30), the voice recognition subsystem (235) configured to: generate a notification for the entity to pronounce at least one word when the emotional gesture performed matches with the at least one emotional gesture notified by the emotion recognition subsystem (230), wherein the at least one word is randomly generated by the voice recognition subsystem (235); record a voice of the entity uttering a word via a voice recording unit; and compare the voice recorded, and the word uttered by the entity with a prestored authorized user voice recording, and the at least one word requested by the voice recognition subsystem (235) respectively to detect for the face counterfeiting.
7. The system (10) as claimed in claim 5 and claim 6, wherein the notification generated comprises the notification in a form of one of a text message, a voice message, or a visual media message via an authorized user interface associated to the device (20).
8. A method (280) for face anti-counterfeiting, wherein the method (280) comprises: receiving, by a visual media receiving subsystem (40), one or more visual media captured via a visual media capturing unit in real-time; (290) detecting, by a face detection subsystem (60), for a presence of one or more faces in the one or more visual media received using one or more image processing techniques; (300) counting, by the face detection subsystem (60), the one or more faces detected in the one or more visual media received; (310) generating, by the face detection subsystem (60), one or more rectangular frames of one of the one or more faces detected in the one or more visual media received when a count of the one or more faces detected in the visual media received is unity; (320) measuring, by a first level counterfeit detection subsystem (70), a distance between an entity in front of the visual media capturing unit and the visual media capturing unit using a distance measuring unit upon generation of the one or more rectangular frames by the face detection subsystem, wherein the entity comprises one of the one or more faces detected by the face detection subsystem; (330) calculating, by the first level counterfeit detection subsystem (70), one or more dimensions of the one or more rectangular frames generated by the face detection subsystem; (340) performing, by the first level counterfeit detection subsystem (70), a dimension calibration on the one or more dimensions calculated in accordance to the distance measured between the corresponding entity and the visual media capturing unit; and (350) comparing, by the first level counterfeit detection subsystem (70), the dimension calibration performed on the corresponding one or more dimensions with a pre-defined calibration threshold value; (360) comparing, by a second level counterfeit detection subsystem (90), a temperature of the entity in front of the visual media capturing unit with a preset human temperature when the dimension calibration performed matches with the pre-defined calibration threshold value upon performing the comparison by the first level counterfeit detection subsystem, wherein the temperature is sensed via a temperature sensing unit; (370) detecting, by a third level counterfeit detection subsystem (110), for a presence of one or more counterfeiting entities in front of the visual media capturing unit using a deep-learning model when the temperature sensed matches with the preset human temperature upon performing the comparison by the second level counterfeit detection subsystem; and (380) generating, by an alert generation subsystem (130), an alert when one of the count of the one or more faces detected is greater than unity, the dimension calibration performed mismatches with the pre-defined calibration threshold value, the temperature sensed mismatches with the preset human temperature, the presence of the one or more counterfeiting entities is detected, or a combination thereof (390).
9. The method (280) as claimed in claim 8, wherein receiving the one or more visual media captured comprises receiving one or more images or one or more videos of an environment in front of the visual media capturing unit.
10. The method (280) as claimed in claim 8, comprises registering, by a registration subsystem (190), an authorized user on a centralized platform upon receiving a plurality of authorized user details.
PCT/IB2021/059565 2020-10-21 2021-10-18 System and method for face anti-counterfeiting WO2022084824A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041045873 2020-10-21
IN202041045873 2020-10-21

Publications (1)

Publication Number Publication Date
WO2022084824A1 true WO2022084824A1 (en) 2022-04-28

Family

ID=81289719

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/059565 WO2022084824A1 (en) 2020-10-21 2021-10-18 System and method for face anti-counterfeiting

Country Status (1)

Country Link
WO (1) WO2022084824A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190073547A1 (en) * 2010-06-07 2019-03-07 Affectiva, Inc. Personal emotional profile generation for vehicle manipulation
WO2013131407A1 (en) * 2012-03-08 2013-09-12 无锡中科奥森科技有限公司 Double verification face anti-counterfeiting method and device
CN109522798A (en) * 2018-10-16 2019-03-26 平安科技(深圳)有限公司 Video anticounterfeiting method, system, device based on vivo identification and can storage medium


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21882270

Country of ref document: EP

Kind code of ref document: A1