CN112906610A - Method for living body detection, electronic circuit, electronic apparatus, and medium - Google Patents

Method for living body detection, electronic circuit, electronic apparatus, and medium

Info

Publication number
CN112906610A
Authority
CN
China
Prior art keywords
image
detected
illumination
living body
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110245618.4A
Other languages
Chinese (zh)
Inventor
张晓琳
赵亚峰
周骥
冯歆鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NextVPU Shanghai Co Ltd
Original Assignee
NextVPU Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NextVPU Shanghai Co Ltd filed Critical NextVPU Shanghai Co Ltd
Priority to CN202110245618.4A priority Critical patent/CN112906610A/en
Publication of CN112906610A publication Critical patent/CN112906610A/en
Priority to PCT/CN2022/078053 priority patent/WO2022183992A1/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141Control of illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

A method, an electronic circuit, an electronic apparatus, and a medium for living body detection are provided. The method comprises: controlling an illumination device to illuminate based on a current illumination mode, and controlling an image acquisition device to acquire an image of an object to be detected while the illumination is performed; determining a predicted illumination mode based on the image of the object to be detected; and determining that the object to be detected passes living body detection at least in response to determining that the predicted illumination mode and the current illumination mode are consistent. With the method provided by the embodiments of the present disclosure, the living body prediction result can be supervised using the prediction result for the illumination mode, thereby improving the accuracy of living body prediction.

Description

Method for living body detection, electronic circuit, electronic apparatus, and medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method, an electronic circuit, an electronic device, and a medium for living body detection.
Background
Face recognition can be achieved by means of image processing, and it can be determined in various ways that the recognized face belongs to a living body, thereby avoiding fraudulent acts such as photo attacks.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a method for living body detection, including: controlling an illumination device to illuminate based on a current illumination mode, and controlling an image acquisition device to acquire an image of an object to be detected while the illumination is performed; determining a predicted illumination mode based on the image of the object to be detected; and determining that the object to be detected passes living body detection at least in response to determining that the predicted illumination mode and the current illumination mode are consistent.
According to another aspect of the present disclosure, there is provided an electronic circuit comprising: circuitry configured to perform the steps of the above-described method.
According to another aspect of the present disclosure, there is also provided an electronic device including: a processor; and a memory storing a program comprising instructions which, when executed by the processor, cause the processor to perform the method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing a program, the program comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the above-described method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program realizes the above-mentioned method when executed by a processor.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure;
FIG. 2 shows a schematic flow diagram of a method for living body detection according to an embodiment of the present disclosure;
FIG. 3 shows a schematic flow chart diagram of a process for identification according to an embodiment of the present disclosure;
FIG. 4 shows a schematic block diagram of an apparatus for living body detection according to an embodiment of the present disclosure;
FIG. 5 shows a schematic view of a detection apparatus according to an embodiment of the present disclosure;
FIG. 6 shows another schematic diagram of a detection apparatus according to an embodiment of the present disclosure;
fig. 7 shows a schematic view of an installation position of a first lighting unit in a lighting device according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a portion of a light field distribution of a detection apparatus according to the present disclosure;
FIGS. 9A-9C show a schematic diagram of an arrangement of a detection device according to an embodiment of the present disclosure;
FIGS. 10A-10B show another schematic diagram of an arrangement of a detection device according to an embodiment of the present disclosure;
FIG. 11 shows yet another schematic diagram of an arrangement of a detection device according to an embodiment of the present disclosure;
FIGS. 12-14 illustrate exemplary block diagrams of detection devices according to embodiments of the present disclosure;
FIG. 15 shows a schematic block diagram of a detection device according to an embodiment of the present disclosure;
FIG. 16 shows one example of a liveness detection process according to an embodiment of the present disclosure;
FIG. 17 shows one example of an identification process according to an embodiment of the present disclosure;
fig. 18 shows one example of a face registration process according to an embodiment of the present disclosure;
fig. 19 shows another example of a face registration process according to an embodiment of the present disclosure;
fig. 20 is a block diagram illustrating an example of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
With artificial-intelligence-based methods, an image of an object to be detected (such as a face image) can be processed for identity recognition. In some cases, however, an identity recognition system may have difficulty determining whether it has detected a living body or merely an image presented by a photograph or a video. To improve the security of identity recognition systems, there is a need to determine whether the object to be detected is a living body.
In some cases, a depth camera or a binocular camera may be employed to acquire depth information of the object to be detected, so as to determine whether the object to be detected is a planar two-dimensional object or a stereoscopic three-dimensional object. Further, the object to be detected may be required to perform corresponding actions (such as blinking or opening the mouth) as instructed, so as to further prevent a three-dimensional dummy model from being mistaken for a real human subject by the identity recognition system.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more terminal devices 101, a server 120, and one or more communication networks 110 coupling the one or more terminal devices to the server 120. Terminal device 101 may be configured to execute one or more applications.
In an embodiment of the present disclosure, the server 120 may run one or more services or software applications that enable the execution of the method for liveness detection according to the present disclosure. In some embodiments, one or more services or software applications of the method for liveness detection according to the present disclosure may also be run using the terminal device 101. In some implementations, the terminal device 101 may be implemented as an access control device, a payment device, or the like.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of the terminal devices 101 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating terminal device 101 may, in turn, utilize one or more terminal applications to interact with server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The terminal device may provide an interface that enables a user of the terminal device to interact with the terminal device. The terminal device may also output information to the user via the interface. Although fig. 1 depicts only one terminal device, those skilled in the art will appreciate that any number of terminal devices may be supported by the present disclosure.
The terminal device 101 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS); or include various Mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The terminal device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some embodiments, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the terminal devices 101. The server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of the terminal device 101.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host employing artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 shows a schematic flow diagram of a method for living body detection according to an embodiment of the present disclosure. The method illustrated in fig. 2 may be performed by the terminal device 101 or the server 120 illustrated in fig. 1. The terminal device may comprise an illumination device and an image acquisition device. The image acquisition device may be configured to acquire an image of an object to be detected for living body detection, and the illumination device may be configured to illuminate the object to be detected.
As shown in fig. 2, in step S202, the illumination device may be controlled to illuminate based on the current illumination mode, and the image capturing device may be controlled to capture an image of the object to be detected while illuminating.
In some embodiments, the illumination device may be a light emitting diode capable of emitting visible light. In other embodiments, the illumination device may be an infrared illumination device capable of emitting infrared light. It is to be understood that the lighting device may also be a lighting device capable of emitting visible light and infrared light simultaneously or selectively. In the case where the image pickup device includes an infrared camera and the illumination device includes an infrared illumination device, infrared information of the object to be detected can be picked up to assist the living body detection.
In some embodiments, the illumination device may include a first illumination unit located at one side of the image capture device and a second illumination unit located at another side of the image capture device. Wherein the first illumination unit and the second illumination unit may be symmetrically arranged with respect to the image capturing device. In some implementations, the first illumination unit can be configured to illuminate from the left side of the object to be detected and the second illumination unit can be configured to illuminate from the right side of the object to be detected. For example, the first illumination unit may be disposed at a position for illuminating a left face of the object to be detected, and the second illumination unit may be disposed at a position for illuminating a right face of the object to be detected. In further implementations, the first illumination unit may be configured to illuminate from above the object to be detected and the second illumination unit may be configured to illuminate from below the object to be detected. For example, the first illumination unit may be disposed at a position for illuminating an upper half portion of the face of the object to be detected, and the second illumination unit may be disposed at a position for illuminating a lower half portion of the face of the object to be detected. It is understood that the first and second lighting units may be disposed at different positions according to actual situations by those skilled in the art.
In some implementations, in a case where the image capture device is unable to capture depth information, the stereoscopic object to be detected may be identified by controlling the illumination device to illuminate in different ways. Taking the example where the illumination apparatus includes the first illumination unit for illuminating from the left side of the object to be detected and the second illumination unit for illuminating from the right side of the object to be detected, it is possible to capture an image when only the left side of the object to be detected is illuminated and an image when only the right side of the object to be detected is illuminated by controlling the illumination apparatus. Since images acquired with illumination from the left and right sides, respectively, have a brightness difference for a stereoscopic three-dimensional object, it is possible to determine whether the object to be detected is a living body by detecting such an image having a brightness difference even without depth information.
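As a purely illustrative, non-limiting sketch of this idea (not part of the claimed method), the following Python snippet compares the left-half and right-half brightness of two grayscale face crops captured under left-only and right-only illumination; the function name and the threshold value are assumptions introduced here for illustration only:

    import numpy as np

    def has_lighting_asymmetry(img_left_lit: np.ndarray,
                               img_right_lit: np.ndarray,
                               threshold: float = 10.0) -> bool:
        """Illustrative check: a three-dimensional face lit from one side shows a
        left/right brightness imbalance that flips sign when the lit side flips,
        whereas a flat print tends to show a much weaker imbalance.
        Inputs are grayscale face crops of shape (H, W)."""
        def lr_imbalance(img: np.ndarray) -> float:
            h, w = img.shape[:2]
            left = float(img[:, : w // 2].mean())
            right = float(img[:, w // 2:].mean())
            return left - right

        d_left = lr_imbalance(img_left_lit)    # expected positive for a 3D face lit from the left
        d_right = lr_imbalance(img_right_lit)  # expected negative for a 3D face lit from the right
        # The imbalance should change sign and be large enough in both frames.
        return d_left > threshold and d_right < -threshold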
The illumination pattern is used to indicate the particular manner in which the illumination device is illuminated. In some embodiments, the illumination pattern may indicate a duration of time for which the illumination device is turned on or off, such as 0.5, 1, or 2 seconds of continuous illumination or 0.5, 1, or 2 seconds of remaining off. The specific value of the duration can be set by a person skilled in the art according to the actual situation. In other embodiments, where the lighting device includes a plurality of lighting units, the lighting pattern may indicate that some or all of the plurality of lighting units are illuminated. In still other embodiments, the lighting pattern may indicate both the lighting unit that is illuminated and the duration for which the lighting unit is illuminated.
Some examples of lighting patterns are shown in table 1, and in the example shown in table 1, the lighting device includes a first lighting unit and a second lighting unit. Wherein the first illumination unit may be located at one side of the image capturing device and the second illumination unit may be located at the other side of the image capturing device.
TABLE 1
[Table 1 is provided as an image in the original publication and lists example illumination patterns of the illumination device. From the surrounding description, illumination pattern 1 corresponds to the first illumination unit illuminating alone, illumination pattern 2 to the second illumination unit illuminating alone, and illumination pattern 3 to the first and second illumination units illuminating simultaneously; illumination patterns 4 and 5 are further combinations that are not recoverable from the extracted text.]
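For concreteness, the illumination patterns recoverable from the surrounding text can be represented as simple data records; the following Python sketch is illustrative only, and the class name, field names, and default 1-second duration are assumptions rather than definitions from the disclosure:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IlluminationMode:
        """Illustrative record of one illumination pattern: which units are lit and for how long."""
        first_unit_on: bool
        second_unit_on: bool
        duration_s: float = 1.0

    # Patterns 1-3 as recoverable from the description; patterns 4 and 5 are
    # not specified in the extracted text and are therefore omitted here.
    MODE_1 = IlluminationMode(first_unit_on=True,  second_unit_on=False)  # first unit only
    MODE_2 = IlluminationMode(first_unit_on=False, second_unit_on=True)   # second unit only
    MODE_3 = IlluminationMode(first_unit_on=True,  second_unit_on=True)   # both units simultaneously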
In some embodiments, the current illumination pattern may include a sequence of illumination patterns for multiple illuminations. The lighting device may be controlled to illuminate a plurality of times based on the sequence of illumination patterns.
The sequence of illumination patterns may comprise a sequence of one or more illumination patterns. Table 2 shows examples of illumination pattern sequences formed using the illumination patterns shown in Table 1, where each illumination pattern sequence comprises at least one of the illumination patterns shown in Table 1.
TABLE 2
Serial number    Illumination pattern sequence
1    Illumination pattern 1, illumination pattern 2, and illumination pattern 3
2    Illumination pattern 1, illumination pattern 3, and illumination pattern 2
3    Illumination pattern 3, illumination pattern 1, and illumination pattern 2
4    Illumination pattern 3, illumination pattern 2, and illumination pattern 1
5    Illumination pattern 2, illumination pattern 1, and illumination pattern 3
6    Illumination pattern 2, illumination pattern 1, and illumination pattern 3
7    Illumination pattern 1, illumination pattern 2, and illumination pattern 3
8    Illumination pattern 1, illumination pattern 2, illumination pattern 4, illumination pattern 5, and illumination pattern 3
Taking the sequence with serial number 1 in Table 2 as an example, the illumination pattern sequence may include the first illumination unit illuminating, the second illumination unit illuminating, and both illumination units illuminating simultaneously.
It will be appreciated that Table 2 shows only some examples of illumination pattern sequences formed using the illumination patterns provided in Table 1. A skilled person can construct different illumination pattern sequences from the illumination patterns shown in Table 1 according to the actual situation. Further, the number of random illumination pattern sequences is not limited to the 8 shown in Table 2; one skilled in the art can set more or fewer illumination pattern sequences depending on the actual situation.
In some embodiments, the sequence of illumination patterns may be a random sequence of illumination patterns. A random lighting pattern sequence for the lighting device may be determined as the current lighting pattern from a plurality of lighting pattern sequences set in advance. For example, a random number may be generated and a sequence of lighting patterns corresponding to the generated random number is selected as the random sequence of lighting patterns for the lighting device.
Taking Table 2 as an example, a random number may be generated in the range of 1 to 8, and the illumination pattern sequence corresponding to the generated random number may be selected as the random illumination pattern sequence for the illumination device.
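A minimal sketch of this selection step, reusing the illustrative IlluminationMode records above and assuming the preset sequences are stored in a dictionary keyed by serial number:

    import random

    # Preset illumination pattern sequences (serial numbers 1-8 in Table 2);
    # only the first two are spelled out here for brevity.
    PRESET_SEQUENCES = {
        1: [MODE_1, MODE_2, MODE_3],
        2: [MODE_1, MODE_3, MODE_2],
        # ... sequences 3-8 ...
    }

    def pick_random_sequence(sequences: dict) -> list:
        """Generate a random serial number and return the corresponding preset sequence."""
        serial = random.choice(list(sequences.keys()))
        return sequences[serial]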
Taking the case where the current illumination pattern is the illumination pattern sequence with serial number 1 in Table 2 as an example, the illumination device may be controlled to illuminate in such a manner that the first illumination unit is illuminated for 1 second, the second illumination unit is illuminated for 1 second, and the first illumination unit and the second illumination unit are illuminated simultaneously for 1 second.
The image acquisition device can be controlled to acquire images of the object to be detected under the different illumination modes while the illumination device is controlled to illuminate. Taking the case where the determined illumination pattern sequence is the sequence with serial number 1 in Table 2 as an example, the image acquisition device may be controlled to capture images while the first illumination unit is illuminated for 1 second, while the second illumination unit is illuminated for 1 second, and while the first and second illumination units are illuminated simultaneously for 1 second, so as to obtain an image sequence of the object to be detected.
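The control-and-capture step can then be sketched as a loop over the selected sequence; light_units() and capture_frame() are hypothetical callbacks standing in for the illumination device and camera drivers, which the disclosure does not specify:

    import time

    def acquire_image_sequence(sequence, light_units, capture_frame):
        """Illustrative capture loop: for each illumination pattern, switch the
        illumination units accordingly and grab one frame while the pattern is active.
        `light_units(first_on, second_on)` and `capture_frame()` are hypothetical
        callbacks wrapping the illumination device and the image acquisition device."""
        frames = []
        for mode in sequence:
            light_units(mode.first_unit_on, mode.second_unit_on)
            time.sleep(mode.duration_s / 2)   # let the scene illumination settle
            frames.append(capture_frame())    # image acquired while illuminated
            time.sleep(mode.duration_s / 2)
        light_units(False, False)             # turn illumination off afterwards
        return frames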
In step S204, a predicted illumination pattern may be determined based on the image of the object to be detected acquired in step S202.
In some embodiments, the image of the object to be detected may be classified to derive a predicted illumination pattern. In some implementations, the image of the object to be detected can be input into a first neural network model trained in advance for image classification. The first neural network model is trained to predict the illumination pattern used when the input image was acquired, and outputs a classification result indicating the predicted illumination pattern. Taking Table 1 as an example, the first neural network model may classify the image of the object to be detected and output that the image belongs to class 3, indicating that the predicted illumination pattern of the image is illumination pattern 3 in Table 1.
In the case where the current illumination mode includes an illumination pattern sequence for a plurality of illuminations, the acquired image of the object to be detected may include an image sequence formed of images respectively acquired at the plurality of illuminations. The image sequence may be input into the pre-trained first neural network model to derive a predicted illumination pattern. Taking Table 2 as an example, the first neural network model may classify the image sequence of the object to be detected and output that the image sequence belongs to class 1, indicating that the predicted illumination pattern of the image sequence is the illumination pattern sequence with serial number 1 in Table 2.
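As an illustrative sketch of this inference step (assuming a PyTorch-style classifier whose output classes correspond to the preset illumination pattern sequences; the actual model architecture and input format are not specified by the disclosure):

    import torch

    def predict_illumination_sequence(model: torch.nn.Module,
                                      image_sequence: torch.Tensor) -> int:
        """Illustrative inference step: `image_sequence` is a batch of stacked frames
        in whatever layout the (unspecified) model expects, e.g. (1, N*C, H, W).
        The returned index is mapped to a serial number in Table 2 (an assumption)."""
        model.eval()
        with torch.no_grad():
            logits = model(image_sequence)                  # shape (1, num_sequences)
            return int(logits.argmax(dim=1).item()) + 1     # classes 0..7 -> serial 1..8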
In step S206, it is determined that the object to be detected passes live body detection at least in response to determining that the predicted illumination pattern and the current illumination pattern coincide.
In some embodiments, the image of the object to be detected acquired in step S202 may be subjected to image classification to obtain a living body prediction result of the object to be detected. Wherein the living body prediction result indicates that the object to be detected is a living body or that the object to be detected is a non-living body. In some implementations, the image of the object to be detected may be input into a pre-trained second neural network model for image classification. The second neural network model is trained to predict whether an object to be detected present in the input image is a living body, and outputs a classification result indicating whether the object to be detected is a living body or whether the object to be detected is a non-living body.
In some examples, the first and second neural network models described above may be implemented as two branches of the same prediction network. The prediction network may comprise a backbone network and a first output module and a second output module connected to the backbone network. For example, an image (sequence) of the object to be detected may be input into the prediction network. The image (sequence) of the object to be detected is processed by the backbone network to obtain image features of the object to be detected. The image features may be processed by the first output module to derive the predicted illumination pattern. Meanwhile, the image features may be processed by the second output module to obtain the living body prediction result. In some examples, the first output module and the second output module may each be implemented with a fully connected layer.
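One possible way to realize such a shared-backbone, two-head prediction network is sketched below in PyTorch; the backbone choice, layer sizes, and input channel count are illustrative assumptions rather than values given in the disclosure:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class DualHeadLivenessNet(nn.Module):
        """Illustrative shared-backbone network with two fully connected heads:
        one predicting the illumination pattern (sequence) class and one predicting
        living body vs. non-living body."""
        def __init__(self, num_illumination_classes: int = 8, in_channels: int = 9):
            super().__init__()
            backbone = models.resnet18(weights=None)          # assumed backbone
            # Assume three RGB frames stacked along the channel dimension (9 channels).
            backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                                       stride=2, padding=3, bias=False)
            backbone.fc = nn.Identity()                       # expose 512-d features
            self.backbone = backbone
            self.illumination_head = nn.Linear(512, num_illumination_classes)
            self.liveness_head = nn.Linear(512, 2)            # living / non-living

        def forward(self, x: torch.Tensor):
            features = self.backbone(x)                       # shared image features
            return self.illumination_head(features), self.liveness_head(features)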
With the above procedure, the classification result of the predicted illumination pattern and the classification result indicating whether or not the object to be detected is a living body can be obtained by performing the image classification operation once on the image of the object to be detected.
In other examples, the second neural network model and the first neural network model may be different models. For example, the classification result of the predicted illumination pattern and the classification result indicating whether the object to be detected is a living body may be obtained by different image classification operations, respectively.
In some embodiments, in response to the living body prediction result indicating that the object to be detected is a living body, and in response to determining that the predicted illumination pattern obtained in step S204 and the current illumination pattern for controlling illumination in step S202 coincide, it may be determined that the object to be detected passes the living body detection.
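The final decision of this embodiment can be expressed as a simple conjunction; the helper below is an illustrative sketch only:

    def passes_liveness_detection(predicted_sequence_serial: int,
                                  current_sequence_serial: int,
                                  liveness_result_is_live: bool) -> bool:
        """The object passes living body detection only if (a) the living body
        prediction indicates a living body and (b) the predicted illumination
        pattern matches the illumination pattern actually used during acquisition."""
        return liveness_result_is_live and (
            predicted_sequence_serial == current_sequence_serial)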
With the method for living body detection provided by the embodiments of the present disclosure, the conditions for the object to be detected to pass living body detection include not only that the image classification result indicates that the object to be detected is a living body, but also that the predicted illumination pattern determined based on the image of the object to be detected is consistent with the current illumination pattern actually used when the image was acquired. It is understood that a result indicating whether the object to be detected is a living body based on image classification is not 100% correct. In some cases, because the quality of the acquired image used for living body detection is low, the living body prediction result may be inconsistent with the real situation. For example, a prediction result indicating a non-living body may be output for an object that is actually a living body, and a prediction result indicating a living body may be output for an object that is not a living body.
In order to improve the accuracy of living body detection, the present disclosure supervises the living body prediction result using the determination of whether the predicted illumination pattern, determined based on the image (sequence) of the object to be detected, is consistent with the current illumination pattern. If the predicted illumination pattern and the current illumination pattern are consistent, the image quality of the acquired image may be considered acceptable, and the living body prediction based on such an image is therefore reliable. If they are inconsistent, the image quality of the acquired image may be considered too low to yield a correct illumination pattern prediction, and the living body prediction based on such a low-quality image is therefore unreliable. In the latter case, the object cannot pass living body detection even if the living body prediction result indicates that it is a living body. Therefore, during model training and actual use, the illumination pattern prediction result supervises the living body detection result, making the living body detection result more accurate.
Fig. 3 shows a schematic flow chart of a process for identification according to an embodiment of the present disclosure. The method illustrated in fig. 3 may be performed by the terminal device 101 or the server 120 illustrated in fig. 1. The terminal device may comprise an illumination device and an image acquisition device. The image acquisition device may be configured to acquire an image of an object to be detected for living body detection, and the illumination device may be configured to illuminate the object to be detected. The illumination device comprises a first illumination unit located at one side of the image acquisition device and a second illumination unit located at the other side of the image acquisition device.
As shown in fig. 3, the method 300 begins at step S301.
In step S302, a random lighting pattern sequence for the lighting device may be determined. The random illumination pattern sequence determined in step S302 includes at least an illumination pattern in which both sides of the object to be detected are illuminated simultaneously by the first illumination unit and the second illumination unit. For example, the random illumination pattern sequence may be a sequence comprising: the first lighting unit illuminates; the second lighting unit lights; and both sides are illuminated simultaneously.
In step S304, the illumination device may be controlled to illuminate based on the random illumination pattern sequence determined in step S302, and the image capturing device may be controlled to capture an image of the object to be detected while illuminating. The image of the object to be detected may be a face image sequence of the object to be detected.
In step S306, the predicted illumination pattern and the living body prediction result of the object to be detected may be determined based on the image sequence acquired in step S304.
Steps S304 to S306 shown in fig. 3 can be implemented by using steps S202 to S204 described in conjunction with fig. 2, and will not be described again here.
In step S308, it may be determined whether the predicted lighting pattern obtained in step S306 and the random lighting pattern sequence determined in step S302 coincide.
In the case where it is determined in step S308 that the predicted illumination pattern and the random illumination pattern sequence do not coincide, this time the identification process fails. It is possible to return to step S301 to start a new identification process.
In the event that a determination is made in step S308 that the predicted lighting pattern and the random lighting pattern sequence are consistent, the method 300 may proceed to step S310.
In step S310, the result of the living body prediction of the object to be detected obtained in step S306 may be acquired. Wherein the living body prediction result indicates that the object to be detected is a living body or that the object to be detected is a non-living body.
In the case where the living body prediction result acquired in step S310 indicates that the object to be detected is a non-living body, this time the identification process fails. It is possible to return to step S301 to start a new identification process.
In the case where the living body prediction result obtained in step S310 indicates that the object to be detected is a living body, the method 300 may proceed to step S312.
In step S312, the face image acquired while both sides of the object to be detected were illuminated simultaneously may be selected from the face image sequence as the recognition image.
In step S314, the recognition image determined in step S312 may be subjected to image processing to obtain a face recognition result. In some embodiments, the trained neural network model for face recognition may be used to process the recognition image to obtain the face features of the object to be detected. By comparing the face features of the object to be detected with the face features of the plurality of identities pre-stored in the database, the identity corresponding to the object to be detected can be obtained as the face recognition result of the object to be detected.
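A minimal sketch of the comparison step, assuming the face features are fixed-length embeddings matched by cosine similarity against pre-stored identity features (the similarity metric and threshold are assumptions, as the disclosure does not specify them):

    import numpy as np

    def identify(face_feature: np.ndarray,
                 database: dict,
                 threshold: float = 0.6):
        """Compare the face feature of the object to be detected against pre-stored
        identity features and return the best-matching identity, or None if no
        similarity exceeds the (assumed) threshold."""
        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        best_id, best_score = None, threshold
        for identity, stored_feature in database.items():
            score = cosine(face_feature, stored_feature)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id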
Fig. 4 shows a schematic block diagram of an apparatus for living body detection according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus 400 for living body detection may include a control unit 410, a prediction unit 420, and a detection unit 430.
The control unit 410 may be configured to control the illumination device to illuminate based on the current illumination mode, and to control the image acquisition device to acquire an image of the object to be detected while the illumination is performed. The prediction unit 420 may be configured to determine a predicted illumination pattern based on the image of the object to be detected. The detection unit 430 may be configured to determine that the object to be detected passes living body detection at least in response to determining that the predicted illumination pattern and the current illumination pattern are consistent.
The operations of the above units 410-430 of the apparatus 400 for detecting a living body are similar to the operations of the above steps S202-S206, respectively, and will not be described again.
The apparatus for liveness detection 400 may further comprise a face recognition unit (not shown) in case the sequence of images of the object to be detected acquired by the image acquisition device is a sequence of face images of the object to be detected. The face recognition unit may be configured to determine, in the image sequence, an image acquired when both sides are illuminated simultaneously as a recognition image, and perform image processing on the recognition image to obtain a face recognition result of the object to be detected.
With the apparatus for living body detection provided by the embodiments of the present disclosure, the conditions for the object to be detected to pass living body detection include not only that the image classification result indicates that the object to be detected is a living body, but also that the predicted illumination pattern determined based on the image of the object to be detected is consistent with the current illumination pattern actually used when the image was acquired. Supervising the living body prediction result with the determination of whether the predicted illumination pattern is consistent with the current illumination pattern used when the image was acquired improves the accuracy of living body detection. If the predicted illumination pattern and the current illumination pattern are consistent, the image quality of the acquired image may be considered acceptable, and the living body prediction based on such an image is therefore reliable. If they are inconsistent, the image quality of the acquired image may be considered too low to yield a correct illumination pattern prediction, and the living body prediction based on such a low-quality image is therefore unreliable. In the latter case, the object cannot pass living body detection even if the living body prediction result indicates that it is a living body.
Exemplary methods according to the present disclosure have been described above in connection with the accompanying drawings. Exemplary embodiments utilizing the electronic circuit, electronic device, and the like of the present disclosure will be further described with reference to the accompanying drawings.
According to another aspect of the present disclosure, there is provided an electronic circuit comprising: circuitry configured to perform the steps of the methods described in this disclosure.
According to another aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the method described in this disclosure.
According to another aspect of the present disclosure, there is provided a computer readable storage medium storing a program, the program comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method described in the present disclosure.
According to another aspect of the disclosure, a computer program product is provided, comprising a computer program comprising instructions which, when executed by a processor, perform the method described in the disclosure.
The following describes a detection device structure that can be used for the terminal device for living body detection described in the present disclosure, in conjunction with fig. 5 to 15.
FIG. 5 shows a schematic diagram of a detection device according to an embodiment of the present disclosure.
As shown in fig. 5, the detection device 500 may include an image capture device 510 and an illumination device. Therein, the illumination device may include a first illumination unit 5201 located at one side of the image capture device 510 and a second illumination unit 5202 located at the other side of the image capture device 510. Where a dashed line 511 shows the optical axis of the image acquisition device 510 and a dashed line 521 shows the first optical axis of the first illumination unit 5201, where the first optical axis 521 intersects the optical axis 511 of the image acquisition device. The dashed line 522 shows a second optical axis of the second illumination unit 5202, wherein the second optical axis 522 intersects the optical axis 511 of the image acquisition device.
In some embodiments, the intersection of the first optical axis 521 of the first illumination unit 5201 and the second optical axis 522 of the second illumination unit 5202 is on the optical axis 511 of the optical system of the image capture device.
In some implementations, the first illumination unit can be configured to illuminate from the left side of the object to be detected and the second illumination unit can be configured to illuminate from the right side of the object to be detected. For example, the first illumination unit may be disposed at a position for illuminating a left face of the object to be detected, and the second illumination unit may be disposed at a position for illuminating a right face of the object to be detected. In further implementations, the first illumination unit may be configured to illuminate from above the object to be detected and the second illumination unit may be configured to illuminate from below the object to be detected. For example, the first illumination unit may be disposed at a position for illuminating an upper half portion of the face of the object to be detected, and the second illumination unit may be disposed at a position for illuminating a lower half portion of the face of the object to be detected. It is understood that the first and second lighting units may be disposed at different positions according to actual situations by those skilled in the art.
In some embodiments, the first illumination unit 5201 and the second illumination unit 5202 may be symmetrically arranged with respect to the image acquisition device 510. In some examples, the first illumination unit 5201 and the second illumination unit 5202 may have the same parameters. For example, the first illumination unit 5201 and the second illumination unit 5202 may have the same illumination range, emission wavelength, power, and the like.
With the detection apparatus shown in fig. 5, it is possible to perform live body detection by acquiring an image of an object to be detected without acquiring depth information. Taking the example that the first illumination unit illuminates the object to be detected from the left side and the second illumination unit illuminates the object to be detected from the right side, when illumination is performed only on one side, the illumination will form a shadow on the other side of the three-dimensional object to be detected. Thus, different illumination light projection directions may cause differences in light field distribution of the object to be detected. And such a difference cannot be formed on a two-dimensional object to be detected. Therefore, even without depth information, the difference between the three-dimensional object and the two-dimensional object can be reflected.
In some embodiments, the illumination device 520 may be a light emitting diode capable of emitting visible light. In other embodiments, the illumination device 520 may be an infrared illumination device capable of emitting infrared light. In this case, the image pickup apparatus may include an infrared camera to pick up an infrared image. It is to be understood that the illumination device 520 may also be an illumination device capable of emitting visible light and infrared light simultaneously or selectively. In the case where the illumination device 520 includes an infrared illumination device, infrared information of the object to be detected may be collected to assist in the liveness detection.
With the detection device provided by the embodiment of the present disclosure, by making the intersection point of the optical axis of the first illumination unit and the optical axis of the second illumination unit on the optical axis of the optical system of the image capture device, a better illumination effect can be obtained when capturing an image of an object to be detected with the image capture device 510. Because the optical axes of the lighting units positioned at the two sides are intersected at one point, the lighting effect of the first lighting unit and the second lighting unit on the object to be detected is more uniform, and the quality of the collected image of the object to be detected is higher.
Fig. 6 shows another schematic diagram of a detection device according to an embodiment of the present disclosure.
As shown in fig. 6, the detection device 600 may include an image capture device 610 and an illumination device 620. Among them, the illumination apparatus 620 may include a first illumination unit 6201 located at one side of the image capturing apparatus 610 and a second illumination unit 6202 located at the other side of the image capturing apparatus 610. Wherein the intersection of the optical axis 621 of the first illumination unit 6201 and the optical axis 622 of the second illumination unit 6202 is on the optical axis 611 of the optical system of the image capturing device.
The range of use of the image acquisition device 610 is also shown in fig. 6. Wherein the range of use of image capture device 610 may indicate an imaging area between a closest imaging plane of the image capture device and a farthest imaging plane of the image capture device.
As shown in fig. 6, the image capturing device 610 is used in a range between a first imaging plane 631 and a second imaging plane 632. Wherein the distance between the first imaging plane 631 and the image acquisition device 610 is smaller than the distance between the second imaging plane 632 and the image acquisition device 610. In the range between the first imaging plane 631 and the second imaging plane 632, the image acquisition device 610 can clearly image the object to be detected. In some embodiments, the first imaging plane 631 may be the closest imaging plane of the image acquisition device 610 and the second imaging plane 632 may be the furthest imaging plane of the image acquisition device 610.
In some embodiments, the positions of the first imaging plane 631 and the second imaging plane 632 may be determined based on the depth of field of the image acquisition device 610. For example, the distance between the first imaging plane 631 and the image acquisition device 610 may be greater than or equal to the nearest sharp imaging plane of the image acquisition device 610, and the distance between the second imaging plane 632 and the image acquisition device 610 may be less than or equal to the farthest sharp imaging plane of the image acquisition device 610.
The positions of the first imaging plane 631 and the second imaging plane 632 may further be determined based on the proportion of the acquired image occupied by the object to be detected. For example, when the object to be detected is located between the first imaging plane 631 and the second imaging plane 632, the proportion of the acquired image occupied by the image of the object to be detected is within a predetermined range. In the image acquired at the first imaging plane 631, the object to be detected occupies a predetermined maximum proportion of the image, and in the image acquired at the second imaging plane 632, the object to be detected occupies a predetermined minimum proportion of the image.
In some embodiments, the range of use of image capture device 610 may be 40cm to 80 cm. That is, the first imaging plane 631 (i.e., the closest imaging plane of the image capture device) is 40cm from the image capture device 610, and the second imaging plane 632 (i.e., the farthest imaging plane of the image capture device) is 80cm from the image capture device 610. Wherein the distance between the imaging plane and the image capturing device is a distance along an optical axis direction of the image capturing device. When the image pickup apparatus is mounted on a vertical wall surface, the distance between the imaging plane and the image pickup apparatus is a distance in the horizontal direction.
In some implementations, an intersection of the first optical axis of the first illumination unit 6201 and the second optical axis of the second illumination unit 6202 can be located on a central imaging plane of the image capture device 610, where the central imaging plane is located at a central position between the closest imaging plane 631 of the image capture device and the farthest imaging plane 632 of the image capture device.
Fig. 7 shows a schematic view of an installation position of a first lighting unit in a lighting device according to an embodiment of the present disclosure.
As shown in fig. 7, for the illumination apparatus 700, a coordinate system with the image pickup apparatus 710 as an origin may be established, in which the Y axis coincides with the optical axis of the image pickup apparatus 710 and the X axis is parallel to the imaging plane of the image pickup apparatus 710.
In the above coordinate system, the position of the first illumination unit 7201 may be determined based on the distance dis_mid between the image pickup device 710 and the intersection point of the optical axis of the first illumination unit 7201 and the optical axis 721 of the image pickup device 710. In some implementations, the value of dis_mid can be determined based on the range of use of the image pickup device 710. For example, dis_mid may be determined as the middle of the range of use of the image pickup device 710. In the case where the image pickup device 710 has a use range of 40 cm to 80 cm, dis_mid can be determined as 60 cm.
In the example shown in fig. 7, point a is an intersection of the optical axis of the first illumination unit 7201 and the optical axis of the image pickup device 710, point B is a position of the first illumination unit 7201, point C is a perpendicular point of point B on the X axis, point D is a perpendicular point of point B on the Y axis, and point E is a position of the image pickup device 710.
Equation (1) can be determined based on the proportional relationship between similar triangles Δ BCE and Δ ABD:
[Formula (1), rendered as an image in the original publication (BDA0002963977620000151), expresses the resulting proportional relationship between lightX, lightY, and dis_mid.]
Here the sign of lightX depends on whether point B is located on the positive or negative half of the X axis: lightX is positive when point B is located on the positive half of the X axis, and lightX is negative when point B is located on the negative half of the X axis.
With the value of dis_mid known, the relationship between the abscissa lightX and the ordinate lightY of point B can be determined based on formula (1). In an actual application scenario, the value of one of lightX and lightY may be specified according to actual conditions, and the value of the other may be calculated based on formula (1). For example, the distance in the X direction between the first illumination unit 7201 and the image capturing device 710, i.e., the value of lightX, may be specified according to the actual installation location of the detection device. The value of lightY may then be determined based on the specified value of lightX.
It is understood that any ray passing through point D in the plane that passes through point D and is perpendicular to the Y axis may be selected as the X axis, and those skilled in the art can select an appropriate X axis according to the actual situation. Moreover, for an X axis determined in any such manner, once the position of point B has been determined based on the method shown in fig. 7, any position obtained by rotating point B about the Y axis may be used as the installation position of the first illumination unit.
In some embodiments, within the use range of the image capture device, the illumination range when the first illumination unit and the second illumination unit illuminate simultaneously is greater than or equal to the field-of-view range of the image capture device. That is, in the region between the closest imaging plane and the farthest imaging plane of the image pickup device, the illumination range when the first illumination unit and the second illumination unit illuminate simultaneously covers the field-of-view range of the image pickup device.
Fig. 8 shows a schematic diagram of a portion of a light field distribution of a detection apparatus according to the present disclosure.
In the example shown in fig. 8, point B corresponds to the farthest distance in the use range of the image pickup apparatus, and points A and C correspond to the boundaries of the field of view (FOV) of the image pickup apparatus on the imaging plane passing through point B. Point E corresponds to the intersection of the optical axis of the first lighting unit and the optical axis of the image pickup apparatus, point F is the position of the first lighting unit, point G is the position of the image pickup apparatus, point H is the position of the second lighting unit, and point I is the intersection of the line connecting the first lighting unit and the second lighting unit with the optical axis of the image pickup apparatus. FA and FD respectively represent the boundaries of the illumination field of view of the first lighting unit, and GA and GC respectively represent the boundaries of the FOV of the image pickup apparatus. The first illumination unit and the second illumination unit are symmetrically arranged with respect to the optical axis of the image capturing device.
As can be seen from fig. 8, in order for the illumination range to be greater than or equal to the field-of-view range of the image pickup device when the first illumination unit and the second illumination unit illuminate simultaneously within the use range of the image pickup device, the illumination range of the first illumination unit shown in fig. 8 is the minimum illumination range that satisfies this condition. In the light field distribution shown in fig. 8, within the use range of the image pickup device, the illumination range of the first illumination unit is always larger than the field-of-view range of the image pickup device on the side where the first illumination unit is located. Because the second illumination unit is symmetrically arranged on the other side of the image acquisition device, its illumination range on that side is likewise always larger than the field-of-view range of the image acquisition device within the use range of the image acquisition device.
With the FOV of the image capturing device and the positions of the image capturing device and the first lighting unit known, the lighting angle of view of the first lighting unit shown in fig. 8 may be calculated based on the geometric relationship shown in fig. 8.
For the triangle AFE shown in fig. 8, cos∠AFE can be calculated based on the following equation (2):
cos∠AFE = (AF² + EF² - AE²) / (2 · AF · EF)    (2)
Taking GE = 600 mm and FH = 300 mm as an example, GH = lightY = 7.7913 mm can be calculated based on formula (1), where |lightX| = FH = 300 mm and dis_mid = GE = 600 mm in formula (1).
In the case where the FOV of the image pickup apparatus is 90° and GB is 800 mm, the coordinates of point A are determined to be (-800, 800), the coordinates of point E are determined to be (0, 600), and the coordinates of point F are determined to be (-30, 7.7913).
Substituting the coordinates of points A, E, and F into equation (2) gives the value of cos∠AFE, from which ∠AFE can be calculated to be approximately 47.08°. The illumination angle of view of the first illumination unit shown in fig. 4 can thus be found to be approximately 94.2 degrees (twice ∠AFE).
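The calculation above can be reproduced with a short Python sketch that applies the law of cosines of equation (2) to the example coordinates given in the text; doubling ∠AFE to obtain the full illumination angle of view follows the text's own computation.

```python
import math

def angle_at_f(a, e, f):
    """Angle ∠AFE in degrees, computed with the law of cosines of equation (2)."""
    af, ef, ae = math.dist(a, f), math.dist(e, f), math.dist(a, e)
    cos_afe = (af ** 2 + ef ** 2 - ae ** 2) / (2 * af * ef)
    return math.degrees(math.acos(cos_afe))

half_angle = angle_at_f((-800, 800), (0, 600), (-30, 7.7913))
print(round(half_angle, 2))       # ≈ 47.08
print(round(2 * half_angle, 1))   # ≈ 94.2, the minimum illumination angle of view
```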
It is understood that the illumination viewing angle calculated based on formula (2) corresponds to the minimum illumination range that satisfies the above condition for the first and second illumination units. In practical applications, a person skilled in the art can set the illumination range of the illumination unit to any value larger than this minimum calculated based on formula (2), according to the practical situation.
Fig. 9A-9C show a schematic diagram of an arrangement of a detection device according to an embodiment of the present disclosure.
In the example shown in fig. 9A to 9C, the image pickup device 910, the first illumination unit 9201, and the second illumination unit 9202 are mounted on the same chassis. As shown in fig. 9A, the first illumination unit 9201 is installed on the left side of the image pickup apparatus 910, and the second illumination unit 9202 is installed on the right side of the image pickup apparatus 910. As shown in fig. 9B, a first illumination unit 9201 is installed at an upper side of the image pickup apparatus 910, and a second illumination unit 9202 is installed at a lower side of the image pickup apparatus 910.
Fig. 9C shows a perspective structure of an arrangement that can be used for the detection device shown in fig. 9A and 9B. As shown in fig. 9C, the first illumination unit 9201, the image capturing apparatus 910, and the second illumination unit 9202 may be mounted on a base 930. Wherein the base 930 may be formed using a plate-shaped material. For example, a flat plate-shaped material may be appropriately bent so that the intersection point of the optical axes of the first and second illumination units 9201 and 9202 installed at both sides is located on the optical axis of the image pickup device 910. In some examples, a flexible material may be used as the material of the base 930.
Fig. 10A-10B show another schematic diagram of an arrangement of a detection device according to an embodiment of the present disclosure.
In the example shown in fig. 10A to 10B, the image pickup apparatus 1010 is mounted on a first base 1030, the first illumination unit 10201, the second illumination unit 10202 are mounted on a second base 1040, and the first base and the second base are assembled in such a manner that an intersection point of an optical axis of the first illumination unit 10201 and an optical axis of the second illumination unit 10202 is on an optical axis of an optical system of the image pickup apparatus 1010.
As shown in fig. 10A, the first illumination unit 10201 is located at the left side of the image pickup apparatus 1010, and the second illumination unit 10202 is located at the right side of the image pickup apparatus 1010.
As shown in fig. 10B, the first and second lighting units 10201 and 10202 are mounted on a second base 1040. The second base 1040 may be formed of a bent plate-shaped material on which the first and second illumination units 10201 and 10202 are mounted such that optical axes of the first and second illumination units 10201 and 10202 intersect at one point. Further, the image pickup device 1010 is mounted on the first base 1030. The first base 1030 and the second base 1040 are assembled in such a manner that an intersection point of an optical axis of the first illumination unit 10201 and an optical axis of the second illumination unit 10202 is on an optical axis of an optical system of the image pickup apparatus 1010. For example, the image pickup apparatus 1010 may be installed in the middle of the first illumination unit 10201 and the second illumination unit 10202 by a cutout portion formed on the second base 1040, as illustrated in fig. 10B.
Fig. 11 shows yet another schematic diagram of an arrangement of a detection device according to an embodiment of the present disclosure.
As shown in fig. 11, the illumination apparatus further includes a first auxiliary illumination unit 11203 positioned at one side of the image capture apparatus 1110 and a second auxiliary illumination unit 11204 positioned at the other side of the image capture apparatus 1110. Wherein the first and second auxiliary lighting units 11203 and 11204 and the first and second lighting units 11201 and 11202 may have the same parameters. Although only two first auxiliary lighting units 11203 and two second auxiliary lighting units 11204 are shown in fig. 11, a greater or lesser number of first auxiliary lighting units and second auxiliary lighting units may be provided by those skilled in the art, depending on the actual situation.
Fig. 12-14 illustrate exemplary block diagrams of detection devices according to embodiments of the disclosure.
In the exemplary structure shown in fig. 12, the optical axis of the first auxiliary lighting unit 12203 is parallel to the optical axis of the first lighting unit 12201, and the optical axis of the second auxiliary lighting unit 12204 is parallel to the optical axis of the second lighting unit 12202. The image capturing apparatus 1210, the first illumination unit 12201, the second illumination unit 12202, the first auxiliary illumination unit 12203, and the second auxiliary illumination unit 12204 are mounted on the same base 1230. Wherein, an intersection point of optical axes of the first and second illumination units 12201 and 12202 installed at both sides of the image pickup apparatus 1210 is located on an optical axis of an optical system of the image pickup apparatus 1210.
In the exemplary structure shown in fig. 13, the optical axis of the first auxiliary lighting unit 13203 is parallel to the optical axis of the first lighting unit 13201, and the optical axis of the second auxiliary lighting unit 13204 is parallel to the optical axis of the second lighting unit 13202. Wherein the image capturing apparatus 1310 is mounted on the first base 1330, and the first illumination unit 13201, the second illumination unit 13202, the first auxiliary illumination unit 13203, and the second auxiliary illumination unit 13204 are mounted on the second base 1340. The first base 1330 and the second base 1340 are assembled in such a manner that an intersection point of the optical axis of the first illumination unit 13201 and the optical axis of the second illumination unit 13202 is on the optical axis of the optical system of the image pickup device 1310. For example, as shown in fig. 13, the image capturing device 1310 may be installed between the first illumination unit 13201 and the second illumination unit 13202 by forming a cutout on the second base 1340.
In the exemplary configuration shown in fig. 14, the intersection point of the optical axis of the first auxiliary illumination unit 14203 and the optical axis of the second auxiliary illumination unit 14204 in the detection apparatus 1400 is also on the optical axis of the optical system of the image pickup apparatus. In some embodiments, the intersection point of the optical axis of the first illumination unit 14201 and the optical axis of the second illumination unit 14202 in the detection apparatus 1400 is on the optical axis of the optical system of the image capturing apparatus 1410, and the optical axis of the first auxiliary illumination unit 14203 and the optical axis of the second auxiliary illumination unit 14204 also pass through this intersection point. In the structure shown in fig. 14, the first illumination unit 14201, the second illumination unit 14202, the first auxiliary illumination unit 14203, and the second auxiliary illumination unit 14204 are installed on a spherical-cap-shaped base such that their optical axes meet at the same point. In some embodiments, the image capturing device 1410 may be mounted at the center of the spherical-cap base, and the first lighting unit 14201, the at least one first auxiliary lighting unit 14203, the second lighting unit 14202, and the at least one second auxiliary lighting unit 14204 may be symmetrically arranged with respect to the image capturing device 1410.
The structure shown in fig. 14 is merely an exemplary illustration, and the grid lines shown in fig. 14 may not be included in a structure of a practical application.
By adding a larger number of auxiliary lighting units, a better lighting effect can be obtained.
Fig. 15 shows a schematic block diagram of a detection device according to an embodiment of the present disclosure.
As shown in fig. 15, the detection device 1500 may include an image acquisition device 1510, an illumination device 1520, and a processor 1530. The image capturing device 1510 may be configured to capture an image of an object to be detected, and the illumination device 1520 may be configured to illuminate the object to be detected. Wherein the illumination device 1520 may include a first illumination unit located at one side of the image capturing device and a second illumination unit located at the other side of the image capturing device. The structures of the image capturing device 1510 and the illumination device 1520 shown in fig. 15 may be implemented in connection with the embodiments described in fig. 5 to 14, and will not be described again.
The processor 1530 may be configured to control the illumination device to illuminate, and control the image acquisition device to acquire an image to be detected of the object to be detected while illuminating, and to perform image processing on the image to be detected to determine whether the object to be detected is a living body.
As shown in fig. 15, the processor 1530 may include a sequence generation module 1531, a light control module 1532, and an exposure control module 1533.
The sequence generation module 1531 may be configured to generate a sequence of illumination patterns for the lighting device. The sequence of illumination patterns may be a sequence formed by a combination of the following: the first illumination unit illuminates alone; the second illumination unit illuminates alone; and both sides are illuminated simultaneously.
In some embodiments, the sequence of illumination patterns may be randomly generated.
The light control module 1532 may be configured to control the first lighting unit and the second lighting unit to illuminate based on the sequence of lighting patterns generated by the sequence generation module 1531. While illuminated, the exposure control module 1533 may be configured to control the image capture device 1510 to capture a sequence of images of the object to be inspected.
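A minimal Python sketch of how the sequence generation, light control, and exposure control modules could cooperate is given below; the driver objects (camera, first_unit, second_unit), the pattern labels, and all function names are hypothetical and only illustrate the described control flow.

```python
import random

# Hypothetical pattern labels; the actual encoding of illumination patterns is not
# specified in the original disclosure.
PATTERNS = ("first_only", "second_only", "both")

def generate_illumination_sequence(length: int = 6) -> list:
    """Sequence generation module: a randomly generated illumination pattern sequence."""
    return [random.choice(PATTERNS) for _ in range(length)]

def acquire_sequence(camera, first_unit, second_unit, sequence) -> list:
    """Light control + exposure control: illuminate per pattern and expose while lit."""
    images = []
    for pattern in sequence:
        first_unit.set_on(pattern in ("first_only", "both"))
        second_unit.set_on(pattern in ("second_only", "both"))
        images.append(camera.capture())      # exposure synchronized with illumination
        first_unit.set_on(False)
        second_unit.set_on(False)
    return images
```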
In some embodiments, the processor 1530 may further include an image classification module (not shown), and the image classification module may be configured to perform image classification on the image to be detected acquired by the image acquisition device to obtain a living body prediction result of the object to be detected, wherein the living body prediction result indicates that the object to be detected is a living body or that the object to be detected is a non-living body.
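One way such an image classification module could be structured, consistent with the backbone-plus-two-output-module arrangement described in aspect 5 below, is sketched here in PyTorch; the layer sizes, the backbone choice, and the single-channel (infrared) input are assumptions made for illustration only.

```python
import torch.nn as nn

class LivenessClassifier(nn.Module):
    """Sketch: a shared backbone with an illumination-pattern head and a liveness head."""
    def __init__(self, num_patterns: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(                      # stand-in for any CNN backbone
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pattern_head = nn.Linear(32, num_patterns)     # first output module
        self.liveness_head = nn.Linear(32, 2)               # second output module

    def forward(self, x):
        features = self.backbone(x)
        return self.pattern_head(features), self.liveness_head(features)
```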
Fig. 16 shows one example of a living body detection process according to an embodiment of the present disclosure.
As shown in fig. 16, at 1601, a sequence of images of an object to be detected is acquired by an image acquisition device. Wherein the sequence of images may be images acquired under illumination conditions controlled according to the sequence of illumination patterns. The sequence of acquired images can be input to a face detection module at 1602. The face detection module may process the acquired sequence of images and output a face box shown at 1603 corresponding to each image in the sequence of images. Where multiple faces are included in an image, the face detection module may detect multiple face frames in the image.
The N detected face frames for each image (where N is an integer greater than or equal to 1) may be input to the face frame decision module shown at 1604. For each image in the image sequence, the face frame decision module may determine the largest of the N face frames as the face frame 1605 of the image.
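The face frame decision step can be illustrated with a one-line Python helper; the (x, y, w, h) frame format is an assumption.

```python
def select_largest_face(face_frames):
    """Return the face frame with the largest area, assuming (x, y, w, h) frames."""
    return max(face_frames, key=lambda f: f[2] * f[3]) if face_frames else None
```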
The image sequence 1601 can be cropped based on the face frame 1605 using a face cropping module shown at 1606 to obtain a face sequence 1607. The face sequence 1607 may be processed with the liveness detection module at 1608 to obtain an illumination pattern sequence prediction result and a liveness detection result for the image sequence.
When different light fields are projected onto the same target object, the acquired images reflect the form of the light field. Thus, the sequence of illumination patterns used in acquiring the sequence of images may be predicted based on the sequence of images. The living body prediction result obtained using the process of fig. 16 is considered reliable only if the predicted illumination pattern sequence coincides with the actual illumination pattern sequence used when the images were acquired.
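The resulting decision rule can be summarized by the hedged sketch below: the detection passes only when the predicted illumination pattern sequence matches the sequence actually used and the liveness prediction indicates a living body. The per-image liveness scores and the score threshold are assumed representations of the liveness prediction result, not details taken from the original disclosure.

```python
def liveness_decision(predicted_patterns, actual_patterns, liveness_scores,
                      threshold: float = 0.5) -> bool:
    """Pass only if the predicted pattern sequence matches the one actually used
    and every frame's liveness score indicates a living body."""
    patterns_match = list(predicted_patterns) == list(actual_patterns)
    is_live = all(score >= threshold for score in liveness_scores)
    return patterns_match and is_live
```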
Fig. 17 illustrates one example of an identification process according to an embodiment of the present disclosure.
As shown in fig. 17, at 1701, a sequence of images of an object to be detected is acquired by an image acquisition device. Wherein the sequence of images may be images acquired under illumination conditions controlled according to the sequence of illumination patterns. The sequence of acquired images may be input to a face detection module at 1702. The face detection module may process the acquired sequence of images and output a face box corresponding to each image in the sequence of images shown at 1703. Where multiple faces are included in an image, the face detection module may detect multiple face frames in the image.
The N detected face frames for each image (where N is an integer greater than or equal to 1) may be input to the face frame decision module shown at 1704. For each image in the image sequence, the face frame decision module may determine the largest of the N face frames as the face frame 1705 of the image.
Based on the random lighting sequence 1706 used when the image sequence 1701 is captured, an extraction module shown at 1707 may extract from the image sequence the images captured when the first illumination unit and the second illumination unit, located on the two sides of the illumination apparatus, illuminate simultaneously, and use them as recognition images. Face information 1708 in the recognition images may then be determined based on the face frame 1705, where the face information may include the face frame and the face image in the recognition images.
The face information 1708 may be processed to derive keypoints 1710 therein using the keypoint detection module shown at 1709. Using the face alignment module shown at 1711, alignment may be performed based on the face image in the face information 1708 and the keypoints 1710 to obtain an aligned face 1712. The accuracy of the face features obtained using aligned faces will be higher.
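A common way to realize such an alignment step — not necessarily the one used in the original disclosure — is to estimate a similarity transform from the detected keypoints to a canonical five-point template and warp the face crop accordingly, as in the OpenCV-based sketch below; the template coordinates and the 112×112 output size are conventional assumptions.

```python
import cv2
import numpy as np

# Conventional 112x112 five-point template (eye centers, nose tip, mouth corners);
# assumed for illustration, not taken from the original disclosure.
TEMPLATE_112 = np.array([[38.29, 51.70], [73.53, 51.50], [56.03, 71.74],
                         [41.55, 92.37], [70.73, 92.20]], dtype=np.float32)

def align_face(face_image: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """Warp the face crop so its five keypoints match the canonical template."""
    matrix, _ = cv2.estimateAffinePartial2D(keypoints.astype(np.float32), TEMPLATE_112)
    return cv2.warpAffine(face_image, matrix, (112, 112))
```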
The aligned face 1712 may be processed using a face coding module shown at 1713 to obtain a face code 1714 representing the identity of the object to be detected. By comparing the face code 1714 with a plurality of face codes pre-stored in the database, the identity information of the object to be detected can be obtained.
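The comparison against pre-stored codes can be sketched as a cosine-similarity search; the similarity threshold and the dictionary-based database are illustrative assumptions.

```python
import numpy as np

def identify(face_code: np.ndarray, database: dict, threshold: float = 0.5):
    """Return the best-matching identity if its cosine similarity exceeds the threshold."""
    query = face_code / np.linalg.norm(face_code)
    best_id, best_score = None, -1.0
    for identity, stored in database.items():
        score = float(np.dot(query, stored / np.linalg.norm(stored)))
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```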
Fig. 18 shows one example of a face registration process according to an embodiment of the present disclosure.
As shown in fig. 18, at 1801, a sequence of images of an object to be detected is acquired by an image acquisition device. Wherein the sequence of images may be images acquired under illumination conditions controlled according to the sequence of illumination patterns. The acquired image sequence may be input 1802 to a face detection module. The face detection module may process the acquired sequence of images and output a face box corresponding to each image in the sequence of images as shown at 1803. Where multiple faces are included in an image, the face detection module may detect multiple face frames in the image.
The N detected face frames for each image (where N is an integer greater than or equal to 1) may be input to the face frame decision module shown at 1804. For each image in the image sequence, the face frame decision module may determine the largest of the N face frames as the face frame 1805 of the image.
Based on the random lighting sequence 1806 used when acquiring the image sequence 1801, an extraction module shown at 1807 may extract from the image sequence the image acquired when the first lighting unit and the second lighting unit, located on the two sides of the lighting device, illuminate simultaneously, and use it as a recognition image. Face information 1808 in the recognition image may then be determined based on the face frame 1805, where the face information may include the face frame and the face image in the recognition image.
The face information 1808 may be processed to obtain keypoints 1810 therein using a keypoint detection module shown at 1809.
The keypoints 1810 can be processed by a face quality control module, shown at 1815, to derive quality information 1816 for the identified image. The quality information may include, but is not limited to, whether the expression, the occluded ratio, the head angle, and the illumination condition of the object to be detected satisfy a predetermined quality determination condition.
In the case where the quality information 1816 indicates that the quality of the face image of the object to be detected in the recognition image is not acceptable, the current registration attempt is ended. The method may then proceed to 1817 to start a new round of image acquisition, or the entire face registration process may be terminated.
In the case where the quality information 1816 indicates that the quality of the face image of the object to be detected in the recognition image is acceptable, the method proceeds to 1811, where a face alignment module may be used to perform alignment based on the face image in the face information 1808 and the keypoints 1810 to obtain an aligned face 1812. The accuracy of the face features obtained using aligned faces will be higher.
Using a face coding module shown at 1813, the aligned face 1812 may be processed to obtain a face code 1814 representing the identity of the object to be detected. The face code 1814 corresponding to the object to be detected and the identity information of the object to be detected may be stored in association in a database to complete the registration.
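The quality gate and the final storing step can be summarized by the sketch below; the plain-dictionary registry and the boolean quality flag are simplifications of the modules described above, assumed for illustration.

```python
def register(identity_info: str, face_code, quality_ok: bool, registry: dict) -> bool:
    """Store the face code with its identity only if the quality check passed."""
    if not quality_ok:        # expression / occlusion / head angle / illumination check failed
        return False          # the current registration attempt ends
    registry[identity_info] = face_code
    return True
```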
Fig. 19 shows another example of a face registration process according to an embodiment of the present disclosure.
As shown in fig. 19, a plurality of pieces of person information and the face codes corresponding to the respective person information are stored in the existing registry 1901. After the face code of the object to be detected is obtained using the process in fig. 18, it may be determined as the face code to be put in storage 1902 shown in fig. 19. Using the code comparison module shown at 1903, the face code 1902 to be put in storage can be compared with the codes in the existing registry 1901 to obtain a comparison score list 1904.
Using the database deduplication module at 1905, it may be determined, based on the comparison score list 1904, whether the face code 1902 to be put in storage corresponds to a duplicate identity (ID) or a new ID. For example, when a comparison score higher than a predetermined score threshold exists in the comparison score list, the existing registry may be considered to already contain a code highly similar to the face code to be put in storage. This may be because the information of the object to be detected has already been entered into the existing registry, or because the existing registry happens to contain a face code similar to that of the object to be detected. If the face code of the object to be detected were entered in this case, recognition errors could be caused in future face recognition processes.
In the case where the face code to be put in storage is determined to belong to a new ID 1907, the database update module at 1909 may be used to record the face code to be put in storage 1902 and the person information associated with it into the existing registry 1901.
In the case where the face code to be put in storage is determined to belong to a duplicate ID 1906, the duplicate ID processing module at 1908 may handle it by re-registration 1910, registration rejection 1911, or manual intervention 1912.
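The duplicate-ID decision can be sketched as follows; the compare function, the score threshold, and the dictionary registry are illustrative stand-ins for the code comparison module, the predetermined score threshold, and the existing registry.

```python
def try_enroll(new_code, person_info, registry: dict, compare,
               score_threshold: float = 0.5) -> str:
    """Enter the code as a new ID unless any comparison score exceeds the threshold."""
    scores = [compare(new_code, stored) for stored in registry.values()]
    if scores and max(scores) >= score_threshold:
        return "duplicate_id"   # handled by re-registration, rejection, or manual intervention
    registry[person_info] = new_code
    return "new_id"
```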
Fig. 20 is a block diagram illustrating an example of an electronic device according to an exemplary embodiment of the present disclosure. It is noted that the structure shown in fig. 20 is only one example, and the electronic device of the present disclosure may include only one or more of the constituent parts shown in fig. 20 according to a specific implementation.
The electronic device 2000 may be, for example, a general purpose computer (e.g., various computers such as a laptop computer, a tablet computer, etc.), a mobile phone, a personal digital assistant. According to some embodiments, the electronic device 2000 may be a vision-impaired auxiliary device. The electronic device 2000 may include a camera, an illumination device, and an electronic circuit for living body detection. Wherein the camera may be configured to acquire images, the illumination device may be configured to illuminate an object to be detected, and the electronic circuitry may be configured to perform the method for in vivo detection described in connection with fig. 2, 3.
According to some embodiments, the electronic device 2000 may be configured to include a spectacle frame, or be configured to be detachably mountable to a spectacle frame (e.g., a rim of the spectacle frame, a connector connecting two rims, a temple, or any other part), so as to be able to capture an image approximately covering the field of view of a user.
According to some embodiments, the electronic device 2000 may also be mounted to or integrated with other wearable devices. The wearable device may be, for example: a head-mounted device (e.g., a helmet or hat, etc.), an ear-wearable device, etc. According to some embodiments, the electronic device may be implemented as an accessory attachable to a wearable device, for example as an accessory attachable to a helmet or cap, or the like.
According to some embodiments, the electronic device 2000 may also have other forms. For example, the electronic device 2000 may be a mobile phone, a general purpose computing device (e.g., a laptop computer, a tablet computer, etc.), a personal digital assistant, and so forth. The electronic device 2000 may also have a base so as to be able to be placed on a table top.
The electronic device 2000 may include a camera 2004 for acquiring images. The video camera 2004 may include, but is not limited to, a webcam or a camera, etc. The electronic device 2000 may further comprise a text recognition circuit 2005, the text recognition circuit 2005 being configured to perform text detection and/or recognition (e.g. OCR processing) on text contained in the image, thereby obtaining text data. The character recognition circuit 2005 can be realized by a dedicated chip, for example. The electronic device 2000 may further include a voice conversion circuit 2006, the voice conversion circuit 2006 configured to convert the text data into voice data. The sound conversion circuit 2006 may be realized by a dedicated chip, for example. The electronic device 2000 may further include a voice output circuit 2007, the voice output circuit 2007 configured to output the voice data. The sound output circuit 2007 may include, but is not limited to, an earphone, a speaker, a vibrator, or the like, and its corresponding driving circuit.
The electronic device 2000 may further comprise a living body detection circuitry (electronic circuitry) 2100, the living body detection circuitry (electronic circuitry) 2100 comprising circuitry configured to perform the steps of the method for living body detection as previously described (e.g. the method steps shown in the flowcharts of fig. 1, 3, 5).
According to some embodiments, the electronic device 2000 may further include image processing circuitry 2008, and the image processing circuitry 2008 may include circuitry configured to perform various image processing on the image. The image processing circuitry 2008 may include, for example, but not limited to, one or more of the following: circuitry configured to reduce noise in an image, circuitry configured to deblur an image, circuitry configured to geometrically correct an image, circuitry configured to feature extract an image, circuitry configured to detect and/or identify objects in an image, circuitry configured to detect words contained in an image, circuitry configured to extract lines of text from an image, circuitry configured to extract coordinates of words from an image, circuitry configured to extract object boxes from an image, circuitry configured to extract text boxes from an image, circuitry configured to perform layout analysis (e.g., paragraph segmentation) based on an image, and so forth.
According to some embodiments, electronic device 2000 may further include word processing circuitry 2009, which word processing circuitry 2009 may be configured to perform various processing based on extracted information relating to a word (e.g., word data, text box, paragraph coordinates, text line coordinates, word coordinates, etc.) to obtain processing results such as paragraph ordering, word semantic analysis, layout analysis results, and so forth.
One or more of the various circuits described above (e.g., word recognition circuit 2005, voice conversion circuit 2006, voice output circuit 2007, image processing circuit 2008, word processing circuit 2009, liveness detection circuit (electronic circuit) 2100) may use custom hardware, and/or may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, one or more of the various circuits described above can be implemented by programming hardware (e.g., programmable logic circuits including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or hardware programming language (such as VERILOG, VHDL, C++) using logic and algorithms according to the present disclosure.
According to some embodiments, electronic device 2000 may also include communications circuitry 2010, which communications circuitry 2010 may be any type of device or system that enables communication with an external device and/or with a network and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as a Bluetooth device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
According to some embodiments, the electronic device 2000 may also include an input device 2011, which may be any type of device capable of inputting information to the electronic device 2000, and may include, but is not limited to, various sensors, mice, keyboards, touch screens, buttons, levers, microphones, and/or remote controls, among others.
According to some embodiments, the electronic device 2000 may also include an output device 2012, which may be any type of device capable of presenting information and may include, but is not limited to, a display, a visual output terminal, a vibrator, and/or a printer, among others. Although according to some embodiments the electronic device 2000 is used as a vision-impaired auxiliary device, a vision-based output device may still facilitate a user's family members or service personnel, etc., in obtaining output information from the electronic device 2000.
According to some embodiments, the electronic device 2000 may further comprise a processor 2001. The processor 2001 may be any type of processor and may include, but is not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., special purpose processing chips). The processor 2001 may be, for example but not limited to, a central processing unit (CPU) or a microprocessor (MPU). The electronic device 2000 may also include a working memory 2002, which may store programs (including instructions) and/or data (e.g., images, text, sound, and other intermediate data) useful for the operation of the processor 2001, and may include, but is not limited to, a random access memory and/or a read only memory device. The electronic device 2000 may also include a storage device 2003, which may include any non-transitory storage device for data storage, including but not limited to a disk drive, an optical storage device, a solid state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, an optical disk or any other optical medium, a ROM (read only memory), a RAM (random access memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code. The working memory 2002 and the storage device 2003 may be collectively referred to as "memory" and may be used concurrently with each other in some cases.
According to some embodiments, the processor 2001 may control and schedule at least one of the camera 2004, the text recognition circuit 2005, the sound conversion circuit 2006, the sound output circuit 2007, the image processing circuit 2008, the text processing circuit 2009, the communication circuit 2010, the living body detection circuit (electronic circuit) 2100, and other various devices and circuits included in the electronic apparatus 2000. According to some embodiments, at least some of the various components described in fig. 20 may be interconnected and/or in communication by a bus 2013.
Software elements (programs) may reside in the working memory 2002 including, but not limited to, an operating system 2002a, one or more application programs 2002b, drivers, and/or other data and code.
According to some embodiments, instructions for performing the aforementioned control and scheduling may be included in the operating system 2002a or one or more application programs 2002 b.
According to some embodiments, instructions to perform method steps described in the present disclosure (e.g., the method steps shown in the flowcharts of fig. 2, 3) may be included in one or more application programs 2002b, and the various modules of the electronic device 2000 described above may be implemented by the processor 2001 reading and executing the instructions of the one or more application programs 2002 b. In other words, the electronic device 2000 may comprise a processor 2001 as well as a memory (e.g. working memory 2002 and/or storage device 2003) storing a program comprising instructions which, when executed by the processor 2001, cause the processor 2001 to perform a method according to various embodiments of the present disclosure.
According to some embodiments, some or all of the operations performed by at least one of the text recognition circuit 2005, the sound conversion circuit 2006, the image processing circuit 2008, the text processing circuit 2009, and the living body detection circuit (electronic circuit) 2100 may be implemented by the processor 2001 reading and executing instructions of one or more application programs 2002b.
Executable code or source code of instructions of the software elements (programs) may be stored in a non-transitory computer readable storage medium, such as the storage device 2003, and may be stored in the working memory 2002 (possibly compiled and/or installed) upon execution. Accordingly, the present disclosure provides a computer readable storage medium storing a program comprising instructions that, when executed by a processor of an electronic device (e.g., a vision-impaired auxiliary device), cause the electronic device to perform a method as described in various embodiments of the present disclosure. According to another embodiment, the executable code or source code of the instructions of the software elements (programs) may also be downloaded from a remote location.
It will also be appreciated that various modifications may be made in accordance with specific requirements. For example, customized hardware might also be used and/or individual circuits, units, modules, or elements might be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the circuits, units, modules, or elements encompassed by the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuitry including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or hardware programming language such as VERILOG, VHDL, C++, using logic and algorithms in accordance with the present disclosure.
The processor 2001 in the electronic device 2000 may be distributed over a network according to some embodiments. For example, some processing may be performed using one processor while other processing may be performed by another processor remote from the first. Other modules of the electronic device 2000 may also be similarly distributed. As such, the electronic device 2000 may be interpreted as a distributed computing system that performs processing at multiple locations.
Some exemplary aspects of the disclosure are described below.
Aspect 1. A method for living body detection, comprising:
controlling the lighting equipment to illuminate based on the current illumination mode, and controlling the image acquisition equipment to acquire an image of the object to be detected while illuminating;
determining a predicted illumination mode based on the image of the object to be detected; and
determining that the object to be detected passes live detection at least in response to determining that the predicted illumination pattern and the current illumination pattern are consistent.
Aspect 2 the method of aspect 1, wherein the current illumination mode comprises a sequence of illumination modes for a plurality of illuminations.
Aspect 3 the method of aspect 1, wherein determining a predicted illumination pattern based on the image of the object to be detected comprises:
and carrying out image classification on the image of the object to be detected to obtain the predicted illumination mode.
Aspect 4 the method of any of aspects 1-3, further comprising:
and carrying out image classification on the image of the object to be detected to obtain a living body prediction result of the object to be detected, wherein the living body prediction result indicates that the object to be detected is a living body or the object to be detected is a non-living body.
Aspect 5 the method of aspect 4, wherein image classifying the image of the object to be detected comprises:
inputting the image of the object to be detected into a prediction network, wherein the prediction network comprises a backbone network and a first output module and a second output module which are connected with the backbone network;
processing the image of the object to be detected by using the backbone network to obtain the image characteristics of the object to be detected;
processing the image features with the first output module to obtain the predicted lighting pattern;
and processing the image characteristics by utilizing the second output module to obtain the living body prediction result.
Aspect 6 the method of aspect 4, wherein determining that the object to be detected passes live inspection, at least in response to determining that the predicted illumination pattern and the current illumination pattern are consistent, comprises:
determining that the object to be detected passes live detection in response to determining that the predicted illumination pattern and the current illumination pattern coincide and in response to the live prediction result indicating that the object to be detected is a live body.
Aspect 7 the method of aspect 1, wherein the image capture device comprises an infrared camera and the illumination device is an infrared illumination device.
Aspect 8 the method of aspect 1, wherein the illumination device comprises a first illumination unit located on one side of the image capture device and a second illumination unit located on the other side of the image capture device.
Aspect 9 the method of aspect 2, wherein the current lighting pattern is a lighting pattern sequence comprising:
the first lighting unit illuminates;
the second lighting unit illuminates; and
both sides are illuminated simultaneously.
Aspect 10 the method of aspect 9, wherein the sequence of illumination patterns is a sequence of random illumination patterns.
Aspect 11 the method of aspect 9, wherein the image of the object to be detected is a sequence of face images of the object to be detected, the method further comprising:
determining a face image collected when two sides are illuminated simultaneously in the face image sequence as an identification image; and
and carrying out image processing on the identification image to obtain a face identification result.
Aspect 12 the method of aspect 8, wherein the first and second illumination units are symmetrically arranged with respect to the image capture device.
Aspect 13 the method of aspect 8, wherein a first optical axis of the first illumination unit intersects an optical axis of an optical system of the image capture device and a second optical axis of the second illumination unit intersects the optical axis of the optical system of the image capture device.
Aspect 14 the method of aspect 13, wherein an intersection of the first optical axis and the second optical axis is on an optical axis of an optical system of the image capture device.
Aspect 15 the method of aspect 13, wherein an intersection of the first optical axis and the second optical axis is located on a central imaging plane of the image acquisition device, wherein the central imaging plane is located centrally between a nearest imaging plane of the image acquisition device and a farthest imaging plane of the image acquisition device.
Aspect 16 the method of aspect 15, wherein a nearest imaging plane of the image acquisition device is 40cm from the image acquisition device and a farthest imaging plane of the image acquisition device is 80cm from the image acquisition device.
Aspect 17 the method of aspect 15, wherein the illumination range of the first and second illumination units when illuminated simultaneously covers the field of view of the image capture device in a region between the closest and farthest imaging planes of the image capture device.
Aspect 18 the method of aspect 14, wherein the illumination device further comprises at least one first auxiliary illumination unit located on the one side of the image capture device and at least one second auxiliary illumination unit located on the other side of the image capture device.
Aspect 19 the method of aspect 18, wherein an intersection of the optical axis of the at least one first auxiliary lighting unit and the optical axis of the at least one second auxiliary lighting unit is also on the optical axis of the optical system of the image capture device.
Aspect 20 the method of aspect 18, wherein the first lighting unit, the at least one first auxiliary lighting unit, the second lighting unit, and the at least one second auxiliary lighting unit are located on a spherical cap shaped base.
Aspect 21 the method of aspect 20, wherein the image capture device is mounted in the center of the spherical cap base, the first lighting unit, the at least one first auxiliary lighting unit, the second lighting unit, and the at least one second auxiliary lighting unit being symmetrically arranged with respect to the image capture device.
Aspect 22 the method of aspect 18, wherein an optical axis of the first auxiliary lighting unit is parallel to the first optical axis of the first lighting unit and an optical axis of the second auxiliary lighting unit is parallel to the second optical axis of the second lighting unit.
Aspect 23 the method of aspect 8, wherein the image capture device, the first illumination unit, and the second illumination unit are located on a same base.
Aspect 24 the method of aspect 8, wherein the image capture device is located on a first mount, the first illumination unit and the second illumination unit are located on a second mount, the first mount and the second mount assembled such that an intersection of the first optical axis of the first illumination unit and the second optical axis of the second illumination unit is on an optical axis of an optical system of the image capture device.
Aspect 25 is an electronic circuit comprising:
circuitry configured to perform the steps of the method of any of aspects 1-24.
Aspect 26 an electronic device, comprising:
a processor; and
a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the method of any of aspects 1-24.
Aspect 27 a non-transitory computer readable storage medium storing a program, the program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the method of any of aspects 1-24.
Aspect 28 a computer program product comprising a computer program, wherein the computer program realizes the method of any of aspects 1-24 when executed by a processor.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems and apparatus are merely exemplary embodiments or examples and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It is to be understood that, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (10)

1. A method for living body detection, comprising:
controlling the lighting equipment to illuminate based on the current illumination mode, and controlling the image acquisition equipment to acquire an image of the object to be detected while illuminating;
determining a predicted illumination mode based on the image of the object to be detected; and
determining that the object to be detected passes live detection at least in response to determining that the predicted illumination pattern and the current illumination pattern are consistent.
2. The method of claim 1, wherein the current illumination pattern comprises a sequence of illumination patterns for a plurality of illuminations.
3. The method of claim 1, wherein determining a predicted illumination pattern based on the image of the object to be detected comprises:
and carrying out image classification on the image of the object to be detected to obtain the predicted illumination mode.
4. The method of any of claims 1-3, further comprising:
and carrying out image classification on the image of the object to be detected to obtain a living body prediction result of the object to be detected, wherein the living body prediction result indicates that the object to be detected is a living body or the object to be detected is a non-living body.
5. The method of claim 4, wherein image classifying the image of the object to be detected comprises:
inputting the image of the object to be detected into a prediction network, wherein the prediction network comprises a backbone network and a first output module and a second output module which are connected with the backbone network;
processing the image of the object to be detected by using the backbone network to obtain the image characteristics of the object to be detected;
processing the image features with the first output module to obtain the predicted lighting pattern;
and processing the image characteristics by utilizing the second output module to obtain the living body prediction result.
6. The method of claim 4, wherein determining that the object to be detected passes live inspection in response to at least determining that the predicted illumination pattern and the current illumination pattern are consistent comprises:
determining that the object to be detected passes live detection in response to determining that the predicted illumination pattern and the current illumination pattern coincide and in response to the live prediction result indicating that the object to be detected is a live body.
7. An electronic circuit, comprising:
circuitry configured to perform the steps of the method of any of claims 1-6.
8. An electronic device, comprising:
a processor; and
a memory storing a program comprising instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-6.
9. A non-transitory computer readable storage medium storing a program, the program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the method of any of claims 1-6.
10. A computer program product comprising a computer program, wherein the computer program realizes the method of any one of claims 1-6 when executed by a processor.
CN202110245618.4A 2021-03-05 2021-03-05 Method for living body detection, electronic circuit, electronic apparatus, and medium Pending CN112906610A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110245618.4A CN112906610A (en) 2021-03-05 2021-03-05 Method for living body detection, electronic circuit, electronic apparatus, and medium
PCT/CN2022/078053 WO2022183992A1 (en) 2021-03-05 2022-02-25 Living body detection method, electronic circuit, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110245618.4A CN112906610A (en) 2021-03-05 2021-03-05 Method for living body detection, electronic circuit, electronic apparatus, and medium

Publications (1)

Publication Number Publication Date
CN112906610A true CN112906610A (en) 2021-06-04

Family

ID=76107019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110245618.4A Pending CN112906610A (en) 2021-03-05 2021-03-05 Method for living body detection, electronic circuit, electronic apparatus, and medium

Country Status (1)

Country Link
CN (1) CN112906610A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022183992A1 (en) * 2021-03-05 2022-09-09 上海肇观电子科技有限公司 Living body detection method, electronic circuit, electronic device and medium
WO2023061123A1 (en) * 2021-10-15 2023-04-20 北京眼神科技有限公司 Facial silent living body detection method and apparatus, and storage medium and device


Similar Documents

Publication Publication Date Title
JP7004017B2 (en) Object tracking system, object tracking method, program
CN110266916B (en) Method and system for processing glare in eye tracking
EP3467707B1 (en) System and method for deep learning based hand gesture recognition in first person view
EP3284011B1 (en) Two-dimensional infrared depth sensing
CN110232369B (en) Face recognition method and electronic equipment
JP5160235B2 (en) Detection and tracking of objects in images
WO2019033569A1 (en) Eyeball movement analysis method, device and storage medium
CN113487742A (en) Method and system for generating three-dimensional model
CN109325462B (en) Face recognition living body detection method and device based on iris
JP5361524B2 (en) Pattern recognition system and pattern recognition method
JP2013504114A (en) Eye state detection apparatus and method
CN106663196A (en) Computerized prominent person recognition in videos
Chaudhry et al. Design of a mobile face recognition system for visually impaired persons
CN109766779A (en) It hovers personal identification method and Related product
CN112052186A (en) Target detection method, device, equipment and storage medium
CN112906610A (en) Method for living body detection, electronic circuit, electronic apparatus, and medium
CN111783640A (en) Detection method, device, equipment and storage medium
WO2020089252A2 (en) Interactive user verification
CN113409056B (en) Payment method and device, local identification equipment, face payment system and equipment
JP4708835B2 (en) Face detection device, face detection method, and face detection program
KR20160046399A (en) Method and Apparatus for Generation Texture Map, and Database Generation Method
Shieh et al. Fast facial detection by depth map analysis
JP4550768B2 (en) Image detection method and image detection apparatus
WO2022183992A1 (en) Living body detection method, electronic circuit, electronic device and medium
CN114898447A (en) Personalized fixation point detection method and device based on self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination