WO2022183992A1 - Method, electronic circuit, electronic device and medium for living body detection - Google Patents

Method, electronic circuit, electronic device and medium for living body detection

Info

Publication number
WO2022183992A1
WO2022183992A1 (PCT/CN2022/078053, CN2022078053W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lighting unit
lighting
optical axis
detected
Prior art date
Application number
PCT/CN2022/078053
Other languages
English (en)
French (fr)
Inventor
周骥
冯歆鹏
Original Assignee
上海肇观电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202120476916.XU external-priority patent/CN214202417U/zh
Priority claimed from CN202110245618.4A external-priority patent/CN112906610A/zh
Application filed by 上海肇观电子科技有限公司
Publication of WO2022183992A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a method, electronic circuit, electronic device and medium for living body detection.
  • an electronic circuit comprising: a circuit configured to perform the steps of the above-described method.
  • a non-transitory computer-readable storage medium storing a program, the program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the above-mentioned method.
  • a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned method.
  • a detection device comprising: an image capture device; and an illumination device including a first illumination unit located on one side of the image capture device and a second illumination unit located on the other side of the image capture device; wherein a first optical axis of the first illumination unit intersects the optical axis of the optical system of the image capture device, and a second optical axis of the second illumination unit also intersects the optical axis of the optical system of the image capture device.
  • FIG. 1 shows a schematic diagram of an exemplary system in which the various methods and apparatuses described herein may be implemented in accordance with embodiments of the present disclosure
  • FIG. 2 shows a schematic flowchart of a method for living body detection according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic flowchart of a process for identification according to an embodiment of the present disclosure
  • FIG. 4 shows a schematic block diagram of an apparatus for living body detection according to an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of a detection device according to an embodiment of the present disclosure
  • FIG. 6 shows another schematic diagram of a detection device according to an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of a portion of a light field distribution of a detection device according to the present disclosure
  • FIGS. 9A-9C show a schematic diagram of an arrangement of a detection device according to an embodiment of the present disclosure.
  • FIGS. 10A-10B show another schematic diagram of the arrangement of the detection device according to an embodiment of the present disclosure.
  • FIG. 11 shows yet another schematic diagram of the arrangement of the detection device according to an embodiment of the present disclosure.
  • FIG. 15 shows a schematic block diagram of a detection device according to an embodiment of the present disclosure
  • FIG. 16 shows an example of a liveness detection process according to an embodiment of the present disclosure
  • FIG. 17 shows an example of an identification process according to an embodiment of the present disclosure
  • FIG. 18 shows an example of a face registration process according to an embodiment of the present disclosure
  • Figure 19 shows another example of a face registration process according to an embodiment of the present disclosure
  • FIG. 20 is a block diagram illustrating an example of an electronic device according to an exemplary embodiment of the present disclosure.
  • The use of terms such as "first" and "second" to describe various elements is not intended to limit the positional relationship, timing relationship or importance relationship of these elements; such terms are only used to distinguish one element from another.
  • In some cases, the first element and the second element may refer to the same instance of the element, while in other cases they may refer to different instances based on the context of the description.
  • a depth camera or a binocular camera may be used to obtain the depth information of the object to be detected, so as to determine whether the object to be detected is a planar two-dimensional object or a solid three-dimensional object. Further, it is also possible to prevent the identity recognition system from mistaking a three-dimensional prosthetic model for a real human subject by requiring the object to be detected to perform corresponding actions (such as blinking or opening the mouth) according to instructions.
  • the system 100 includes one or more terminal devices 101 , a server 120 , and one or more communication networks 110 coupling the one or more terminal devices to the server 120 .
  • Terminal device 101 may be configured to execute one or more application programs.
  • the server 120 may run one or more services or software applications that enable execution of the method for liveness detection according to the present disclosure.
  • the terminal device 101 may also be used to run one or more services or software applications of the method for liveness detection according to the present disclosure.
  • the terminal device 101 may be implemented as an access control device, a payment device, or the like.
  • server 120 may also provide other services or software applications that may include non-virtual and virtual environments.
  • these services may be provided as web-based services or cloud services, eg, to users of end devices 101 under a software-as-a-service (SaaS) model.
  • server 120 may include one or more components that implement the functions performed by server 120 . These components may include software components executable by one or more processors, hardware components, or a combination thereof. A user operating the terminal device 101 may in turn utilize one or more terminal applications to interact with the server 120 to utilize the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100 . Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein, and is not intended to be limiting.
  • the terminal device may provide an interface that enables a user of the terminal device to interact with the terminal device.
  • the terminal device can also output information to the user via the interface.
  • FIG. 1 depicts only one terminal device, those skilled in the art will appreciate that the present disclosure may support any number of terminal devices.
  • Terminal devices 101 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptops), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, etc. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux or Linux-like operating systems (such as Google Chrome OS); or include various mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, and Android.
  • Portable handheld devices may include cellular phones, smart phones, tablet computers, personal digital assistants (PDAs), and the like.
  • Wearable devices can include head-mounted displays and other devices.
  • Gaming systems may include various handheld gaming devices, Internet-enabled gaming devices, and the like. Terminal devices are capable of executing various applications, such as various Internet-related applications, communication applications (eg, e-mail applications), Short Message Service (SMS) applications, and may use various communication protocols.
  • Server 120 may include one or more general purpose computers, special purpose server computers (eg, PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination .
  • Server 120 may include one or more virtual machines running virtual operating systems, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that can be virtualized to maintain virtual storage devices for the server).
  • server 120 may run one or more services or software applications that provide the functionality described below.
  • server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems.
  • Server 120 may also run any of a variety of additional server applications and/or middle-tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
  • server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of end devices 101 .
  • Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of end device 101 .
  • the server 120 may be a server of a distributed system, or a server combined with a blockchain.
  • the server 120 may also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology.
  • A cloud server is a host product in a cloud computing service system, intended to overcome the drawbacks of difficult management and limited business scalability in traditional physical host and Virtual Private Server (VPS) services.
  • System 100 may also include one or more databases 130 .
  • these databases may be used to store data and other information.
  • one or more of the databases 130 may be used to store information such as audio files and video files.
  • Data repository 130 may reside in various locations.
  • the data repository used by server 120 may be local to server 120, or may be remote from server 120 and may communicate with server 120 via a network-based or dedicated connection.
  • Data repository 130 can be of different types.
  • the data repository used by server 120 may be a database, such as a relational database.
  • One or more of these databases may store, update, and retrieve data to and from the databases in response to commands.
  • one or more of the databases 130 may also be used by applications to store application data.
  • Databases used by applications can be different types of databases such as key-value stores, object stores, or regular stores backed by a file system.
  • the system 100 of FIG. 1 may be configured and operated in various ways to enable application of the various methods and apparatuses described in accordance with the present disclosure.
  • FIG. 2 shows a schematic flowchart of a method for living body detection according to an embodiment of the present disclosure.
  • the method shown in FIG. 2 may be performed by the terminal device 101 or the server 120 shown in FIG. 1 .
  • the terminal device may include a lighting device and an image acquisition device.
  • the image acquisition device can be used to acquire an image of the object to be detected for living body detection, and the lighting device can be used to illuminate the object to be detected.
  • the lighting device may be controlled to perform lighting based on the current lighting mode, and the image acquisition device may be controlled to collect an image of the object to be detected while lighting.
  • the lighting device may be a light emitting diode capable of emitting visible light.
  • the lighting device may be an infrared lighting device capable of emitting infrared light. It is understood that the lighting device may also be a lighting device capable of simultaneously emitting or selectively emitting visible light and infrared light.
  • In the case where the image acquisition device includes an infrared camera and the illumination device includes an infrared illumination device, infrared information of the object to be detected can be collected to assist in living body detection.
  • the lighting device may include a first lighting unit on one side of the image capture device and a second lighting unit on the other side of the image capture device. Wherein, the first lighting unit and the second lighting unit may be symmetrically arranged with respect to the image capturing device.
  • When the object to be detected is located directly in front of the image capturing device, the first lighting unit may be configured to illuminate the object to be detected from its left side, and the second lighting unit may be configured to illuminate the object to be detected from its right side.
  • the first lighting unit may be arranged at a position for illuminating the left face of the subject to be detected, and the second lighting unit may be arranged at a position for illuminating the right face of the subject to be detected.
  • the first lighting unit may be configured to illuminate from above the object to be detected, and the second lighting unit may be configured to illuminate from below the object to be detected.
  • the first lighting unit may be positioned at a location for illuminating the upper half of the face of the subject to be detected, and the second lighting unit may be positioned at a location for illuminating the lower half of the face of the subject to be detected. It can be understood that those skilled in the art can set the first lighting unit and the second lighting unit in different positions according to the actual situation.
  • the lighting apparatus includes a first lighting unit and a second lighting unit.
  • the first lighting unit may be located on one side (eg, the left side) of the image capturing device
  • the second lighting unit may be located at the other side (eg, the right side) of the image capturing device.
  • Table 1 (serial number: lighting mode):
    1. Turn on the first lighting unit for 1 second
    2. Turn on the second lighting unit for 1 second
    3. Turn on the first lighting unit and the second lighting unit simultaneously for 1 second
    4. Turn on the first lighting unit for 2 seconds
    5. Turn on the second lighting unit for 2 seconds
    6. Turn on the first lighting unit and the second lighting unit simultaneously for 2 seconds
  • the current illumination mode may include a sequence of illumination modes for multiple illuminations.
  • the lighting device can be controlled to illuminate multiple times based on a sequence of lighting modes.
  • the lighting pattern sequence may include a sequence of one or more lighting patterns.
  • Table 2 shows an example of an illumination pattern sequence formed using the illumination patterns shown in Table 1 .
  • the lighting mode sequence may include at least one lighting mode shown in Table 1.
  • Table 2 (serial number: lighting pattern sequence):
    1. Lighting Mode 1, Lighting Mode 2, Lighting Mode 3
    2. Lighting Mode 1, Lighting Mode 3, Lighting Mode 2
    3. Lighting Mode 3, Lighting Mode 1, Lighting Mode 2
    4. Lighting Mode 3, Lighting Mode 2, Lighting Mode 1
    5. Lighting Mode 2, Lighting Mode 1, Lighting Mode 3
    6. Lighting Mode 2, Lighting Mode 3, Lighting Mode 1
    7. Lighting Mode 1, Lighting Mode 1, Lighting Mode 2, Lighting Mode 2, Lighting Mode 3
    8. Lighting Mode 1, Lighting Mode 2, Lighting Mode 4, Lighting Mode 5, Lighting Mode 3
  • illumination pattern sequences formed using the illumination patterns provided in Table 1 are shown in Table 2.
  • Those skilled in the art can construct different lighting mode sequences through the lighting modes shown in Table 1 according to the actual situation.
  • the number of random illumination pattern sequences is not limited to the eight shown in Table 2.
  • Those skilled in the art can set more or less lighting mode sequences according to the actual situation.
  • the sequence of illumination patterns may be a sequence of random illumination patterns.
  • a random lighting mode sequence for the lighting device may be determined from a plurality of preset lighting mode sequences as the current lighting mode. For example, random numbers may be generated, and a sequence of lighting patterns corresponding to the generated random numbers may be selected as the sequence of random lighting patterns for the lighting device.
  • For example, taking lighting pattern sequence 1 in Table 2 as the current lighting mode, the lighting device can be controlled to light the first lighting unit for 1 second, then light the second lighting unit for 1 second, and then light the first lighting unit and the second lighting unit simultaneously for 1 second.
  • Correspondingly, the image capturing device can be controlled to capture images of the object to be detected under the different lighting modes. Taking lighting pattern sequence 1 in Table 2 as the determined sequence as an example, the image acquisition device can be controlled to capture images while the first lighting unit is lit for 1 second, while the second lighting unit is lit for 1 second, and while the first lighting unit and the second lighting unit are lit simultaneously for 1 second, so as to obtain an image sequence of images to be detected.
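  • As a minimal illustration of this kind of control flow: the sketch below (Python) randomly selects one of the preset lighting pattern sequences and captures one image per lighting mode. The `set_lighting_units` and `capture_frame` callables are hypothetical placeholders for the lighting and camera driver interfaces, which the disclosure does not specify.

```python
import random

# Lighting modes from Table 1: (first unit on, second unit on, duration in seconds).
LIGHTING_MODES = {
    1: (True,  False, 1.0),
    2: (False, True,  1.0),
    3: (True,  True,  1.0),
    4: (True,  False, 2.0),
    5: (False, True,  2.0),
    6: (True,  True,  2.0),
}

# A subset of the preset lighting pattern sequences from Table 2.
LIGHTING_SEQUENCES = [
    [1, 2, 3],
    [1, 3, 2],
    [3, 1, 2],
    [3, 2, 1],
]

def run_current_lighting_mode(set_lighting_units, capture_frame):
    """Pick a random sequence, illuminate accordingly, and capture one image per mode."""
    # Random selection corresponds to generating a random number and using it as an index.
    sequence = random.choice(LIGHTING_SEQUENCES)
    images = []
    for mode_id in sequence:
        first_on, second_on, duration = LIGHTING_MODES[mode_id]
        set_lighting_units(first_on, second_on)          # hypothetical lighting driver call
        images.append(capture_frame(exposure=duration))  # hypothetical camera driver call
    set_lighting_units(False, False)
    return sequence, images
```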
  • In step S204, the predicted illumination mode may be determined based on the image of the object to be detected collected in step S202.
  • images of objects to be detected may be image classified to obtain predicted illumination patterns.
  • the image of the object to be detected may be input into a pre-trained first neural network model for image classification.
  • the first neural network model is trained to predict the illumination pattern in which the input image was acquired, and outputs a classification result indicative of the predicted illumination pattern.
  • For example, the first neural network model can classify the image of the object to be detected; if the output category of the image is category 3, the predicted lighting pattern of the image is lighting mode 3 in Table 1.
  • the captured image of the object to be detected may include an image sequence formed by images captured respectively during multiple illuminations.
  • the sequence of images can be fed into a pre-trained first neural network model to obtain predicted lighting patterns.
  • For example, the first neural network model can classify the image sequence of the object to be detected; if the output category of the image sequence is category 1, the predicted lighting pattern of the image sequence is lighting pattern sequence 1 in Table 2.
  • In step S206, at least in response to determining that the predicted lighting mode is consistent with the current lighting mode, it is determined that the object to be detected has passed the living body detection.
  • the first neural network model and the second neural network model described above may be implemented using two branches of the same prediction network, respectively.
  • the prediction network may include a backbone network and a first output module and a second output module connected to the backbone network.
  • an image (or sequence of images) of the object to be detected can be fed into the prediction network.
  • the image (or image sequence) of the object to be detected is processed by the backbone network to obtain the image features of the object to be detected.
  • the image features may be processed using the first output module to obtain a predicted illumination pattern.
  • the image features can be processed by the second output module to obtain the living body prediction result.
  • the first output module and the second output module may be implemented using fully connected layers.
  • the classification result of the predicted illumination pattern and the classification result indicating whether the object to be detected is a living body can be obtained by performing one image classification operation on the image of the object to be detected.
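  • A minimal sketch of such a two-head prediction network is shown below, assuming a PyTorch implementation, single-image input, and an illustrative convolutional backbone; the disclosure does not specify the actual network architecture.

```python
import torch
import torch.nn as nn

class LivenessPredictionNetwork(nn.Module):
    """Shared backbone with two fully connected output heads: one predicts the
    lighting pattern class, the other the live / non-live class."""

    def __init__(self, num_lighting_patterns: int, feature_dim: int = 128):
        super().__init__()
        # Illustrative backbone only; any feature extractor could take its place.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        # First output module: lighting pattern classification.
        self.lighting_head = nn.Linear(feature_dim, num_lighting_patterns)
        # Second output module: living body vs. non-living body.
        self.liveness_head = nn.Linear(feature_dim, 2)

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)
        return self.lighting_head(features), self.liveness_head(features)

# Example: classify a batch of face images into 6 lighting modes plus live / non-live.
model = LivenessPredictionNetwork(num_lighting_patterns=6)
lighting_logits, liveness_logits = model(torch.randn(4, 3, 112, 112))
```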
  • the second neural network model and the first neural network model may be different models.
  • the classification result for predicting the illumination pattern and the classification result indicating whether the object to be detected is a living body can be obtained separately through different image classification operations.
  • In response to the living body prediction result indicating that the object to be detected is a living body, and in response to determining that the predicted lighting mode obtained in step S204 is consistent with the current lighting mode used to control the lighting in step S202, it may be determined that the object to be detected passes the living body detection.
  • In other words, the condition for the object to be detected to pass the living body detection not only requires that the image classification result indicates that the object to be detected is a living body, but also requires that the predicted illumination mode determined based on the image of the object to be detected is consistent with the current illumination mode used when the image was actually acquired. It can be understood that the image-classification-based result indicating whether the object to be detected is a living body is not 100% correct. In some cases, due to the low quality of the acquired images for liveness detection, the liveness prediction result may not be consistent with the actual situation. For example, a prediction result indicating that the object to be detected is non-living may be output for a living object, and a prediction result indicating that the object to be detected is a living body may be output for a non-living object.
  • the present disclosure uses the judgment result of whether the predicted lighting mode determined based on the image (or image sequence) of the object to be detected is consistent with the current lighting mode to supervise the living body prediction result.
  • In the case where the predicted illumination mode is not consistent with the current illumination mode, it can be considered that the quality of the images collected for detection is low, which is why a correct illumination mode prediction could not be obtained; liveness prediction results based on such low-quality images are therefore unreliable.
  • In that case, even if the living body prediction result indicates that the object to be detected is a living body, the living body detection cannot be passed. Therefore, in both model training and actual use, the prediction result of the lighting pattern plays a supervisory role for the living body detection result, thereby improving the accuracy of the living body detection result.
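  • The supervision logic described above amounts to a simple decision rule; the sketch below assumes the predicted values come from a prediction network such as the one sketched earlier.

```python
def passes_liveness_check(predicted_pattern, current_pattern, liveness_is_live) -> bool:
    """Pass only when the liveness classifier says 'live' AND the lighting pattern
    predicted from the images matches the pattern actually used during capture."""
    if predicted_pattern != current_pattern:
        # The images are treated as unreliable (e.g. low quality), so the
        # detection fails regardless of the liveness prediction.
        return False
    return liveness_is_live
```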
  • FIG. 3 shows a schematic flowchart of a process for identification according to an embodiment of the present disclosure.
  • the method shown in FIG. 3 may be performed by the terminal device 101 or the server 120 shown in FIG. 1 .
  • the terminal device may include a lighting device and an image acquisition device.
  • the image acquisition device can be used to acquire an image of the object to be detected for living body detection, and the lighting device can be used to illuminate the object to be detected.
  • the lighting device includes a first lighting unit on one side of the image capturing device and a second lighting unit on the other side of the image capturing device.
  • the method 300 starts at step S301.
  • In step S302, a random lighting pattern sequence for the lighting device may be determined.
  • the random illumination pattern sequence determined in step S302 at least includes an illumination pattern in which both sides of the object to be detected are illuminated simultaneously by the first illumination unit and the second illumination unit.
  • the random lighting pattern sequence may be a sequence comprising: a first lighting unit lighting; a second lighting unit lighting; and simultaneous lighting on both sides.
  • In step S304, the lighting device may be controlled to perform lighting based on the random lighting pattern sequence determined in step S302, and the image acquisition device may be controlled to capture an image of the object to be detected while lighting.
  • the image of the object to be detected may be a sequence of face images of the object to be detected.
  • In step S306, the predicted illumination pattern and the living body prediction result of the object to be detected may be determined based on the image sequence acquired in step S304.
  • Steps S304 to S306 shown in FIG. 3 can be implemented by using steps S202 to S204 described in conjunction with FIG. 2 , and details are not repeated here.
  • In step S308, it can be determined whether the predicted illumination pattern obtained in step S306 is consistent with the random illumination pattern sequence determined in step S302.
  • If they are consistent, the method 300 may proceed to step S310.
  • In step S310, the living body prediction result of the object to be detected obtained in step S306 may be acquired.
  • The living body prediction result indicates either that the object to be detected is a living body or that the object to be detected is a non-living body.
  • Otherwise, the method can return to step S301 to start a new identification process.
  • In the case where the living body prediction result obtained in step S310 indicates that the object to be detected is a living body, the method 300 may proceed to step S312.
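  • The flow of steps S301 through S312 could be arranged roughly as in the sketch below. The helper callables are placeholders for the operations described in connection with FIG. 2 and the face recognition step; this is an illustration of the control flow only.

```python
def identification_loop(determine_random_sequence, illuminate_and_capture,
                        predict, recognize_face):
    """Repeat until an object passes both the lighting-pattern check and the
    liveness check, then hand the images to face recognition."""
    while True:
        sequence = determine_random_sequence()          # step S302
        images = illuminate_and_capture(sequence)       # step S304
        predicted_sequence, is_live = predict(images)   # step S306
        if predicted_sequence != sequence:              # step S308
            continue                                    # return to S301
        if not is_live:                                 # step S310
            continue                                    # return to S301
        return recognize_face(images)                   # step S312 (face recognition)
```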
  • FIG. 4 shows a schematic block diagram of an apparatus for living body detection according to an embodiment of the present disclosure.
  • the apparatus 400 for living body detection may include a control unit 410 , a prediction unit 420 and a detection unit 430 .
  • the apparatus 400 for living body detection may further include a face recognition unit (not shown).
  • the face recognition unit may be configured to determine an image collected when both sides are illuminated simultaneously as a recognition image in the image sequence, and perform image processing on the recognition image to obtain a face recognition result of the object to be detected.
  • As described above, the condition for the object to be detected to pass the living body detection not only requires that the image classification result indicates that the object to be detected is a living body, but also requires that the predicted illumination mode determined based on the image of the object to be detected is the same as the current illumination mode used when the image was actually acquired.
  • the present disclosure uses the judgment result of whether the predicted illumination mode determined based on the image of the object to be detected is consistent with the current illumination mode used when capturing the image to supervise the living body prediction result to improve the accuracy of living body detection.
  • an electronic circuit comprising: a circuit configured to perform the steps of the methods described in the present disclosure.
  • an electronic device comprising: a processor; and a memory storing a program including instructions that, when executed by the processor, cause the processor to perform the method described in the present disclosure.
  • a computer program product comprising a computer program comprising instructions that, when executed by a processor, perform the method described in this disclosure.
  • the following describes the structure of a detection device that can be used in the terminal device for living body detection described in the present disclosure with reference to FIGS. 5-15 .
  • FIG. 5 shows a schematic diagram of a detection device according to an embodiment of the present disclosure.
  • the detection device 500 may include an image acquisition device 510 and a lighting device.
  • the lighting device may include a first lighting unit 5201 on one side of the image capturing device 510 and a second lighting unit 5202 on the other side of the image capturing device 510 .
  • the dashed line 511 shows the optical axis of the image capture device 510
  • the dotted line 521 shows the first optical axis of the first lighting unit 5201, wherein the first optical axis 521 intersects with the optical axis 511 of the image capture device.
  • the dashed line 522 shows the second optical axis of the second lighting unit 5202, where the second optical axis 522 intersects the optical axis 511 of the image capture device.
  • the intersection of the first optical axis 521 of the first lighting unit 5201 and the second optical axis 522 of the second lighting unit 5202 is on the optical axis 511 of the optical system of the image capture device.
  • the first lighting unit may be configured to illuminate from the left side of the object to be inspected, and the second lighting unit may be configured to illuminate from the right side of the object to be inspected.
  • the first lighting unit may be arranged at a position for illuminating the left face of the subject to be detected, and the second lighting unit may be arranged at a position for illuminating the right face of the subject to be detected.
  • the first lighting unit may be configured to illuminate from above the object to be detected, and the second lighting unit may be configured to illuminate from below the object to be detected.
  • the first lighting unit 5201 and the second lighting unit 5202 may be symmetrically arranged with respect to the image capturing device 510 .
  • the first lighting unit 5201 and the second lighting unit 5202 may have the same parameters.
  • the first lighting unit 5201 and the second lighting unit 5202 may have the same lighting range, emission wavelength, power, and the like.
  • living body detection can be performed by acquiring an image of the object to be detected. Taking the case where the first lighting unit illuminates the object to be detected from the left side and the second lighting unit illuminates it from the right side as an example, when lighting is performed on only one side, for a three-dimensional object the lighting will form a shadow on the other side of the object. Therefore, different projection directions of the illumination light will result in differences in the light field distribution on the object to be detected. However, such a difference cannot be formed on a two-dimensional object to be detected. Therefore, even without depth information, the difference between three-dimensional objects and two-dimensional objects can be reflected.
  • lighting device 520 may be a light emitting diode capable of emitting visible light.
  • the lighting device 520 may be an infrared lighting device capable of emitting infrared light.
  • the image capture device may include an infrared camera to capture infrared images. It is understood that the lighting device 520 may also be a lighting device capable of simultaneously emitting or selectively emitting visible light and infrared light. In the case where the lighting device 520 includes an infrared lighting device, infrared information of the object to be detected can be collected to assist the detection of living bodies.
  • With such an arrangement, better lighting can be obtained when the image capturing device 510 captures images of the object to be detected. Since the optical axes of the lighting units located on both sides intersect the optical axis of the image acquisition device at the same point, the lighting effect of the first lighting unit and the second lighting unit on the object to be detected is more uniform, and the quality of the collected image of the object to be detected is higher.
  • the detection device 600 may include an image acquisition device 610 and an illumination device 620 .
  • the lighting device 620 may include a first lighting unit 6201 on one side of the image capturing device 610 and a second lighting unit 6202 on the other side of the image capturing device 610 .
  • the intersection of the optical axis 621 of the first lighting unit 6201 and the optical axis 622 of the second lighting unit 6202 is on the optical axis 611 of the optical system of the image capturing device.
  • the range of use of the image capture device 610 is also shown in FIG. 6 .
  • the usage range of the image capturing device 610 may indicate the imaging area between the nearest imaging plane of the image capturing device and the farthest imaging plane of the image capturing device.
  • the usage range of the image capturing device 610 is between the first imaging plane 631 and the second imaging plane 632 .
  • the distance between the first imaging plane 631 and the image capturing device 610 is smaller than the distance between the second imaging plane 632 and the image capturing device 610 .
  • the image acquisition device 610 can clearly image the object to be detected.
  • the first imaging plane 631 may be the closest imaging plane of the image acquisition device 610 and the second imaging plane 632 may be the farthest imaging plane of the image acquisition device 610.
  • the positions of the first imaging plane 631 and the second imaging plane 632 may be determined based on the depth of field of the image capture device 610 .
  • the distance between the first imaging plane 631 and the image acquisition device 610 may be greater than or equal to the distance of the closest clear imaging plane of the image acquisition device 610, and the distance between the second imaging plane 632 and the image acquisition device 610 may be less than or equal to the distance of the farthest clear imaging plane of the image acquisition device 610.
  • the positions of the first imaging plane 631 and the second imaging plane 632 may be further determined based on the proportion of the object to be detected in the captured image. For example, when the object to be detected is located between the first imaging plane 631 and the second imaging plane 632, the proportion of the object to be detected in the captured image is within a predetermined range. In an image collected at the first imaging plane 631, the object to be detected occupies a predetermined maximum proportion of the image, and in an image collected at the second imaging plane 632, the object to be detected occupies a predetermined minimum proportion of the image.
  • the usage range of the image capture device 610 may be 40 cm to 80 cm. That is, the distance between the first imaging plane 631 (i.e., the closest imaging plane of the image capturing device) and the image capturing device 610 is 40 cm, and the distance between the second imaging plane 632 (i.e., the farthest imaging plane of the image capturing device) and the image capturing device 610 is 80 cm. Here, the distance between an imaging plane and the image acquisition device is the distance along the optical axis direction of the image acquisition device. When the image capturing device is installed on a vertical wall, the distance between the imaging plane and the image capturing device is the distance in the horizontal direction.
  • the intersection of the first optical axis of the first illumination unit 6201 and the second optical axis of the second illumination unit 6202 may be located on the central imaging plane of the image capture device 610, where the central imaging plane is located at an intermediate position between the nearest imaging plane 631 and the farthest imaging plane 632 of the image acquisition device. Further, the distance between the central imaging plane and the nearest imaging plane 631 is equal to the distance between the central imaging plane and the farthest imaging plane 632.
  • FIG. 7 is a schematic diagram illustrating an installation position of a first lighting unit in a lighting apparatus according to an embodiment of the present disclosure.
  • a coordinate system with the image capture device 710 as the origin can be established, wherein the Y axis coincides with the optical axis of the image capture device 710 , and the X axis is parallel to the imaging plane of the image capture device 710 .
  • the position of the first lighting unit 7201 can be determined based on the distance dis_mid between the image capturing device 710 and the intersection of the optical axis of the first lighting unit 7201 with the optical axis 721 of the image capturing device 710.
  • the value of dis_mid may be determined based on the usage range of the image capture device 710.
  • dis_mid may be determined as the middle value of the usage range of the image capture device 710. In the case where the usage range of the image capturing device 710 is 40 cm to 80 cm, dis_mid may be determined to be 60 cm.
  • point A is the intersection of the optical axis of the first lighting unit 7201 and the optical axis of the image capture device 710
  • point B is the position of the first lighting unit 7201
  • point C is the foot of the perpendicular from point B to the X axis
  • point D is the foot of the perpendicular from point B to the Y axis
  • point E is the position of the image acquisition device 710 .
  • Formula (1) can be determined based on the proportional relationship between the similar triangles △BCE and △ABD:
  • The sign of the abscissa of point B can be determined according to whether point B lies on the positive or negative half-axis of the X axis: when point B is on the positive half-axis of the X axis, the abscissa is lightX; when point B is on the negative half-axis of the X axis, the abscissa is -lightX.
  • the relationship between the abscissa lightX and the ordinate lightY of point B can be determined based on formula (1).
  • the value of one of lightX and lightY may be specified according to the actual situation, and the value of the other of lightX and lightY may be calculated based on formula (1).
  • the distance in the X direction between the first lighting unit 7201 and the image capturing device 710 may be determined according to the actual installation site of the detection device, that is, the value of lightX is specified. Then, the value of lightY can be determined based on the specified value of lightX.
  • any ray passing through point D can be selected as the X axis, and those skilled in the art can select an appropriate X axis according to the actual situation.
  • any position obtained by rotating point B around the Y axis can be determined as the installation position of the first lighting unit.
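  • As a numerical illustration of this geometry (not formula (1) itself, which is given in the original disclosure): if the lighting unit is mounted at (lightX, lightY) in the coordinate system above and its optical axis must pass through the point A on the camera axis at distance dis_mid, the required tilt of the lighting axis relative to the camera axis follows directly, as sketched below under those assumptions.

```python
import math

def lighting_unit_tilt_deg(light_x: float, light_y: float, dis_mid: float) -> float:
    """Angle (degrees) between the lighting unit's optical axis and the camera's
    optical axis, assuming the lighting axis passes through A = (0, dis_mid) while
    the unit sits at B = (light_x, light_y). Illustrative geometry only."""
    return math.degrees(math.atan2(abs(light_x), dis_mid - light_y))

# Example: a usage range of 40-80 cm gives dis_mid = 60 cm; a unit mounted 10 cm to the
# side of the camera in the camera plane (light_y = 0) is tilted by about 9.5 degrees.
print(lighting_unit_tilt_deg(10.0, 0.0, 60.0))
```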
  • the illumination range when the first illumination unit and the second illumination unit are simultaneously illuminated is greater than or equal to the field of view of the image capture device. That is, in the area between the nearest imaging plane and the farthest imaging plane of the image acquisition device, the illumination range when the first illumination unit and the second illumination unit perform simultaneous illumination covers the field of view of the image acquisition device.
  • Figure 8 shows a schematic diagram of a portion of the light field distribution of a detection device according to the present disclosure.
  • the lighting range of the first lighting unit is the minimum lighting range that can satisfy the above conditions.
  • the illumination range of the first illumination unit is always larger than the field of view range of the image capture device. Since the second lighting unit is symmetrically arranged on the other side of the image capturing device, within the usage range of the image capturing device, the illumination range of the second lighting unit on the other side is always larger than the field of view of the image capturing device.
  • the illumination angle of view of the first lighting unit shown in FIG. 8 can be calculated based on the geometric relationship shown in FIG. 8 .
  • the value of the illumination angle of view ∠AFD calculated based on formula (2) is the minimum illumination range that satisfies the above requirement on the illumination ranges of the first illumination unit and the second illumination unit.
  • those skilled in the art can determine the illumination range of the lighting unit to be any value greater than the minimum illumination range calculated based on formula (2) according to the actual situation.
  • the image capturing device 910 , the first lighting unit 9201 and the second lighting unit 9202 are mounted on the same base.
  • the first lighting unit 9201 is installed on the left side of the image capturing device 910
  • the second lighting unit 9202 is installed at the right side of the image capturing device 910 .
  • the first lighting unit 9201 is installed on the upper side of the image capturing apparatus 910
  • the second lighting unit 9202 is installed at the lower side of the image capturing apparatus 910 .
  • Figure 9C shows a three-dimensional structure that can be used for the arrangement of the detection device shown in Figures 9A and 9B.
  • the first lighting unit 9201 , the image capturing device 910 , and the second lighting unit 9202 may be installed on the base 930 .
  • the base 930 may be formed of a flat material.
  • the plate-shaped material can be appropriately bent so that the intersection of the optical axes of the first lighting unit 9201 and the second lighting unit 9202 installed on both sides is on the optical axis of the image capturing device 910 .
  • a flexible material may be used as the material for the base 930 .
  • FIGS 10A-10B show another schematic diagram of the arrangement of the detection device according to an embodiment of the present disclosure.
  • the image acquisition device 1010 is installed on the first base 1030, and the first lighting unit 10201 and the second lighting unit 10202 are installed on the second base 1040; the first base and the second base are assembled in such a way that the intersection of the optical axis of the first lighting unit 10201 and the optical axis of the second lighting unit 10202 is on the optical axis of the optical system of the image capturing apparatus 1010.
  • the first lighting unit 10201 and the second lighting unit 10202 are mounted on the second base 1040 .
  • the second base 1040 may be formed of a bent flat material on which the first lighting unit 10201 and the second lighting unit 10202 are mounted so that the optical axes of the first lighting unit 10201 and the second lighting unit 10202 intersect at one point .
  • the image capturing device 1010 is mounted on the first base 1030 .
  • the first base 1030 and the second base 1040 are assembled in such a way that the intersection of the optical axis of the first lighting unit 10201 and the optical axis of the second lighting unit 10202 is on the optical axis of the optical system of the image capturing apparatus 1010 .
  • the image capturing device 1010 may be installed in the middle of the first lighting unit 10201 and the second lighting unit 10202 through a hollow portion formed on the second base 1040 .
  • FIG. 11 shows yet another schematic diagram of the arrangement of a detection device according to an embodiment of the present disclosure.
  • the lighting device further includes a first auxiliary lighting unit 11203 located on one side of the image capturing device 1110 and a second auxiliary lighting unit 11204 located at the other side of the image capturing device 1110 .
  • the first auxiliary lighting unit 11203 and the second auxiliary lighting unit 11204 may have the same parameters as the first lighting unit 11201 and the second lighting unit 11202.
  • Although only two first auxiliary lighting units 11203 and two second auxiliary lighting units 11204 are shown in FIG. 11, those skilled in the art can set a greater or smaller number of first auxiliary lighting units and second auxiliary lighting units according to the actual situation.
  • FIGS. 12-14 illustrate exemplary structural diagrams of a detection apparatus according to an embodiment of the present disclosure.
  • the optical axis of the first auxiliary lighting unit 12203 is parallel to the optical axis of the first lighting unit 12201
  • the optical axis of the second auxiliary lighting unit 12204 is parallel to the optical axis of the second lighting unit 12202.
  • the image capturing device 1210 , the first lighting unit 12201 , the second lighting unit 12202 , the first auxiliary lighting unit 12203 and the second auxiliary lighting unit 12204 are mounted on the same base 1230 .
  • the intersection of the optical axes of the first lighting unit 12201 and the second lighting unit 12202 installed on both sides of the image capturing device 1210 is located on the optical axis of the optical system of the image capturing device 1210 .
  • the optical axis of the first auxiliary lighting unit 13203 is parallel to the optical axis of the first lighting unit 13201
  • the optical axis of the second auxiliary lighting unit 13204 is parallel to the optical axis of the second lighting unit 13202.
  • the image capturing device 1310 is installed on the first base 1330
  • the first lighting unit 13201 , the second lighting unit 13202 , the first auxiliary lighting unit 13203 and the second auxiliary lighting unit 13204 are installed on the second base 1340 .
  • the intersection of the optical axis of the first auxiliary lighting unit 14203 and the optical axis of the second auxiliary lighting unit 14204 in the detection device 1400 is also on the optical axis of the optical system of the image capturing device .
  • the intersection of the optical axis of the first illumination unit 14201 and the optical axis of the second illumination unit 14202 in the detection device 1400 is on the optical axis of the optical system of the image capture device 1410, and the optical axis of the first auxiliary illumination unit 14203 and the optical axis of the second auxiliary lighting unit 14204 also pass through the above-mentioned intersection.
  • the first lighting unit 14201, the second lighting unit 14202, the first auxiliary lighting unit 14203, and the second auxiliary lighting unit 14204 are mounted on a spherical-cap-shaped base, so that their optical axes pass through the above-mentioned intersection. The image capturing device 1410 may be installed at the center of the spherical cap base, and the first lighting unit 14201, the at least one first auxiliary lighting unit 14203, the second lighting unit 14202, and the at least one second auxiliary lighting unit 14204 may be symmetrically arranged with respect to the image capturing device 1410.
  • FIG. 14 is only an exemplary illustration, and the grid lines shown in FIG. 14 may not be included in the structure of practical application.
  • FIG. 15 shows a schematic block diagram of a detection device according to an embodiment of the present disclosure.
  • the detection device 1500 may include an image acquisition device 1510 , a lighting device 1520 including at least one lighting unit, and a processor 1530 .
  • the image acquisition device 1510 may be used to acquire an image of the object to be detected, and the lighting device 1520 may be used to illuminate the object to be detected.
  • the lighting device 1520 may include a first lighting unit on one side of the image capturing device and a second lighting unit on the other side of the image capturing device.
  • the structures of the image capturing device 1510 and the lighting device 1520 shown in FIG. 15 can be implemented in conjunction with the embodiments described in FIGS. 5-14 , and details are not repeated here.
  • the processor 1530 may be configured to control the lighting device to perform lighting, and control the image acquisition device to capture the image to be detected of the object to be detected while illuminating, and to perform image processing on the image to be detected to determine whether the object to be detected is a living body.
  • the processor 1530 may include a sequence generation module 1531 , a lighting control module 1532 and an exposure control module 1533 .
  • the sequence generation module 1531 may be configured to generate a lighting pattern sequence for the lighting device.
  • the sequence of illumination patterns may be a sequence formed by a combination of: illumination by the first illumination unit; illumination by the second illumination unit; and simultaneous illumination on both sides.
  • the sequence of lighting patterns may be randomly generated.
  • the lighting control module 1532 may be configured to control the first lighting unit and the second lighting unit to perform lighting based on the lighting pattern sequence generated by the sequence generating module 1531 . While illuminating, the exposure control module 1533 may be configured to control the image capture device 1510 to capture a sequence of images of the object to be inspected.
  • the processor 1530 may further include an image classification module (not shown), and the image classification module may be configured to perform image classification on the to-be-detected images collected by the image acquisition device, so as to obtain the living body prediction result of the object to be detected, wherein the living body prediction result indicates that the object to be detected is a living body or that the object to be detected is a non-living body.
  • FIG. 16 shows one example of a living body detection process according to an embodiment of the present disclosure.
  • an image sequence of an object to be detected is acquired by an image acquisition device.
  • the sequence of images may be images acquired under lighting conditions controlled according to the sequence of lighting modes.
  • the acquired sequence of images can be input to the face detection module at 1602.
  • the face detection module may process the acquired image sequence and output the face frame shown at 1603 corresponding to each image in the image sequence.
  • the face detection module may detect multiple face frames in the image.
  • the N face boxes detected for each image can be input into the face box decision module shown at 1604 .
  • the face frame decision module may determine the largest face frame in the N face frames as the face frame 1605 of the image.
  • the image sequence 1601 may be cropped based on the face frame 1605 using the face cropping module shown at 1606 to obtain the face sequence 1607.
  • the face sequence 1607 can be processed by the living body detection module at 1608 to obtain the illumination pattern sequence prediction result and the living body detection result of the image sequence.
  • the captured images can reflect the shape of the light fields.
  • the sequence of illumination patterns used when the sequence of images was acquired can be predicted based on the sequence of images.
  • the living body prediction results obtained using the process in FIG. 16 are reliable only if the predicted lighting pattern sequence is consistent with the actual sequence of illumination patterns used when the images were acquired.
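  • A sketch of the FIG. 16 pipeline is shown below; the face detector, cropping function, and liveness model are assumed interfaces standing in for modules 1602, 1606 and 1608, not implementations taken from the disclosure.

```python
def liveness_pipeline(image_sequence, detect_faces, crop, liveness_model):
    """Face detection -> keep the largest face box per image -> crop -> liveness module."""
    face_sequence = []
    for image in image_sequence:
        boxes = detect_faces(image)  # may return N face boxes per image (1603)
        # Face box decision (1604/1605): keep the largest box; boxes are assumed
        # to expose width/height attributes.
        largest = max(boxes, key=lambda b: b.width * b.height)
        face_sequence.append(crop(image, largest))  # face cropping (1606/1607)
    # Liveness detection module (1608): predicts both the lighting pattern
    # sequence and the live / non-live result from the cropped face sequence.
    predicted_sequence, is_live = liveness_model(face_sequence)
    return predicted_sequence, is_live
```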
  • FIG. 17 shows an example of an identification process according to an embodiment of the present disclosure.
  • an image sequence of an object to be detected is acquired by an image acquisition device.
  • the sequence of images may be images acquired under lighting conditions controlled according to the sequence of lighting modes.
  • the acquired sequence of images can be input into the face detection module at 1702.
  • the face detection module can process the acquired image sequence and output the face frame shown at 1703 corresponding to each image in the image sequence.
  • the face detection module may detect multiple face frames in the image.
  • the N face boxes detected for each image can be input into the face box decision module shown at 1704 .
  • the face frame decision module may determine the largest face frame in the N face frames as the face frame 1705 of the image.
  • the extraction module shown at 1707 can be used to extract, from the image sequence, the image collected when the first lighting unit and the second lighting unit on both sides of the lighting device are lit at the same time; this image is used as the recognition image, and the face information 1708 in the recognition image can be determined based on the face frame 1705, wherein the face information can include the face frame and the face image in the recognition image.
  • the face information 1708 can be processed to obtain keypoints 1710 therein.
  • Using the face alignment module shown at 1711, alignment can be performed based on the face image in the face information 1708 and the key points 1710 to obtain an aligned face 1712. Facial features obtained from an aligned face will be more accurate.
  • the aligned face 1712 can be processed to obtain a face encoding 1714 representing the identity of the object to be detected.
  • the identity information of the object to be detected can be obtained.
  • FIG. 18 shows an example of a face registration process according to an embodiment of the present disclosure.
  • the extraction module shown at 1807 can be used to extract, from the image sequence, the image collected when the first lighting unit and the second lighting unit on both sides of the lighting device are lit at the same time; this image is used as the recognition image, and the face information 1808 in the recognition image can be determined based on the face frame 1805, wherein the face information can include the face frame and the face image in the recognition image.
  • the key points 1810 may be processed using the face quality control module shown at 1815 to obtain quality information 1816 of the recognized image.
  • the quality information may include, but is not limited to, the expression of the object to be detected, the occlusion ratio, the head angle, and whether the lighting situation satisfies the predetermined quality judgment condition.
  • If the quality information 1816 indicates that the quality of the face image of the object to be detected in the recognition image is unqualified, this face registration is terminated.
  • the method may proceed to 1817 to start a new round of image acquisition or terminate the entire face registration process.
  • Otherwise, the method proceeds to 1811, where the face alignment module can be used to perform alignment based on the face image in the face information 1808 and the key points 1810 to obtain an aligned face 1812. Facial features obtained from an aligned face will be more accurate.
  • the aligned face 1812 can be processed to obtain a face encoding 1814 representing the identity of the object to be detected.
  • the face code 1814 corresponding to the object to be detected and the identity information of the object to be detected may be associated and stored in the database to complete the registration.
  • FIG. 19 shows another example of a face registration process according to an embodiment of the present disclosure.
  • A plurality of personnel information entries and the face encoding corresponding to each entry are stored in the existing registration database 1901.
  • After the face encoding of the object to be detected has been obtained using the process of FIG. 18, it may be taken as the to-be-stored face encoding 1902 shown in FIG. 19.
  • Using the encoding comparison module shown at 1903, the to-be-stored face encoding 1902 can be compared with the encodings in the existing registration database 1901 to obtain a comparison score list 1904.
  • Using the database duplicate-checking module at 1905, it can be determined based on the comparison score list 1904 whether the to-be-stored face encoding 1902 is a duplicate identity (ID) or a newly added ID. For example, when the comparison score list contains a score higher than a predetermined score threshold, it can be considered that the existing registration database contains an encoding with a high similarity to the to-be-stored face encoding. This may be because the information of the object to be detected has already been entered into the existing registration database, or because the existing registration database happens to contain a face encoding similar to that of the object to be detected. If the face encoding of the object to be detected were entered in this case, it could cause recognition errors in subsequent face recognition.
  • If the to-be-stored face encoding is determined to be a newly added ID 1907, the database update module at 1909 can be used to enter the to-be-stored face encoding 1902 and the personnel information associated with it into the existing registration database 1901. If it is determined to be a duplicate ID 1906, the duplicate-ID handling module at 1908 may attempt to handle the case by re-registration 1910, rejecting the registration 1911, or manual intervention 1912. A sketch of this decision is given below.
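  • As an illustration of the duplicate-checking decision only: the dictionary-based score list, the 0.6 threshold, the person_info["id"] key, and the function names are assumptions for this sketch, not the patent's implementation.

        def classify_candidate(comparison_scores, score_threshold=0.6):
            # comparison_scores maps existing person IDs to their similarity scores
            # against the to-be-stored encoding; a score above the (assumed) threshold
            # marks the candidate as a duplicate identity.
            duplicates = {pid: s for pid, s in comparison_scores.items()
                          if s > score_threshold}
            if duplicates:
                return "duplicate", max(duplicates, key=duplicates.get)
            return "new", None

        def register(candidate_encoding, person_info, registry, comparison_scores):
            # Enter the candidate into the registry only when it is judged to be a new
            # ID; duplicates go to re-registration, rejection, or manual review instead.
            status, match = classify_candidate(comparison_scores)
            if status == "new":
                registry[person_info["id"]] = candidate_encoding
            return status, match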
  • FIG. 20 is a block diagram illustrating an example of an electronic device according to an exemplary embodiment of the present disclosure. It should be noted that the structure shown in FIG. 20 is only an example, and according to specific implementations, the electronic device of the present disclosure may only include one or more of the components shown in FIG. 20 .
  • The electronic device 2000 may be, for example, a general purpose computer (e.g., various computers such as a laptop computer or a tablet computer), a mobile phone, or a personal digital assistant. According to some embodiments, the electronic device 2000 may be a visually impaired assistive device.
  • The electronic device 2000 may include a camera, a lighting device, and an electronic circuit for liveness detection. The camera may be configured to acquire images, the lighting device may be used to illuminate the object to be detected, and the electronic circuit may be configured to perform the method for liveness detection described in conjunction with FIG. 2 and FIG. 3.
  • the electronic device 2000 can also be installed on other wearable devices, or integrated with other wearable devices.
  • the wearable device may be, for example, a head-mounted device (such as a helmet or a hat, etc.), a device that can be worn on the ear, and the like.
  • The electronic device may be implemented as an accessory attachable to a wearable device, e.g., as an accessory attachable to a helmet or a hat, or the like.
  • the electronic device 2000 may also have other forms.
  • For example, the electronic device 2000 may be a mobile phone, a general purpose computing device (e.g., a laptop computer, a tablet computer, etc.), a personal digital assistant, or the like.
  • the electronic device 2000 may also have a base so that it can be placed on a desktop.
  • the electronic device 2000 may include a camera 2004 for acquiring images.
  • The camera 2004 may include, but is not limited to, a video camera or a still camera, or the like.
  • The electronic device 2000 may further include a character recognition circuit 2005 configured to perform text detection and/or recognition (e.g., OCR processing) on the text contained in the image to obtain text data.
  • the character recognition circuit 2005 can be implemented by, for example, a dedicated chip.
  • the electronic device 2000 may also include a sound conversion circuit 2006 configured to convert the text data into sound data.
  • the sound conversion circuit 2006 can be implemented by, for example, a dedicated chip.
  • the electronic device 2000 may further include a sound output circuit 2007 configured to output the sound data.
  • the sound output circuit 2007 may include, but is not limited to, an earphone, a speaker, or a vibrator, etc., and a corresponding driving circuit thereof.
  • The electronic device 2000 may also include a liveness detection circuit (electronic circuit) 2100, which includes circuitry configured to perform the steps of the method for liveness detection described above (e.g., the method steps shown in the flowcharts of FIGS. 2 and 3); an illustrative sketch of the pass/fail decision made by that method follows this item.
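  • Purely as an illustration of that decision logic: the method passes liveness only when the lighting pattern predicted from the captured images matches the pattern actually used and the classifier judges the face to be live. The two-branch prediction network itself is not shown, and the names and the 0.5 threshold below are assumptions for this sketch.

        def liveness_decision(predicted_pattern, actual_pattern,
                              live_probability, live_threshold=0.5):
            # The lighting-pattern prediction supervises the liveness score: if the
            # images do not reflect the (random) lighting that was actually applied,
            # the liveness score is not trusted and the check fails.
            if predicted_pattern != actual_pattern:
                return False
            return live_probability >= live_threshold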
  • the electronic device 2000 may also include an image processing circuit 2008, which may include circuits configured to perform various image processing on images.
  • Image processing circuitry 2008 may include, for example, but is not limited to, one or more of the following: circuitry configured to denoise an image, circuitry configured to deblur an image, circuitry configured to geometrically correct an image, circuitry configured to perform feature extraction on an image, circuitry configured to perform object detection and/or recognition of objects in an image, circuitry configured to perform text detection on text contained in an image, circuitry configured to extract text lines from an image, circuitry configured to extract text coordinates from an image, circuitry configured to extract object boxes from an image, circuitry configured to extract text boxes from an image, circuitry configured to perform layout analysis (e.g., paragraph division) based on an image, and so on.
  • The electronic device 2000 may further include a text processing circuit 2009 that may be configured to perform various processing based on the extracted text-related information (e.g., text data, text boxes, paragraph coordinates, text line coordinates, and text coordinates) to obtain processing results such as paragraph ordering, text semantic analysis, and layout analysis results.
  • One or more of the various circuits described above may use custom hardware, and/or may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof.
  • For example, one or more of the various circuits described above may be implemented by programming hardware (e.g., programmable logic circuits including field programmable gate arrays (FPGAs) and/or programmable logic arrays (PLAs)) in assembly language or in hardware programming languages (such as VERILOG, VHDL, C++) using logic and algorithms according to the present disclosure.
  • The electronic device 2000 may also include communication circuitry 2010, which may be any type of device or system that enables communication with external devices and/or with a network, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication devices and/or chipsets, such as Bluetooth devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
  • The electronic device 2000 may further include an input device 2011, which may be any type of device capable of inputting information to the electronic device 2000, and may include, but is not limited to, various sensors, a mouse, a keyboard, a touch screen, buttons, joysticks, a microphone and/or a remote control, etc.
  • The electronic device 2000 may also include an output device 2012, which may be any type of device capable of presenting information, and may include, but is not limited to, a display, a visual output terminal, a vibrator, and/or a printer, etc.
  • Although the electronic device 2000 is used for a visually impaired assistive device according to some embodiments, a vision-based output device may make it convenient for the user's family members or maintenance workers, etc. to obtain output information from the electronic device 2000.
  • the electronic device 2000 may also include a processor 2001 .
  • the processor 2001 may be any type of processor, and may include, but is not limited to, one or more general-purpose processors and/or one or more special-purpose processors (eg, special processing chips).
  • the processor 2001 may be, for example, but not limited to, a central processing unit CPU or a microprocessor MPU, or the like.
  • The electronic device 2000 may also include a working memory 2002, which may store programs (including instructions) and/or data (e.g., images, text, sounds, and other intermediate data) useful for the operation of the processor 2001, and which may include, but is not limited to, random access memory and/or read-only memory devices.
  • The electronic device 2000 may also include a storage device 2003, which may include any non-transitory storage device that enables data storage, and may include, but is not limited to, disk drives, optical storage devices, solid-state memory, floppy disks, flexible disks, hard disks, magnetic tapes or any other magnetic media, optical discs or any other optical media, ROM (read-only memory), RAM (random access memory), cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions and/or code.
  • The working memory 2002 and the storage device 2003 may be collectively referred to as "memory" and may in some cases be used interchangeably.
  • The processor 2001 can control and schedule at least one of the camera 2004, the character recognition circuit 2005, the sound conversion circuit 2006, the sound output circuit 2007, the image processing circuit 2008, the text processing circuit 2009, the communication circuit 2010, the liveness detection circuit (electronic circuit) 2100, and the other various devices and circuits included in the electronic device 2000.
  • at least some of the various components described in FIG. 20 may be interconnected and/or in communication via bus 2013 .
  • Software elements may reside in the working memory 2002, including, but not limited to, an operating system 2002a, one or more application programs 2002b, drivers, and/or other data and code.
  • instructions for performing the aforementioned control and scheduling may be included in the operating system 2002a or one or more application programs 2002b.
  • Instructions for performing the method steps described in the present disclosure may be included in the one or more application programs 2002b, and the various modules of the electronic device 2000 described above may be implemented by the processor 2001 reading and executing the instructions of the one or more application programs 2002b.
  • In other words, the electronic device 2000 may include a processor 2001 and a memory (e.g., the working memory 2002 and/or the storage device 2003) storing a program including instructions that, when executed by the processor 2001, cause the processor 2001 to perform the methods described in the various embodiments of the present disclosure.
  • Some or all of the operations performed by at least one of the character recognition circuit 2005, the sound conversion circuit 2006, the image processing circuit 2008, the text processing circuit 2009, and the liveness detection circuit (electronic circuit) 2100 may be implemented by the processor 2001 reading and executing the instructions of the one or more application programs 2002b.
  • The executable code or source code of the instructions of the software elements (programs) may be stored in a non-transitory computer-readable storage medium (such as the storage device 2003), and may be stored in the working memory 2002 when executed (possibly being compiled and/or installed). Accordingly, the present disclosure provides a computer-readable storage medium storing a program comprising instructions that, when executed by a processor of an electronic device (e.g., a visually impaired assistive device), cause the electronic device to perform the methods described in the various embodiments of the present disclosure. According to another embodiment, the executable code or source code of the instructions of the software elements (programs) may also be downloaded from a remote location.
  • The processor 2001 in the electronic device 2000 may be distributed over a network. For example, some processing may be performed using one processor, while other processing may be performed by another processor remote from that processor. Other modules of the electronic device 2000 may be similarly distributed. As such, the electronic device 2000 may be interpreted as a distributed computing system that performs processing in multiple locations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A method for liveness detection, an electronic circuit, an electronic device and a medium are provided. The method includes: controlling a lighting device to provide illumination based on a current lighting pattern, and, while the illumination is provided, controlling an image capture device to capture an image of an object to be detected (S202); determining a predicted lighting pattern based on the image of the object to be detected (S204); and determining, at least in response to determining that the predicted lighting pattern is consistent with the current lighting pattern, that the object to be detected passes liveness detection (S206). With the method provided by the embodiments of the present disclosure, the prediction result for the lighting pattern can be used to supervise the liveness prediction result, thereby improving the accuracy of liveness prediction.

Description

用于活体检测的方法、电子电路、电子设备和介质 技术领域
本公开涉及图像处理领域,特别涉及一种用于活体检测的方法、电子电路、电子设备和介质。
背景技术
通过图像处理的方式能够实现人脸识别,并且可以通过各种方式确定所识别的人脸属于活体,从而避免例如照片攻击的欺诈行为。
在此部分中描述的方法不一定是之前已经设想到或采用的方法。除非另有指明,否则不应假定此部分中描述的任何方法仅因其包括在此部分中就被认为是现有技术。类似地,除非另有指明,否则此部分中提及的问题不应认为在任何现有技术中已被公认。
发明内容
根据本公开的一个方面,提供了一种用于活体检测的方法,包括:基于当前照明模式控制照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的图像;基于所述待检测对象的图像确定预测照明模式;以及至少响应于确定所述预测照明模式和所述当前照明模式一致,确定所述待检测对象通过活体检测。
根据本公开的另一方面,提供一种电子电路,包括:被配置为执行上述方法的步骤的电路。
根据本公开的另一方面,还提供一种电子设备,包括:处理器;以及存储程序的存储器,所述程序包括指令,所述指令在由所述处理器执行时使所述处理器执行上述的方法。
根据本公开的另一方面,还提供一种存储程序的非暂态计算机可读存储介质,所述程序包括指令,所述指令在由电子设备的处理器执行时,致使所述电子设备执行上述的方法。
根据本公开的另一方面,还提供一种计算机程序产品,包括计算机程序,其中,所述计算机程序在被处理器执行时实现上述的方法。
根据本公开的另一方面,提供了一种检测设备,其特征在于:图像采集设备;以及照明设备,包括位于所述图像采集设备一侧的第一照明单元和位于所述图像采集设备另一侧的第二照明单元;其中,所述第一照明单元的第一光轴与所述图像采集设备的光学系统的光轴相交,并且所述第二照明单元的第二光轴与所述图像采集设备的光学系统的光轴相交。
附图说明
附图示例性地示出了实施例并且构成说明书的一部分,与说明书的文字描述一起用于讲解实施例的示例性实施方式。所示出的实施例仅出于例示的目的,并不限制权利要求的范围。在所有附图中,相同的附图标记指代类似但不一定相同的要素。
图1示出了根据本公开的实施例可以将本文描述的各种方法和装置在其中实施的示例性系统的示意图;
图2示出了根据本公开的实施例的用于活体检测的方法的示意性的流程图;
图3示出了根据本公开的实施例的用于身份识别的过程的示意性的流程图;
图4示出了根据本公开的实施例的用于活体检测的装置的示意性的框图;
图5示出了根据本公开的实施例的检测设备的一种示意图;
图6示出了根据本公开的实施例的检测设备的另一种示意图;
图7示出了根据本公开的实施例的照明设备中的第一照明单元的安装位置的示意图;
图8示出了根据本公开的检测设备的光场分布的一部分的示意图;
图9A-图9C示出了根据本公开的实施例的检测设备的布置的一种示意图;
图10A-图10B示出了根据本公开的实施例的检测设备的布置的另一种示意图;
图11示出了根据本公开的实施例的检测设备的布置的又一种示意图;
图12-图14示出了根据本公开的实施例的检测设备的示例性的结构图;
图15示出了根据本公开的实施例的检测设备的示意性的框图;
图16示出了根据本公开的实施例的活体检测过程的一个示例;
图17示出了根据本公开的实施例的身份识别过程的一个示例;
图18示出了根据本公开的实施例的人脸注册过程的一个示例;
图19示出了根据本公开的实施例的人脸注册过程的另一个示例;
图20是示出根据本公开的示例性实施例的电子设备的示例的框图。
具体实施方式
在本公开中,除非另有说明,否则使用术语“第一”、“第二”等来描述各种要素不意图限定这些要素的位置关系、时序关系或重要性关系,这种术语只是用于将一个元件与另一元件区分开。在一些示例中,第一要素和第二要素可以指向该要素的同一实例,而在某些情况下,基于上下文的描述,它们也可以指代不同实例。
在本公开中对各种所述示例的描述中所使用的术语只是为了描述特定示例的目的,而并非旨在进行限制。除非上下文另外明确地表明,如果不特意限定要素的数量,则该要素可以是一个也可以是多个。此外,本公开中所使用的术语“和/或”涵盖所列出的项目中的任何一个以及全部可能的组合方式。
利用基于人工智能的方法,可以对待检测对象的图像(如人脸图像)进行图像处理以进行身份识别。在一些情况下,身份识别系统可能难以识别被检测的图像是活体还是通过照片或视频展示的图像。为了提高身份识别系统的安全性,存在确定待检测对象是否是活体的必要。
在一些情况下,可以采用深度相机或者双目相机获取待检测对象的深度信息,以判断待检测对象是平面的二维对象还是立体的三维对象。进一步地,还可以通过要求待检测对象根据指示做出相应动作(如眨眼、张嘴等),来进一步避免身份识别系统将三维假体模型误认为是真实人体对象。
图1示出了根据本公开的实施例可以将本文描述的各种方法和装置在其中实施的示例性系统100的示意图。参考图1,该系统100包括一个或多个终端设备101、服务器120以及将一个或多个终端设备耦接到服务器120的一个或多个通信网络110。终端设备101可以被配置为执行一个或多个应用程序。
在本公开的实施例中,服务器120可以运行使得能够执行根据本公开的用于活体检测的方法的一个或多个服务或软件应用。在一些实施例中,也可以使用终端设备101运行根据本公开的用于活体检测的方法的一个或多个服务或软件应用。在一些实现方式中,终端设备101可以实现为门禁设备、支付设备等。
在某些实施例中,服务器120还可以提供可以包括非虚拟环境和虚拟环境的其他服务或软件应用。在某些实施例中,这些服务可以作为基于web的服务或云服务提供,例如在软件即服务(SaaS)模型下提供给终端设备101的用户。
在图1所示的配置中,服务器120可以包括实现由服务器120执行的功能的一个或多个组件。这些组件可以包括可由一个或多个处理器执行的软件组件、硬件组件或其组 合。操作终端设备101的用户可以依次利用一个或多个终端应用程序来与服务器120进行交互以利用这些组件提供的服务。应当理解,各种不同的系统配置是可能的,其可以与系统100不同。因此,图1是用于实施本文所描述的各种方法的系统的一个示例,并且不旨在进行限制。
终端设备可以提供使终端设备的用户能够与终端设备进行交互的接口。终端设备还可以经由该接口向用户输出信息。尽管图1仅描绘了一个终端设备,但是本领域技术人员将能够理解,本公开可以支持任何数量的终端设备。
终端设备101可以包括各种类型的计算机设备,例如便携式手持设备、通用计算机(诸如个人计算机和膝上型计算机)、工作站计算机、可穿戴设备、游戏系统、瘦客户端、各种消息收发设备、传感器或其他感测设备等。这些计算机设备可以运行各种类型和版本的软件应用程序和操作系统,例如Microsoft Windows、Apple iOS、类UNIX操作系统、Linux或类Linux操作系统(例如Google Chrome OS);或包括各种移动操作系统,例如Microsoft Windows Mobile OS、iOS、Windows Phone、Android。便携式手持设备可以包括蜂窝电话、智能电话、平板电脑、个人数字助理(PDA)等。可穿戴设备可以包括头戴式显示器和其他设备。游戏系统可以包括各种手持式游戏设备、支持互联网的游戏设备等。终端设备能够执行各种不同的应用程序,例如各种与Internet相关的应用程序、通信应用程序(例如电子邮件应用程序)、短消息服务(SMS)应用程序,并且可以使用各种通信协议。
网络110可以是本领域技术人员熟知的任何类型的网络,其可以使用多种可用协议中的任何一种(包括但不限于TCP/IP、SNA、IPX等)来支持数据通信。仅作为示例,一个或多个网络110可以是局域网(LAN)、基于以太网的网络、令牌环、广域网(WAN)、因特网、虚拟网络、虚拟专用网络(VPN)、内部网、外部网、公共交换电话网(PSTN)、红外网络、无线网络(例如蓝牙、WIFI)和/或这些和/或其他网络的任意组合。
服务器120可以包括一个或多个通用计算机、专用服务器计算机(例如PC(个人计算机)服务器、UNIX服务器、中端服务器)、刀片式服务器、大型计算机、服务器群集或任何其他适当的布置和/或组合。服务器120可以包括运行虚拟操作系统的一个或多个虚拟机,或者涉及虚拟化的其他计算架构(例如可以被虚拟化以维护服务器的虚拟存储设备的逻辑存储设备的一个或多个灵活池)。在各种实施例中,服务器120可以运行提供下文所描述的功能的一个或多个服务或软件应用。
服务器120中的计算单元可以运行包括上述任何操作系统以及任何商业上可用的服务器操作系统的一个或多个操作系统。服务器120还可以运行各种附加服务器应用程序和/或中间层应用程序中的任何一个,包括HTTP服务器、FTP服务器、CGI服务器、JAVA服务器、数据库服务器等。
在一些实施方式中,服务器120可以包括一个或多个应用程序,以分析和合并从终端设备101、的用户接收的数据馈送和/或事件更新。服务器120还可以包括一个或多个应用程序,以经由终端设备101的一个或多个显示设备来显示数据馈送和/或实时事件。
在一些实施方式中,服务器120可以为分布式系统的服务器,或者是结合了区块链的服务器。服务器120也可以是云服务器,或者是带人工智能技术的智能云计算服务器或智能云主机。云服务器是云计算服务体系中的一项主机产品,以解决传统物理主机与虚拟专用服务器(VPS,Virtual Private Server)服务中存在的管理难度大、业务扩展性弱的缺陷。
系统100还可以包括一个或多个数据库130。在某些实施例中,这些数据库可以用于存储数据和其他信息。例如,数据库130中的一个或多个可用于存储诸如音频文件和视频文件的信息。数据存储库130可以驻留在各种位置。例如,由服务器120使用的数据存储库可以在服务器120本地,或者可以远离服务器120且可以经由基于网络或专用的 连接与服务器120通信。数据存储库130可以是不同的类型。在某些实施例中,由服务器120使用的数据存储库可以是数据库,例如关系数据库。这些数据库中的一个或多个可以响应于命令而存储、更新和检索到数据库以及来自数据库的数据。
在某些实施例中,数据库130中的一个或多个还可以由应用程序使用来存储应用程序数据。由应用程序使用的数据库可以是不同类型的数据库,例如键值存储库,对象存储库或由文件系统支持的常规存储库。
图1的系统100可以以各种方式配置和操作,以使得能够应用根据本公开所描述的各种方法和装置。
图2示出了根据本公开的实施例的用于活体检测的方法的示意性的流程图。图2中示出的方法可以由图1中示出的终端设备101或服务器120来执行。其中,终端设备可以包括照明设备和图像采集设备。图像采集设备可以用于获取用于活体检测的待检测对象的图像,照明设备可以用于对待检测对象进行照明。
如图2所示,在步骤S202中,可以基于当前照明模式控制照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的图像。
在一些实施例中,照明设备可以是能够发射可见光的发光二极管。在另一些实施例中,照明设备可以是能够发射红外光的红外照明设备。可以理解的是,照明设备也可以是能够同时发射或选择性地发射可见光和红外光的照明设备。在图像采集设备包括红外摄像装置并且照明设备包括红外照明设备的情况下,可以采集待检测对象的红外信息以辅助活体检测。
在一些实施例中,照明设备可以包括位于图像采集设备的一侧的第一照明单元和位于图像采集设备的另一侧的第二照明单元。其中,第一照明单元和第二照明单元可以相对于图像采集设备对称布置。在一些实现方式中,当待检测对象位于图像采集设备正前方时,第一照明单元可以被设置为用于从待检测对象的左侧进行照明,第二照明单元可以被设置为用于从待检测对象的右侧进行照明。例如,第一照明单元可以被设置在用于照亮待检测对象的左侧脸的位置,第二照明单元可以被设置在用于照亮待检测对象的右侧脸的位置。在另一些实现方式中,第一照明单元可以被设置为用于从待检测对象的上方进行照明,第二照明单元可以被设置为用于从待检测对象的下方进行照明。例如,第一照明单元可以被设置在用于照亮待检测对象的脸部的上半部分的位置,第二照明单元可以被设置在用于照亮待检测对象的脸部的下半部分的位置。可以理解的是,本领域技术人员可以根据实际情况将第一照明单元和第二照明单元设置在不同的位置。
在一些实现方式中,在图像采集设备无法采集深度信息的情况下,可以通过控制照明设备以不同的方式进行照明来识别立体的待检测对象。以照明设备包括用于从待检测对象的左侧进行照明的第一照明单元和从待检测对象的右侧进行照明的第二照明单元为例,可以通过控制照明设备采集仅对待检测对象的左侧进行照明时的图像和仅对待检测对象的右侧进行照明时的图像。由于对于立体的三维对象来说,在分别从左右两侧进行照明的情况下采集到的图像具有亮度差异,因此,即使在没有深度信息的情况下,也可以通过对这样具有亮度差异的图像进行检测以确定待检测对象是否是三维对象,并进一步确定待检测对象是否是活体。
照明模式用于指示照明设备进行照明的具体方式。在一些实施例中,照明模式可以指示点亮或熄灭照明设备的持续时间,例如持续照明0.5秒、1秒或2秒或者保持熄灭0.5秒、1秒或2秒。本领域技术人员可以根据实际情况设置持续时间的具体数值。在另一些实施例中,在照明设备包括多个照明单元的情况下,照明模式可以指示点亮多个照明单元中的一部分或点亮全部照明单元。在又一些实施例中,照明模式可以同时指示被点亮的照明单元以及点亮照明单元的持续时间。
表1中示出了照明模式的部分示例,在表1中示出的示例中,照明设备包括第一照明单元和第二照明单元。其中,第一照明单元可以位于图像采集设备的一侧(如左侧),第二照明单元可以位于图像采集设备的另一侧(如右侧)。
表1
序号 照明模式
1 点亮第一照明单元1秒
2 点亮第二照明单元1秒
3 同时点亮第一照明单元和第二照明单元1秒
4 点亮第一照明单元2秒
5 点亮第二照明单元2秒
6 同时点亮第一照明单元和第二照明单元2秒
在一些实施例中,当前照明模式可以包括用于多次照明的照明模式序列。可以基于照明模式序列控制照明设备进行多次照明。
照明模式序列可以包括由一个或多个照明模式组成的序列。表2示出了利用表1中示出的照明模式形成的照明模式序列的示例。其中,照明模式序列可以包括表1示出的至少一个照明模式。
表2
序号 照明模式序列
① 照明模式1、照明模式2、照明模式3
② 照明模式1、照明模式3、照明模式2
③ 照明模式3、照明模式1、照明模式2
④ 照明模式3、照明模式2、照明模式1
⑤ 照明模式2、照明模式1、照明模式3
⑥ 照明模式2、照明模式3、照明模式1
⑦ 照明模式1、照明模式1、照明模式2、照明模式2、照明模式3
⑧ 照明模式1、照明模式2、照明模式4、照明模式5、照明模式3
以表2中的序列①为例,照明模式序列可以包括依次启用第一照明单元进行照明、第二单元进行照明以及利用两侧同时照明。
可以理解的是,表2中仅示出了利用表1中提供的照明模式形成的照明模式序列的一些示例。本领域技术人员可以根据实际情况通过表1中示出的照明模式构建不同的照明模式序列。此外,随机照明模式序列的数量也不限于表2中示出的8个。本领域技术人员可以根据实际情况设置更多或更少的照明模式序列。
在一些实施例中,照明模式序列可以是随机照明模式序列。可以从预先设置的多个照明模式序列中确定用于照明设备的随机照明模式序列作为当前照明模式。例如,可以生成随机数,并选择对应于所生成的随机数的照明模式序列作为用于照明设备的随机照明模式序列。
以表2中的示例为例,可以在序号1~8的范围内生成随机数,并选择对应于所生成的随机数的照明模式序列作为用于照明设备的随机照明模式序列。
以当前照明模式是表2中序号①的照明模式序列为例,可以控制照明设备按照点亮第一照明单元1秒、点亮第二照明单元1秒以及同时点亮第一照明单元和第二照明单元1秒的方式进行照明。
在控制照明设备进行照明的同时,可以控制图像采集设备在不同的照明模式下采集待检测对象的图像。以步骤2中确定的照明模式序列是表2中的序号①为例,可以控制图像采集设备分别在点亮第一照明单元1秒期间、点亮第二照明单元1秒期间以及同时点亮第一照明单元和第二照明单元1秒采集图像,以得到待检测图像的图像序列。
在步骤S204中,可以基于步骤S202中采集的待检测对象的图像确定预测照明模式。
在一些实施例中,可以对待检测对象的图像进行图像分类以得到预测照明模式。在一些实现方式中,可以将待检测对象的图像输入预先训练好的用于图像分类的第一神经网络模型。第一神经网络模型被训练成能够预测输入的图像被采集时所使用的照明模式,并输出指示预测照明模式的分类结果。以表1中示出的示例为例,第一神经网络模型可以对待检测对象的图像进行分类,并输出图像所属的类别为类别3,这表示图像的预测照明模式是表1中的照明模式3。
在当前照明模式包括用于多次照明的照明模式序列的情况下,所采集的待检测对象的图像可以包括在多次照明时分别采集的图像形成的图像序列。可以将图像序列输入预先训练好的第一神经网络模型以得到预测照明模式。以表2中示出的示例为例,第一神经网络模型可以对待检测对象的图像序列进行分类,并输出图像序列所属的类别为类别1,这表示图像序列的预测照明模式是表2中的照明模式序列①。
在步骤S206中,至少响应于确定预测照明模式和当前照明模式一致,确定待检测对象通过活体检测。
在一些实施例中,可以对步骤S202中采集的待检测对象的图像进行图像分类,以得到待检测对象的活体预测结果。其中,活体预测结果指示待检测对象是活体或待检测对象是非活体。在一些实现方式中,可以将待检测对象的图像输入预先训练好的用于图像分类的第二神经网络模型。第二神经网络模型被训练成能够预测输入的图像中存在的待检测对象是否是活体,并输出指示待检测对象是活体或待检测对象是非活体的分类结果。
在一些示例中,可以利用同一预测网络的两个分支分别实现上述第一神经网络模型和第二神经网络模型。其中预测网络可以包括骨干网络和连接骨干网络的第一输出模块和第二输出模块。例如,可以将待检测对象的图像(或图像序列)输入预测网络。利用骨干网络对待检测对象的图像(或图像序列)进行处理以得到所述待检测对象的图像特征。可以利用第一输出模块对图像特征进行处理以得到预测照明模式。同时,可以利用第二输出模块对图像特征进行处理以得到活体预测结果。在一些示例中,第一输出模块和第二数据模块可以利用全连接层实现。
利用上述过程,可以通过对待检测对象的图像进行一次图像分类的操作来获得预测照明模式的分类结果和指示待检测对象是否是活体的分类结果。
在另一些示例中,第二神经网络模型和第一神经网络模型可以是不同的模型。例如,可以通过不同的图像分类操作分别获得预测照明模式的分类结果和指示待检测对象是否是活体的分类结果。
在一些实施例中,响应于活体预测结果指示待检测对象是活体,并且响应于确定步骤S204中得到的预测照明模式和步骤S202中用于控制照明的当前照明模式一致,可以确定待检测对象通过活体检测。
利用本公开的实施例提供的用于活体检测的方法,待检测对象通过活体检测的条件不仅包括基于待检测对象的图像分类结果指示待检测对象是活体,还包括基于待检测对象的图像确定的预测的明模式和实际采集图像时使用的当前照明模式是一致的。可以理解的是,基于图像分类得到的指示待检测对象是否是活体的结果并不是100%正确的。在 一些情况下,由于所采集的用于活体检测的图像质量较低的情况下,活体预测结果和真实情况可能是不一致的。例如,可能针对活体的待检测对象输出指示待检测对象是非活体的预测结果,也可能针对非活体的待检测对象输出待检测对象是活体的预测结果。
为了提高活体检测的准确率,本公开利用基于待检测对象的图像(或图像序列)确定的预测照明模式和当前照明模式是否一致的判断结果对活体预测结果进行监督。在确定预测照明模式和当前照明模式一致的情况下,可以认为所采集的用于检测的图像的图像质量是可以接受的,因此基于这样的图像得到的活体预测结果是可信的。在确定预测照明模式和当前照明模式不一致的情况下,可以认为所采集的用于检测的图像的图像质量较低,因此无法得到正确的照明模式预测结果。因此基于这种低质量的图像得到的活体预测结果是不可信的。在后一种情况下,即使活体预测结果指示待检测对象是活体,也无法通过活体检测。因此,在模型训练和实际使用过程中,照明模式的预测结果可以对活体检测结果起监督作用,从而提高活体检测结果的准确性。
图3示出了根据本公开的实施例的用于身份识别的过程的示意性的流程图。图3中示出的方法可以由图1中示出的终端设备101或服务器120来执行。其中,终端设备可以包括照明设备和图像采集设备。图像采集设备可以用于获取用于活体检测的待检测对象的图像,照明设备可以用于对待检测对象进行照明。其中,照明设备包括位于图像采集设备一侧的第一照明单元和位于图像采集设备另一侧的第二照明单元。
如图3所示,方法300开始于步骤S301。
在步骤S302中,可以确定用于照明设备的随机照明模式序列。步骤S302中确定的随机照明模式序列中至少包括以第一照明单元和第二照明单元对待检测对象的两侧同时进行照明的照明模式。例如,随机照明模式序列可以是包括以下各项的序列:第一照明单元进行照明;第二照明单元进行照明;以及两侧同时照明。
在步骤S304中,可以基于步骤S302中确定的随机照明模式序列控制照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的图像。其中,待检测对象的图像可以是待检测对象的人脸图像序列。
在步骤S306中,可以基于步骤S304中采集的图像序列确定预测照明模式和待检测对象的活体预测结果。
可以利用结合图2描述的步骤S202~S204实现图3中示出的步骤S304~S306,在此不再加以赘述。
在步骤S308中,可以确定步骤S306中得到的预测照明模式和步骤S302中确定的随机照明模式序列是否一致。
在步骤S308中确定预测照明模式和随机照明模式序列不一致的情况下,此次身份识别过程失败。可以返回步骤S301以开始新的身份识别过程。
在步骤S308中确定预测照明模式和随机照明模式序列一致的情况下,方法300可以前进到步骤S310。
在步骤S310中,可以获取步骤S306中得到的待检测对象的活体预测结果。其中活体预测结果指示待检测对象是活体或待检测对象是非活体。
在步骤S310中获取的活体预测结果指示待检测对象是非活体的情况下,此次身份识别过程失败。可以返回步骤S301以开始新的身份识别过程。
在步骤S310中得到的活体预测结果指示待检测对象是活体的情况下,方法300可以前进到步骤S312。
在步骤S312中,可以将人脸图像序列中从待检测对象两侧同时进行照明时采集的人脸图像作为识别图像。
在步骤S314中,可以对步骤S312中确定的识别图像进行图像处理,以得到人脸识别结果。在一些实施例中,可以利用训练好的用于人脸识别的神经网络模型对识别图像 进行处理,以得到待检测对象的人脸特征。通过将待检测对象的人脸特征与数据库中预先存储的多个身份的人脸特征进行比对,可以得到对应于待检测对象的身份作为待检测对象的人脸识别结果。
图4示出了根据本公开的实施例的用于活体检测的装置的示意性的框图。如图4所示,用于活体检测的装置400可以包括控制单元410、预测单元420以及检测单元430。
控制单元410可以配置成基于当前照明模式控制照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的图像。预测单元430可以配置成基于所述待检测对象的图像确定预测照明模式。检测单元430可以配置成至少响应于确定所述预测照明模式和所述当前照明模式一致,确定所述待检测对象通过活体检测。
这里所说的用于活体检测的装置400的上述各单元410~430的操作分别与前面描述的步骤S202~S206的操作类似,在此不再加以赘述。
在图像采集设备采集的待检测对象的图像序列是待检测对象的人脸图像的序列的情况下,用于活体检测的装置400还可以包括人脸识别单元(未示出)。人脸识别单元可以配置成在图像序列中确定两侧同时照明时采集的图像作为识别图像,并对识别图像进行图像处理以得到待检测对象的人脸识别结果。
利用本公开的实施例提供的用于活体检测的装置,待检测对象通过活体检测的条件不仅包括基于待检测对象的图像分类结果指示待检测对象是活体,还包括基于待检测对象的图像确定的预测照明模式和实际采集图像时使用的当前照明模式是一致的。本公开利用基于待检测对象的图像确定的预测照明模式和采集图像时使用的当前照明模式是否一致的判断结果对活体预测结果进行监督以提高活体检测的准确率。在确定预测照明模式和当前照明模式一致的情况下,可以认为所采集的用于检测的图像的图像质量是可以接受的,因此基于这样的图像得到的活体预测结果是可信的。在确定预测照明模式和当前照明模式不一致的情况下,可以认为所采集的用于检测的图像的图像质量较低,因此无法得到正确的照明模式预测结果。因此基于这种低质量的图像得到的活体预测结果是不可信的。在后一种情况下,即使活体预测结果指示待检测对象是活体,也无法通过活体检测。
以上已经结合附图描述了根据本公开的示例性方法。下面将结合附图对利用本公开的电子电路以及电子设备等的示例性实施例进行进一步描述。
根据本公开的另一个方面,提供一种电子电路,包括:被配置为执行本公开中所述的方法的步骤的电路。
根据本公开的另一个方面,提供一种电子设备,包括:处理器;以及存储程序的存储器,所述程序包括指令,所述指令在由所述处理器执行时使所述处理器执行本公开中所述的方法。
根据本公开的另一个方面,提供一种存储程序的计算机可读存储介质,所述程序包括指令,所述指令在由电子设备的处理器执行时,致使所述电子设备执行本公开中所述的方法。
根据本公开的另一个方面,提供一种计算机程序产品,包括计算机程序,所述程序包括指令,所述指令在被处理器执行时执行本公开中所述的方法。
以下结合图5-图15,描述可以用于本公开描述的用于活体检测的终端设备的检测设备结构。
图5示出了根据本公开的实施例的检测设备的一种示意图。
如图5所示,检测设备500可以包括图像采集设备510和照明设备。其中,照明设备可以包括位于图像采集设备510一侧的第一照明单元5201以及位于图像采集设备510另一侧的第二照明单元5202。其中,虚线511示出了图像采集设备510的光轴,虚线521示出了第一照明单元5201的第一光轴,其中第一光轴521与图像采集设备的光轴511相 交。虚线522示出了第二照明单元5202的第二光轴,其中第二光轴522与图像采集设备的光轴511相交。
在一些实施例中，第一照明单元5201的第一光轴521和第二照明单元5202的第二光轴522的交点在所述图像采集设备的光学系统的光轴511上。
在一些实现方式中,第一照明单元可以被设置为用于从待检测对象的左侧进行照明,第二照明单元可以被设置为用于从待检测对象的右侧进行照明。例如,第一照明单元可以被设置在用于照亮待检测对象的左侧脸的位置,第二照明单元可以被设置在用于照亮待检测对象的右侧脸的位置。在另一些实现方式中,第一照明单元可以被设置为用于从待检测对象的上方进行照明,第二照明单元可以被设置为用于从待检测对象的下方进行照明。例如,第一照明单元可以被设置在用于照亮待检测对象的脸部的上半部分的位置,第二照明单元可以被设置在用于照亮待检测对象的脸部的下半部分的位置。可以理解的是,本领域技术人员可以根据实际情况将第一照明单元和第二照明单元设置在不同的位置。
在一些实施例中,第一照明单元5201和第二照明单元5202可以是相对于图像采集设备510对称布置。在一些示例中,第一照明单元5201和第二照明单元5202可以具有相同的参数。例如,第一照明单元5201和第二照明单元5202可以具有相同的照明范围、发射波长、功率等。
利用图5中示出的检测设备,可以在不获取深度信息的情况下,通过采集待检测对象的图像进行活体检测。以第一照明单元从左侧对待检测对象进行照明,第二照明单元从右侧对待检测对象进行照明为例,当仅在一侧进行照明时,对于三维的待检测对象,照明将在待检测对象的另一侧形成阴影。由此,不同的照明光投射方向会导致待检测对象的光场分布的差异。而在二维的待检测对象上不能形成这样的差异。因此,即使没有深度信息,也可以体现三维对象和二维对象的差异。
在一些实施例中,照明设备520可以是能够发射可见光的发光二极管。在另一些实施例中,照明设备520可以是能够发射红外光的红外照明设备。在这种情况下,图像采集设备可以包括红外摄像装置,以采集红外图像。可以理解的是,照明设备520也可以是能够同时发射或选择性地发射可见光和红外光的照明设备。在照明设备520包括红外照明设备的情况下,可以采集待检测对象的红外信息以辅助活体检测。
利用本公开的实施例提供的检测设备,通过使得第一照明单元的光轴和第二照明单元的光轴的交点在图像采集设备的光学系统的光轴上,可以使得在利用图像采集设备510采集待检测对象的图像时获得更好的照明效果。由于位于两侧的照明单元的光轴与图像采集设备的光轴相交于同一点,第一照明单元和第二照明单元对待检测对象的照明效果更均匀,所采集的待检测对象的图像质量更高。
图6示出了根据本公开的实施例的检测设备的另一种示意图。
如图6所示,检测设备600可以包括图像采集设备610和照明设备620。其中,照明设备620可以包括位于图像采集设备610一侧的第一照明单元6201以及位于图像采集设备610另一侧的第二照明单元6202。其中,第一照明单元6201的光轴621和第二照明单元6202的光轴622的交点在所述图像采集设备的光学系统的光轴611上。
图6中还示出了图像采集设备610的使用范围。其中,图像采集设备610的使用范围可以指示图像采集设备的最近成像平面和图像采集设备的最远成像平面之间的成像区域。
如图6所示,图像采集设备610的使用范围在第一成像平面631与第二成像平面632之间。其中第一成像平面631与图像采集设备610之间的距离小于第二成像平面632与图像采集设备610之间的距离。在第一成像平面631与第二成像平面632之间的范围内,图像采集设备610可以对待检测对象清晰成像。在一些实施例中,第一成像平面631可 以是图像采集设备610的最近成像平面,第二成像平面632可以是图像采集设备610的最远成像平面。
在一些实施例中,可以基于图像采集设备610的景深确定第一成像平面631和第二成像平面632的位置。例如,第一成像平面631与图像采集设备610之间的距离可以大于或等于图像采集设备610的最近清晰成像平面,第二成像平面632与图像采集设备610之间的距离可以小于或等于图像采集设备610的最远清晰成像平面。
还可以进一步基于待检测对象在采集到的图像中占据的比例来确定第一成像平面631和第二成像平面632的位置。例如,当待检测对象与图像采集设备时间的距离在第一成像平面631和第二成像平面632之间时,待检测对象的图像在采集到的图像中占据的比例在预定范围以内。其中在第一成像平面631处采集的图像中,待检测对象在图像中占据预定的最大比例,在第二成像平面632处采集的图像中,待检测对象在图像中占据预定的最小比例。
在一些实施例中,图像采集设备610的使用范围可以是40cm~80cm。也就是说,第一成像平面631(即,图像采集设备的最近成像平面)与图像采集设备610的距离是40cm,第二成像平面632(即,图像采集设备的最远成像平面)与图像采集设备610的距离是80cm。其中,成像平面与图像采集设备之间的距离是沿图像采集设备的光轴方向的距离。在图像采集设备被安装在竖直墙面上时,成像平面与图像采集设备之间的距离是沿水平方向的距离。
在一些实现方式中，第一照明单元6201的第一光轴和第二照明单元6202的第二光轴的交点可以位于图像采集设备610的中心成像平面上，其中中心成像平面位于图像采集设备的最近成像平面631和图像采集设备的最远成像平面632之间的中间位置。进一步地，中心成像平面与最近成像平面631的距离等于中心成像平面与最远成像平面632的距离。
图7示出了根据本公开的实施例的照明设备中的第一照明单元的安装位置的示意图。
如图7所示,针对照明设备700,可以建立以图像采集设备710作为原点的坐标系,其中Y轴与图像采集设备710的光轴重合,X轴平行于图像采集设备710的成像平面。
在上述坐标系下,可以基于第一照明单元7201的光轴和图像采集设备710的光轴721的交点与图像采集设备710之间的距离dis mid确定第一照明单元7201的位置。在一些实现方式中,可以基于图像采集设备710的使用范围确定dis mid的值。例如,可以将dis mid确定为图像采集设备710的使用范围的中间值。在图像采集设备710的使用范围是40cm~80cm的情况下,dis mid可以被确定为60cm。
在图7中示出的示例中,A点是第一照明单元7201的光轴和图像采集设备710的光轴的交点,B点是第一照明单元7201的位置,C点是B点在X轴上的垂点,D点是B点在Y轴上的垂点,E点是图像采集设备710的位置。
可以基于相似三角形△BCE和△ABD之间的比例关系确定公式(1):
Figure PCTCN2022078053-appb-000001
其中,可以根据B点位于X轴的正半轴或负半轴确定|lightX|的值,当B点位于X轴的正半轴时,|lightX|=lightX,当B点位于X轴的负半轴时,|lightX|=-lightX。
在已知dis mid的值的情况下,可以基于公式(1)确定B点的横坐标lightX和纵坐标lightY之间的关系。在实际应用场景中,可以根据实际情况指定lightX和lightY中的一个的值,并基于公式(1)计算lightX和lightY中的另一个的值。例如,可以根据检测设备的实际安装地点确定第一照明设备7201与图像采集设备710在X方向上的距离,即指定lightX的值。然后,可以基于所指定的lightX的值确定lightY的值。
可以理解的是,在通过D点并与Y轴垂直的平面上,可以选择通过D点的任一射线作为X轴,本领域技术人员可以根据实际情况选择适当的X轴。并且,对于任何方式确 定的X轴,在基于图7中示出的方法确定B的位置后,可以将B点以Y轴为轴旋转得到的任何位置确定为第一照明的安装位置。
在一些实施例中,在图像采集设备的使用范围以内,当利用第一照明单元和第二照明单元同时进行照明时的照明范围大于或等于图像采集设备的视场范围。也就是说,在图像采集设备的最近成像平面和最远成像平面之间的区域内,第一照明单元和第二照明单元同时进行照明时的照明范围覆盖图像采集设备的视场范围。
图8示出了根据本公开的检测设备的光场分布的一部分的示意图。
在图8示出的示例中,B点对应于图像采集设备的使用范围中最远距离,A点和C点对应于图像采集设备的视场FOV在通过B点的成像平面上的视场边界,E点对应于第一照明单元的光轴和图像采集设备的光轴的交点,F点是第一照明单元的位置,G点是图像采集设备的位置,H点是第二照明单元的位置,I点是第一照明单元和第二照明单元的连线与图像采集设备的光轴的角点,FA、FD分别表示第一照明单元的照明范围的边界,GA、GC分别表示图像采集设备的视场FOV的边界。其中第一照明单元和第二照明单元相对于图像采集设备的光轴对称设置。
从图8中可以看出,为了使得在图像采集设备的使用范围以内利用第一照明单元和第二照明单元同时进行照明时照明范围大于或等于图像采集设备的视场范围,图8中示出的第一照明单元的照明范围是能够满足上述条件的最小照明范围。在图8中示出的光场分布中,在图像采集设备的使用范围内,在第一照明单元所在的一侧,第一照明单元的照明范围始终大于图像采集设备的视场范围。由于在图像采集设备的另一侧对称地设置有第二照明单元,在图像采集设备的使用范围内,另一侧的第二照明单元的照明范围也始终大于图像采集设备的视场范围。
在已知图像采集设备的FOV以及图像采集设备和第一照明单元的位置的情况下,可以基于图8中示出的几何关系计算图8中示出的第一照明单元的照明视角。
其中,针对图8中示出的三角形AFE,可以基于下式(2)计算cos∠AFE:
cos∠AFE=(AF 2+EF 2-AE 2)/(2AF*EF)         (2)
以GE=600mm,FH=300mm为例,可以基于公式(1)计算得到GH=lightY=7.7913mm,其中公式(1)中的|lightX|=FH=300mm,dis mid=GE=600mm。
在图像采集设备的FOV=90°,GB=800mm的情况下,可以确定点A的坐标为(-800,800)、点E的坐标为(0,600)、点F的坐标为(-30,7.7913)。
通过将点A、E、F的坐标代入公式(2)可以计算得到cos∠AFE的值,并可以进一步计算得到∠AFE=47.08°。由此可以得到图4中示出的第一照明单元的照明视角∠AFD=94.2度。
可以理解的是,基于公式(2)计算得到的照明视角∠AFD的值是用于满足第一照明单元和第二照明单元的照明范围的最小照明范围。在实际应用中,本领域技术人员可以根据实际情况将照明单元的照明范围确定为大于基于公式(2)计算得到的最小照明范围的任意值。
图9A-图9C示出了根据本公开的实施例的检测设备的布置的一种示意图。
在图9A-图9C示出的示例中,图像采集设备910、第一照明单元9201以及第二照明单元9202被安装在同一底座上。如图9A所示,第一照明单元9201安装在图像采集设备910的左侧,第二照明单元9202安装在图像采集设备910的右侧。如图9B所示,第一照明单元9201安装在图像采集设备910的上侧,第二照明单元9202安装在图像采集设备910的下侧。
图9C示出了可以用于图9A和图9B示出的检测设备的布置的立体结构。如图9C所示,第一照明单元9201、图像采集设备910、第二照明单元9202可以被安装在底座930上。其中底座930可以是利用平板形的材料形成的。例如,可以对平板形的材料进行适 当弯折,以使得安装在两侧的第一照明单元9201、第二照明单元9202的光轴的交点位于图像采集设备910的光轴上。在一些示例中,可以使用柔性材料作为底座930的材料。
图10A-图10B示出了根据本公开的实施例的检测设备的布置的另一种示意图。
在图10A-图10B示出的示例中,图像采集设备1010被安装在第一底座1030上,第一照明单元10201、第二照明单元10202被安装在第二底座1040上,第一底座和第二底座以使得第一照明单元10201的光轴和第二照明单元10202的光轴的交点在图像采集设备1010的光学系统的光轴上的方式被组装。
如图10A所示,第一照明单元10201位于图像采集设备1010的左侧,第二照明单元10202位于图像采集设备1010的右侧。
如图10B所示,第一照明单元10201、第二照明单元10202被安装在第二底座1040上。第二底座1040可以是由弯折后的平板形材料形成,其上安装有第一照明单元10201、第二照明单元10202并使得第一照明单元10201、第二照明单元10202的光轴交于一点。此外,图像采集设备1010被安装在第一底座1030上。第一底座1030与第二底座1040以使得第一照明单元10201的光轴和第二照明单元10202的光轴的交点在图像采集设备1010的光学系统的光轴上的方式被组装。例如,可以如图10B中所示出的,通过在第二底座1040上形成的镂空部分使得图像采集设备1010被安装在第一照明单元10201和第二照明单元10202的中间。
图11示出了根据本公开的实施例的检测设备的布置的又一种示意图。
如图11所示,照明设备还包括位于图像采集设备1110的一侧的第一辅助照明单元11203以及位于1110图像采集设备的另一侧的第二辅助照明单元11204。其中第一辅助照明单元11203和第二辅助照明单元11204与第一照明单元11201和第二照明单元11202可以具有相同的参数。尽管图11中仅示出了两个第一辅助照明单元11203和两个第二辅助照明单元11204,根据实际情况,本领域技术人员可以设置更多或更少数量的第一辅助照明单元和第二辅助照明单元。
图12-图14示出了根据本公开的实施例的检测设备的示例性的结构图。
在图12示出的示例性的结构中,第一辅助照明单元12203的光轴平行于第一照明单元12201的光轴,第二辅助照明单元12204的光轴平行于第二照明单元12202的光轴。图像采集设备1210、第一照明单元12201、第二照明单元12202、第一辅助照明单元12203以及第二辅助照明单元12204被安装在同一底座1230上。其中,安装在图像采集设备1210两侧的第一照明单元12201、第二照明单元12202的光轴的交点位于图像采集设备1210的光学系统的光轴上。
在图13示出的示例性的结构中,第一辅助照明单元13203的光轴平行于第一照明单元13201的光轴,第二辅助照明单元13204的光轴平行于第二照明单元13202的光轴。其中,图像采集设备1310被安装在第一底座1330上,第一照明单元13201、第二照明单元13202、第一辅助照明单元13203以及第二辅助照明单元13204被安装在第二底座1340上。第一底座1330与第二底座1340以使得第一照明单元13201的光轴和第二照明单元13202的光轴的交点在图像采集设备1310的光学系统的光轴上的方式被组装。例如,可以如图13中所示出的,通过在第二底座1340上形成的镂空部分使得图像采集设备1310被安装在第一照明单元13201和第二照明单元13202的中间。
在图14示出的示例性的结构中,检测设备1400中的第一辅助照明单元14203的光轴和第二辅助照明单元14204的光轴的交点也在图像采集设备的光学系统的光轴上。在一些实施例中,检测设备1400中的第一照明单元14201的光轴和第二照明单元14202的光轴的交点在图像采集设备1410的光学系统的光轴上,并且第一辅助照明单元14203的光轴和第二辅助照明单元14204的光轴也通过上述交点。在图14中示出的结构中第一照明单元14201、第二照明单元14202、第一辅助照明单元14203以及第二辅助照明单元 14204被安装在球冠型的底座上,以使得第一照明单元14201、第二照明单元14202、第一辅助照明单元14203以及第二辅助照明单元14204的光轴交汇于同一点。在一些实施例中,图像采集设备14010可以被安装在球冠型底座的中心,第一照明单元14201、所述至少一个第一辅助照明单元14203、所述第二照明单元14202以及所述至少一个第二辅助照明单元14204可以相对于图像采集设备1410对称布置。
图14中示出的结构仅是一种示例性的说明,在实际应用的结构中可以不包括图14中示出的网格线。
通过增加更多数量的辅助照明单元,可以获得更好的照明效果。
图15示出了根据本公开的实施例的检测设备的示意性的框图。
如图15所示,检测设备1500可以包括图像采集设备1510、包括至少一个照明单元的照明设备1520以及处理器1530。其中,图像采集设备1510可以用于获取待检测对象的图像,照明设备1520可以用于对待检测对象进行照明。其中照明设备1520可以包括位于图像采集设备一侧的第一照明单元和位于图像采集设备另一侧的第二照明单元。可以结合图5-图14描述的实施例实现图15中示出的图像采集设备1510和照明设备1520的结构,在此不再加以赘述。
处理器1530可以配置成控制照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的待检测图像,并对待检测图像进行图像处理以确定待检测对象是否是活体。
如图15所示,处理器1530可以包括序列生成模块1531、灯光控制模块1532以及曝光控制模块1533。
序列生成模块1531可以配置成生成用于照明设备的照明模式序列。照明模式序列可以是由以下各项的组合形成的序列:第一照明单元进行照明;第二照明单元进行照明;以及两侧同时照明。
在一些实施例中,照明模式序列可以是随机生成的。
灯光控制模块1532可以配置成基于序列生成模块1531生成的照明模式序列控制第一照明单元和第二照明单元进行照明。在照明的同时,曝光控制模块1533可以配置成控制图像采集设备1510采集待检测对象的图像序列。
在一些实施例中,处理器1530还可以包括图像分类模块(未示出),图像分类模块可以配置成对图像采集设备采集的待检测图像进行图像分类,以得到待检测对象的活体预测结果,其中活体预测结果指示待检测对象是活体或待检测对象是非活体。
图16示出了根据本公开的实施例的活体检测过程的一个示例。
如图16所示,在1601处,通过图像采集设备采集待检测对象的图像序列。其中,图像序列可以是在根据照明模式序列控制的照明条件下采集的图像。可以将所采集的图像序列输入1602处的人脸检测模块。人脸检测模块可以对所采集的图像序列进行处理,并输出1603处示出的对应于图像序列中的每个图像的人脸框。其中在图像中包括多个人脸的情况下,人脸检测模块可能在图像中检测到多个人脸框。
可以将针对每个图像检测到的N个人脸框(其中N是大于或等于1的整数)输入1604处示出的人脸框决策模块。其中,针对图像序列中的每个图像,人脸框决策模块可以将N个人脸框中的最大人脸框确定为该图像的人脸框1605。
可以利用1606处示出的人脸裁剪模块,基于人脸框1605对图像序列1601进行裁剪,以得到人脸序列1607。可以利用1608处的活体检测模块对人脸序列1607进行处理,以得到图像序列的照明模式序列预测结果和活体检测结果。
当在同一目标对象上投射不同光场时,所采集到的图像能够反映光场的形态。因此,可以基于图像序列预测采集图像序列时使用的照明模式序列。只有当预测的灯光序列和 实际采集图像时使用的真实的照明模式序列一致时,利用图16中的过程得到的活体预测结果才是可靠的。
图17示出了根据本公开的实施例的身份识别过程的一个示例。
如图17所示,在1701处,通过图像采集设备采集待检测对象的图像序列。其中,图像序列可以是在根据照明模式序列控制的照明条件下采集的图像。可以将所采集的图像序列输入1702处的人脸检测模块。人脸检测模块可以对所采集的图像序列进行处理,并输出1703处示出的对应于图像序列中的每个图像的人脸框。其中在图像中包括多个人脸的情况下,人脸检测模块可能在图像中检测到多个人脸框。
可以将针对每个图像检测到的N个人脸框(其中N是大于或等于1的整数)输入1704处示出的人脸框决策模块。其中,针对图像序列中的每个图像,人脸框决策模块可以将N个人脸框中的最大人脸框确定为该图像的人脸框1705。
基于采集图像序列1701时使用的随机灯光序列1706,可以利用1707处示出的提取模块从图像序列提取当照明设备中分别位于两侧的第一照明单元和第二照明单元同时进行照明时采集的图像作为识别图像,并可以基于人脸框1705确定识别图像中的人脸信息1708,其中人脸信息可以包括识别图像中的人脸框和人脸图像。
利用1709处示出的关键点检测模块,可以对人脸信息1708进行处理以得到其中的关键点1710。利用1711处示出的人脸对齐模块,可以基于人脸信息1708中的人脸图像以及关键点1710进行对齐,以得到对齐人脸1712。利用经对齐的人脸得到的人脸特征的准确性会更高。
利用1713处示出的人脸编码模块,可以对对齐人脸1712进行处理,以得到用于表示待检测对象的身份特征的人脸编码1714。通过将人脸编码1714和数据库中预先存储的多个人脸编码进行对比,可以得到待检测对象的身份信息。
图18示出了根据本公开的实施例的人脸注册过程的一个示例。
如图18所示,在1801处,通过图像采集设备采集待检测对象的图像序列。其中,图像序列可以是在根据照明模式序列控制的照明条件下采集的图像。可以将所采集的图像序列输入1802处的人脸检测模块。人脸检测模块可以对所采集的图像序列进行处理,并输出1803处示出的对应于图像序列中的每个图像的人脸框。其中在图像中包括多个人脸的情况下,人脸检测模块可能在图像中检测到多个人脸框。
可以将针对每个图像检测到的N个人脸框(其中N是大于或等于1的整数)输入1804处示出的人脸框决策模块。其中,针对图像序列中的每个图像,人脸框决策模块可以将N个人脸框中的最大人脸框确定为该图像的人脸框1805。
基于采集图像序列1801时使用的随机灯光序列1806,可以利用1807处示出的提取模块从图像序列提取当照明设备中分别位于两侧的第一照明单元和第二照明单元同时进行照明时采集的图像作为识别图像,并可以基于人脸框1805确定识别图像中的人脸信息1808,其中人脸信息可以包括识别图像中的人脸框和人脸图像。
利用1809处示出的关键点检测模块,可以对人脸信息1808进行处理以得到其中的关键点1810。
可以利用1815处示出的人脸质控模块对关键点1810进行处理,以得到识别图像的质量信息1816。其中,质量信息可以包括但不限于待检测对象的表情、被遮挡比例、头部角度、光照情况是否满足预定的质量判别条件。
在质量信息1816指示识别图像中的待检测对象的人脸图像质量不合格的情况下,此次人脸注册终止。方法可以前进到1817,开始新一轮的图像采集或终止整个人脸注册过程。
在质量信息1816指示识别图像中的待检测对象的人脸图像质量合格的情况下，方法前进到1811，可以利用人脸对齐模块基于人脸信息1808中的人脸图像以及关键点1810进行对齐，以得到对齐人脸1812。利用经对齐的人脸得到的人脸特征的准确性会更高。
利用1813处示出的人脸编码模块,可以对对齐人脸1812进行处理,以得到用于表示待检测对象的身份特征的人脸编码1814。可以将对应于待检测对象的人脸编码1814和待检测对象的身份信息相关联地存入数据库中,以完成注册。
图19示出了根据本公开的实施例的人脸注册过程的另一个示例。
如图19所示,在现有注册库1901中存储有多个人员信息以及对应于各个人员信息的人脸编码。在利用图18中的过程获得待检测对象的人脸编码后,可以将待检测对象的人脸编码确定为图19中示出的待入库人脸编码1902。利用1903处示出的编码比对模块,可以将待入库人脸编码1902和现有注册库1901中的编码进行比对,以得到比对分数列表1904。
利用1905处的数据库查重模块,可以基于比对分数列表1904确定待入库人脸编码1902是重复身份(ID)或是新增ID。例如,当比对分数列表中存在高于预定分数阈值的比对分数时,可以认为在现有注册库中存在与待入库人脸编码相似度较高的编码。在这种情况下,可能是因为待检测对象的信息已经录入现有注册库,也可能是因为现有注册库中存在于待检测对象的人脸编码相似的人脸编码。如果在这种情况下录入待检测对象的人脸编码,在未来的人脸识别过程中可能会造成识别错误。
在确定待入库人脸编码属于新增ID 1907的情况下,可以利用1909处的数据库更新模块将待入库人脸编码1902以及与待入库人脸编码1902相关联的人员信息录入现有注册库1901。
在确定待入库人脸编码属于重复ID 1906的情况下,可以利用1908处的重复ID处理模块,尝试使用重新注册1910、拒绝注册1911或者人工干预1912的方式进行处理。
图20是示出根据本公开的示例性实施例的电子设备的示例的框图。要注意的是,图20所示出的结构仅是一个示例,根据具体的实现方式,本公开的电子设备可以仅包括图20所示出的组成部分中的一种或多个。
电子设备2000例如可以是通用计算机(例如膝上型计算机、平板计算机等等各种计算机)、移动电话、个人数字助理。根据一些实施例,电子设备2000可以是视障辅助设备。电子设备2000可以包括摄像机、照明设备以及用于活体检测的电子电路。其中,摄像机可以被配置为获取图像,照明设备可以用于对待检测对象进行照明,电子电路可以被配置为执行结合图2、图3描述的用于活体检测的方法。
根据一些实施方式,所述电子设备2000可以被配置为包括眼镜架或者被配置为能够可拆卸地安装到眼镜架(例如眼镜架的镜框、连接两个镜框的连接件、镜腿或任何其他部分)上,从而能够拍摄到近似包括用户的视野的图像。
根据一些实施方式,所述电子设备2000也可被安装到其它可穿戴设备上,或者与其它可穿戴设备集成为一体。所述可穿戴设备例如可以是:头戴式设备(例如头盔或帽子等)、可佩戴在耳朵上的设备等。根据一些实施例,所述电子设备可被实施为可附接到可穿戴设备上的配件,例如可被实施为可附接到头盔或帽子上的配件等。
根据一些实施方式,所述电子设备2000也可具有其他形式。例如,电子设备2000可以是移动电话、通用计算设备(例如膝上型计算机、平板计算机等)、个人数字助理,等等。电子设备2000也可以具有底座,从而能够被安放在桌面上。
电子设备2000可以包括摄像机2004,用于获取图像。摄像机2004可以包括但不限于摄像头或照相机等。电子设备2000还可以包括文字识别电路2005,所述文字识别电路2005被配置为对所述图像中包含的文字进行文字检测和/或识别(例如OCR处理),从而获得文字数据。所述文字识别电路2005例如可以通过专用芯片实现。电子设备2000 还可以包括声音转换电路2006,所述声音转换电路2006被配置为将所述文字数据转换成声音数据。所述声音转换电路2006例如可以通过专用芯片实现。电子设备2000还可以包括声音输出电路2007,所述声音输出电路2007被配置为输出所述声音数据。所述声音输出电路2007可以包括但不限于耳机、扬声器、或振动器等,及其相应驱动电路。
电子设备2000还可以包括活体检测电路(电子电路)2100,所述活体检测电路(电子电路)2100包括被配置为执行如前所述的用于活体检测的方法的步骤(例如图2、图3的流程图中所示的方法步骤)的电路。
根据一些实施方式,所述电子设备2000还可以包括图像处理电路2008,所述图像处理电路2008可以包括被配置为对图像进行各种图像处理的电路。图像处理电路2008例如可以包括但不限于以下中的一个或多个:被配置为对图像进行降噪的电路、被配置为对图像进行去模糊化的电路、被配置为对图像进行几何校正的电路、被配置为对图像进行特征提取的电路、被配置为对图像中的对象进行对象检测和/或识别的电路、被配置为对图像中包含的文字进行文字检测的电路、被配置为从图像中提取文本行的电路、被配置为从图像中提取文字坐标的电路、被配置为从图像中提取对象框的电路、被配置为从图像中提取文本框的电路、被配置为基于图像进行版面分析(例如段落划分)的电路,等等。
根据一些实施方式,电子设备2000还可以包括文字处理电路2009,所述文字处理电路2009可以被配置为基于所提取的与文字有关的信息(例如文字数据、文本框、段落坐标、文本行坐标、文字坐标等)进行各种处理,从而获得诸如段落排序、文字语义分析、版面分析结果等处理结果。
上述的各种电路(例如文字识别电路2005、声音转换电路2006、声音输出电路2007、图像处理电路2008、文字处理电路2009、活体检测电路(电子电路)2100)中的一个或多个可以使用定制硬件,和/或可以用硬件、软件、固件、中间件、微代码,硬件描述语言或其任何组合来实现。例如,上述的各种电路中的一个或多个可以通过使用根据本公开的逻辑和算法,用汇编语言或硬件编程语言(诸如VERILOG,VHDL,C++)对硬件(例如,包括现场可编程门阵列(FPGA)和/或可编程逻辑阵列(PLA)的可编程逻辑电路)进行编程来实现。
根据一些实施方式,电子设备2000还可以包括通信电路2010,所述通信电路2010可以是使得能够与外部设备和/或与网络通信的任何类型的设备或系统,并且可以包括但不限于调制解调器、网卡、红外通信设备、无线通信设备和/或芯片组,例如蓝牙设备、1302.11设备、WiFi设备、WiMax设备、蜂窝通信设备和/或类似物。
根据一些实施方式,电子设备2000还可以包括输入设备2011,所述输入设备2011可以是能向电子设备2000输入信息的任何类型的设备,并且可以包括但不限于各种传感器、鼠标、键盘、触摸屏、按钮、控制杆、麦克风和/或遥控器等等。
根据一些实施方式,电子设备2000还可以包括输出设备2012,所述输出设备2012可以是能呈现信息的任何类型的设备,并且可以包括但不限于显示器、视觉输出终端、振动器和/或打印机等。尽管电子设备2000根据一些实施例用于视障辅助设备,基于视觉的输出设备可以方便用户的家人或维修工作人员等从电子设备2000获得输出信息。
根据一些实施方式,电子设备2000还可以包括处理器2001。所述处理器2001可以是任何类型的处理器,并且可以包括但不限于一个或多个通用处理器和/或一个或多个专用处理器(例如特殊处理芯片)。处理器2001例如可以是但不限于中央处理单元CPU或微处理器MPU等等。电子设备2000还可以包括工作存储器2002,所述工作存储器2002可以存储对处理器2001的工作有用的程序(包括指令)和/或数据(例如图像、文字、声音,以及其他中间数据等)的工作存储器,并且可以包括但不限于随机存取存储器和/或只读存储器设备。电子设备2000还可以包括存储设备2003,所述存储设备2003可以 包括任何非暂时性存储设备,非暂时性存储设备可以是非暂时性的并且可以实现数据存储的任何存储设备,并且可以包括但不限于磁盘驱动器、光学存储设备、固态存储器、软盘、柔性盘、硬盘、磁带或任何其他磁介质,光盘或任何其他光学介质、ROM(只读存储器)、RAM(随机存取存储器)、高速缓冲存储器和/或任何其他存储器芯片或盒、和/或计算机可从其读取数据、指令和/或代码的任何其他介质。工作存储器2002和存储设备2003可以被集合地称为“存储器”,并且在有些情况下可以相互兼用。
根据一些实施方式,处理器2001可以对摄像机2004、文字识别电路2005、声音转换电路2006、声音输出电路2007、图像处理电路2008、文字处理电路2009、通信电路2010、活体检测电路(电子电路)2100以及电子设备2000包括的其他各种装置和电路中的至少一个进行控制和调度。根据一些实施方式,图20中所述的各个组成部分中的至少一些可通过总线2013而相互连接和/或通信。
软件要素(程序)可以位于所述工作存储器2002中,包括但不限于操作系统2002a、一个或多个应用程序2002b、驱动程序和/或其他数据和代码。
根据一些实施方式,用于进行前述的控制和调度的指令可以被包括在操作系统2002a或者一个或多个应用程序2002b中。
根据一些实施方式,执行本公开所述的方法步骤(例如图2、图3的流程图中所示的方法步骤)的指令可以被包括在一个或多个应用程序2002b中,并且上述电子设备2000的各个模块可以通过由处理器2001读取和执行一个或多个应用程序2002b的指令来实现。换言之,电子设备2000可以包括处理器2001以及存储程序的存储器(例如工作存储器2002和/或存储设备2003),所述程序包括指令,所述指令在由所述处理器2001执行时使所述处理器2001执行如本公开各种实施例所述的方法。
根据一些实施方式,文字识别电路2005、声音转换电路2006、图像处理电路2008、文字处理电路2009、活体检测电路(电子电路)2100中的至少一个所执行的操作中的一部分或者全部可以由处理器2001读取和执行一个或多个应用程序2002的指令来实现。
软件要素(程序)的指令的可执行代码或源代码可以存储在非暂时性计算机可读存储介质(例如所述存储设备2003)中,并且在执行时可以被存入工作存储器2001中(可能被编译和/或安装)。因此,本公开提供存储程序的计算机可读存储介质,所述程序包括指令,所述指令在由电子设备(例如视障辅助设备)的处理器执行时,致使所述电子设备执行如本公开各种实施例所述的方法。根据另一种实施方式,软件要素(程序)的指令的可执行代码或源代码也可以从远程位置下载。
还应该理解,可以根据具体要求而进行各种变型。例如,也可以使用定制硬件,和/或可以用硬件、软件、固件、中间件、微代码,硬件描述语言或其任何组合来实现各个电路、单元、模块或者元件。例如,所公开的方法和设备所包含的电路、单元、模块或者元件中的一些或全部可以通过使用根据本公开的逻辑和算法,用汇编语言或硬件编程语言(诸如VERILOG,VHDL,C++)对硬件(例如,包括现场可编程门阵列(FPGA)和/或可编程逻辑阵列(PLA)的可编程逻辑电路)进行编程来实现。
根据一些实施方式,电子设备2000中的处理器2001可以分布在网络上。例如,可以使用一个处理器执行一些处理,而同时可以由远离该一个处理器的另一个处理器执行其他处理。电子设备2001的其他模块也可以类似地分布。这样,电子设备2001可以被解释为在多个位置执行处理的分布式计算系统。
虽然已经参照附图描述了本公开的实施例或示例,但应理解,上述的方法、系统和设备仅仅是示例性的实施例或示例,本发明的范围并不由这些实施例或示例限制,而是仅由授权后的权利要求书及其等同范围来限定。实施例或示例中的各种要素可以被省略或者可由其等同要素替代。此外,可以通过不同于本公开中描述的次序来执行各步骤。
进一步地,可以以各种方式组合实施例或示例中的各种要素。重要的是随着技术的演进,在此描述的很多要素可以由本公开之后出现的等同要素进行替换。

Claims (46)

  1. 一种用于活体检测的方法,包括:
    基于当前照明模式控制照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的图像;
    基于所述待检测对象的图像确定预测照明模式;以及
    至少响应于确定所述预测照明模式和所述当前照明模式一致,确定所述待检测对象通过活体检测。
  2. 如权利要求1所述的方法,其中,所述当前照明模式包括用于多次照明的照明模式序列。
  3. 如权利要求1所述的方法,其中,基于所述待检测对象的图像确定预测照明模式包括:
    对所述待检测对象的图像进行图像分类,以得到所述预测照明模式。
  4. 如权利要求1-3任一项所述的方法,还包括:
    对所述待检测对象的图像进行图像分类,以得到所述待检测对象的活体预测结果,其中所述活体预测结果指示所述待检测对象是活体或所述待检测对象是非活体。
  5. 如权利要求4所述的方法,其中,对所述待检测对象的图像进行图像分类包括:
    将所述待检测对象的图像输入预测网络,其中所述预测网络包括骨干网络和连接所述骨干网络的第一输出模块和第二输出模块;
    利用所述骨干网络对所述待检测对象的图像进行处理以得到所述待检测对象的图像特征;
    利用所述第一输出模块对所述图像特征进行处理以得到所述预测照明模式;
    利用所述第二输出模块对所述图像特征进行处理以得到所述活体预测结果。
  6. 如权利要求4所述的方法,其中,至少响应于确定所述预测照明模式和所述当前照明模式一致,确定所述待检测对象通过活体检测包括:
    响应于确定所述预测照明模式和所述当前照明模式一致,并且响应于所述活体预测结果指示所述待检测对象是活体,确定所述待检测对象通过活体检测。
  7. 如权利要求1所述的方法,其中,所述图像采集设备包括红外摄像装置,以及所述照明设备是红外照明设备。
  8. 如权利要求1所述的方法,其中所述照明设备包括位于所述图像采集设备一侧的第一照明单元和位于所述图像采集设备另一侧的第二照明单元。
  9. 如权利要求2所述的方法,其中,所述当前照明模式是包括以下各项的照明模式序列:
    第一照明单元进行照明;
    第二照明单元进行照明;以及
    两侧同时照明。
  10. 如权利要求9所述的方法,其中所述照明模式序列是随机照明模式序列。
  11. 如权利要求9所述的方法,其中,所述待检测对象的图像是待检测对象的人脸图像序列,所述方法还包括:
    在所述人脸图像序列中确定两侧同时照明时采集的人脸图像作为识别图像;以及
    对所述识别图像进行图像处理以得到人脸识别结果。
  12. 如权利要求8所述的方法,其中,所述第一照明单元和所述第二照明单元相对于所述图像采集设备对称布置。
  13. 如权利要求8所述的方法,其中,所述第一照明单元的第一光轴与所述图像采集设备的光学系统的光轴相交,并且所述第二照明单元的第二光轴与所述图像采集设备的光学系统的光轴相交。
  14. 如权利要求13所述的方法,其中,所述第一光轴与所述第二光轴的交点在所述图像采集设备的光学系统的光轴上。
  15. 如权利要求13所述的方法,其中,所述第一光轴与所述第二光轴的交点位于所述图像采集设备的中心成像平面上,其中所述中心成像平面位于所述图像采集设备的最近成像平面与所述图像采集设备的最远成像平面之间的中间位置。
  16. 如权利要求15所述的方法,其中,所述图像采集设备的最近成像平面与所述图像采集设备的距离是40cm,所述图像采集设备的最远成像平面与所述图像采集设备的距离是80cm。
  17. 如权利要求15所述的方法,其中,在所述图像采集设备的所述最近成像平面与所述最远成像平面之间的区域内,所述第一照明单元和所述第二照明单元同时进行照明时的照明范围覆盖所述图像采集设备的视场范围。
  18. 如权利要求14所述的方法,其中,所述照明设备还包括位于所述图像采集设备的所述一侧的至少一个第一辅助照明单元以及位于所述图像采集设备的所述另一侧的至少一个第二辅助照明单元。
  19. 如权利要求18所述的方法,其中所述至少一个第一辅助照明单元的光轴和所述至少一个第二辅助照明单元的光轴的交点也在所述图像采集设备的光学系统的光轴上。
  20. 如权利要求18所述的方法,其中,所述第一照明单元、所述至少一个第一辅助照明单元、所述第二单元以及所述至少一个第二辅助照明单元位于球冠型的底座上。
  21. 如权利要求20所述的方法,其中,所述图像采集设备被安装在所述球冠型底座的中心,所述第一照明单元、所述至少一个第一辅助照明单元、所述第二照明单元以及所述至少一个第二辅助照明单元相对于所述图像采集设备对称布置。
  22. 如权利要求18所述的方法,其中所述第一辅助照明单元的光轴平行于所述第一照明单元的所述第一光轴,所述第二辅助照明单元的光轴平行于所述第二照明单元的所述第二光轴。
  23. 如权利要求8所述的方法,其中所述图像采集设备、所述第一照明单元以及所述第二照明单元位于在同一底座上。
  24. 如权利要求8所述的方法,其中所述图像采集设备位于第一底座上,所述第一照明单元以及所述第二照明单元位于第二底座上,所述第一底座和所述第二底座以使得所述第一照明单元的所述第一光轴和所述第二照明单元的所述第二光轴的交点在所述图像采集设备的光学系统的光轴上的方式被组装。
  25. 一种电子电路,包括:
    被配置为执行根据权利要求1-24中任一项所述的方法的步骤的电路。
  26. 一种电子设备,包括:
    处理器;以及
    存储程序的存储器,所述程序包括指令,所述指令在由所述处理器执行时使所述处理器执行根据权利要求1-24中任一项所述的方法。
  27. 一种存储程序的非暂态计算机可读存储介质,所述程序包括指令,所述指令在由电子设备的处理器执行时,致使所述电子设备执行根据权利要求1-24中任一项所述的方法。
  28. 一种计算机程序产品,包括计算机程序,其中,所述计算机程序在被处理器执行时实现权利要求1-24中任一项所述的方法。
  29. 一种检测设备,其特征在于:
    图像采集设备;以及
    照明设备,包括位于所述图像采集设备一侧的第一照明单元和位于所述图像采集设备另一侧的第二照明单元;
    其中,所述第一照明单元的第一光轴与所述图像采集设备的光学系统的光轴相交,并且所述第二照明单元的第二光轴与所述图像采集设备的光学系统的光轴相交。
  30. 如权利要求29所述的设备,其特征在于,所述第一光轴与所述第二光轴的交点在所述图像采集设备的光学系统的光轴上。
  31. 如权利要求29所述的设备,其特征在于,所述第一照明单元和所述第二照明单元相对于所述图像采集设备对称布置。
  32. 如权利要求30所述的设备,其特征在于,所述第一光轴与所述第二光轴的交点位于所述图像采集设备的中心成像平面上,其中所述中心成像平面位于所述图像采集设备的最近成像平面与所述图像采集设备的最远成像平面之间的中间位置。
  33. 如权利要求32所述的设备,其特征在于,所述图像采集设备的最近成像平面与所述图像采集设备的距离是40cm,所述图像采集设备的最远成像平面与所述图像采集设备的距离是80cm。
  34. 如权利要求32所述的设备,其特征在于,在所述图像采集设备的所述最近成像平面与所述最远成像平面之间的区域内,所述第一照明单元和所述第二照明单元同时进行照明时的照明范围覆盖所述图像采集设备的视场范围。
  35. 如权利要求29所述的设备,其特征在于,所述图像采集设备包括红外摄像装置,以及所述照明设备是红外照明设备。
  36. 如权利要求29所述的设备,其特征在于,所述照明设备还包括位于所述图像采集设备的所述一侧的至少一个第一辅助照明单元以及位于所述图像采集设备的所述另一侧的至少一个第二辅助照明单元。
  37. 如权利要求36所述的设备,其特征在于,所述至少一个第一辅助照明单元的光轴和所述至少一个第二辅助照明单元的光轴的交点也在所述图像采集设备的光学系统的光轴上。
  38. 如权利要求36所述的设备,其特征在于,所述第一照明单元、所述至少一个第一辅助照明单元、所述第二照明单元以及所述至少一个第二辅助照明单元被安装在球冠型的底座上。
  39. 如权利要求38所述的设备,其特征在于,所述图像采集设备被安装在所述球冠型底座的中心,所述第一照明单元、所述至少一个第一辅助照明单元、所述第二照明单元以及所述至少一个第二辅助照明单元相对于所述图像采集设备对称布置。
  40. 如权利要求36所述的设备,其特征在于,所述第一辅助照明单元的光轴平行于所述第一照明单元的所述第一光轴,所述第二辅助照明单元的光轴平行于所述第二照明单元的所述第二光轴。
  41. 如权利要求29所述的设备,其特征在于,所述图像采集设备、所述第一照明单元以及所述第二照明单元被安装在同一底座上。
  42. 如权利要求29所述的设备,其特征在于,所述图像采集设备被安装在第一底座上,所述第一照明单元以及所述第二照明单元被安装在第二底座上,所述第一底座和所述第二底座以使得所述第一照明单元的所述第一光轴和所述第二照明单元的所述第二光轴的交点在所述图像采集设备的光学系统的光轴上的方式被组装。
  43. 如权利要求29所述的设备,还包括处理器,所述处理器被配置为:
    控制所述照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的待检测图像;以及
    对所述待检测图像进行图像处理以确定所述待检测对象是否是活体。
  44. 如权利要求43所述的设备,其特征在于,控制所述照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的待检测图像包括:
    基于照明序列控制所述照明设备进行照明,并在照明的同时控制图像采集设备采集待检测对象的图像序列。
  45. 如权利要求44所述的设备,其特征在于,所述照明序列是由以下各项的组合形成的序列:
    第一照明单元进行照明;
    第二照明单元进行照明;以及
    两侧同时照明。
  46. 如权利要求45所述的设备,其特征在于,所述照明序列是随机生成的。
PCT/CN2022/078053 2021-03-05 2022-02-25 用于活体检测的方法、电子电路、电子设备和介质 WO2022183992A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202120476916.XU CN214202417U (zh) 2021-03-05 2021-03-05 检测设备
CN202120476916.X 2021-03-05
CN202110245618.4A CN112906610A (zh) 2021-03-05 2021-03-05 用于活体检测的方法、电子电路、电子设备和介质
CN202110245618.4 2021-03-05

Publications (1)

Publication Number Publication Date
WO2022183992A1 true WO2022183992A1 (zh) 2022-09-09

Family

ID=83153865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/078053 WO2022183992A1 (zh) 2021-03-05 2022-02-25 用于活体检测的方法、电子电路、电子设备和介质

Country Status (1)

Country Link
WO (1) WO2022183992A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050054937A1 (en) * 2003-07-23 2005-03-10 Hideyuki Takaoka Endoscope for observing scattered light from living body tissue and method of observing scattered light from living body tissue
TW201910755A (zh) * 2017-08-03 2019-03-16 上海微電子裝備(集團)股份有限公司 一種自動光學檢測裝置及方法
US20190146204A1 (en) * 2016-05-02 2019-05-16 Carl Zeiss Microscopy Gmbh Angularly-Selective Illumination
CN110516644A (zh) * 2019-08-30 2019-11-29 深圳前海微众银行股份有限公司 一种活体检测方法及装置
CN112115747A (zh) * 2019-06-21 2020-12-22 阿里巴巴集团控股有限公司 活体检测和数据处理方法、设备、系统及存储介质
CN112906610A (zh) * 2021-03-05 2021-06-04 上海肇观电子科技有限公司 用于活体检测的方法、电子电路、电子设备和介质
CN214202417U (zh) * 2021-03-05 2021-09-14 上海肇观电子科技有限公司 检测设备

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050054937A1 (en) * 2003-07-23 2005-03-10 Hideyuki Takaoka Endoscope for observing scattered light from living body tissue and method of observing scattered light from living body tissue
US20190146204A1 (en) * 2016-05-02 2019-05-16 Carl Zeiss Microscopy Gmbh Angularly-Selective Illumination
TW201910755A (zh) * 2017-08-03 2019-03-16 上海微電子裝備(集團)股份有限公司 一種自動光學檢測裝置及方法
CN112115747A (zh) * 2019-06-21 2020-12-22 阿里巴巴集团控股有限公司 活体检测和数据处理方法、设备、系统及存储介质
CN110516644A (zh) * 2019-08-30 2019-11-29 深圳前海微众银行股份有限公司 一种活体检测方法及装置
CN112906610A (zh) * 2021-03-05 2021-06-04 上海肇观电子科技有限公司 用于活体检测的方法、电子电路、电子设备和介质
CN214202417U (zh) * 2021-03-05 2021-09-14 上海肇观电子科技有限公司 检测设备

Similar Documents

Publication Publication Date Title
JP6845295B2 (ja) 目追跡でのグレアに対処すること
US20220270397A1 (en) Image processing method and device, equipment, and computer-readable storage medium
Mulfari et al. Using Google Cloud Vision in assistive technology scenarios
JP5160235B2 (ja) 画像中の物体の検出及び追跡
WO2017185630A1 (zh) 基于情绪识别的信息推荐方法、装置和电子设备
CA3152812A1 (en) Facial recognition method and apparatus
WO2020000912A1 (zh) 一种行为检测方法、装置、电子设备和存储介质
JP2013131209A (ja) 顔特徴ベクトルの構築
WO2019033569A1 (zh) 眼球动作分析方法、装置及存储介质
CN104143086A (zh) 人像比对在移动终端操作系统上的应用技术
JP2013504114A (ja) 目状態検出装置及び方法
KR20120139100A (ko) 얼굴 인증을 이용한 보안 장치 및 방법
EP3869404A2 (en) Vehicle loss assessment method executed by mobile terminal, device, mobile terminal and medium
US10242253B2 (en) Detection apparatus, detection method, and computer program product
WO2021143216A1 (zh) 一种人脸活体检测的方法和相关装置
CN113780201B (zh) 手部图像的处理方法及装置、设备和介质
US11403799B2 (en) Method and apparatus for recognizing face-swap, device and computer readable storage medium
CN111783640A (zh) 检测方法、装置、设备以及存储介质
CN112906610A (zh) 用于活体检测的方法、电子电路、电子设备和介质
US11335128B2 (en) Methods and systems for evaluating a face recognition system using a face mountable device
CN106777071B (zh) 一种图像识别获取参考信息的方法和装置
KR20160046399A (ko) 텍스쳐 맵 생성 방법 및 장치와 데이터 베이스 생성 방법
CN112700568B (zh) 一种身份认证的方法、设备及计算机可读存储介质
JP4708835B2 (ja) 顔検出装置、顔検出方法、及び顔検出プログラム
US20230386256A1 (en) Techniques for detecting a three-dimensional face in facial recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762457

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22762457

Country of ref document: EP

Kind code of ref document: A1