CN112115747A - Living body detection and data processing method, device, system and storage medium

Info

Publication number
CN112115747A
Authority
CN
China
Prior art keywords
image, detected, illumination, illumination information, information
Legal status
Pending
Application number
CN201910542696.3A
Other languages
Chinese (zh)
Inventor
张超
汪彪
李鹏宇
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910542696.3A
Publication of CN112115747A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiments of the present application provide a living body detection and face recognition method, device, system, and storage medium. In the embodiments, whether an object to be detected is a living body is identified by judging whether the illumination information in an image containing the object to be detected matches the illumination information of the current environment. On the one hand, this living body detection approach requires no cooperative actions from the object to be detected, which shortens detection time and improves detection efficiency; on the other hand, it reduces the dependence on texture details in the image, which not only lowers the image quality requirements but also offers higher stability, helping to improve the accuracy of living body detection.

Description

Living body detection and data processing method, device, system and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method, device, system, and storage medium for living body detection and data processing.
Background
With the continuous development of artificial intelligence technology, face recognition technology is widely used to verify user identity. For example, in mobile payment and internet finance, face recognition can be used for face-scan login, face-scan payment, and the like. As another example, in an access gate device, face recognition may be used to control the gate.
However, face recognition technology is often attacked by non-living bodies; that is, the biometric features required for face recognition are simulated using artificial props (such as photos), so that the access rights of normal users are stolen. Living body detection technology was developed to address this problem, but existing living body detection approaches either require the user to cooperate with actions such as shaking the head or blinking, making the detection process time-consuming, or rely on texture details in the face image, which imposes high quality requirements on the image and yields low detection accuracy.
Disclosure of Invention
Aspects of the present application provide a method, device, system and storage medium for living body detection and data processing, so as to reduce the time consumed by living body detection and improve the efficiency and accuracy of living body detection.
The embodiment of the application provides a living body detection method, which comprises the following steps: acquiring an image containing an object to be detected in the current environment; performing illumination analysis on the image to obtain illumination information in the image; and if the illumination information in the image is matched with the illumination information in the current environment, determining that the object to be detected is a living body.
An embodiment of the present application further provides a data processing method, including: acquiring a face image of an object to be detected in a current environment; performing illumination analysis on the face image to obtain illumination information in the face image; and if the illumination information in the face image is matched with the illumination information in the current environment and the face image belongs to the face image of the known object, determining that the object to be detected passes face recognition.
An embodiment of the present application further provides a data processing method, including: acquiring first illumination information representing ambient illumination; acquiring an image containing an object to be detected; performing illumination analysis on the image to acquire second illumination information representing illumination in the image; and determining whether the object to be detected passes the detection or not based on the similarity of the first illumination information and the second illumination information.
The embodiment of the present application further provides a gate system, including: the system comprises a gate, image acquisition equipment and computing equipment; the computing equipment is respectively connected with the gate and the image acquisition equipment; the image acquisition equipment is used for acquiring a facial image of an object to be detected around an entrance and an exit of the gate and transmitting the facial image to the computing equipment; the computing equipment is used for carrying out illumination analysis on the face image to obtain illumination information in the face image, and controlling the gate to be opened to allow the object to be detected to pass through under the condition that the illumination information in the face image is matched with the illumination information in the current environment where the gate is located and the object to be detected is a legal object.
The embodiment of the present application further provides a living body detection apparatus, including: a vision sensor, a memory, and a processor; wherein the memory is used for storing a computer program; the vision sensor is used for acquiring an image containing an object to be detected in the current environment; the processor is coupled to the memory for executing the computer program for: performing illumination analysis on the image to obtain illumination information in the image; and if the illumination information in the image is matched with the illumination information in the current environment, determining that the object to be detected is a living body.
The embodiment of the present application further provides a gate, including: the gate comprises a gate body, and a camera, a memory and a processor which are arranged on the gate body; wherein the memory is used for storing a computer program; the camera is used for: collecting a face image of an object to be detected around an entrance and an exit of the gate body; the processor is coupled to the memory for executing the computer program for: and carrying out illumination analysis on the face image of the object to be detected to obtain illumination information in the face image, and controlling the gate body to be opened under the condition that the illumination information in the face image is matched with the illumination information in the current environment and the object to be detected is a legal object so as to allow the object to be detected to pass through.
The embodiment of the present application further provides a face recognition device, which includes: a vision sensor, a memory, and a processor; wherein the memory is used for storing a computer program; the vision sensor is used for acquiring a face image containing an object to be detected in the current environment; the processor is coupled to the memory for executing the computer program for: performing illumination analysis on the face image to obtain illumination information in the face image; and if the illumination information in the face image is matched with the illumination information in the current environment and the face image belongs to the face image of the known object, determining that the object to be detected passes face recognition.
An embodiment of the present application further provides a computer device, including: a memory and a processor; wherein the memory is used for storing a computer program; the processor is coupled to the memory for executing the computer program for: acquiring first illumination information representing ambient illumination; acquiring an image containing an object to be detected; performing illumination analysis on the image to acquire second illumination information representing illumination in the image; and determining whether the object to be detected passes the detection or not based on the similarity of the first illumination information and the second illumination information.
The embodiments of the present application also provide a readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above living body detection method.
The embodiments of the present application also provide a readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above data processing method.
In the embodiments of the present application, whether an object to be detected is a living body is identified by judging whether the illumination information in an image containing the object to be detected matches the illumination information of the current environment. On the one hand, this living body detection approach requires no cooperative actions from the object to be detected, which shortens detection time and improves detection efficiency; on the other hand, it reduces the dependence on texture details in the image, which not only lowers the image quality requirements but also offers higher stability, helping to improve the accuracy of living body detection.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic structural diagram of a gate system according to an embodiment of the present disclosure;
FIG. 1b is a schematic structural diagram of another gate system according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a method for detecting a living organism according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a living body detecting apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a face recognition device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a gate according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In some embodiments of the present application, whether the object to be detected is a living body is determined by judging whether the illumination information in an image containing the object to be detected matches the illumination information of the current environment. On the one hand, this living body detection approach requires no cooperative actions from the object to be detected, which shortens detection time and improves detection efficiency; on the other hand, it reduces the dependence on texture details in the image, which not only lowers the image quality requirements but also offers higher stability, helping to improve the accuracy of living body detection.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic structural diagram of a gate system according to an embodiment of the present disclosure. As shown in fig. 1a, the system comprises: a gate 101, an image acquisition device 102, and a computing device 103. The implementation forms of the gate 101, the image capturing device 102 and the computing device 103 shown in fig. 1a are all exemplary illustrations, and are not limited thereto.
In this embodiment, depending on the blocking body and blocking method, the gate 101 may be a tripod turnstile, a swing gate, a wing gate, a sliding gate, a rotary gate, or the like, but is not limited thereto. The gate 101 may be implemented in the form of a gate machine, a door with an access control function, a barrier, or the like, but is not limited thereto. In this embodiment, the number of gates 101 is not limited and may be one or more. In fig. 1a and other corresponding embodiments, one gate 101 is used as an example for convenience of illustration, but this does not limit the number of gates 101.
In this embodiment, the image capturing device 102 may be a camera, a laser sensor, an infrared sensor, or other visual sensors. For example, the image capture device 102 may be a binocular camera, a monocular camera, a depth camera, or the like, but is not limited thereto. The image capturing device 102 may be disposed on the gate 101, or may be disposed near an entrance of the gate 101. In the present embodiment, the capturing view angle of the image capturing device 102 covers the area around the entrance of the gate 101, and can capture the image around the entrance of the gate 101. When the object to be detected is located in the area around the entrance of the gate 101, the image capturing device 102 may capture a facial image of the object to be detected. In one scenario, the image capturing device 102 is disposed at an entrance position of the gate 101, and an image capturing view angle of the image capturing device 102 faces the entrance of the gate 101, when an object to be detected enters through the entrance of the gate 101, a face of the object to be detected is opposite to the capturing view angle of the image capturing device 102, and the image capturing device 102 can capture a face image of the object to be detected.
In this embodiment, the computing device 103 is in communication connection with the gate 101 and the image capturing device 102, and is mainly responsible for processing the image captured by the image capturing device 102 and controlling the gate 101 to be turned on or off. The embodiment is not limited to the implementation form of the computing device 103, and may be any computing device, processor, processing chip, or the like having a certain computing power. Taking a computer device as an example, the computing device 103 may be a terminal device such as a desktop computer, a notebook computer, or a smart phone, or may be a server device such as a conventional server, a cloud server, or a server array. Of course, the computing device 103 may also be a processor with certain processing capabilities.
The computing device 103 may be local to the gate 101 or may be in the cloud. For the case that the computing device 103 is located locally in the gate 101, the computing device 103 may be a processor disposed on the gate 101, a terminal device such as a desktop computer, a notebook computer, or a smart phone disposed in the local space of the gate 101, or a conventional server or a server array disposed in the local space of the gate 101. For the case where the computing device 103 is located in the cloud, the computing device 103 may be a server-side device such as a conventional server, a cloud server, or a server array.
Further, in the case that the computing device 103 is a terminal device or a server device, no matter whether the computing device 103 is located locally in the gate 101 or in the cloud, the computing device 103 may be in communication connection with the gate 101 in a wireless or wired manner, and similarly, may also be in communication connection with the image capturing device 102 in a wireless or wired manner. In the case where the computing device 103 is communicatively connected to the gate 101 and the image capturing device 102 in a wireless manner, respectively, the computing device 103 may be communicatively connected to the gate 101 and the image capturing device 102, respectively, via a mobile network. The network standard of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UTMS), 4G (LTE), 4G + (LTE +), WiMax, 5G, and other new standards that may appear in the future. Optionally, the computing device 103 may also be in communication connection with the gate 101 and the image capturing device 102 respectively in a wireless manner, such as bluetooth, WiFi, infrared, and the like.
Further, in the present embodiment, the image capturing device 102 may provide the captured face image of the object to be detected to the computing device 103. Alternatively, the computing device 103 may read the facial image of the object to be detected from the storage space of the image capturing device 102, or the image capturing device 102 actively transmits the facial image of the object to be detected to the computing device 103.
Further, in practical applications, as shown in fig. 1a, the biometric features required for face recognition are often forged using a paper photo or electronic photo of a legitimate user, so that the rights of the legitimate user are stolen. Because a paper or electronic photo carries the illumination information captured when the photo was taken, even if an illegal user holds a paper or electronic photo of a legitimate user up to the image acquisition device 102 of the gate system so that it captures the facial image in the photo, the illumination in the current environment around the gate 101 cannot drown out the original illumination information in the photo. Therefore, if the object to be detected uses a prop such as a photo for the image acquisition device 102 to capture, the facial image acquired by the image acquisition device 102 contains not only the illumination information of the current environment but also the illumination information carried in the photo; if the object to be detected is a real person, the acquired facial image carries only the illumination information of the current environment. Moreover, a real human face and a photo differ in how they exhibit light phenomena such as refraction, reflection, and diffuse reflection.
Based on the above analysis, in the present embodiment, the computing device 103 may perform illumination analysis on the face image of the object to be detected to obtain the illumination information in the face image, and then match the illumination information in the face image against the illumination information of the current environment. If the illumination information in the facial image of the object to be detected matches the illumination information of the current environment, the computing device 103 may determine that the object to be detected is a living body; conversely, it may determine that the object to be detected is not a living body. Optionally, if the computing device 103 determines that the object to be detected is not a living body, it may send a control instruction prohibiting passage to the gate 101. Accordingly, the gate 101 may respond to the control instruction and display identification information indicating that passage is prohibited. For example, as shown in fig. 1a, an "X" symbol is displayed on the body of the gate 101.
The illumination information in the current environment includes illumination information generated by various light source devices present in the environment space where the gate 101 is located. As shown in fig. 1b, the gate system further includes a light source device 104, and the light source device 104 is responsible for providing a lighting function for the environmental space where the gate 101 is located on the one hand, and also providing illumination information required for capturing the facial image of the object to be detected for the image capturing device 102 on the other hand.
Further, the light source device 104 differs with the environment where the gate 101 is located. For example, when the gate system is the gate system of a residential community or a company, the light source device 104 may include street lamps, natural light sources, and the like, but is not limited thereto. As another example, when the gate system is applied to an indoor place requiring passage verification, such as a subway station, railway station, or airport, the light source device 104 may include indoor lighting lamps, indoor billboards with light sources, natural light sources that bring natural light indoors, and the like, but is not limited thereto. In addition, if the image acquisition device 102 has a fill light, the light source device 104 may also be the fill light of the image acquisition device 102.
Further, for safety reasons, in practical applications the object to be detected must be not only a living body but also a legal object. A legal object refers to a user who has the right to pass through the gate 101. For example, in places such as subway stations, train stations, or airports where a ticket is required for entry, the legal object is a user who has purchased a ticket or an air ticket departing from that station. For another example, in places such as a residential community or a company, the legal object is a resident of the community or an employee of the company. For yet another example, in places such as museums or commercial buildings, the legal object is a person who holds a valid personal certificate. Further, in this embodiment, in the case that the illumination information in the face image of the object to be detected matches the illumination information in the current environment where the gate 101 is located, and the object to be detected is a legal object, the computing device 103 may control the gate 101 to open to allow the object to be detected to pass through the gate 101.
Alternatively, the computing device 103 may send a control command or a control signal to a gate control component of the gate 101, and the gate control component executes a corresponding action according to the received control command or control signal. For example, the computing device 103 may send a gate opening instruction or a control signal for opening the gate to the gate control component of the gate 101 in a case where the object to be detected passes the verification set by the gate system, and the gate control component controls the gate of the gate 101 to be opened according to the received gate opening instruction or gate opening control signal. For another example, the computing device 103 may send a command to prohibit opening the gate or a control signal to prohibit opening the gate to the gate control component of the gate 101 in a case where the object to be detected does not pass the verification set by the gate system, and the gate control component keeps the gate 101 in a closed state according to the received command or control signal to prohibit the object to be detected from passing.
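To make this control flow concrete, the following Python sketch shows one way the computing device's decision could be sequenced. All names here (`analyze_illumination`, `illumination_match`, `matches_known_face`, and the `gate` interface) are illustrative stand-ins, not APIs from this application:

```python
def handle_object(face_image, env_illum, known_faces, gate):
    # Hypothetical helpers: analyze_illumination, illumination_match and
    # matches_known_face stand in for the illumination analysis, illumination
    # matching and face recognition steps described above.
    L_t = analyze_illumination(face_image)       # illumination in the image
    if not illumination_match(L_t, env_illum):   # mismatch: not a living body
        gate.send("prohibit")                    # e.g. display an "X" symbol
        return
    if not matches_known_face(face_image, known_faces):
        gate.send("prohibit")                    # live, but not a legal object
        return
    gate.send("open")                            # legal living object: let pass
```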
In the gate system provided by this embodiment, the computing device can identify whether the object to be detected is a living body by judging whether the illumination information in the facial image acquired by the image acquisition device matches the illumination information of the current environment. On the one hand, this living body detection approach requires no cooperative actions from the object to be detected, which shortens detection time, improves detection efficiency, and helps reduce the time the object to be detected is held up at the gate; on the other hand, it reduces the dependence on texture details in the face image, which not only lowers the image quality requirements but also offers higher stability, helping to improve the accuracy of living body detection.
Further, in the embodiment, the requirement of the living body detection process on the image quality is low, and the requirement of the living body detection process on the precision of the image acquisition equipment is also relatively low.
It should be noted that the gate system provided by the embodiments of the present application can be applied to various places requiring clearance, such as train stations, airports, art venues, movie theaters, sports arenas, toll booths, and commercial buildings. The gate system can also be applied to access control systems of places such as residential communities, companies, dormitory buildings, and office buildings, especially unattended access control systems, where it can improve security.
In this embodiment, the computing device 103 may perform illumination analysis on the face image of the object to be detected to obtain the illumination information in the face image, and it may acquire this illumination information in various ways. The illumination information includes, but is not limited to, the angle, color temperature, and intensity of the light source. For example, the computing device 103 may perform a variable-parameter solution on the face image of the object to be detected using the SIRFS (shape, illumination and reflectance from shading) algorithm or the Scene-SIRFS algorithm to recover shape (depth) information, reflectance, and illumination from the face image. An implementation in which the computing device 103 acquires the illumination information in the face image of the object to be detected is described below by way of example, taking the SIRFS algorithm as an example.
When the SIRFS algorithm is used to acquire the illumination information in the face image of the object to be detected, the light source model corresponding to the face image can be expressed by a Spherical Harmonic (SH) illumination model, where the coefficient matrix of the SH illumination model represents the illumination information in the face image. Further, the computing device 103 may perform a variable-parameter solution on the face image according to the prior information of the reflectance image, the depth information, and the illumination information corresponding to the face image of the object to be detected, so as to obtain the coefficient matrix of the spherical harmonic illumination model. This coefficient matrix reflects the illumination information in the face image, and thus yields the illumination information in the face image of the object to be detected. For convenience of description, in the following embodiments the coefficient matrix of the spherical harmonic illumination model is simply referred to as the illumination coefficient matrix.
Further, the SIRFS algorithm can recover illumination information, depth information, and reflectance from a single color image. In the SIRFS algorithm, the problem of obtaining the illumination information in the face image of the object to be detected can be expressed as:
$$\max_{R,Z,L} \; P(R)\,P(Z)\,P(L) \quad \text{s.t.} \quad I = R + S(Z, L) \tag{1}$$
wherein, R is a logarithmic reflectivity image, Z is a depth image, L is an illumination coefficient matrix, and I is an RGB image containing an object to be detected.
R and Z are two images of the same dimensions as I. P(R), P(Z), and P(L) are the prior information of the reflectance, depth information, and illumination information, respectively; S(Z, L) is the rendering engine, which produces light-and-shadow information from the parameters Z and L. Formula (1) solves for the optimal shape (depth) information, reflectance, and illumination information that maximize the prior probability, subject to the constraint that the RGB image equals the rendering model R + S(Z, L).
Further, from equation (1) it follows that R = I - S(Z, L), and by applying a negative logarithmic transformation to P(R)P(Z)P(L), the maximum-likelihood problem above can be transformed into the problem of minimizing a sum of loss functions:
$$\min_{Z,L} \; g(I - S(Z, L)) + f(Z) + h(L) \tag{2}$$
where g(R), f(Z), and h(L) are the loss functions of the reflectance image, the depth information, and the illumination information, respectively. The loss function of the reflectance image can be expressed as:
$$g(R) = \lambda_s g_s(R) + \lambda_e g_e(R) + \lambda_a g_a(R) \tag{3}$$

where $g_s(R)$ is the prior condition for smoothness, $g_e(R)$ is the prior condition on the entropy of the reflectance, and $g_a(R)$ is the prior condition for color consistency; $\lambda_s$, $\lambda_e$, and $\lambda_a$ are adjustable coefficients. The specific expressions of the prior conditions on smoothness, entropy, and color consistency are prior art and are not described herein again.
Further, the loss function of the depth information may be expressed as:
$$f(Z) = \lambda_k f_k(Z) + \lambda_i f_i(Z) + \lambda_c f_c(Z) \tag{4}$$

where $f_k(Z)$ denotes the prior condition for smoothness, $f_i(Z)$ denotes the prior condition for orientation, and $f_c(Z)$ denotes the prior condition for the appearance contour; $\lambda_k$, $\lambda_i$, and $\lambda_c$ are adjustable coefficients. The specific expressions of the prior conditions on smoothness, orientation, and appearance contour are prior art and are not described herein again.
Further, the prior information of the illumination can be represented by fitting a multivariate Gaussian model to the illumination information in a training library. On this basis, the loss function of the illumination information can be expressed as:
$$h(L) = \lambda_L \,(L - \mu_L)^T \,\Sigma_L^{-1}\,(L - \mu_L) \tag{5}$$

where $\mu_L$ and $\Sigma_L$ are the parameters of the Gaussian, $\lambda_L$ is the coefficient of the prior information, and L is the illumination coefficient matrix.
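As a minimal sketch, equation (5) transcribes directly into code, assuming the illumination coefficient matrix L is flattened into a vector; the value of the coefficient `lam_L` here is only a placeholder:

```python
import numpy as np

def illumination_loss(L, mu_L, Sigma_L, lam_L=1.0):
    """h(L) of equation (5): a Mahalanobis-style distance of L from the
    Gaussian fitted to illumination information in the training library."""
    d = np.asarray(L) - np.asarray(mu_L)
    # solve(Sigma_L, d) computes Sigma_L^{-1} d without forming the inverse
    return lam_L * float(d @ np.linalg.solve(Sigma_L, d))
```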
Further, the rendering engine S(Z, L) is used to calculate the brightness S of a point on the surface of the object whose surface normal is $n_i$. The rendering engine can be expressed as:

$$S(n_i, L) = [n_i; 1]^T \, M \, [n_i; 1] \tag{6}$$

where

$$M = \begin{bmatrix} c_1 L_9 & c_1 L_5 & c_1 L_8 & c_2 L_4 \\ c_1 L_5 & -c_1 L_9 & c_1 L_6 & c_2 L_2 \\ c_1 L_8 & c_1 L_6 & c_3 L_7 & c_2 L_3 \\ c_2 L_4 & c_2 L_2 & c_2 L_3 & c_4 L_1 - c_5 L_7 \end{bmatrix}$$

with $c_1 = 0.429043$, $c_2 = 0.511664$, $c_3 = 0.743125$, $c_4 = 0.886227$, $c_5 = 0.247708$; $L_1 \sim L_9$ denote the illumination coefficients of one color channel (each of the R, G, and B channels in RGB has its own set). The expression of M in equation (6) is given only for an SH illumination model constructed with 2nd-order spherical harmonics; if the SH model is constructed with higher-order spherical harmonics, the form of M changes accordingly and is not shown here.
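The rendering computation in equation (6) can be sketched as follows, assuming the nine coefficients of one color channel are ordered $L_1$ through $L_9$ as in the standard 2nd-order spherical-harmonic irradiance formulation that the constants above come from:

```python
import numpy as np

C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_matrix(L):
    """Assemble M from the nine SH illumination coefficients of one channel."""
    L1, L2, L3, L4, L5, L6, L7, L8, L9 = L
    return np.array([
        [C1 * L9,  C1 * L5,  C1 * L8,  C2 * L4],
        [C1 * L5, -C1 * L9,  C1 * L6,  C2 * L2],
        [C1 * L8,  C1 * L6,  C3 * L7,  C2 * L3],
        [C2 * L4,  C2 * L2,  C2 * L3,  C4 * L1 - C5 * L7],
    ])

def shade(n_i, L):
    """Brightness S(n_i, L) of a surface point with unit normal n_i,
    computed as [n_i; 1]^T M [n_i; 1] per equation (6)."""
    n = np.append(np.asarray(n_i, dtype=float), 1.0)  # homogeneous [n_i; 1]
    return float(n @ sh_matrix(L) @ n)
```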
Further, the loss function in formula (2) can be minimized using an unconstrained optimization method, and the illumination coefficient matrix at the minimum is taken as the illumination information in the face image of the object to be detected. For convenience of description and distinction, in the embodiments of the present application this illumination coefficient matrix is denoted $L_t$. The unconstrained optimization method may be the Newton method or a quasi-Newton method; quasi-Newton methods include, but are not limited to, the DFP algorithm, the BFGS algorithm, and the L-BFGS algorithm. The specific process of minimizing formula (2) with the Newton or quasi-Newton method is prior art and is not described herein again.
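A minimal sketch of this unconstrained minimization using SciPy's L-BFGS implementation is given below. The prior losses `g`, `f`, `h` and the renderer `S` are assumed to be supplied by the caller, and a full SIRFS solver would use a more elaborate parameterization; this only illustrates the structure of the solve:

```python
import numpy as np
from scipy.optimize import minimize

def recover_illumination(I, g, f, h, S, x0):
    """Minimize equation (2), g(I - S(Z, L)) + f(Z) + h(L), and return L_t.

    I  : H x W x 3 log-RGB image containing the object to be detected
    x0 : initial guess packing the depth image Z (H*W values) followed by
         the 27 spherical-harmonic illumination coefficients L
    """
    hgt, wid = I.shape[:2]
    n_z = hgt * wid

    def loss(params):
        Z = params[:n_z].reshape(hgt, wid)
        L = params[n_z:]                  # 9 coefficients per RGB channel
        R = I - S(Z, L)                   # log-reflectance: R = I - S(Z, L)
        return g(R) + f(Z) + h(L)

    result = minimize(loss, x0, method="L-BFGS-B")  # quasi-Newton solve
    return result.x[n_z:]                 # L_t, illumination in the image
```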
Optionally, considering that the irradiance from diffuse reflection is a low-frequency function, it can be approximated with a 2nd-order spherical harmonic basis; that is, an SH illumination model constructed with the 2nd-order spherical harmonic basis is used to express the light source model corresponding to the face image of the object to be detected. Further, if the SH illumination model is constructed with 2nd-order spherical harmonics, each color channel (the three RGB channels) corresponds to 9 illumination coefficients when calculating the coefficient matrix of the SH illumination model; that is, the coefficient matrix contains 27 parameters, and these 27 parameters describe the illumination information.
Further, after the computing device 103 obtains the illumination information in the face image of the object to be detected, the illumination information in the face image may be matched with the illumination information in the current environment where the gate 101 is located. Alternatively, the computing device 103 may calculate the similarity of the illumination information in the face image of the object to be detected to the illumination information in the current environment in which the gate 101 is located. Further, if the calculated similarity is greater than or equal to a preset illumination similarity threshold, the computing device 103 determines that the illumination information in the face image of the object to be detected matches the illumination information in the current environment, and may also determine that the face image of the object to be detected is a living body image. Accordingly, if the calculated similarity is smaller than the preset illumination similarity threshold, the computing device 103 determines that the illumination information in the facial image of the object to be detected does not match the illumination information in the current environment where the gate 101 is located, and may determine that the facial image of the object to be detected is not a live image.
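For instance, with both illumination coefficient matrices flattened to 27-dimensional vectors, the match decision could look like the following sketch. Cosine similarity and the 0.9 threshold are illustrative choices only, since the application does not fix the similarity measure:

```python
import numpy as np

def illumination_match(L_t, L_e, threshold=0.9):
    """Return True when the illumination in the face image (L_t) matches
    the illumination in the current environment (L_e)."""
    L_t, L_e = np.ravel(L_t), np.ravel(L_e)
    sim = float(L_t @ L_e / (np.linalg.norm(L_t) * np.linalg.norm(L_e)))
    return sim >= threshold  # preset illumination similarity threshold
```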
Alternatively, the computing device 103 may employ a deep learning model to calculate the similarity between the illumination information in the face image of the object to be detected and the illumination information in the current environment where the gate 101 is located. For example, an AlexNet model, a VGG model, a GoogLeNet model, a ResNet model, or another CNN model may be used to calculate this similarity, but the choice is not limited thereto.
Further, before the gate system performs living body detection, the illumination information in the current environment where the gate 101 is located can be estimated and calculated. For convenience of description and distinction, the illumination information in the current environment is denoted $L_e$. The following exemplarily describes an embodiment of obtaining the illumination information in the current environment where the gate 101 is located.
In the initial operation phase of the gate system, the system does not yet know the illumination information of its current environment, so the computing device 103 needs to calculate the illumination information of the current environment where the gate 101 is located in advance. Optionally, the image acquisition device 102 may capture an initial environment image of the environment space to which the current environment of the gate 101 belongs, and provide it to the computing device 103. Accordingly, the computing device 103 may perform illumination analysis on the initial environment image to obtain initial illumination information $L_0$. The computing device 103 may perform this analysis using the embodiments for illumination analysis of the image to be detected described above; for details, refer to the related contents above, which are not repeated here. On this basis, in the initial operation phase of the gate system, the computing device 103 may use the pre-obtained initial illumination information $L_0$ of the environment space to which the current environment of the gate 101 belongs as the illumination information of the current environment.
Of course, in some scenarios, the illumination information in the environment space where the gate 101 is located is relatively stable and does not change with time, and in other working stages of the gate system, the initial illumination information may also be used as the illumination information in the current environment where the gate 101 is located.
In other application scenarios, the light sources in the environment space where the gate 101 is located may change. To improve the accuracy of the illumination information of the current environment, an update period may be preset in the computing device 103 and timed with a timer or counter. Each time the update period arrives, the computing device 103 controls the image acquisition device 102 to capture an environment image of the environment space to which the gate 101 belongs as the initial environment image for that period and provide it to the computing device 103. Accordingly, the computing device 103 performs illumination analysis on the initial environment image of each period to obtain the initial illumination information $L_0$, and uses the $L_0$ obtained in the current period as the illumination information of the current environment where the image acquisition device 102 is located during that period.
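A sketch of such a periodic refresh is given below; the period length and the `camera`/`analyze` interfaces are assumptions for illustration, not part of this application:

```python
import time

class EnvironmentIllumination:
    """Re-estimates the environment illumination L_e once per update period."""

    def __init__(self, camera, analyze, period_s=600):
        self.camera = camera      # image acquisition device (assumed interface)
        self.analyze = analyze    # illumination analysis, e.g. SIRFS-based
        self.period_s = period_s  # update period (10 minutes, assumed)
        self.L_e = None

    def run(self):
        while True:
            frame = self.camera.capture()   # environment image for this period
            self.L_e = self.analyze(frame)  # initial illumination L_0
            time.sleep(self.period_s)       # wait for the next update period
```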
To further improve the accuracy of the illumination information of the current environment where the image acquisition device 102 is located, this illumination information may also be calculated from the illumination information of N historical images in which the object to be detected was determined to be a living body, where N is a positive integer. Further, when N ≥ 2, the mean of the illumination information of the N historical images can be used as the illumination information of the current environment, namely

$$L_e = \frac{1}{N} \sum_{i=1}^{N} L_i$$

where $L_i$ is the illumination information of the i-th historical image. Preferably, the N historical images are the N most recently acquired ones, that is, the N images closest to the current time in which the contained object to be detected was determined to be a living body. Because the acquisition times of these N historical images are close to the current time, their illumination information is close to the illumination information of the current environment. Thus, when the computing device 103 judges whether the illumination information in the face image of the object to be detected matches the illumination information of the current environment, the adopted current-environment illumination information is consistent with the illumination actually present in the environment space where the gate 101 is located, which helps improve the accuracy of subsequent living body detection.
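This running average over the N most recently confirmed living-body images can be sketched as follows (N = 10 is an arbitrary choice here, not a value from the application):

```python
from collections import deque
import numpy as np

class HistoricalIllumination:
    """Tracks illumination from the N latest images judged to be living
    bodies and reports their mean as the current-environment estimate."""

    def __init__(self, N=10):
        self.history = deque(maxlen=N)   # keeps only the N latest L_i

    def add(self, L_i):
        """Record illumination from an image confirmed as a living body."""
        self.history.append(np.ravel(L_i))

    def current(self):
        """L_e as the mean of the stored L_i (requires at least one entry)."""
        return np.mean(self.history, axis=0)
```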
In practical applications, the object to be detected needs to be not only a living body but also a legal object in view of safety. Only legitimate objects that are live will have permission to pass through the gate 101. Based on this, the computing device 103 further needs to determine whether the face image of the object to be detected matches the face image of the known object, and if the illumination information in the face image of the object to be detected matches the illumination information in the current environment where the gate 101 is located, and the object to be detected is a legal object, the computing device 103 may control the gate 101 to be turned on to allow the detected object to pass through the gate 101. Alternatively, the computing device 103 may match the face image of the object to be detected in the face image of the known object, and determine that the object to be detected is a legitimate object if the face image of the object to be detected is matched in the face image of the known object.
Further, the computing device 103 may calculate the similarity between the face image of the object to be detected and the face images of known objects. If some known face image has a similarity to the face image of the object to be detected that is greater than or equal to a preset image similarity threshold, the face image of the object to be detected is matched among the face images of known objects, and the object to be detected can be determined to be a legal object.
It should be noted that the living body detection method and the verification method of the legal object provided in the embodiments of the present application may be performed sequentially or in parallel. When the two are executed in sequence, the execution sequence is not limited.
Further, in the embodiments of the present application, the face image of a known object may be a face image of a legal object acquired in advance and stored in the computing device 103; it may also be the face image in a valid identity document provided by the object to be detected and acquired on site by the gate system. The valid identity document may be, but is not limited to, an identity card, a passport, a student ID card, an employee badge, and the like.
In practical applications, if the gate system belongs to a company, a residential community, or a building, the people in these places are relatively fixed, so their face images can be acquired in advance as the face images of known objects and stored in the computing device 103. If the gate system is a passage system for places with high foot traffic, such as railway stations, bus stations, airports, museums, and commercial buildings, it is difficult to acquire everyone's face images in advance because of the high mobility of people in such places; therefore, the face image in the valid identity document provided by the object to be detected can be acquired on site.
Further, in the case of a face image in a valid identity document provided by an object to be detected, which is acquired on site by the gate system, as shown in fig. 1b, the gate system further includes: the identity information collection device 105. The connection mode of the identity acquisition device 105 and the computing device 103 may refer to the connection mode of the image acquisition device 102 and the computing device 103, which is not described herein again. Further, the identity information collecting device 105 may be a camera, a scanner, a recognizer based on OCR (Optical Character Recognition) technology, a card reader, and the like, but is not limited thereto. Alternatively, the identity information collecting device 105 may be disposed on the gate 101, or may be a peripheral device disposed at an entrance of the gate 101. In fig. 1b, only the identity information acquisition device 105 is shown as being disposed on the gate 101, but not limited thereto.
In this embodiment, the identity information capture device 105 can acquire a face image in an identity document provided by an object to be detected as a face image of a known object and provide the face image of the known object to the computing device. Accordingly, the computing device 103 may calculate a similarity between the face image of the object to be detected and the face image of the known object, and determine that the object to be detected is a valid object if the calculated similarity is greater than or equal to a preset image similarity threshold.
In addition, in some application scenarios, such as train stations, bus stations, and airports, the object to be detected needs both a ticket and a valid identity document to enter the waiting hall, so the object to be detected must provide not only the valid identity document but also the ticket or air ticket. Accordingly, the identity information acquisition device 105 may further acquire the associated information of the ticket or air ticket provided by the object to be detected and provide it to the computing device 103. The computing device 103 may then determine whether the ticket information provided by the object to be detected is valid based on the current time and the physical location of the gate 101. For example, suppose the gate system is disposed at the entrance of a train station. The identity information acquisition device 105 may acquire the train number, departure time, starting station, and other information in the train ticket provided by the object to be detected, and provide them to the computing device 103. The computing device 103 may determine, according to the current time and the departure time in the train ticket, whether the ticket satisfies the station-entering time; further, it may determine, according to the current station's information and the train number and starting station in the ticket, whether the current station is the boarding point of the object to be detected. If the ticket satisfies the station-entering time and the current station is the corresponding boarding point, the ticket provided by the object to be detected can be determined to be valid, meaning that the object to be detected can enter the station at the current time with this ticket.
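A minimal sketch of this ticket check follows; the field names and the two-hour admission window are assumptions for illustration, since the application describes the check only in general terms:

```python
from datetime import datetime, timedelta

def ticket_valid(ticket, station, now=None, window=timedelta(hours=2)):
    """A ticket is valid when this gate's station is the ticket's starting
    station and the current time falls within the admission window before
    the departure time. `ticket` is a dict with assumed keys."""
    now = now or datetime.now()
    right_station = ticket["start_station"] == station
    time_ok = timedelta(0) <= ticket["departure"] - now <= window
    return right_station and time_ok
```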
It should be noted that in some application scenarios, other verification conditions may be added on top of the above living body detection and face recognition to determine whether the object to be detected has the right to pass through the gate 101. For example, there are scenarios in which particular groups enjoy benefits with their valid credentials: students can enjoy a half-price discount with their identity cards when buying train tickets; in some scenic spots, senior citizens can enter free of charge with their identity cards, or soldiers can enter free of charge with their military ID cards. In these scenarios, the gate system can also verify the benefit-granting valid credentials provided by the object to be detected. To improve security, the computing device 103 may control the gate 101 to open and allow the object to be detected to pass only if the object passes all the verifications set by the gate system.
Besides the gate system provided by the embodiment of the application, the embodiment of the application also provides a living body detection method and a data processing method. The living body detection method and the data processing method provided by the embodiments of the present application are respectively exemplified below.
Fig. 2 is a schematic flowchart of a method for detecting a living body according to an embodiment of the present disclosure. As shown in fig. 2, the method includes:
201. In the current environment, acquire an image containing the object to be detected.
202. Perform illumination analysis on the acquired image to obtain the illumination information in the image.
203. If the illumination information in the image matches the illumination information in the current environment, determine that the object to be detected is a living body.
In this embodiment, for step 201, an image acquisition device such as a camera, a laser sensor, or an infrared sensor may be used to acquire an image of the object to be detected in the current environment. The camera may be a binocular camera, a monocular camera, a depth camera, or the like, but is not limited thereto. The placement of the image acquisition device differs across application scenarios. For example, in a gate system, the image acquisition device may be located on the gate or near the gate entrance. As another example, when a user performs face-scan login or payment on a terminal device, or handles business on a self-service terminal in a bank business hall, the image acquisition device may be the camera on the terminal device. Optionally, a monocular camera in the current environment may be used to acquire the image containing the object to be detected, which reduces cost.
Further, in practical applications, the biometric features required for face recognition are often forged using a paper photo or electronic photo of a legitimate user, so that the rights of the legitimate user are stolen. Because a paper or electronic photo carries the illumination information captured when the photo was taken, even if an illegal user holds such a photo of a legitimate user in front of the image acquisition device so that it captures the facial image in the photo, the illumination in the current environment cannot drown out the original illumination information in the photo. Therefore, if the object to be detected uses a prop such as a photo for the image acquisition device to capture, the acquired image contains not only the illumination information of the current environment but also the illumination information carried in the photo; if the object to be detected is a real person, the acquired facial image carries only the illumination information of the current environment. Moreover, a real person and a photo differ in how they exhibit light phenomena such as refraction, reflection, and diffuse reflection. Here, the illumination information in the current environment is the illumination generated by the light source devices present in the environment space where the living body detection device is located. For a description of the light source devices, refer to the related contents of the gate system embodiment above, which are not repeated here.
Based on the above analysis, in step 202, a lighting analysis may be performed on the face image of the object to be detected, so as to obtain lighting information in the face image. Further, if the illumination information in the face image of the object to be detected is matched with the illumination information in the current environment, the object to be detected is determined to be a living body.
In this embodiment, whether the object to be detected is a living body can be identified by judging whether the illumination information in the image containing the object to be detected matches the illumination information of the current environment. On the one hand, this living body detection approach requires no cooperative actions from the object to be detected, which shortens detection time and improves detection efficiency; on the other hand, it reduces the dependence on texture details in the face image, which not only lowers the image quality requirements but also offers higher stability, helping to improve the accuracy of living body detection.
Furthermore, the method and the device have low requirements on image quality and relatively low requirements on the precision of the image acquisition equipment, and can acquire facial images of the object to be detected by adopting some image acquisition equipment with low precision, such as a monocular camera and the like, so that the cost is saved.
In the embodiments of the present application, the illumination information in the image of the object to be detected may be acquired in various ways. For example, the image may be subjected to a variable-parameter solution using the SIRFS algorithm or the Scene-SIRFS algorithm, recovering the shape (depth information), reflectance, and illumination from the image. When the SIRFS algorithm is used, the light source model corresponding to the face image can be expressed by a spherical harmonic illumination model, where the coefficient matrix of the spherical harmonic illumination model represents the illumination information in the face image. Furthermore, a variable-parameter solution can be performed on the face image according to the prior information of the reflectance image, depth information, and illumination information corresponding to the image of the object to be detected, yielding the coefficient matrix of the spherical harmonic illumination model. This coefficient matrix reflects the illumination information in the image, and thus yields the illumination information in the image of the object to be detected. For the specific implementation of performing illumination analysis on the image containing the object to be detected with the SIRFS algorithm, refer to the related contents in the gate system embodiment, which are not repeated here.
Further, after the illumination information in the image of the object to be detected is obtained, the illumination information in the image can be matched with the illumination information in the current environment. Optionally, the similarity between the illumination information in the image of the object to be detected and the illumination information in the current environment may be calculated. Further, if the calculated similarity is greater than or equal to a preset illumination similarity threshold, it is determined that the illumination information in the image of the object to be detected matches the illumination information in the current environment, and it is also determined that the object to be detected is a living body. Correspondingly, if the calculated similarity is smaller than the preset illumination similarity threshold, it is determined that the illumination information in the image of the object to be detected is not matched with the illumination information in the current environment, and it is determined that the object to be detected is not a living body.
Optionally, a deep learning model may be employed to calculate the similarity between the illumination information in the image of the object to be detected and the illumination information in the current environment. For example, an AlexNet model, a VGG network model, a GoogLeNet model, a ResNet model, or another CNN model may be used, but the embodiment is not limited thereto.
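As a simpler alternative to a learned similarity model, the sketch below compares two spherical harmonic illumination descriptors with cosine similarity against a preset illumination similarity threshold; the choice of cosine similarity and the value 0.9 are assumptions for illustration.

```python
import numpy as np

def illumination_similarity(coeffs_image, coeffs_env):
    """Cosine similarity between two illumination descriptors."""
    denom = np.linalg.norm(coeffs_image) * np.linalg.norm(coeffs_env)
    return float(np.dot(coeffs_image, coeffs_env) / denom) if denom else 0.0

def is_live(coeffs_image, coeffs_env, threshold=0.9):
    """Declare a living body when the illumination in the image is
    close enough to the illumination in the current environment."""
    return illumination_similarity(coeffs_image, coeffs_env) >= threshold
```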
It is worth mentioning that, in the embodiment of the present application, in order to improve the accuracy of the living body detection, after step 201, facial feature recognition may be performed on the image containing the object to be detected to determine the face region image in the image. Then, illumination analysis is performed on the face region image to obtain the illumination information in the face region image. Further, if the illumination information in the face region image matches the illumination information in the current environment, the object to be detected is determined to be a living body. Using the illumination information of the face region image for living body detection not only improves the detection accuracy, but also reduces the amount of calculation, increases the processing speed, and improves the detection efficiency. For the implementation of performing illumination analysis on the face region image, reference may be made to the above related contents of performing illumination analysis on the image of the object to be detected, which are not repeated herein. A minimal face-region extraction sketch is given below.
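The sketch uses OpenCV's stock Haar-cascade detector purely as an illustrative stand-in for whatever facial feature recognition the device actually employs:

```python
import cv2

# Haar cascade shipped with OpenCV; any face detector would do here.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face_region(image_bgr):
    """Return the largest detected face region of the image, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest box wins
    return image_bgr[y:y + h, x:x + w]
```

Restricting the illumination analysis to this crop keeps the fitted coefficients from being dominated by background surfaces.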
Further, before the living body detection work is carried out, the illumination information in the current environment where the living body detection device is located can be estimated in advance. For convenience of description and distinction, the illumination information in the current environment is denoted as L_e. The following exemplarily describes, with reference to specific embodiments, how the illumination information in the current environment where the living body detection device is located can be obtained.
At the initial working stage of the living body detection device, the device does not yet know the illumination information in the current environment, so the illumination information in the current environment where the device is located needs to be calculated in advance. Optionally, an initial environment image corresponding to the environment space to which the current environment of the living body detection device belongs may be collected, and illumination analysis may be performed on the initial environment image to obtain initial illumination information L_0. The illumination analysis of the initial environment image may adopt the same implementation as the illumination analysis of the image to be detected; for specific embodiments, reference may be made to the above related contents, which are not repeated herein. On this basis, at the initial working stage of the living body detection device, the pre-obtained initial illumination information L_0 of the environment space to which the current environment belongs can be used as the illumination information in the current environment.
Of course, in some scenarios the illumination information in the environment space where the living body detection device is located is relatively stable and does not change over time; in such cases, the initial illumination information may also be used as the illumination information in the current environment at other working stages of the device.
In other application scenarios, for example when a user performs face-brushing payment or face-brushing login with his terminal device, the environment of the terminal device may vary widely between operations. Therefore, each time the object to be detected performs face-brushing payment, face-brushing login, or a similar operation, an initial environment image corresponding to the environment space to which the current environment of the terminal device belongs may be collected, and illumination analysis may be performed on this initial environment image to obtain initial illumination information, which is used as the illumination information in the current environment of the terminal device.
In still other application scenarios, the light source in the environment space where the living body detection device is located may vary to some extent. In order to improve the accuracy of the obtained illumination information of the environment space to which the current environment belongs, an update period can be preset, and a timer or counter can be started to time the update period. Each time the update period expires, an initial environment image corresponding to the environment space to which the current environment of the living body detection device belongs is collected, and illumination analysis is performed on it to obtain initial illumination information L_0. Accordingly, the initial illumination information L_0 obtained in the current period can be used as the illumination information in the current environment within that period. A timer-based refresh sketch is given below.
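In the sketch, capture_env_image and analyze_illumination are placeholders for the device's image acquisition and illumination analysis routines, and the ten-minute period is a hypothetical choice:

```python
import time

UPDATE_PERIOD_S = 600.0   # hypothetical update period (10 minutes)
_last_update = 0.0
_env_coeffs = None

def current_env_illumination(capture_env_image, analyze_illumination):
    """Re-estimate ambient illumination once per update period and
    reuse the cached estimate in between."""
    global _last_update, _env_coeffs
    now = time.monotonic()
    if _env_coeffs is None or now - _last_update >= UPDATE_PERIOD_S:
        _env_coeffs = analyze_illumination(capture_env_image())
        _last_update = now
    return _env_coeffs
```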
In order to further improve the accuracy of the obtained illumination information of the environment space to which the current environment belongs, the illumination information in the current environment can also be calculated from the illumination information in N historical images containing objects to be detected that were determined to be living bodies, where N is a positive integer. Further, when N ≥ 2, the mean of the illumination information in the N historical images can be used as the illumination information in the current environment, namely
$$L_e = \frac{1}{N}\sum_{i=1}^{N} L_i$$
where L_i (i = 1, ..., N) is the illumination information in the i-th historical image. Preferably, the N historical images are the N most recently acquired ones, that is, the N images closest to the current time in which the object to be detected was determined to be a living body. Because their acquisition times are close to the current time, the illumination information in these images is close to the illumination information in the current environment. Thus, when the living body detection device determines whether the illumination information in the face image of the object to be detected matches the illumination information in the current environment, the adopted illumination information stays consistent with the illumination actually present in the environment space where the device is located, which improves the accuracy of subsequent living body detection.
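A rolling-buffer sketch of this averaging, with a hypothetical window size N = 5 and NumPy vectors standing in for the illumination information L_i:

```python
import numpy as np
from collections import deque

N = 5                        # hypothetical window size, N >= 2
_history = deque(maxlen=N)   # descriptors of images judged to be live

def record_live_sample(coeffs):
    """Remember the illumination of the latest image judged a living body."""
    _history.append(np.asarray(coeffs, dtype=float))

def env_illumination_from_history():
    """L_e = (1 / N) * sum(L_i) over the N most recent live samples."""
    if not _history:
        return None
    return np.mean(np.stack(list(_history)), axis=0)
```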
Further, for safety reasons, in practical applications the object to be detected needs to be not only a living body but also a legal object. What constitutes a legal object depends on the application scene of the living body detection technology. For example, if the living body detection technology is applied to a gate system, the legal object is a user who has the right to pass through the gate. For another example, if the technology is applied to face-brushing login or face-brushing payment, the legal object is a user who has registered an account. Subsequent services are provided for the object to be detected only when it is both a living body and a legal object.
Further, when detecting whether the object to be detected is a legal object, the face region image of the object to be detected can be acquired and matched against the face region images of known objects; if the face region image of the object to be detected matches a face region image of a known object and the object to be detected is determined to be a living body, subsequent services are provided for the object to be detected. The detection of whether the object to be detected is a legal object and the living body detection provided in fig. 2 may be performed sequentially or concurrently; when performed sequentially, their order of execution is not limited.
Further, the similarity between the face region image of the object to be detected and the face region images of known objects can be calculated. If there is a known face whose similarity to the face region image of the object to be detected is greater than or equal to a preset image similarity threshold, the face region image of the object to be detected is matched among the face images of known objects, and the object to be detected can be determined to be a legal object. A matching sketch is given below.
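An illustrative sketch of that threshold test, assuming face images have been mapped to L2-normalized feature embeddings by some unspecified extractor; the threshold value is hypothetical:

```python
import numpy as np

IMAGE_SIM_THRESHOLD = 0.8  # hypothetical preset image similarity threshold

def match_known_object(probe_embedding, gallery_embeddings):
    """Return True if the probe face matches any known face; with
    L2-normalized embeddings the dot product equals cosine similarity."""
    return any(float(np.dot(probe_embedding, known)) >= IMAGE_SIM_THRESHOLD
               for known in gallery_embeddings)
```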
Furthermore, in different application scenes, different subsequent services can be provided for the object to be detected. The following is an exemplary description in connection with several common application scenarios.
Application scenario 1: when the living body detection method provided by this embodiment is applied to a gate system, if the face region image of the object to be detected is matched among the face images of known objects and the object to be detected is determined to be a living body, the gate of the gate machine can be controlled to open to allow the object to be detected to pass through. For the definition of the known object in this application scenario, reference may be made to the related contents of the above embodiment of the gate system, which are not repeated herein.
Application scenario 2: when the living body detection method provided by this embodiment is applied to face-brushing payment, if the face region image of the object to be detected is matched among the face images of known objects and the object to be detected is determined to be a living body, a payment service can be provided for the object to be detected in combination with the payment account bound to it. For example, a set amount can be deducted from the bound payment account, or transferred from the bound payment account to a collection account set by the object to be detected, or a payment page can be displayed to the object to be detected.
In application scenario 2, the facial image of the known object may be the facial image of the user bound to the payment account set by the object to be detected; this may be the facial image in the valid identity document provided when the user registered the account, or a facial image of the user collected by the bank where the payment account was opened; alternatively, it may be the facial image in a valid identity document provided by the object to be detected and captured on site. Further, if the facial image of the known object is the facial image in a valid identity document captured on site, the captured facial image can be matched against the facial image of the user bound to the payment account; if the calculated similarity between the two is greater than or equal to a set similarity threshold, the captured facial image in the valid identity document can be used as the facial image of the known object.
Application scenario 3: when the living body detection method provided by this embodiment is applied to scenes such as face-brushing authentication, face-brushing login, or face-brushing password modification, if the face region image of the object to be detected is matched among the face images of known objects and the object to be detected is determined to be a living body, a relevant information interface can be presented to the object to be detected so that it can obtain relevant information or perform relevant operations. For example, when the object to be detected modifies a password, a password modification page can be displayed to it; for another example, when the object to be detected makes a balance inquiry, balance information and the like can be presented to it, but the embodiment is not limited thereto.
In application scenario 3, the facial image of the known object may be the facial image of the user bound to the user name or account number input by the object to be detected; this may be the facial image in the valid identity document provided when the user registered the account, or a facial image of the user collected by the backend device corresponding to the input user name or account number.
Accordingly, embodiments of the present application also provide a readable storage medium storing computer instructions, which when executed by one or more processors, cause the one or more processors to perform the steps of the above-mentioned liveness detection method.
Fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application. The data processing method can be used for face recognition. As shown in fig. 3, the method includes:
301. In the current environment, a face image of an object to be detected is acquired.
302. Illumination analysis is performed on the face image to obtain illumination information in the face image.
303. If the illumination information in the face image matches the illumination information in the current environment and the face image belongs to the face image of a known object, it is determined that the object to be detected passes face recognition.
In this embodiment, for step 301, an image acquisition device such as a camera, a laser sensor, or an infrared sensor may be used to collect the facial image of the object to be detected in the current environment. The camera may be a binocular camera, a monocular camera, a depth camera, or the like, but is not limited thereto. For the setting position of the image acquisition device, reference may be made to the related contents of the above embodiments. Optionally, if the image acquisition device is a monocular camera, the monocular camera in the current environment may be used to collect the facial image of the object to be detected, which reduces cost.
Further, in practical applications, an illegitimate user often forges the biometric features required for face recognition by using a paper photo or an electronic photo of a legitimate user, which results in the theft of the legitimate user's rights. Because a paper photo or an electronic photo carries the illumination information present when the photo was taken, even if an illegal user holds the paper photo or electronic photo of a legal user in front of the image acquisition device so that the device captures the facial image in the photo, the illumination in the current environment cannot submerge the original illumination information in the photo. Therefore, if the object to be detected presents a prop such as a photo for the image acquisition device to capture, the image acquired by the image acquisition device includes not only the illumination information in the current environment but also the illumination information carried in the photo; if the object to be detected is a real person, the facial image acquired by the image acquisition device carries only the illumination information in the current environment. Moreover, a real face and a photo differ in their expression of light phenomena such as refraction, reflection, and diffuse reflection. Here, the illumination information in the current environment is the illumination information generated by the light source devices present in the environment space where the living body detection device is located. For the description of the light source device, reference may be made to the related contents of the above embodiment of the gate system, which are not repeated herein.
Based on the above analysis, in step 302, the illumination analysis may be performed on the face image of the object to be detected, so as to obtain illumination information in the face image. Further, if the illumination information in the face image of the object to be detected is matched with the illumination information in the current environment, the object to be detected is determined to be a living body. For a specific implementation manner of performing illumination analysis on the face image of the object to be detected and determining whether the illumination information in the face image of the object to be detected matches the illumination information in the current environment, reference may be made to relevant contents of the above embodiments, which is not described herein again.
Further, for safety reasons, in practical applications, the object to be detected needs to be not only a living body but also a legal object. For the description of the legal object, reference may be made to the relevant contents in the above embodiments, and details are not repeated here. Further, in the present embodiment, if the face image of the object to be detected belongs to the face image of the known object, it is determined that the object to be detected is a legitimate object. That is, if the illumination information in the face image of the object to be detected matches the illumination information in the current environment, and the face image of the object to be detected belongs to the face image of the known object, it is determined that the object to be detected passes face recognition. For specific embodiments of determining whether the face image of the object to be detected belongs to the face image of the known object, reference may be made to relevant contents of the foregoing embodiments, and details are not repeated herein.
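Tying steps 301 to 303 together, the following sketch reuses the is_live and match_known_object helpers from the earlier sketches; every parameter is an illustrative stand-in rather than the exact interface of this application:

```python
def face_recognition_with_liveness(capture_face_image, analyze_illumination,
                                   env_coeffs, embed, gallery):
    """Steps 301-303: capture, illumination-based liveness check,
    then identity check against known faces."""
    face = capture_face_image()              # step 301
    coeffs = analyze_illumination(face)      # step 302
    if not is_live(coeffs, env_coeffs):      # illumination mismatch:
        return False                         # likely a photo attack
    probe = embed(face)                      # step 303
    return match_known_object(probe, gallery)
```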
In this embodiment, the face recognition and the living body detection are combined, and in the process of face recognition, whether the object to be detected is a living body is recognized by judging whether illumination information in the facial image of the object to be detected, which is acquired by the image acquisition device, is matched with illumination information in the current environment, so that the accuracy of face recognition is improved, and the safety based on the face recognition technology is further improved. On one hand, the living body detection mode based on the illumination information does not need the action coordination of the object to be detected, so that the living body detection time can be reduced, and the detection efficiency can be improved; on the other hand, the method can reduce the dependence on the texture details in the face image, not only can reduce the quality requirement on the image, but also has higher stability, and is beneficial to improving the accuracy of the living body detection.
Furthermore, the method and the device have low requirements on image quality and relatively low requirements on the precision of the image acquisition device; facial images of the object to be detected can be acquired with some low-precision image acquisition devices, such as a monocular camera, thereby saving cost.
Further, in this embodiment, when the object to be detected passes face recognition, subsequent services can be provided to it. The subsequent services provided for the object to be detected differ across application scenes. For example, in application scenario 1, if the object to be detected passes face recognition, the gate of the gate machine can be controlled to open to allow it to pass through. For another example, in application scenario 2, if the object to be detected passes face recognition, a payment service can be provided for it in combination with the payment account bound to it; for example, a set amount can be deducted from the bound payment account, or transferred from the bound payment account to a collection account set by the object to be detected, or a payment page can be displayed to the object to be detected.
For another example, in application scenario 3, if the object to be detected passes face recognition, a relevant information interface can be presented to it so that it can obtain relevant information or perform relevant operations. For example, when the object to be detected modifies a password, a password modification page can be displayed to it; for another example, when the object to be detected makes a balance inquiry, balance information and the like can be presented to it, but the embodiment is not limited thereto.
Accordingly, embodiments of the present application also provide a readable storage medium storing computer instructions, which when executed by one or more processors, cause the one or more processors to perform the steps of the data processing method described above.
In addition to the above data processing method, another data processing method is provided in the embodiments of the present application. This data processing method is suitable not only for performing face recognition or living body detection online in real time, but also for performing face recognition or living body detection offline. The data processing method comprises the following steps:
S1: First illumination information characterizing ambient illumination is obtained.
S2: An image containing an object to be detected is acquired.
S3: Illumination analysis is performed on the image containing the object to be detected to obtain second illumination information characterizing the illumination in the image.
S4: Whether the object to be detected passes the detection is determined based on the similarity of the first illumination information and the second illumination information.
In this embodiment, in step S1, the first illumination information may be illumination information obtained by the computer device performing illumination analysis on an environment image of the current environment collected by an image acquisition device, or may be illumination information acquired in other ways. For example, the first illumination information may be illumination information transmitted by another device, or illumination information read from a storage medium, but is not limited thereto. The image acquisition device may be arranged on the computer device, or arranged in the same physical space as the computer device.
Similarly, in step S2, the image containing the object to be detected acquired by the computer device may be an image collected by an image acquisition device on the computer device, or may be an image acquired in other ways. For example, the image containing the object to be detected may be an image transmitted by another device, or an image read by the computer device from a storage medium, but is not limited thereto.
In step S3, the computer device performs illumination analysis on the image containing the object to be detected, and obtains second illumination information in the image containing the object to be detected. Next, in step S4, it is determined whether the object to be detected passes the detection based on the similarity of the first illumination information and the second illumination information.
The data processing method provided by this embodiment determines whether the object to be detected passes detection based on the similarity between the illumination information in the image including the object to be detected and the illumination information in the environment. According to the data processing method, on one hand, the action coordination of the object to be detected is not needed, so that the time for detecting the living body can be reduced, and the detection efficiency is improved; on the other hand, the method can reduce the dependence on texture details in the image, not only can reduce the quality requirement on the image, but also has higher stability, and is beneficial to improving the accuracy of the living body detection.
The data processing method provided by this embodiment is applicable to living body detection or face recognition that must run online in real time, such as a gate or an online payment environment, and is also applicable to application scenes requiring offline living body detection. For example, in some application scenarios, the police may use the data processing method provided by this embodiment to assist in solving a case. For instance, if a person reports that his bank deposit was stolen, the police may retrieve the relevant surveillance video and use the data processing method provided by this embodiment to determine whether the report is genuine, i.e., whether the person withdrew the money himself or another person impersonated him to withdraw it, but the application is not limited thereto.
Alternatively, in step S3, the light source model corresponding to the image may be expressed by using a spherical harmonic illumination model, and a coefficient matrix of the spherical harmonic illumination model represents illumination in the image, that is, may be used as the second illumination information; and carrying out variable parameter solution on the image according to the reflection image, the depth information and the prior information of the illumination information corresponding to the image to obtain a coefficient matrix of the spherical harmonic illumination model. For a detailed description, reference may be made to the foregoing embodiments, which are not described in detail herein.
Optionally, for an application scenario of real-time online detection, the first lighting information in step S1 is lighting information in a current environment where the computer device is located. For specific implementation of the computer device acquiring the illumination information in the current environment, reference may be made to relevant contents in the foregoing embodiments, which are not described herein again.
Optionally, for an offline detection application scenario, the first illumination information in step S1 is the illumination information of the environment in which the device that acquired the image containing the object to be detected was located at the time of acquisition. Based on this, the first illumination information can be obtained by performing illumination analysis on an environment image collected by that device. It should be noted that, for the specific implementation of the computer device performing illumination analysis on the environment image and performing illumination analysis on the image containing the object to be detected in step S3, reference may be made to the related contents in the foregoing embodiments, which are not repeated herein.
In any application scenario, when whether the object to be detected passes the detection is determined based on the similarity between the first illumination information and the second illumination information, the similarity between the second illumination information and the first illumination information can be calculated; if the calculated similarity is larger than or equal to a preset illumination similarity threshold value, determining that the object to be detected passes the detection; and if the calculated similarity is smaller than a preset illumination similarity threshold, determining that the object to be detected does not pass the detection.
Accordingly, embodiments of the present application also provide a readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above data processing method executed by the computer device.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 201, 202 and 203 may be device a; for another example, the execution subject of step 201 may be device a, and the execution subjects of steps 202 and 203 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 301, 302, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Fig. 4 is a schematic structural diagram of a living body detection apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus includes: a vision sensor 40a, a memory 40b, and a processor 40c.
The memory 40b is used for storing a computer program.
The vision sensor 40a is used for collecting an image containing the object to be detected in the current environment.
The processor 40c is coupled to the memory 40b and configured to execute the computer program to: perform illumination analysis on the image containing the object to be detected to obtain illumination information in the image; and if the illumination information in the image containing the object to be detected matches the illumination information in the current environment, determine that the object to be detected is a living body.
Alternatively, the vision sensor 40a may be a camera, a laser sensor, an infrared sensor, or the like. For example, the vision sensor 40a may be a binocular camera, a monocular camera, a depth camera, or the like, but is not limited thereto.
In an embodiment, when performing the illumination analysis on the image containing the object to be detected, the processor 40c is specifically configured to: express the light source model corresponding to the image containing the object to be detected by a spherical harmonic illumination model, where the coefficient matrix of the spherical harmonic illumination model represents the illumination information in the image; and perform variable-parameter solution on the image containing the object to be detected according to the prior information of the reflection image, the depth information, and the illumination information corresponding to the image, so as to obtain the coefficient matrix of the spherical harmonic illumination model.
In another embodiment, the processor 40c is further configured to: calculate the similarity between the illumination information in the image containing the object to be detected and the illumination information in the current environment; if the similarity is greater than or equal to a preset illumination similarity threshold, determine that the illumination information in the image containing the object to be detected matches the illumination information in the current environment; and if the similarity is smaller than the preset illumination similarity threshold, determine that the illumination information in the image containing the object to be detected does not match the illumination information in the current environment.
In a further embodiment, before determining that the illumination information in the image containing the object to be detected matches the illumination information in the current environment, the processor 40c is further configured to: take the pre-obtained initial illumination information of the environment space to which the current environment belongs as the illumination information in the current environment; or calculate the illumination information in the current environment by using the illumination information in N historical images whose objects to be detected were determined to be living bodies, where N is a positive integer. Preferably, the N historical images are the N most recently acquired historical images.
Further, when obtaining the initial illumination information of the environment space to which the current environment belongs in advance, the processor 40c is specifically configured to: acquire an initial environment image corresponding to the environment space to which the current environment belongs; and perform illumination analysis on the initial environment image to obtain the initial illumination information.
On the other hand, when calculating the illumination information in the current environment, the processor 40c is specifically configured to: calculate the mean of the illumination information in the N historical images as the illumination information in the current environment.
In still other embodiments, the image containing the object to be detected comprises a face region image of the object to be detected. The processor 40c is further configured to: match the face region image of the object to be detected against the face region images of known objects; and if the face region image of the object to be detected matches a face region image of a known object and the object to be detected is determined to be a living body, provide subsequent services for the object to be detected.
Further, if the living body detection device is used in a gate system, when providing subsequent services for the object to be detected, the processor 40c is specifically configured to: control the gate of the gate machine to open to allow the object to be detected to pass through. If the living body detection device is used for face-brushing payment, when providing subsequent services for the object to be detected, the processor 40c is specifically configured to: provide a payment service for the object to be detected in combination with the payment account bound to it. If the living body detection device is used in another system capable of face-brushing authentication, the processor 40c is specifically configured to: present a relevant information interface to the object to be detected so that it can obtain relevant information or perform relevant operations.
In some embodiments, the living body detection device further comprises an identity information acquisition device 40d. The identity information acquisition device 40d may be a camera, a scanner, an OCR-based recognizer, a card reader, etc., but is not limited thereto. In this embodiment, the identity information acquisition device 40d may capture the facial image in the identity document provided by the object to be detected as the facial image of the known object and provide it to the processor 40c. Accordingly, the processor 40c may calculate the similarity between the face image of the object to be detected and the face image of the known object, and if the calculated similarity is greater than or equal to a preset image similarity threshold, determine that the face region image of the object to be detected is matched among the face region images of known objects.
In some alternative embodiments, as shown in fig. 4, the in-vivo detection apparatus may further include: communication component 40e, power component 40f, audio component 40g, display 40h, and the like. The illustration of only a portion of the components in FIG. 4 does not imply that the liveness detection device must include all of the components shown in FIG. 4, nor that the liveness detection device can include only the components shown in FIG. 4. The components shown in the dashed box of fig. 4 are optional components, not necessary components, and may be determined according to the product form of the in-vivo detection apparatus. The living body detection device of the embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or an IOT device, or may be a server device such as a conventional server, a cloud server, or a server array. If the living body detecting device of this embodiment is implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, etc., the living body detecting device may include components within a dashed line frame in fig. 4; if the living body detecting device of the present embodiment is implemented as a server device such as a conventional server, a cloud server, or a server array, the components within the dashed box in fig. 4 may not be included.
The living body detection device provided by this embodiment can identify whether the object to be detected is a living body by determining whether the illumination information in the image including the object to be detected matches the illumination information in the current environment. The living body detection mode based on the illumination information does not need the action coordination of the object to be detected, can reduce the time of living body detection and improve the detection efficiency; on the other hand, the method can reduce the dependence on texture details in the image, not only can reduce the quality requirement on the image, but also has higher stability, and is beneficial to improving the accuracy of the living body detection.
Fig. 5 is a schematic structural diagram of a face recognition device according to an embodiment of the present application. As shown in fig. 5, the device includes: a vision sensor 50a, a memory 50b, and a processor 50c. The memory 50b is used for storing a computer program, and the vision sensor 50a is used for collecting a facial image of the object to be detected in the current environment.
The processor 50c is coupled to the memory 50b for executing computer programs for: performing illumination analysis on the face image to obtain illumination information in the face image; and if the illumination information in the face image is matched with the illumination information in the current environment and the face image belongs to the face image of the known object, determining that the object to be detected passes face recognition.
Further, if the object to be detected passes face recognition, the processor 50c may also provide subsequent services for it. For example, if the face recognition device is used in a gate system, the processor 50c is specifically configured to: control the gate of the gate machine to open to allow the object to be detected to pass through. If the face recognition device is used for face-brushing payment, the processor 50c is specifically configured to: provide a payment service for the object to be detected in combination with the payment account bound to it. If the face recognition device is used in another system capable of face-brushing authentication, the processor 50c is specifically configured to: present a relevant information interface to the object to be detected so that it can obtain relevant information or perform relevant operations.
In some embodiments, the face recognition device further comprises an identity information acquisition device 50d. The identity information acquisition device 50d may be a camera, a scanner, an OCR-based recognizer, a card reader, etc., but is not limited thereto. In this embodiment, the identity information acquisition device 50d may capture the facial image in the identity document provided by the object to be detected as the facial image of the known object and provide it to the processor 50c. Accordingly, the processor 50c may calculate the similarity between the face image of the object to be detected and the face image of the known object, and if the calculated similarity is greater than or equal to a preset image similarity threshold, determine that the face image of the object to be detected belongs to the face images of known objects.
In some optional embodiments, as shown in fig. 5, the face recognition apparatus may further include: communication component 50e, power component 50f, audio component 50g, display 50h, and the like. Only some of the components are shown schematically in fig. 5, and it is not meant that the face recognition apparatus must include all of the components shown in fig. 5, nor that the face recognition apparatus can include only the components shown in fig. 5. The components shown in the dashed box of fig. 5 are optional components, but not required components, depending on the product form of the face recognition device. The face recognition device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or an IOT device, or may be a server device such as a conventional server, a cloud server, or a server array. If the face recognition device of this embodiment is implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or the like, the face recognition device may include a component within a dashed line frame in fig. 5; if the face recognition device of this embodiment is implemented as a server device such as a conventional server, a cloud server, or a server array, the components in the dashed box in fig. 5 may not be included.
The face recognition device provided by this embodiment combines face recognition with living body detection, which prevents an illegal user from counterfeiting with a paper photo or an electronic photo of a legal user, improves the accuracy of face recognition, and thus improves the safety of face-recognition-based systems. During living body detection, whether the object to be detected is a living body is identified by judging whether the illumination information in the image containing the object to be detected matches the illumination information in the current environment. On one hand, this illumination-based living body detection mode does not require action cooperation from the object to be detected, which reduces detection time and improves detection efficiency; on the other hand, it reduces the dependence on texture details in the image, lowering the image quality requirements while offering higher stability, which is beneficial to improving the accuracy of face recognition.
Fig. 6 is a schematic structural diagram of a gate according to an embodiment of the present application. As shown in fig. 6, the gate includes: a gate body 60a, and a camera 60b, a memory 60c, and a processor 60d provided on the gate body 60a.
Wherein, the memory 60c is used for storing a computer program.
The camera 60b is used for collecting a facial image of an object to be detected near the entrance and exit of the gate body.
The processor 60d is coupled to the memory 60c and configured to execute the computer program to: perform illumination analysis on the facial image to obtain illumination information in the facial image; and if the illumination information in the facial image matches the illumination information in the current environment and the object to be detected is a legal object, control the gate body 60a to open to allow the object to be detected to pass through. The illumination information in the current environment refers to the illumination information in the current environment where the gate is located.
Optionally, the camera 60b may be a binocular camera, a monocular camera, a depth camera, or the like, but is not limited thereto. Further, the camera 60b further comprises a light supplement lamp 60e for providing illumination when the lighting conditions around the gate are insufficient for the camera 60b to collect the facial image of the object to be detected.
In some embodiments, when performing illumination analysis on the facial image, the processor 60d is specifically configured to: express the light source model corresponding to the facial image of the object to be detected by a spherical harmonic illumination model, where the coefficient matrix of the spherical harmonic illumination model represents the illumination information in the facial image; and perform variable-parameter solution on the facial image according to the prior information of the reflection image, the depth information, and the illumination information corresponding to the image of the object to be detected, so as to obtain the coefficient matrix of the spherical harmonic illumination model.
In other embodiments, the processor 60d is further configured to: calculating the similarity between the illumination information in the face image of the object to be detected and the illumination information in the current environment; if the similarity is larger than or equal to a preset illumination similarity threshold value, determining that the illumination information in the facial image of the object to be detected is matched with the illumination information in the current environment; and if the similarity is smaller than a preset illumination similarity threshold, determining that the illumination information in the face image is not matched with the illumination information in the current environment.
In still other embodiments, the processor 60d, prior to determining that the lighting information in the image matches the lighting information in the current environment, is further configured to: taking the pre-obtained initial illumination information in the environment space to which the current environment belongs as the illumination information in the current environment; or calculating the illumination information in the current environment by using the illumination information in the N historical images of the object to be detected, which is judged as the living body, wherein N is a positive integer.
Further, when the initial illumination information in the environment space to which the current environment belongs is obtained in advance, the processor 60d is specifically configured to: controlling the camera 60b to collect an initial environment image corresponding to the environment space to which the current environment belongs; and carrying out illumination analysis on the initial environment image to obtain initial illumination information.
On the other hand, when calculating the illumination information in the current environment, the processor 60d is specifically configured to: and calculating the average value of the illumination information in the N historical images as the illumination information in the current environment. Preferably, the N history images are the N latest acquired history images. Further, the history image may be a history face image.
In still other embodiments, as shown in fig. 6, the gate further comprises: the gate control assembly 60 f. Accordingly, the processor 60d is further configured to: matching the face image of the object to be detected in the face image of the known object; if the face image of the object to be detected is matched with the face image of the known object and the object to be detected is determined to be a living body, the gate control assembly 60f is controlled to act, and the gate control assembly 60f drives the gate body 60a to open so as to allow the object to be detected to pass through.
Optionally, as shown in fig. 6, the gate further includes: and an identity information acquisition device 60g arranged on the gate body 60 a. The identity information acquisition device 60g may be a camera, a scanner, an OCR technology-based recognizer, a card reader, etc., but is not limited thereto. In the present embodiment, the identity information acquiring device 60g may acquire a face image in an identity document provided by an object to be detected as a face image of a known object, and provide the face image of the known object to the processor 60 d. Accordingly, the processor 60d may calculate a similarity between the face image of the object to be detected and the face image of the known object, and determine that the face image of the object to be detected belongs to the face image of the known object if the calculated similarity is greater than or equal to a preset image similarity threshold.
In some alternative embodiments, as shown in fig. 6, the gate may further include: and a power supply assembly 60 h. Further, the gate machine can also comprise: optional components (not shown in fig. 6) such as a communications component, an audio component, a display screen, etc. Only some of the components are shown schematically in fig. 6, and it is not meant that the gate must include all of the components shown in fig. 6, nor that the gate only includes the components shown in fig. 6.
The gate provided by this embodiment identifies whether the object to be detected is a living body by judging whether the illumination information in the facial image containing the object to be detected matches the illumination information in the current environment, and further judges whether the object to be detected is a legal object, so that only legal living objects are allowed to pass, which improves safety. On one hand, the illumination-based living body detection mode does not require action cooperation from the object to be detected, which reduces detection time and improves detection efficiency; on the other hand, it reduces the dependence on texture details in the image, lowering the image quality requirements while offering higher stability, which is beneficial to improving the accuracy of living body detection.
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: a memory 70a and a processor 70 b.
The memory 70a is used for storing a computer program.
The processor 70b is coupled to the memory 70a for executing a computer program for: acquiring first illumination information representing ambient illumination; acquiring an image containing an object to be detected; performing illumination analysis on an image containing an object to be detected to acquire second illumination information representing illumination in the image; and determining whether the object to be detected passes the detection or not based on the similarity of the first illumination information and the second illumination information.
In some embodiments, the processor 70b, when determining whether the object to be detected passes the detection, is specifically configured to: calculating the similarity between the second illumination information and the first illumination information; if the calculated similarity is larger than or equal to a preset illumination similarity threshold value, determining that the object to be detected passes the detection; and if the calculated similarity is smaller than a preset illumination similarity threshold, determining that the object to be detected does not pass the detection.
In other embodiments, the processor 70b, when obtaining the first illumination information characterizing the ambient illumination, is specifically configured to: taking the pre-obtained initial illumination information in the environment space to which the current environment belongs as first illumination information; or, using the illumination information in the N historical images of the object to be detected, which is determined as the living body, to calculate the illumination information in the current environment as the first illumination information, wherein N is a positive integer.
Further, the computer device further includes: and a camera 70 c. Accordingly, when the initial illumination information in the environment space to which the current environment belongs is obtained in advance, the processor 70b is specifically configured to: controlling the camera 70c to collect an initial environment image corresponding to the environment space to which the current environment belongs; and carrying out illumination analysis on the initial environment image to obtain initial illumination information.
Optionally, when the processor 70b calculates the illumination information in the current environment, it is specifically configured to: and calculating the average value of the illumination information in the N historical images as the illumination information in the current environment. Preferably, the N history images are the N latest acquired history images.
It should be noted that, in this embodiment, the specific implementation manners of the processor 70b performing illumination analysis on the image including the object to be detected and performing illumination analysis on the environment image can refer to the relevant contents of the above embodiments, and are not described herein again. In some optional embodiments, as shown in fig. 7, the computer device may further include: identity information capture device 70d, communication component 70e, power component 70f, audio component 70g, display 70h, and the like. The implementation and the function of the identity information acquisition device 70d can be referred to the relevant content of the above embodiments, and are not described herein again. Only some of the components shown in fig. 7 are schematically shown, and it is not meant that the computer device must include all of the components shown in fig. 7, nor that the computer device only includes the components shown in fig. 7. The components shown in the dashed box of fig. 7 are optional components, not necessary components, and may depend on the product form of the computer device. The computer device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or an IOT device, or may be a server device such as a conventional server, a cloud server, or a server array. If the computer device of this embodiment is implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, etc., the computer device may include components within a dashed line frame in fig. 7; if the computer device of this embodiment is implemented as a server device such as a conventional server, a cloud server, or a server array, the components in the dashed box in fig. 7 may not be included.
The computer device provided by this embodiment can detect the object to be detected based on the similarity between the illumination information in the image containing the object and the ambient illumination information at the time the image is acquired. On one hand, this illumination-based detection requires no action cooperation from the object to be detected, which shortens detection time and improves detection efficiency; on the other hand, it reduces the dependence on texture details in the image, which both lowers the image quality requirement and yields higher stability, helping to improve the accuracy of living body detection.
In embodiments of the present application, the memory is used to store a computer program and may be configured to store various other data to support operations on the device where it resides. The processor may execute the computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disk.
The communication component is configured to facilitate wired or wireless communication between the device where it resides and other devices. That device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on near field communication (NFC) technology, radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
The display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation.
The power component is configured to provide power to the various components of the device where it resides. The power component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for that device.
The audio component may be configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC) configured to receive external audio signals when the device where it resides is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory or transmitted via the communication component. In some embodiments, the audio component further comprises a speaker for outputting audio signals. For example, if the device has a voice interaction function, voice interaction with the user can be realized through the audio component.
It should be noted that the descriptions of "first", "second", and the like herein are used to distinguish different messages, devices, modules, and so on; they neither imply a sequential order nor restrict "first" and "second" to different types.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (29)

1. A living body detection method, comprising:
acquiring an image containing an object to be detected in the current environment;
performing illumination analysis on the image to obtain illumination information in the image;
and if the illumination information in the image matches the illumination information in the current environment, determining that the object to be detected is a living body.
2. The method of claim 1, wherein acquiring an image containing an object to be detected in a current environment comprises:
acquiring, in the current environment, an image containing the object to be detected by using a monocular camera.
3. The method of claim 1, wherein performing illumination analysis on the image to obtain illumination information in the image comprises:
expressing a light source model corresponding to the image by using a spherical harmonic illumination model, wherein a coefficient matrix of the spherical harmonic illumination model expresses illumination information in the image;
and performing a variable-parameter solution on the image according to the reflectance image, the depth information, and the prior information of the illumination information corresponding to the image, to obtain the coefficient matrix of the spherical harmonic illumination model.
4. The method of claim 1, further comprising:
calculating the similarity between the illumination information in the image and the illumination information in the current environment;
if the similarity is greater than or equal to a preset illumination similarity threshold, determining that the illumination information in the image matches the illumination information in the current environment;
and if the similarity is less than the preset illumination similarity threshold, determining that the illumination information in the image does not match the illumination information in the current environment.
5. The method of claim 1, further comprising, prior to determining that the illumination information in the image matches the illumination information in the current environment:
taking the pre-obtained initial illumination information in the environment space to which the current environment belongs as the illumination information in the current environment; or
calculating the illumination information in the current environment by using the illumination information in N historical images in which the object to be detected was determined to be a living body, wherein N is a positive integer.
6. The method of claim 5, wherein obtaining the initial illumination information in the environment space to which the current environment belongs in advance comprises:
acquiring an initial environment image corresponding to an environment space to which a current environment belongs;
and carrying out illumination analysis on the initial environment image to obtain the initial illumination information.
7. The method according to claim 5, wherein calculating the illumination information in the current environment by using the illumination information in the N historical images in which the object to be detected was determined to be a living body comprises:
calculating the mean of the illumination information in the N historical images as the illumination information in the current environment.
8. The method according to claim 5 or 7, wherein the N historical images are the N most recently acquired historical images.
9. The method according to any one of claims 1 to 7, wherein the image comprises a face area image of the object to be detected, the method further comprising:
matching the face area image of the object to be detected against face area images of known objects;
and if the face area image of the object to be detected matches a face area image of a known object and the object to be detected is determined to be a living body, providing subsequent services for the object to be detected.
10. The method according to claim 9, wherein providing subsequent services to the object to be detected comprises at least one of:
controlling a gate to open so as to allow the object to be detected to pass through;
providing payment service for the object to be detected in combination with the payment account bound to the object to be detected;
and displaying a related information interface for the object to be detected so as to enable the object to be detected to acquire related information or perform related operation.
11. A data processing method, comprising:
acquiring a face image of an object to be detected in a current environment;
performing illumination analysis on the face image to obtain illumination information in the face image;
and if the illumination information in the face image matches the illumination information in the current environment and the face image belongs to a face image of a known object, determining that the object to be detected passes face recognition.
12. The method according to claim 11, wherein in the case that the object to be detected passes face recognition, the method further comprises at least one of the following operations:
controlling a gate to be opened to allow the object to be detected to pass through;
providing payment service for the object to be detected in combination with the payment account bound to the object to be detected;
and displaying a related information interface for the object to be detected so as to enable the object to be detected to acquire related information or perform related operation.
13. A data processing method, comprising:
acquiring first illumination information representing ambient illumination;
acquiring an image containing an object to be detected;
performing illumination analysis on the image to acquire second illumination information representing illumination in the image;
and determining, based on the similarity between the first illumination information and the second illumination information, whether the object to be detected passes the detection.
14. The method of claim 13, wherein obtaining first lighting information characterizing ambient lighting comprises:
calculating the first illumination information representing the ambient illumination by using the second illumination information in the N most recently detected historical images.
15. The method according to claim 13 or 14, wherein performing illumination analysis on the image to obtain second illumination information representing illumination in the image comprises:
expressing a light source model corresponding to the image by using a spherical harmonic illumination model, wherein a coefficient matrix of the spherical harmonic illumination model expresses the second illumination information;
and performing a variable-parameter solution on the image according to the reflectance image, the depth information, and the prior information of the illumination information corresponding to the image, to obtain the coefficient matrix of the spherical harmonic illumination model.
16. A gate system, comprising: a gate, an image acquisition device, and a computing device, the computing device being connected to the gate and to the image acquisition device respectively;
the image acquisition device is used for acquiring a face image of an object to be detected near the entrance and exit of the gate and transmitting the face image to the computing device;
the computing device is used for performing illumination analysis on the face image to obtain illumination information in the face image, and for controlling the gate to open to allow the object to be detected to pass through, in the case that the illumination information in the face image matches the illumination information in the current environment where the gate is located and the object to be detected is a legal object.
17. The system of claim 16, wherein the computing device is further configured to:
calculating the similarity between the illumination information in the face image and the illumination information in the current environment;
if the similarity is greater than or equal to a preset illumination similarity threshold, determining that the illumination information in the face image matches the illumination information in the current environment;
and if the similarity is less than the preset illumination similarity threshold, determining that the illumination information in the face image does not match the illumination information in the current environment.
18. The system of claim 16, wherein the computing device is further configured to:
matching the face image of the object to be detected against face images of known objects;
and if the face image of the object to be detected matches a face image of a known object, determining that the object to be detected is a legal object.
19. The system of claim 18, further comprising: an identity information acquisition device, the identity information acquisition device being connected to the computing device;
the identity information acquisition device is used for: and acquiring a face image in the identity document provided by the object to be detected as the face image of the known object, and providing the face image of the known object to the computing equipment.
20. The system of claim 19, wherein the identity information collection device is disposed on the gate.
21. The system of claim 16, further comprising: a light source device; the light source device is used for providing the illumination required for image acquisition by the image acquisition device.
22. The system of any one of claims 16-21, wherein the image acquisition device is a monocular camera.
23. The system according to any one of claims 16-21, wherein the image acquisition device is disposed on the gate, and the acquisition view angle of the image acquisition device covers the area around the entrance and exit of the gate.
24. The system according to any one of claims 16-21, wherein the computing device is a processor disposed on the gate, or the computing device is a cloud-based server device.
25. A living body detection apparatus, comprising: a vision sensor, a memory, and a processor; wherein the memory is used for storing a computer program;
the vision sensor is used for acquiring an image containing an object to be detected in the current environment;
the processor is coupled to the memory for executing the computer program for:
performing illumination analysis on the image to obtain illumination information in the image; and if the illumination information in the image matches the illumination information in the current environment, determining that the object to be detected is a living body.
26. A gate, comprising: a gate body, and a camera, a memory, and a processor arranged on the gate body; wherein the memory is used for storing a computer program;
the camera is used for: collecting a face image of an object to be detected around an entrance and an exit of the gate body;
the processor is coupled to the memory for executing the computer program for: performing illumination analysis on the face image of the object to be detected to obtain illumination information in the face image, and controlling the gate body to open to allow the object to be detected to pass through, in the case that the illumination information in the face image matches the illumination information in the current environment and the object to be detected is a legal object.
27. A face recognition device, comprising: a vision sensor, a memory, and a processor; wherein the memory is used for storing a computer program;
the vision sensor is used for acquiring a face image containing an object to be detected in the current environment;
the processor is coupled to the memory for executing the computer program for:
performing illumination analysis on the face image to obtain illumination information in the face image; and if the illumination information in the face image matches the illumination information in the current environment and the face image belongs to a face image of a known object, determining that the object to be detected passes face recognition.
28. A computer device, comprising: a memory and a processor;
wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for:
acquiring first illumination information representing ambient illumination; acquiring an image containing an object to be detected; performing illumination analysis on the image to acquire second illumination information representing illumination in the image; and determining, based on the similarity between the first illumination information and the second illumination information, whether the object to be detected passes the detection.
29. A readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-15.
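Tying the claims together, the gate flow of claims 16, 18, and 26 above could be exercised roughly as in the sketch below: capture a face image near the gate, analyze its illumination, compare it with the ambient illumination, check the identity, and open the gate only if both checks pass. All of the callables (capture, analysis, face matching, gate control) are assumed interfaces supplied for illustration; none of their names come from the application, and the threshold is again illustrative.

```python
from typing import Callable
import numpy as np

def on_person_at_gate(capture: Callable[[], np.ndarray],
                      analyze: Callable[[np.ndarray], np.ndarray],
                      is_known_face: Callable[[np.ndarray], bool],
                      open_gate: Callable[[], None],
                      ambient_illumination: np.ndarray,
                      threshold: float = 0.9) -> None:
    """Open the gate only for a live face that matches a known object."""
    face_image = capture()                    # face image near the gate entrance
    image_illumination = analyze(face_image)  # e.g., SH coefficients as sketched above
    denom = np.linalg.norm(image_illumination) * np.linalg.norm(ambient_illumination)
    similarity = float(np.dot(image_illumination, ambient_illumination) / denom) if denom else 0.0
    if similarity >= threshold and is_known_face(face_image):
        open_gate()  # allow the object to be detected to pass through
```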
CN201910542696.3A 2019-06-21 2019-06-21 Living body detection and data processing method, device, system and storage medium Pending CN112115747A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910542696.3A CN112115747A (en) 2019-06-21 2019-06-21 Living body detection and data processing method, device, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910542696.3A CN112115747A (en) 2019-06-21 2019-06-21 Living body detection and data processing method, device, system and storage medium

Publications (1)

Publication Number Publication Date
CN112115747A true CN112115747A (en) 2020-12-22

Family

ID=73796254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910542696.3A Pending CN112115747A (en) 2019-06-21 2019-06-21 Living body detection and data processing method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN112115747A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569676A (en) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2022183992A1 (en) * 2021-03-05 2022-09-09 上海肇观电子科技有限公司 Living body detection method, electronic circuit, electronic device and medium
CN113569676B (en) * 2021-07-16 2024-06-11 北京市商汤科技开发有限公司 Image processing method, device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US10699103B2 (en) Living body detecting method and apparatus, device and storage medium
CN108229376B (en) Method and device for detecting blinking
CN108470169A (en) Face identification system and method
CN106446873A (en) Face detection method and device
TW202026948A (en) Methods and devices for biological testing and storage medium thereof
US8824747B2 (en) Skin-tone filtering
US10839204B2 (en) Sharing identification data with audio/video recording and communication devices and local processing of the shared data
CN108876833A (en) Image processing method, image processing apparatus and computer readable storage medium
CN110163053B (en) Method and device for generating negative sample for face recognition and computer equipment
KR101939696B1 (en) Multi-midal access control system Operating in user's unconciousness state
CN106682620A (en) Human face image acquisition method and device
US20180276866A1 (en) System and Method for Creating a Virtual Backdrop
CN103383723A (en) Method and system for spoof detection for biometric authentication
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN107609463B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN106600556A (en) Image processing method and apparatus
CN108197585A (en) Recognition algorithms and device
KR20220132633A (en) Efficient management of face recognition systems and methods in multiple regions
CN208351494U (en) Face identification system
KR20190111034A (en) Feature image acquisition method and device, and user authentication method
KR20090132839A (en) System and method for issuing photo-id card
CN115147936A (en) Living body detection method, electronic device, storage medium, and program product
CN205644823U (en) Social security self -service terminal device
Bekzod Face recognition based automated student attendance system
CN112115747A (en) Living body detection and data processing method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination