WO2018121428A1 - Living body detection method, apparatus and recording medium - Google Patents

Living body detection method, apparatus and recording medium

Info

Publication number
WO2018121428A1
Authority
WO
WIPO (PCT)
Prior art keywords
preset
light
difference
detection
reflected light
Prior art date
Application number
PCT/CN2017/117958
Other languages
English (en)
Chinese (zh)
Inventor
刘尧
汪铖杰
李季檩
梁亦聪
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2018121428A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a living body detecting method, apparatus, and storage medium.
  • identity verification technologies such as fingerprint recognition, eye pattern recognition, iris recognition, and face recognition have been greatly developed.
  • Among these, face recognition technology is the most prominent, and it has been increasingly widely applied to various identity authentication systems.
  • the identity authentication system based on face recognition mainly needs to solve two problems, one is face verification and the other is living body detection.
  • The living body detection is mainly used to confirm that the collected face image is from the user himself, rather than from playback or forged materials.
  • To counter attacks on current living body detection methods, such as photo attacks, video playback attacks, and synthetic face attacks, a "randomized interaction" technique has been proposed.
  • The so-called "randomized interaction" technique analyzes the movement of different parts of the face in the video and requires the user's active cooperation through random interactions, such as blinking, head shaking, or lip reading, to judge whether the detected object is a living body.
  • An embodiment of the present invention provides a living body detecting method, including: receiving a living body detection request; starting a light source according to the request to project light onto a detection object; monitoring the detection object to obtain an image sequence; and, when a preset portion of the detection object in the image sequence has a reflected light signal generated by the projected light and that signal matches a preset optical signal sample, determining that the detection object is a living body.
  • the embodiment of the invention further provides a living body detecting device, comprising:
  • a receiving unit configured to receive a living body detection request
  • a monitoring unit configured to monitor the detection object to obtain an image sequence
  • a detecting unit configured to determine, when the preset portion of the detection object in the image sequence has a reflected light signal, whether the reflected light signal matches a preset optical signal sample, and to determine that the detection object is a living body when the reflected light signal matches the preset optical signal sample.
  • the present application also proposes a non-transitory computer readable storage medium storing computer readable instructions that cause at least one processor to perform the methods described above.
  • FIG. 1a is a schematic diagram of a scene of a living body detecting method according to an embodiment of the present invention;
  • FIG. 1b is another schematic diagram of a living body detecting method according to an embodiment of the present invention.
  • FIG. 1c is a flowchart of a living body detecting method according to an embodiment of the present invention.
  • FIG. 2 is another flowchart of a living body detecting method according to an embodiment of the present invention.
  • FIG. 3a is a schematic structural diagram of a living body detecting apparatus according to an embodiment of the present invention.
  • FIG. 3b is another schematic structural diagram of a living body detecting apparatus according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • The algorithm used in the existing scheme for living body detection has low accuracy and cannot effectively resist synthetic face attacks.
  • Moreover, the cumbersome active interaction greatly reduces the pass rate of genuine samples. Overall, the living body detection of the existing scheme performs poorly, which greatly affects the accuracy and security of identity authentication.
  • embodiments of the present invention provide a living body detecting method and apparatus.
  • The living body detecting device may be integrated in a device such as a terminal. It may use changes in the screen light intensity and color of the terminal, or use another component or device such as a flash or an infrared emitter, as a light source projected onto the detection object; living body detection is then performed by analyzing the reflected light signal of a preset portion of the detection object, such as the face, in the received image sequence.
  • When the terminal receives the living body detection request, it can start a detection interface according to the request. As shown in FIG. 1a, in addition to a detection area, the detection interface also provides a non-detection area (the gray part marked in FIG. 1a), which is mainly used for flashing a color mask; the color mask can serve as a light source that projects light onto the detection object, as shown in FIG. 1b.
  • Because the reflected light signal of a real living body differs from that of a forged one (the carrier of a composite picture or video, such as a photo, a mobile phone or a tablet computer), it is possible to determine whether the detection object is a living body by examining the light reflected from its preset portion.
  • Therefore, the detection object can be monitored (the monitoring result can be displayed in the detection area of the detection interface), and it is then determined whether the preset portion of the detection object in the monitored image sequence has a reflected light signal generated by the projected light, and whether that signal matches a preset optical signal sample. If the signal exists and matches the sample, the detection object is determined to be a living body; otherwise, if it does not exist or does not match, the detection object is determined not to be a living body.
  • This embodiment is described from the perspective of a living body detecting device, which may be integrated into a device such as a terminal.
  • The terminal may be a mobile phone, a tablet computer, a notebook computer, a personal computer (PC), or a similar device.
  • A living body detecting method includes: receiving a living body detection request; starting a light source according to the request, the light source being used to project light onto the detection object; monitoring the detection object to obtain an image sequence; and determining that the detection object is a living body when the preset portion of the detection object in the image sequence has a reflected light signal generated by the projected light and the reflected light signal matches a preset optical signal sample.
  • the specific process of the living body detection method can be as follows:
  • Specifically, a living body detection request triggered by the user may be received, or a living body detection request sent by another device may be received.
  • The living body detection request then activates a light source for projecting light onto the detection object.
  • the corresponding living body detection process may be invoked according to the living body detection request, the light source is activated according to the living body detection process, and the like.
  • The light source can be set according to the needs of the actual application: for example, by adjusting the brightness of the terminal screen, by using another light-emitting component such as a flash or an infrared emitter or an external device, or by setting a color mask on the display interface. That is, the step of "starting the light source according to the living body detection request" may be implemented in any one of the following ways:
  • the predetermined light-emitting component is turned on according to the living body detection request, so that the light-emitting component emits light as a light source to the detection object.
  • the light emitting part may comprise a component such as a flash lamp or an infrared emitter.
  • the detection interface may flash a color mask, and the color mask is used as a light source to project light to the detection object.
  • the area of the flashing color mask may be determined according to the requirements of the actual application.
  • The detection interface may include a detection area and a non-detection area; the detection area is mainly used for displaying the monitoring result, and the non-detection area may be used to flash the color mask, which serves as a light source projecting light onto the detection object, and so on.
  • The color and other parameters of the color mask can be set according to the requirements of the actual application. The color mask can be preset by the system and retrieved directly when the detection interface is started, or it can be received together with the living body detection request.
  • The living body detection method may further include:
  • a color mask is generated such that the light projected by the color mask can be changed according to a preset rule.
  • the intensity of the change of the light can also be maximized.
  • The preset rule may be determined according to the needs of the actual application, and there are many ways to maximize the intensity of the light change. For example, for light of the same color, the screen brightness before and after the change can be set to the maximum and the minimum respectively; for light of different colors, the intensity of the change can be maximized by adjusting the color difference before and after the change, and so on.
  • In addition, the colors may be selected so that the signal analysis is most robust. For example, in the CIELAB color space, when the screen changes from the brightest red to the brightest green, the chromaticity of the reflected light changes the most, and so on.
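To make the idea concrete, a flashing color-mask schedule of this kind could be sketched as follows. This is an illustrative sketch, not the patented implementation: the sRGB color values and the strict alternation rule are assumptions standing in for the "preset rule" mentioned above.

```python
import random

# Hypothetical palette: sRGB approximations of the "brightest red" and
# "brightest green" mentioned above, chosen for maximal chromatic contrast.
PALETTE = [(255, 0, 0), (0, 255, 0)]

def generate_mask_sequence(n_frames, seed=0):
    """Return a list of n_frames colors for the non-detection area.

    Consecutive frames always differ, so every frame transition produces
    the largest possible change in the light projected onto the subject.
    The seed stands in for the "preset rule" the detector later checks.
    """
    rng = random.Random(seed)
    colors = [rng.choice(PALETTE)]
    for _ in range(n_frames - 1):
        # Switch to the other palette color on every frame.
        current = colors[-1]
        colors.append(PALETTE[1] if current == PALETTE[0] else PALETTE[0])
    return colors
```

Because the schedule is seeded, the detector can regenerate the expected light changes and compare them against what the camera actually observed.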
  • Specifically, the camera of the terminal can be called to photograph the detection object in real time to obtain an image sequence, and the captured image sequence can be displayed in the detection area.
  • In order to improve the accuracy of the detection, the image sequence may also be denoised. Modeling the noise as Gaussian noise, temporal multi-frame averaging and/or same-frame multi-scale averaging can be used to reduce the noise as much as possible; details are not described herein again.
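Under the Gaussian noise model above, temporal multi-frame averaging is straightforward: averaging N aligned frames leaves the signal unchanged while reducing the noise variance by roughly a factor of N. A minimal sketch, with frames represented as nested lists of gray levels (a simplification of whatever image type the device actually uses):

```python
def temporal_average(frames):
    """Average corresponding pixels across frames.

    For zero-mean Gaussian sensor noise, the averaged frame keeps the
    underlying signal while the noise variance shrinks as 1/len(frames).
    Assumes all frames are aligned and the same size.
    """
    n = len(frames)
    height = len(frames[0])
    width = len(frames[0][0])
    return [
        [sum(frame[y][x] for frame in frames) / n for x in range(width)]
        for y in range(height)
    ]
```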
  • Otherwise, if no matching reflected light signal is found, the detected object may be determined not to be a living body.
  • The reflected light signal is generated by the light that the light source projects onto the detection object.
  • the light source may be activated according to the living body detection request after receiving the living body detection request, or may be initiated in other cases. It should be noted that the present application does not limit the manner and timing of starting the light source, and it is only necessary to project light to the predetermined portion of the detection target when monitoring the detection target.
  • The method for determining whether the preset portion of the detection object in the image sequence has a reflected light signal may take various forms.
  • For example, the reflected light signal may be detected using the inter-frame differences of the images; the specific process may be as follows:
  • The difference between frames may be an inter-frame difference or a frame difference: the inter-frame difference refers to the difference between two adjacent frames, while the frame difference refers to the difference between the frames corresponding to the moments before and after the change of the projected light.
  • the pixel coordinates of the adjacent frames in the image sequence may be respectively acquired when determining that the position change degree of the detection object is less than the preset change value, and then the inter-frame difference is calculated based on the pixel coordinates.
  • Alternatively, the pixel coordinates of the frames before and after the change of the projected light are respectively obtained from the image sequence, and the frame difference is calculated based on those pixel coordinates.
  • the method for calculating the inter-frame difference or the frame difference based on the pixel coordinates may be various, for example, as follows:
  • For example, the pixel coordinates of the frames corresponding to the change of the projected light are transformed to minimize the registration error, pixels meeting a preset condition are selected according to the transformation result, and the frame difference is calculated from the selected pixels.
  • the preset change value and the preset condition may be set according to actual application requirements, and details are not described herein again.
  • It is then determined whether the difference between the frames (the inter-frame difference or the frame difference) is greater than a preset threshold. If so, it is determined that the preset portion of the detected object in the image sequence has a reflected light signal generated by the projected light; if not, it is determined that the preset portion does not have such a reflected light signal.
  • the preset threshold may be determined according to the requirements of the actual application, and details are not described herein again.
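The threshold decision above can be sketched as follows. The mean absolute difference statistic and the example threshold are assumptions for illustration, since the text does not fix a particular difference measure:

```python
def frame_difference(frame_a, frame_b):
    """Mean absolute per-pixel difference between two gray-level frames."""
    total, count = 0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def has_reflected_light(before, after, threshold):
    """Decide whether the projected-light change shows up between frames:
    a reflected light signal is assumed present when the difference
    between the before/after frames exceeds the preset threshold."""
    return frame_difference(before, after) > threshold
```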
  • Alternatively, the difference (the inter-frame difference or the frame difference) is classified and analyzed by a preset global feature algorithm or a classifier. If the analysis result indicates that the inter-frame variation of the preset portion of the detection object is greater than a set value, it is determined that the preset portion of the detection object in the image sequence has a reflected light signal generated by the projected light; if the analysis result indicates that the inter-frame variation is not greater than the set value, it is determined that the preset portion does not have such a reflected light signal.
  • the setting value may be determined according to the requirements of the actual application, and the manner of “classifying and analyzing the inter-frame difference by using a preset global feature algorithm or a classifier” may also be various, for example, as follows:
  • In one approach, the difference (the inter-frame difference or the frame difference) is analyzed to determine whether a reflected light signal generated by the projected light exists in the image sequence. If no such signal exists, an analysis result is generated indicating that the inter-frame variation of the preset portion is not greater than the set value. If such a signal exists, the preset global feature algorithm or classifier then determines whether the reflector of the reflected light signal is the preset portion: if it is, an analysis result is generated indicating that the inter-frame variation of the preset portion is greater than the set value; if it is not, an analysis result is generated indicating that the variation is not greater than the set value.
  • In another approach, the preset global feature algorithm or classifier first classifies the images in the sequence to filter out the frames in which the preset portion is present, yielding candidate frames; the inter-frame differences of the candidate frames are then analyzed to determine whether the preset portion has a reflected light signal generated by the projected light. If no such signal exists, an analysis result is generated indicating that the inter-frame variation of the preset portion is not greater than the set value; if such a signal exists, an analysis result is generated indicating that the variation is greater than the set value, and so on.
  • The global feature algorithm refers to an algorithm based on global features, where the global features may include the mean and variance of gray levels, the gray-level co-occurrence matrix, and the spectra obtained by the fast Fourier transform (FFT) and the discrete cosine transform (DCT).
  • the classifier can include a Support Vector Machine (SVM), a neural network, a decision tree, and the like.
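As a rough sketch of such a global-feature stage (the features and the decision rule here are illustrative simplifications, not the patent's classifier): compute the gray-level mean and variance of a difference image and feed them to a trivial threshold rule standing in for the SVM or neural network.

```python
def global_features(frame):
    """Gray-level mean and variance: two of the global features listed above."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, variance

def classify_difference(diff_frame, mean_threshold=15.0):
    """Toy stand-in for the SVM/decision-tree classifier: flag the frame
    as containing a reflected-light change when the mean difference is
    high. The threshold value is an assumption for the example."""
    mean, _variance = global_features(diff_frame)
    return mean > mean_threshold
```

In a real system these features would be concatenated with the co-occurrence-matrix and spectral features and passed to a trained classifier rather than a single threshold.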
  • In addition, the method for determining whether the reflected light signal matches the preset optical signal sample may take various forms; for example, any one of the following methods may be adopted:
  • It may be determined whether the difference between a parameter value in the reflected light signal and the corresponding parameter value in the preset optical signal sample is within a preset difference range. If it is within the range, the parameter values in the reflected light signal match those in the preset optical signal sample; if it exceeds the range, they do not match, and so on.
  • Alternatively, whether the reflected light signal matches the preset optical signal sample can be determined by analyzing whether the shape of the reflected light signal in the image matches the shape presented in the preset optical signal sample image. For example, if the similarity between the two shapes is greater than a set value, the reflected light signal is determined to match the preset optical signal sample; otherwise, if the similarity is less than or equal to the set value, it is determined not to match, and so on.
  • The preset optical signal sample, the preset difference range, and the set value may all be set according to actual application requirements. For example, if a human face is to be tested for liveness, the common properties of the reflected light signals generated when the projected light is irradiated onto human faces may be taken as the preset optical signal sample; the error range of the corresponding parameters may be set accordingly as the preset difference range, and the set value of the similarity of the corresponding shape (a human face generally has the five facial features in a typical appearance and position) may likewise be set accordingly, and so on; details are not described herein again.
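A minimal sketch of the parameter-matching check described above; the parameter names and the tolerance are hypothetical, chosen only to illustrate the "within the preset difference range" rule:

```python
def signal_matches_sample(reflected, sample, tolerance):
    """Match iff every parameter of the reflected signal lies within
    `tolerance` of the corresponding parameter of the preset sample.

    `reflected` and `sample` are dicts of parameter name -> value; the
    parameter names here are invented for the example.
    """
    return all(
        abs(reflected[name] - value) <= tolerance
        for name, value in sample.items()
    )
```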
  • In summary, upon receiving a living body detection request, the light source can be started to project light onto the detection object, and the detection object is monitored; it is then determined whether the preset portion of the detection object in the monitored image sequence has a reflected light signal generated by the projected light and whether that signal matches the preset optical signal sample, and if the signal exists and matches, the detection object is determined to be a living body.
  • the solution does not require complicated interaction and operation with the user, and therefore, the requirement for hardware configuration can be greatly reduced.
  • Since the scheme is based on detecting the reflected light signal of the preset portion of the object, and the reflected light signal of a real living body differs from that of a forged one (the carrier of a composite picture or video, such as a photo, a mobile phone or a tablet computer), the scheme can also effectively resist synthetic face attacks and improve discrimination accuracy. In short, this solution improves living body detection and thereby improves the accuracy and security of identity authentication.
  • In this embodiment, the living body detecting device is integrated in the terminal, the light source is specifically a color mask, and the preset portion of the detection object is specifically a human face, taken as an example.
  • a living body detection method can be as follows:
  • the terminal receives a living body detection request.
  • Specifically, a living body detection request triggered by the user may be received, or a living body detection request sent by another device may be received.
  • For example, a living body detection request may be triggered on the terminal, so that the terminal receives the living body detection request.
  • the terminal generates a color mask, so that the light projected by the color mask can be changed according to a preset rule.
  • the intensity of the change of the light can also be maximized.
  • The preset rule may be determined according to the needs of the actual application, and there are many ways to maximize the intensity of the light change. For example, for light of the same color, the screen brightness before and after the change can be set to the maximum and the minimum respectively; for light of different colors, the intensity of the change can be maximized by adjusting the color difference before and after the change, such as changing the screen from the darkest black to the brightest white, and so on.
  • In addition, the colors may be selected so that the signal analysis is most robust. For example, in the CIELAB color space, when the screen changes from the brightest red to the brightest green, the chromaticity of the reflected light changes the most, and so on.
  • The terminal starts the detection interface according to the living body detection request and flashes a color mask in the non-detection area of the detection interface, so that the color mask acts as a light source projecting light onto the detection object, such as a person's face.
  • the corresponding living body detection process may be invoked according to the living body detection request, the corresponding detection interface is started according to the living body detection process, and the like.
  • The detection interface may include a detection area and a non-detection area; the detection area is mainly used to display the acquired image sequence, and the non-detection area may be used to flash the color mask, which serves as a light source projecting light onto the detection object.
  • In order for the projected light to reach the detection object, the detection object needs to be kept within a certain distance of the screen of the mobile device. For example, when the user needs to detect whether a certain face is a living body, the mobile device can be held in a suitable position directly in front of the face to monitor it, and so on.
  • the terminal monitors the detection object to obtain a sequence of images.
  • the camera of the terminal may be specifically called to capture the detected object in real time to obtain a sequence of images, and the captured image sequence is displayed in the detection area.
  • In order to improve the accuracy of the detection, the image sequence may also be denoised. Modeling the noise as Gaussian noise, temporal multi-frame averaging and/or same-frame multi-scale averaging can be used to reduce the noise as much as possible; details are not described herein again.
  • the terminal calculates an interframe difference in the sequence of images.
  • An inter-frame alignment method can be used to correct the pixel pairs of the inter-frame difference more precisely in the case where the detected face undergoes no sharp position change. That is, when it is determined that the position change of the detection object is less than the preset change value, the pixel coordinates of adjacent frames in the image sequence are respectively acquired, the pixel coordinates are transformed to minimize the registration error, and the inter-frame difference is then calculated based on the transformation result; for example, as follows:
  • The transformation matrix M employed is of the homography type, which has the highest degree of freedom, so that the registration error can be minimized; it can be estimated by minimizing the mean square error (MSE), for example with the random sample consensus (RANSAC) algorithm.
  • the step "calculate the inter-frame difference based on the transformation result" can include:
  • Pixels whose correlation meets the preset condition are selected, and the inter-frame difference is calculated from the selected pixels.
  • the preset change value and the preset condition may be set according to actual application requirements, and details are not described herein again.
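A full homography estimated with RANSAC is more than a short sketch can carry, but the registration idea can be illustrated with a brute-force integer translation that minimizes the mean square error before the difference is taken. This is a deliberate simplification of the transformation matrix M described above, not the patented method:

```python
def shifted(frame, dy, dx, fill=0):
    """Translate a gray-level frame by (dy, dx), padding with `fill`."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

def best_shift(reference, moving, max_shift=2):
    """Search integer translations and return the (dy, dx) with minimal
    MSE: a toy stand-in for estimating the homography M with RANSAC."""
    h, w = len(reference), len(reference[0])
    best, best_mse = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = shifted(moving, dy, dx)
            mse = sum(
                (reference[y][x] - cand[y][x]) ** 2
                for y in range(h) for x in range(w)
            ) / (h * w)
            if mse < best_mse:
                best, best_mse = (dy, dx), mse
    return best
```

After alignment, the inter-frame difference is computed only on the registered pixels, so small head movements do not masquerade as reflected-light changes.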
  • The terminal determines, according to the inter-frame difference, whether the face in the image sequence has a reflected light signal generated by the projected light. If yes, step 207 is performed; if not, the terminal determines that the detected object is not a living body.
  • For example, the terminal may determine whether the inter-frame difference is greater than a preset threshold; if so, it determines that the face in the image sequence has a reflected light signal generated by the projected light, and if not, it determines that the face does not have such a reflected light signal.
  • the preset threshold may be determined according to the requirements of the actual application, and details are not described herein again.
  • To save computing resources, a cascaded discriminant model may also be used. That is, a global feature algorithm or a classifier may be used to preprocess the inter-frame difference so as to roughly determine whether a reflected light signal occurs, allowing the subsequent processing of most normal frames without a reflected light signal to be skipped; only the frames in which a reflected light signal exists are processed further. That is, the step "the terminal determines, according to the inter-frame difference, whether the face in the image sequence has a reflected light signal generated by the projected light" may include:
  • The inter-frame difference is classified and analyzed by a preset global feature algorithm or a classifier. If the analysis result indicates that the inter-frame variation of the face is greater than the set value, it is determined that the face in the image sequence has a reflected light signal generated by the projected light; if the analysis result indicates that the variation is not greater than the set value, it is determined that the face does not have such a reflected light signal.
  • The set value may be determined according to the needs of the actual application, and there are many ways in which the preset global feature algorithm or classifier can classify and analyze the inter-frame difference; for example:
  • In one approach, the inter-frame difference is analyzed to determine whether a reflected light signal generated by the projected light exists in the image sequence. If no such signal exists, an analysis result is generated indicating that the inter-frame variation of the face is not greater than the set value. If such a signal exists, the preset global feature algorithm or classifier then determines whether the reflector of the signal is a human face: if it is, an analysis result is generated indicating that the inter-frame variation of the face is greater than the set value; if it is not, an analysis result is generated indicating that the variation is not greater than the set value.
  • In another approach, the images in the sequence are first classified by the preset global feature algorithm or classifier to filter out the frames containing the face, yielding candidate frames; the inter-frame differences of the candidate frames are then analyzed to determine whether the face has a reflected light signal generated by the projected light. If no such signal exists, an analysis result is generated indicating that the inter-frame variation of the face is not greater than the set value; if such a signal exists, an analysis result is generated indicating that the variation is greater than the set value, and so on.
  • The global feature algorithm refers to an algorithm based on global features, where the global features may include the mean and variance of gray levels, the gray-level co-occurrence matrix, and transformed spectra such as those of the FFT and the DCT.
  • The classifier can be set according to the requirements of the actual application. For example, if it is only used to determine whether a reflected light signal exists, a simpler classifier can be used; if it is also used to determine whether the reflector is a human face, a more complex classifier, such as a neural network classifier, can be used. Details are not described here.
• the terminal determines whether the reflected light signal matches the preset light signal sample; if it matches, the detection object is determined to be a living body, and if it does not match, the detection object is determined to be a non-living body.
• the terminal may analyze whether the parameters in the reflected light signal match the parameters in the preset light signal sample; if so, it determines that the reflected light signal matches the preset light signal sample, and if not, it determines that the reflected light signal does not match the preset light signal sample. For example, it may specifically determine whether the difference between a parameter value in the reflected light signal and the corresponding parameter value in the preset light signal sample falls within a preset difference range: if it is within the range, the parameter values match; if it exceeds the range, the parameter values do not match, and so on.
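The parameter-comparison step just described can be sketched as follows. The parameter names (`intensity`, `hue_shift`) and the difference range of 0.1 are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical sketch of parameter matching: every parameter of the
# reflected light signal must differ from the preset sample's value by
# less than a preset range. Parameter names and threshold are assumed.

def signals_match(reflected, sample, max_diff=0.1):
    """True if every parameter in the sample is matched within max_diff."""
    return all(abs(reflected[k] - sample[k]) < max_diff for k in sample)

reflected = {"intensity": 0.82, "hue_shift": 0.30}
sample = {"intensity": 0.85, "hue_shift": 0.28}
print(signals_match(reflected, sample))  # True: both differences are below 0.1
```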
• the terminal may also determine whether the reflected light signal matches the preset light signal sample by analyzing whether the shape of the reflected light signal on the image matches the shape presented on the preset light signal sample image. For example, if the similarity between the shape of the reflected light signal on the image and the shape presented on the preset light signal sample image is greater than a set value, the reflected light signal is determined to match the preset light signal sample; otherwise, if the similarity is less than or equal to the set value, the reflected light signal is determined not to match the preset light signal sample, and so on.
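One plausible way to realize the shape comparison above is to represent both shapes as binary masks and score their overlap with intersection-over-union; the patent does not prescribe a specific similarity measure, so this metric is an assumption for illustration.

```python
# Illustrative sketch: shape similarity between the observed reflected
# highlight and the preset sample, both given as binary masks. The
# intersection-over-union metric is an assumed choice.

def shape_similarity(mask_a, mask_b):
    """Intersection-over-union of two equally sized binary masks."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    return inter / union if union else 1.0

observed = [[1, 1, 0],
            [1, 1, 0]]
expected = [[1, 1, 0],
            [1, 0, 0]]
sim = shape_similarity(observed, expected)
print(sim)  # 0.75: intersection 3 pixels, union 4 pixels
```

The match decision would then compare `sim` against the set value mentioned in the text.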
  • the preset optical signal sample, the preset difference range, and the set value may be set according to actual application requirements, and details are not described herein again.
• some interactive operations may also be appropriately added, for example, having the user perform an action such as blinking or opening the mouth. That is, the step of "determining whether a reflected light signal generated by the projected light exists in the face of the person in the image sequence" may also include prompting the detection object, such as a person's face, to perform a preset action.
• the preset action can be set according to the requirements of the actual application. It should be noted that, in order to avoid cumbersome interaction, the number and difficulty of the preset actions may be limited; for example, only one simple interaction, such as blinking or opening the mouth, may be required, which is not repeated here.
• a non-detection area can be disposed on the detection interface for flashing a color mask, where the color mask can serve as a light source projecting light onto a detection object such as a person's face. When living body detection is needed, the person's face can be monitored, and it is then determined whether a reflected light signal generated by the projected light exists in the face in the monitored image sequence and whether the reflected light signal matches the preset light signal sample; if it exists and matches, the person's face is determined to be a living body. Since the solution does not require cumbersome interactions with the user, the requirement for hardware configuration can be greatly reduced; and since the basis for the living body discrimination is the reflected light on the face, the solution can also effectively resist synthetic face attacks and improve the accuracy of the discrimination. In summary, the solution can improve the living body detection effect without being limited by the hardware configuration of the terminal, thereby improving the accuracy and security of identity authentication.
  • an embodiment of the present invention further provides a living body detecting device, which is referred to as a living body detecting device.
• the living body detecting device includes one or more memories and one or more processors. The one or more memories store one or more instruction modules configured to be executed by the one or more processors, where the one or more instruction modules include: a receiving unit 301, a monitoring unit 303, and a detecting unit 304.
  • the one or more instruction modules may further include a startup unit 302. The specific functions of each unit are described as follows:
  • the receiving unit 301 is configured to receive a living body detection request.
• the receiving unit 301 may be specifically configured to receive a living body detection request triggered by a user, or to receive a living body detection request sent by another device, and the like.
  • the starting unit 302 is configured to start a light source according to the living body detection request, and the light source is used to project light to the detection object.
  • the initiating unit 302 may be specifically configured to invoke a corresponding living body detection process according to the living body detection request, activate a light source according to the living body detection process, and the like.
• the light source can be set according to the needs of the actual application, for example, by adjusting the brightness of the terminal screen, by using another light-emitting component such as a flash, an infrared emitter, or an external device, or by setting a color mask on the display interface, and so on. That is, the startup unit 302 can specifically perform any of the following operations:
  • the activation unit 302 is specifically configured to adjust the brightness of the screen according to the living body detection request, so that the screen as a light source projects light to the detection object.
  • the activation unit 302 is specifically configured to turn on the preset light-emitting component according to the living body detection request, so that the light-emitting component emits light as a light source to the detection object.
  • the light emitting part may comprise a component such as a flash lamp or an infrared emitter.
  • the activation unit 302 is specifically configured to start a detection interface according to the living body detection request, and the detection interface may flash a color mask, and the color mask is used as a light source to project light to the detection object.
  • the area of the flashing color mask may be determined according to the requirements of the actual application.
• the detection interface may include a detection area and a non-detection area, where the detection area is mainly used for displaying the monitoring situation, and the non-detection area may be used for flashing the color mask, which serves as a light source projecting light onto the detection object, and so on.
• the color and other parameters of the color mask can be set according to the requirements of the actual application.
• the color mask can be preset by the system and directly retrieved when the detection interface is started, or it can be automatically generated after the living body detection request is received. That is, as shown in FIG. 3b, the living body detecting device may further include a generating unit 305, as follows:
  • the generating unit 305 can be configured to generate a color mask such that the light projected by the color mask can be changed according to a preset rule.
  • the generating unit 305 can also be used to maximize the intensity of the change of the light.
  • the preset rule may be determined according to the needs of the actual application, and the manner of maximizing the intensity of the change of the light may also be various, for example, as follows:
• the generating unit 305 can be specifically configured to maximize the intensity of the change of the light: for light of the same color, by adjusting the screen brightness before and after the change; for light of different colors, by adjusting the color difference before and after the change.
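For light of different colors, maximizing the change intensity might be sketched as choosing, from a set of candidate mask colors, the pair with the largest RGB distance. The selection rule and the candidate set below are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: pick the pair of candidate mask colors whose
# squared RGB distance is largest, so that the change in projected light
# between consecutive masks is as strong as possible.
from itertools import combinations

def strongest_pair(colors):
    """Return the pair of RGB colors with the largest squared distance."""
    return max(combinations(colors, 2),
               key=lambda p: sum((a - b) ** 2 for a, b in zip(*p)))

candidates = [(255, 0, 0), (0, 0, 0), (255, 255, 255)]
pair = strongest_pair(candidates)
print(pair)  # ((0, 0, 0), (255, 255, 255)) -- black/white has the largest distance
```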
• a color space that is most robust to signal analysis may be selected. For details, refer to the previous embodiment; details are not described herein again.
  • the monitoring unit 303 is configured to monitor the detection object to obtain a sequence of images.
  • the monitoring unit 303 can be specifically used to call the camera of the terminal, capture the detected object in real time, obtain an image sequence, and display the captured image sequence in the detection area.
  • the monitoring unit 303 may perform the denoising processing on the image sequence.
  • the detecting unit 304 is configured to determine, when the reflected light signal exists in the preset part of the detection object in the image sequence, determine whether the reflected light signal matches the preset light signal sample; and when the reflected light signal matches the preset light signal sample, It is determined that the detection object is a living body.
• the detecting unit 304 is further configured to determine that the detection object is not a living body when the preset part of the detection object in the image sequence does not have a reflected light signal generated by the projected light, or when the reflected light signal does not match the preset light signal sample.
• the detecting unit 304 may include a calculating subunit and a determining subunit, as follows:
  • a calculation subunit that can be used to calculate the difference between frames in the sequence of images.
• the difference between the frames may be an inter-frame difference or a frame difference, where the inter-frame difference refers to the difference between two adjacent frames, and the frame difference refers to the difference between the frames captured before and after a change of the projected light.
• the calculating subunit may be specifically configured to, when the degree of change of the position of the detection object is less than a preset change value, obtain the pixel coordinates of adjacent frames in the image sequence and calculate the inter-frame difference based on those pixel coordinates. For example, the pixel coordinates may be transformed to minimize their registration error; then, according to the transformation result, the pixel points whose correlation meets a preset condition are filtered out, and the inter-frame difference is calculated from the filtered pixels, and so on.
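A minimal sketch of the inter-frame difference described above, under the assumption that the face is nearly static (position change below the preset value) and with the registration and pixel-filtering steps omitted for brevity:

```python
# Hypothetical sketch: for each pair of adjacent frames, the mean
# absolute difference of co-located gray values. Frames are 2-D lists of
# gray values; registration and pixel filtering are omitted.

def inter_frame_differences(frames):
    """Mean absolute per-pixel difference for each adjacent frame pair."""
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        pairs = [(p, c) for rp, rc in zip(prev, curr) for p, c in zip(rp, rc)]
        diffs.append(sum(abs(p - c) for p, c in pairs) / len(pairs))
    return diffs

dark = [[10, 10], [10, 10]]
lit = [[40, 40], [40, 40]]   # frame captured while the projected light is bright
print(inter_frame_differences([dark, lit, dark]))  # [30.0, 30.0]
```

The frame difference (before/after a light change) would be computed the same way, but on the specific frame pair bracketing the change rather than on all adjacent pairs.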
• the calculating subunit may be specifically configured to, when the degree of change of the position of the detection object is less than a preset change value, obtain from the image sequence the pixel coordinates of the frames corresponding to before and after the change of the projected light, and calculate the frame difference based on those pixel coordinates. For example, the pixel coordinates may be transformed to minimize their registration error; then, according to the transformation result, the pixel points whose correlation meets a preset condition are filtered out, and the frame difference is calculated from the filtered pixels, and so on.
  • the preset change value and the preset condition may be set according to actual application requirements.
  • the determining subunit may be configured to determine, according to the difference, whether the preset portion of the detection object in the image sequence has a reflected light signal generated by the projected light, and whether the reflected optical signal matches the preset optical signal sample.
  • the determining subunit may be configured to determine that the detected object is a living body when the determining subunit determines that the reflected light signal generated by the projected light is present, and the reflected optical signal matches the preset optical signal sample.
• the determining subunit may be further configured to determine that the detection object is not a living body when it is determined that no reflected light signal generated by the projected light exists, or that the reflected light signal does not match the preset light signal sample.
• there may be various methods for determining, according to the difference between the frames, whether a reflected light signal generated by the projected light exists in the preset part of the detection object in the image sequence. For example, any one of the following methods may be adopted:
  • the determining subunit may be specifically configured to determine whether the difference between the frames is greater than a preset threshold, and if yes, determining that the reflected light signal generated by the projected light exists in the preset part of the detected object in the image sequence; Determining that the preset portion of the detected object in the image sequence does not have a reflected light signal generated by the projected light.
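The threshold comparison just described can be sketched as follows; the threshold value of 15.0 is an illustrative assumption, standing in for the "preset threshold" of the text.

```python
# Hypothetical sketch of the threshold-based decision: the reflected
# light signal is deemed present when any inter-frame difference exceeds
# a preset threshold (value assumed for illustration).

def reflection_present(frame_diffs, threshold=15.0):
    """True if any inter-frame difference exceeds the preset threshold."""
    return any(d > threshold for d in frame_diffs)

print(reflection_present([2.0, 30.0, 2.5]))  # True: one difference exceeds 15.0
print(reflection_present([2.0, 3.0, 2.5]))   # False: all differences are below 15.0
```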
  • the determining sub-unit may be specifically configured to perform classification analysis on the difference between the frames by using a preset global feature algorithm or a classifier, and if the analysis result indicates that the inter-frame variation of the preset part of the detection object is greater than a set value, determining The preset part of the detection object in the image sequence has a reflected light signal generated by the projected light; if the analysis result indicates that the inter-frame change of the preset part of the detection object is not greater than a set value, determining the detected object in the image sequence The reflected light signal generated by the projected light does not exist in the preset portion.
• the set value may be determined according to the needs of the actual application, and the manner of "classifying and analyzing the inter-frame difference by using a preset global feature algorithm or a classifier" may also be various; for example, it can be as follows:
• the determining subunit may be specifically configured to analyze the difference between the frames to determine whether a reflected light signal generated by the projected light exists in the image sequence. If no reflected light signal generated by the projected light exists, an analysis result indicating that the inter-frame change of the preset part of the detection object is not greater than the set value is generated; if a reflected light signal generated by the projected light exists, a preset global feature algorithm or a classifier is used to determine whether the reflector of the reflected light signal is the preset part of the detection object. If it is the preset part, an analysis result indicating that the inter-frame change of the preset part of the detection object is greater than the set value is generated; if it is not the preset part, an analysis result indicating that the inter-frame change of the preset part of the detection object is not greater than the set value is generated.
• the determining subunit may be specifically configured to classify the images in the image sequence by using a preset global feature algorithm or a classifier to filter out the frames in which the preset part exists and obtain candidate frames, and to analyze the inter-frame difference of the candidate frames to determine whether a reflected light signal generated by the projected light exists on the preset part. If no reflected light signal generated by the projected light exists, an analysis result indicating that the inter-frame change of the preset part of the detection object is not greater than the set value is generated; if a reflected light signal generated by the projected light exists, an analysis result indicating that the inter-frame change of the preset part of the detection object is greater than the set value is generated.
  • the global feature algorithm refers to an algorithm based on global features, wherein the global features may include a mean variance of gray scales, a gray level co-occurrence matrix, a transformed spectrum such as FFT and DCT.
• there may also be multiple methods for determining whether the reflected light signal matches the preset light signal sample; for example, any one of the following methods may be adopted:
• the determining subunit may be configured to analyze whether the parameters in the reflected light signal match the parameters in the preset light signal sample; if so, it determines that the reflected light signal matches the preset light signal sample, and if not, it determines that the reflected light signal does not match the preset light signal sample. For example, it may specifically determine whether the difference between a parameter value in the reflected light signal and the corresponding parameter value in the preset light signal sample falls within a preset difference range: if it is within the range, the parameter values match; if it exceeds the range, they do not match, and so on.
• the determining subunit may be specifically configured to determine whether the reflected light signal matches the preset light signal sample by analyzing whether the shape of the reflected light signal on the image matches the shape presented on the preset light signal sample image. For example, if the similarity between the shape of the reflected light signal on the image and the shape presented on the preset light signal sample image is greater than the set value, the reflected light signal is determined to match the preset light signal sample; otherwise, if the similarity is less than or equal to the set value, the reflected light signal is determined not to match the preset light signal sample, and so on.
  • the preset optical signal sample, the preset difference range, and the set value may be set according to actual application requirements, and details are not described herein again.
  • the foregoing units may be implemented as a separate entity, or may be implemented in any combination, and may be implemented as the same or a plurality of entities.
• for the specific implementation of the foregoing units, refer to the foregoing method embodiments; details are not described herein again.
  • the device can be integrated into a device such as a terminal, and the terminal can be a device such as a mobile phone, a tablet computer, a notebook computer, or a PC.
• the activation unit 302 can activate the light source to project light onto the detection object, the monitoring unit 303 can monitor the detection object through the detection area in the detection interface, and the detecting unit 304 can determine whether the preset part of the detection object in the monitored image sequence has a reflected light signal generated by the projected light and whether the reflected light signal matches the preset light signal sample; if it exists and matches, the detection object is determined to be a living body. Since the solution does not require cumbersome interactions with the user, the requirement for hardware configuration can be greatly reduced; and since the basis for the living body discrimination is the reflected light on the preset part of the detection object, the solution can effectively resist synthetic face attacks and improve the accuracy of the discrimination. In summary, the solution can improve the living body detection effect, thereby improving the accuracy and security of identity authentication.
  • the embodiment of the present invention further provides a terminal.
• the terminal may include a radio frequency (RF) circuit 401, a memory 402 including one or more computer readable storage media, and an input unit 403.
• the terminal structure shown in FIG. 4 does not constitute a limitation to the terminal, and the terminal may include more or fewer components than those illustrated, combine certain components, or have different component arrangements, wherein:
• the RF circuit 401 can be used for receiving and transmitting signals in the course of sending and receiving information or during a call. Specifically, after downlink information of a base station is received, it is handed to the one or more processors 408 for processing; in addition, uplink data is sent to the base station.
  • the RF circuit 401 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, and a low noise amplifier (LNA, Low Noise Amplifier), duplexer, etc. In addition, the RF circuit 401 can also communicate with the network and other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol, including but not limited to a global mobile communication system (GSM, Global System of Mobile communication), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA) , Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
  • the memory 402 can be used to store software programs and modules, and the processor 408 executes various functional applications and data processing by running software programs and modules stored in the memory 402.
  • the memory 402 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the terminal (such as audio data, phone book, etc.).
• memory 402 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, memory 402 may also include a memory controller to provide the processor 408 and the input unit 403 with access to the memory 402.
  • Input unit 403 can be used to receive input numeric or character information, as well as to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 403 can include a touch-sensitive surface as well as other input devices.
• a touch-sensitive surface, also known as a touch screen or trackpad, collects touch operations by the user on or near it (such as operations performed by the user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or accessory), and drives the corresponding connecting device according to a preset program.
  • the touch sensitive surface may include two parts of a touch detection device and a touch controller.
• the touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 408, and can also receive commands from the processor 408 and execute them.
  • touch-sensitive surfaces can be implemented in a variety of types, including resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 403 can also include other input devices. Specifically, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • Display unit 404 can be used to display information entered by the user or information provided to the user, as well as various graphical user interfaces of the terminal, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 404 can include a display panel.
  • the display panel can be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
• the touch-sensitive surface may cover the display panel. When the touch-sensitive surface detects a touch operation on or near it, the operation is passed to the processor 408 to determine the type of touch event, and the processor 408 then provides a corresponding visual output on the display panel based on the type of touch event.
• although the touch-sensitive surface and the display panel are implemented here as two separate components to perform the input and output functions, in some embodiments the touch-sensitive surface can be integrated with the display panel to implement the input and output functions.
  • the terminal may also include at least one type of sensor 405, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may close the display panel and/or the backlight when the terminal moves to the ear.
  • the gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity.
• the terminal can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
  • the audio circuit 406, the speaker, and the microphone provide an audio interface between the user and the terminal.
• the audio circuit 406 can transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 406 and converted into audio data. The audio data is then processed by the processor 408 and sent via the RF circuit 401 to, for example, another terminal, or output to the memory 402 for further processing.
  • the audio circuit 406 may also include an earbud jack to provide communication between the peripheral earphone and the terminal.
  • WiFi is a short-range wireless transmission technology
• through the WiFi module 407, which provides users with wireless broadband Internet access, the terminal can help users send and receive e-mails, browse web pages, access streaming media, and so on.
• although FIG. 4 shows the WiFi module 407, it can be understood that it is not a necessary part of the terminal and can be omitted as needed without changing the essence of the invention.
• the processor 408 is the control center of the terminal. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 402 and invoking the data stored in the memory 402, thereby monitoring the terminal as a whole.
  • the processor 408 may include one or more processing cores; preferably, the processor 408 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 408.
  • the terminal also includes a power source 409 (such as a battery) that supplies power to the various components.
• the power source can be logically coupled to the processor 408 through a power management system, so that functions such as charging, discharging, and power consumption management are handled through the power management system.
  • the power supply 409 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the memory 402 described above will store one or more programs and be configured to be executed by one or more processors 408.
  • the one or more programs described above may include the following instruction modules:
  • the receiving unit 301 is configured to receive a living body detection request.
  • the starting unit 302 is configured to start a light source according to the living body detection request, and the light source is used to project light to the detection object;
  • the monitoring unit 303 is configured to monitor the detection object to obtain an image sequence
• the detecting unit 304 is configured to determine that the detection object is a living body when a reflected light signal generated by the projected light is present in the preset part of the detection object in the image sequence and the reflected light signal matches the preset light signal sample.
• the processor 408 in the terminal loads the executable files corresponding to the processes of one or more applications into the memory 402 according to the following instructions, and runs the applications stored in the memory 402, thereby implementing various functions:
• receiving a living body detection request; starting a light source according to the living body detection request, where the light source is used to project light onto the detection object; monitoring the detection object to obtain an image sequence; and, when it is determined that a reflected light signal generated by the projected light exists in the preset part of the detection object in the image sequence and the reflected light signal matches the preset light signal sample, determining that the detection object is a living body.
  • the method for determining whether the preset portion of the detection object in the image sequence has the reflected light signal generated by the projected light, and determining whether the reflected light signal matches the preset optical signal sample may be various. For details, refer to the foregoing. The embodiment is not described here.
• the light source may be implemented in various manners, for example, by adjusting the brightness of the terminal screen, by using another light-emitting component such as a flash, an infrared emitter, or an external device, or by setting a color mask on the display interface, and so on. That is, the application in the memory 402 can also implement the following functions:
  • the screen brightness is adjusted according to the living body detection request such that the screen as a light source projects light to the detection object.
  • a preset light-emitting component is turned on according to the living body detection request so that the component, acting as the light source, emits light toward the detection object.
  • the light-emitting component may comprise a flash lamp, an infrared emitter, or a similar component.
  • the detection interface is activated according to the living body detection request; the detection interface may flash a color mask, and the color mask acts as the light source to project light onto the detection object.
  • the area over which the color mask flashes may be determined according to the requirements of the actual application.
  • the detection interface may include a detection area and a non-detection area: the detection area is mainly used for displaying the monitoring situation, while the non-detection area may be used to flash the color mask so that the mask, acting as the light source, projects light onto the detection object, and so on.
  • the color and other parameters of the color mask can be set according to the requirements of the actual application; the color mask may be preset by the system and retrieved directly when the detection interface is started, or it may be generated automatically after the living body detection request is received. That is, the application stored in the memory 402 can also implement the following function:
  • a color mask is generated such that the light projected by the color mask changes according to a preset rule and the intensity of that change is maximized.
  • for example, the colors may be selected from the color space that is most robust for subsequent signal analysis.
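A minimal sketch of such mask generation, under the assumption that the "preset rule" is a seeded pseudo-random starting color followed by maximal per-channel jumps (the function name and the specific rule are illustrative, not taken from the patent):

```python
# Hypothetical sketch: build a sequence of RGB mask colors whose projected
# light changes by a reproducible (seeded) rule while keeping every
# frame-to-frame change as large as possible.
import random

def generate_mask_sequence(length, seed=None):
    """Return `length` RGB tuples; every channel jumps by at least 128
    between consecutive colors, maximizing the change intensity."""
    if length <= 0:
        return []
    rng = random.Random(seed)
    seq = [(rng.randrange(256), rng.randrange(256), rng.randrange(256))]
    while len(seq) < length:
        # move each channel to the extreme (0 or 255) farthest from its
        # current value: the largest possible per-channel step
        seq.append(tuple(0 if c >= 128 else 255 for c in seq[-1]))
    return seq

seq = generate_mask_sequence(6, seed=7)
```

A production system would additionally constrain the colors to the chosen robust color space; this sketch only demonstrates maximizing the change intensity under a reproducible rule.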
  • the image sequence may also be subjected to denoising processing; that is, the application stored in the memory 402 may also implement the following function:
  • the image sequence is subjected to denoising processing.
  • taking Gaussian noise as an example of the noise model, temporal multi-frame averaging and/or same-frame multi-scale averaging may be used to reduce the noise as much as possible.
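The temporal multi-frame averaging mentioned above can be sketched as follows (a minimal illustration, assuming zero-mean Gaussian noise and already-aligned frames): averaging N frames shrinks the noise standard deviation by a factor of sqrt(N).

```python
# Sketch of temporal multi-frame averaging for Gaussian noise reduction.
import random

def temporal_average(frames):
    """Pixel-wise mean over a list of equally sized 2-D frames."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

random.seed(0)
clean = [[10.0, 20.0], [30.0, 40.0]]            # noise-free reference frame
noisy = [[[v + random.gauss(0, 2.0) for v in row] for row in clean]
         for _ in range(64)]                     # 64 noisy observations
denoised = temporal_average(noisy)
# residual noise is roughly sigma / sqrt(64) = 0.25 per pixel
```

Same-frame multi-scale averaging would instead smooth each frame across spatial scales; it is omitted here for brevity.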
  • when the terminal needs to perform living body detection, it can start the light source to project light onto the detection object and monitor the object; it then determines whether the preset portion of the detection object in the monitored image sequence has a reflected light signal generated by the projected light, and whether that reflected light signal matches the preset optical signal sample; if the signal is present and matches, the detected object is determined to be a living body. Because this solution does not require complicated interaction and operation with the user, the requirement on hardware configuration can be greatly reduced. Moreover, because the basis for living body discrimination is the reflected light signal at the preset portion of the detection object, and a real living body and a forged one (a carrier of a composite picture or video, such as a photo, a mobile phone, or a tablet computer) produce different reflected light signals, the solution can also effectively resist synthetic-face attacks and improve discrimination accuracy. In short, the solution can improve living body detection on terminals with limited hardware configurations, especially mobile terminals, thereby improving the accuracy of identification.
  • the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
  • an embodiment of the present invention further provides a storage medium in which a data processing program is stored, the data processing program being used to execute any one of the methods of the foregoing embodiments of the present invention.


Abstract

Disclosed are a living body detection method and apparatus, comprising the following steps: when living body detection is required, a detection object can be monitored; it is then determined whether a reflected optical signal is present at a preset position of the detection object in an image sequence obtained through the monitoring, and whether that reflected optical signal matches a preset optical signal sample; if the signal is present and matches, the detection object is determined to be a living body.
PCT/CN2017/117958 2016-12-30 2017-12-22 Living body detection method, apparatus and storage medium WO2018121428A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611257052.2 2016-12-30
CN201611257052 2016-12-30

Publications (1)

Publication Number Publication Date
WO2018121428A1 true WO2018121428A1 (fr) 2018-07-05

Family

ID=62031297

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2017/117958 WO2018121428A1 (fr) 2016-12-30 2017-12-22 Living body detection method, apparatus and storage medium
PCT/CN2018/111218 WO2019080797A1 (fr) 2016-12-30 2018-10-22 Living body detection method, terminal and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111218 WO2019080797A1 (fr) 2016-12-30 2018-10-22 Living body detection method, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN107992794B (fr)
WO (2) WO2018121428A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969077A (zh) * 2019-09-16 2020-04-07 成都恒道智融信息技术有限公司 Living body detection method based on color change
CN111444831A (zh) * 2020-03-25 2020-07-24 深圳中科信迅信息技术有限公司 Living-body-detection face recognition method
CN111797735A (zh) * 2020-06-22 2020-10-20 深圳壹账通智能科技有限公司 Face video recognition method, apparatus, device, and storage medium
CN113298747A (zh) * 2020-02-19 2021-08-24 北京沃东天骏信息技术有限公司 Picture and video detection method and apparatus
CN113888500A (zh) * 2021-09-29 2022-01-04 平安银行股份有限公司 Glare degree detection method, apparatus, device, and medium based on face images
WO2023221996A1 (fr) * 2022-05-16 2023-11-23 北京旷视科技有限公司 Living body detection method, electronic device, storage medium, and program product

Families Citing this family (28)

Publication number Priority date Publication date Assignee Title
CN107992794B (zh) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 Living body detection method, apparatus, and storage medium
CN107832712A (zh) * 2017-11-13 2018-03-23 深圳前海微众银行股份有限公司 Living body detection method, apparatus, and computer-readable storage medium
CN109101881B (zh) * 2018-07-06 2021-08-20 华中科技大学 Real-time blink detection method based on multi-scale time-series images
CN109376592B (zh) * 2018-09-10 2021-04-27 创新先进技术有限公司 Living body detection method, apparatus, and computer-readable storage medium
CN111310515A (zh) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Coded-mask biometric analysis method, storage medium, and neural network
CN111310514A (zh) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Coded-mask biometric reconstruction method and storage medium
CN109660745A (zh) * 2018-12-21 2019-04-19 深圳前海微众银行股份有限公司 Video recording method, apparatus, terminal, and computer-readable storage medium
CN111488756B (zh) * 2019-01-25 2023-10-03 杭州海康威视数字技术股份有限公司 Face-recognition-based living body detection method, electronic device, and storage medium
CN109961025B (zh) * 2019-03-11 2020-01-24 烟台市广智微芯智能科技有限责任公司 Real and fake face recognition and detection method and system based on image skewness
CN110414346A (zh) * 2019-06-25 2019-11-05 北京迈格威科技有限公司 Living body detection method, apparatus, electronic device, and storage medium
CN110298312B (zh) * 2019-06-28 2022-03-18 北京旷视科技有限公司 Living body detection method, apparatus, electronic device, and computer-readable storage medium
CN112183156B (zh) * 2019-07-02 2023-08-11 杭州海康威视数字技术股份有限公司 Living body detection method and device
CN110516644A (zh) * 2019-08-30 2019-11-29 深圳前海微众银行股份有限公司 Living body detection method and apparatus
CN110688946A (zh) * 2019-09-26 2020-01-14 上海依图信息技术有限公司 Public-cloud silent living body detection device and method based on picture recognition
CN111126229A (zh) * 2019-12-17 2020-05-08 中国建设银行股份有限公司 Data processing method and apparatus
CN111274928B (zh) * 2020-01-17 2023-04-07 腾讯科技(深圳)有限公司 Living body detection method, apparatus, electronic device, and storage medium
CN111310575B (zh) * 2020-01-17 2022-07-08 腾讯科技(深圳)有限公司 Face liveness detection method, related apparatus, device, and storage medium
SG10202005395VA (en) * 2020-06-08 2021-01-28 Alipay Labs Singapore Pte Ltd Face liveness detection system, device and method
CN111783640A (zh) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Detection method, apparatus, device, and storage medium
CN111899232B (zh) * 2020-07-20 2023-07-04 广西大学 Method for non-destructive testing of bamboo-wood composite container floors using image processing
CN111914763B (zh) * 2020-08-04 2023-11-28 网易(杭州)网络有限公司 Living body detection method, apparatus, and terminal device
CN112528909B (zh) * 2020-12-18 2024-05-21 平安银行股份有限公司 Living body detection method, apparatus, electronic device, and computer-readable storage medium
CN113807159A (zh) * 2020-12-31 2021-12-17 京东科技信息技术有限公司 Face recognition processing method, apparatus, device, and storage medium
CN112818782B (zh) * 2021-01-22 2021-09-21 电子科技大学 Generalizable silent living body detection method based on medium perception
CN113837930B (zh) * 2021-09-24 2024-02-02 重庆中科云从科技有限公司 Face image synthesis method, apparatus, and computer-readable storage medium
CN113869219B (zh) * 2021-09-29 2024-05-21 平安银行股份有限公司 Face liveness detection method, apparatus, device, and storage medium
CN115995102A (zh) * 2021-10-15 2023-04-21 北京眼神科技有限公司 Silent face liveness detection method, apparatus, storage medium, and device
CN116978078A (zh) * 2022-04-14 2023-10-31 京东科技信息技术有限公司 Living body detection method and apparatus, system, electronic device, and computer-readable medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN104951769A (zh) * 2015-07-02 2015-09-30 京东方科技集团股份有限公司 Living body recognition apparatus, living body recognition method, and living body authentication system
CN105260731A (zh) * 2015-11-25 2016-01-20 商汤集团有限公司 Face liveness detection system and method based on light pulses
CN105637532A (zh) * 2015-06-08 2016-06-01 北京旷视科技有限公司 Living body detection method, living body detection system, and computer program product
CN105912986A (zh) * 2016-04-01 2016-08-31 北京旷视科技有限公司 Living body detection method, living body detection system, and computer program product
CN107273794A (zh) * 2017-04-28 2017-10-20 北京建筑大学 Liveness discrimination method and apparatus in a face recognition process

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2016197298A1 (fr) * 2015-06-08 2016-12-15 北京旷视科技有限公司 Living body detection method, living body detection system, and computer program product
CN106529512B (zh) * 2016-12-15 2019-09-10 北京旷视科技有限公司 Living face verification method and apparatus
CN107992794B (зh) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 Living body detection method, apparatus, and storage medium
CN107220635A (зh) * 2017-06-21 2017-09-29 北京市威富安防科技有限公司 Face liveness detection method based on multiple spoofing modes

Also Published As

Publication number Publication date
WO2019080797A1 (fr) 2019-05-02
CN107992794A (zh) 2018-05-04
CN107992794B (zh) 2019-05-28

Similar Documents

Publication Publication Date Title
WO2018121428A1 (fr) Living body detection method, apparatus and storage medium
WO2017181769A1 (fr) Face recognition method, apparatus and system, device, and storage medium
WO2019096008A1 (fr) Identification method, computing device, and storage medium
US9443155B2 (en) Systems and methods for real human face recognition
US9953506B2 (en) Alarming method and device
US11074466B2 (en) Anti-counterfeiting processing method and related products
US10860850B2 (en) Method of recognition based on iris recognition and electronic device supporting the same
US10061969B2 (en) Fingerprint unlocking method and terminal
KR102488563B1 (ko) 차등적 뷰티효과 처리 장치 및 방법
CN108345819B (zh) 一种发送报警消息的方法和装置
WO2019020014A1 (fr) Unlocking control method and related product
WO2015003522A1 (fr) Face recognition method, apparatus and mobile terminal
US11328044B2 (en) Dynamic recognition method and terminal device
US20170243063A1 (en) Authentication method, electronic device, and storage medium
CN108259758B (zh) 图像处理方法、装置、存储介质和电子设备
WO2019011098A1 (fr) Unlocking control method and related product
CN107241552B (zh) 一种图像获取方法、装置、存储介质和终端
WO2019154184A1 (fr) Biometric feature recognition method and mobile terminal
CN112037162A (zh) 一种面部痤疮的检测方法及设备
WO2019015418A1 (fr) Unlocking control method and related product
US20200125874A1 (en) Anti-Counterfeiting Processing Method, Electronic Device, and Non-Transitory Computer-Readable Storage Medium
CN110765924A (zh) 一种活体检测方法、装置以及计算机可读存储介质
US10671713B2 (en) Method for controlling unlocking and related products
WO2019011207A1 (fr) Iris recognition method and related products
CN107895108B (zh) 一种操作管理方法和移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17887997

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17887997

Country of ref document: EP

Kind code of ref document: A1