WO2020124994A1 - Liveness detection method and apparatus, electronic device, and storage medium - Google Patents

Liveness detection method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020124994A1
WO2020124994A1 (PCT/CN2019/095081)
Authority
WO
WIPO (PCT)
Prior art keywords
feature value
condition
image frame
target action
organ region
Prior art date
Application number
PCT/CN2019/095081
Other languages
French (fr)
Chinese (zh)
Inventor
王旭
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2020124994A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Definitions

  • Embodiments of the present disclosure relate to image processing technology, for example, to a living body detection method, device, electronic device, and storage medium.
  • ID authentication needs to determine that the subject is a living body.
  • the above method is easily defeated by photo attacks. For example, by holding a photo of a legitimate user, quickly placing a black pen over the user's eyes, and quickly removing it, an attacker can simulate the legitimate user's eye-closing action, causing the photo to be misjudged as a living body. The related-art methods therefore cannot guarantee the accuracy of liveness detection and reduce the security of identity authentication.
  • Embodiments of the present disclosure provide a living body detection method, device, electronic equipment, and storage medium, which can accurately identify a living body and improve the security of identity authentication.
  • an embodiment of the present disclosure provides a living body detection method, which includes:
  • the image frames in the video are obtained in real time
  • each organ region corresponds to at least one feature value set, and each feature value set includes at least one feature value;
  • An image frame in which the at least one organ region all satisfy the unoccluded condition is determined as a target image frame; feature values matching each organ region identified in the target image frame are calculated, and each calculated feature value is updated and stored into the feature value set corresponding to the organ region to which it belongs;
  • At least one target action condition is detected on the feature value sets in real time, and when it is determined that the liveness detection end condition is satisfied, the user is determined to be a living body if the detection results of the at least one target action condition are all verification passed.
  • an embodiment of the present disclosure also provides a living body detection device, which includes:
  • an image frame acquisition module configured to acquire the image frames in the video in real time when it is determined that the liveness detection start condition is satisfied;
  • An organ region identification module configured to identify at least one organ region of the user in the image frame, each organ region corresponds to at least one feature value set, and each feature value set includes at least one feature value;
  • a feature value set update module configured to determine the image frames in which the at least one organ region all satisfy the unoccluded condition as target image frames, calculate the feature value matching each organ region identified in the target image frame, and update and store the calculated feature values into the feature value sets corresponding to the organ regions to which they belong;
  • a liveness detection module configured to perform real-time detection of the at least one target action condition on the feature value sets, and when it is determined that the liveness detection end condition is satisfied, determine that the user is a living body if the detection results of the at least one target action condition are all verification passed.
  • an embodiment of the present disclosure also provides an electronic device, the electronic device includes:
  • one or more processors;
  • a memory configured to store one or more programs;
  • the one or more programs are executed by the one or more processors, so that the one or more processors implement the method described in the foregoing embodiment.
  • an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the foregoing embodiment.
  • FIG. 1 is a flowchart of a living body detection method provided in Embodiment 1 of the present disclosure
  • FIG. 2 is a flowchart of a living body detection method provided in Embodiment 2 of the present disclosure
  • FIG. 3 is a schematic structural diagram of a living body detection device provided in Embodiment 3 of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an electronic device according to Embodiment 4 of the present disclosure.
  • FIG. 1 is a flowchart of a living body detection method according to Embodiment 1 of the present disclosure. This embodiment can be applied to detect whether a user in a recorded video is a living body.
  • the method can be performed by a liveness detection apparatus, which can be implemented in software and/or hardware and configured in an electronic device, such as a computer. As shown in FIG. 1, the method includes S110 to S160.
  • the living body detection start condition may refer to a condition for determining to start performing the living body detection operation. Exemplarily, when the living body detection start instruction is received, it is determined that the living body detection start condition is satisfied.
  • the video is formed by a series of static image frames continuously displayed at a very fast speed.
  • the video can be split into a series of image frames, and each image frame can be used as an image.
  • the image frame is an image including a user's face image.
  • a video including the user's face is recorded, so as to obtain information about the user's face according to the image frames in the video.
  • Living body detection is usually a detection process performed in real time. Therefore, the video is a video being recorded in real time, and each image frame recorded in the video can be acquired and processed in real time, thereby ensuring the timeliness of living body detection.
  • each organ region corresponds to at least one feature value set, and each feature value set includes at least one feature value.
  • the organ area may refer to the area where the organ in the user's face is located.
  • the organ area includes at least one of the following: an eye area, a mouth area, and a nose area.
  • face pose action detection can be achieved through one or more key points of the face, and whether the user is a living body can be judged based on the detected face pose actions.
  • the key points may refer to key points in the human face that have an identification function.
  • the key points may include a left-eyeball key point, mouth-corner key points, nostril key points, brow-tail key points, and face-contour key points.
  • the key points can be identified by a pre-trained machine learning model, or by other methods, such as Active Shape Model (ASM)-based methods, which are not specifically limited in the embodiments of the present disclosure.
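  • As an illustration only (the disclosure does not mandate any particular detector), the following minimal sketch obtains facial key points with an off-the-shelf landmark model, here dlib's commonly distributed 68-point predictor; the library, the model file name, and the landmark indexing are assumptions of this sketch, not part of the disclosure.

      # Hypothetical sketch: facial key-point detection with dlib's 68-point
      # landmark model. Any pre-trained machine-learning model or ASM-based
      # method could be substituted, as the disclosure notes.
      import dlib
      import cv2

      detector = dlib.get_frontal_face_detector()
      # Model file distributed with dlib's examples; an assumption, not part
      # of the patent.
      predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

      def face_keypoints(image_bgr):
          """Return a list of (x, y) key points for the first detected face,
          or None if no face is found."""
          gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
          faces = detector(gray, 1)          # upsample once for small faces
          if not faces:
              return None
          shape = predictor(gray, faces[0])  # 68 landmarks: eyes, mouth, nose, contour
          return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]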
  • the characteristic value of each organ region can be calculated.
  • if the organ region is an eye region, the feature value of the organ region includes a closure value of the eye region; if the organ region is a mouth region, the feature value includes a degree of opening and closing of the mouth region; if the organ region is a nose region, the feature value includes a pitch angle and/or a rotation angle of the nose region.
  • the feature value of the nose area may also include the yaw angle.
  • the feature value may be determined according to the relative positions between multiple key points.
  • the closure value of the eye region may be the distance between the highest eyeball key point and the lowest eyeball key point.
  • a corresponding machine learning model can be trained for each feature value to calculate the feature value, or a method for calculating the feature value according to the key point can be determined according to requirements, and this embodiment of the present disclosure does not specifically limit this.
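  • For instance, a minimal sketch of the closure value described above, assuming key points are given as (x, y) coordinates; the optional normalization by eye width is an illustrative assumption, not a requirement of the disclosure:

      import math

      def eye_closure_value(top_pt, bottom_pt, left_pt=None, right_pt=None):
          # Vertical distance between the highest and lowest eye key points,
          # as described above; key points are (x, y) tuples.
          vertical = math.dist(top_pt, bottom_pt)
          # Optional normalization by eye width (an illustrative assumption
          # making the value robust to the user's distance from the camera).
          if left_pt is not None and right_pt is not None:
              width = math.dist(left_pt, right_pt)
              return vertical / width if width > 0 else 0.0
          return vertical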
  • each feature value set stores a fixed number of feature values, but this fixed number may differ from one feature value set to another.
  • each feature value set may be regarded as a queue, redundant feature values are deleted at the head of the queue, and new feature values are inserted at the end of the queue.
  • the storage space of each queue may be the same or different.
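  • A minimal sketch of this queue behavior, using Python's collections.deque with a maxlen so that inserting at the tail automatically discards the oldest value at the head; the set names and two of the capacities are illustrative assumptions:

      from collections import deque

      # One bounded queue per feature value set. Capacities follow the
      # example sizes given below (5 for the closure value set, 15 for the
      # angle sets); the mouth and yaw capacities are illustrative guesses.
      feature_sets = {
          "eye_closure": deque(maxlen=5),
          "mouth_opening": deque(maxlen=5),
          "pitch_angle": deque(maxlen=15),
          "rotation_angle": deque(maxlen=15),
          "yaw_angle": deque(maxlen=15),
      }

      def update_feature_set(name, value):
          # deque(maxlen=...) drops the oldest value at the head
          # automatically once the queue is full, matching the
          # delete-at-head / insert-at-tail behavior described above.
          feature_sets[name].append(value)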
  • the feature value of the eye region is the closure value, and the corresponding feature value set is a closure value set; the feature value of the mouth region is the degree of opening and closing, and the corresponding set is a mouth opening and closing degree set; the feature values of the nose region are the pitch angle, rotation angle, and yaw angle, corresponding to a pitch angle set, a rotation angle set, and a yaw angle set, respectively.
  • exemplarily, the closure value set stores up to 5 feature values, the pitch angle set stores up to 15 feature values, and the rotation angle set stores up to 15 feature values.
  • the unoccluded condition refers to a condition requiring that the organ regions identified in the image frame are not covered or occluded by other objects. Occlusion judgment is performed on the acquired image frames in real time; after the judgment of the current image frame is completed, if no further image frame is acquired, the judgment ends, and when another image frame is acquired, occlusion judgment continues on that frame.
  • determining an image frame in which the at least one organ region all satisfy the unoccluded condition as the target image frame may include: inputting the image frame into pre-trained occlusion judgment network models each matching one organ region, and obtaining, as the output of the occlusion judgment network models, the occlusion judgment result for each organ region in the image frame; if the occlusion judgment result for every organ region is unoccluded, it is determined that the at least one organ region all satisfy the unoccluded condition, and the image frame is determined to be the target image frame.
  • the occlusion judgment network model may be a machine learning model.
  • for each organ region, occlusion images corresponding to that organ region are obtained as training samples, and an occlusion judgment network model matching the organ region is trained.
  • inputting the image frame into the occlusion judgment network model matching an organ region yields the model's occlusion judgment result for that organ region in the image frame.
  • if the occlusion judgment results for all organ regions identified in the image frame are unoccluded, it is determined that every organ region identified in the image frame satisfies the unoccluded condition, and the image frame is determined as the target image frame.
  • performing occlusion judgment on each organ region in the image frame through the occlusion judgment network models can improve the accuracy of the occlusion judgment for each organ region, so that liveness detection is performed only when every organ region is unoccluded, thereby improving the accuracy of liveness detection.
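  • A hedged sketch of how the per-organ occlusion check might be wired together, assuming each organ region already has a trained binary occlusion classifier; the predict_occluded interface, the bounding-box format, and the crop step are illustrative assumptions, not the disclosure's required implementation:

      def is_target_frame(frame, organ_regions, occlusion_models):
          # Return True when every identified organ region is judged
          # unoccluded, i.e. the frame qualifies as a target image frame.
          # `frame` is assumed to be a numpy-style image array;
          # `organ_regions` maps organ name -> (top, bottom, left, right);
          # `occlusion_models` maps organ name -> a trained binary
          # classifier. All three interfaces are assumptions of this sketch.
          for organ, (top, bottom, left, right) in organ_regions.items():
              crop = frame[top:bottom, left:right]
              if occlusion_models[organ].predict_occluded(crop):  # assumed API
                  return False  # any occluded region disqualifies the frame
          return True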
  • S140: Use the image frame as a target image frame, calculate the feature value matching each organ region identified in the target image frame, update and store the calculated feature values into the feature value sets corresponding to the organ regions to which they belong, and execute S160.
  • each set of feature values is updated every time an image frame judged by occlusion is acquired.
  • the image frames are continuously acquired in real time, so that the feature value set is continuously updated, making the liveness detection time-sensitive.
  • S160: Perform detection of at least one target action condition on the feature value sets in real time, and when it is determined that the liveness detection end condition is satisfied, determine that the user is a living body if the detection results of the at least one target action condition are all verification passed.
  • the target action condition may refer to a condition that defines the action for detecting the captured user as a living body.
  • the target action condition may include at least one of the following: open-eye and close-eye action, open-mouth and close-mouth action, nodding action, and shaking head action.
  • the liveness detection end condition may refer to a condition determining when target action condition detection ends. Exemplarily, it is determined that the end condition is satisfied when the detection time of the target action conditions expires, when all target action conditions are detected successfully, or when a liveness detection stop instruction is received.
  • the detection result is either verification passed or verification failed.
  • usually the user is prompted on the display screen of the electronic device to make the action matching a target action condition within a set time. If the user is detected making the action, detection proceeds to the next target action condition; if the action is not detected within the set time, the detection result of that target action condition is determined to be verification failed, and detection still proceeds to the next target action condition.
  • when detection of all target action conditions is complete, liveness detection ends; if the detection result of every target action condition is verification passed, i.e., the user is detected to have made all target actions defined by the at least one target action condition, the user is determined to have passed liveness detection, i.e., the user is a living body.
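  • A rough sketch of this prompt-and-detect flow; the timeout value, the callback names, and the polling interval are illustrative assumptions:

      import time

      def run_liveness_detection(target_conditions, prompt_user, check_condition,
                                 timeout_s=5.0):
          # Prompt each target action in turn; the user is judged a living
          # body only if every condition's detection result is
          # "verification passed". `prompt_user` and `check_condition` are
          # assumed callbacks: the latter tests the matching feature value
          # sets, which are updated concurrently from the video frames.
          results = []
          for condition in target_conditions:
              prompt_user(condition)                   # e.g. "please blink"
              deadline = time.monotonic() + timeout_s  # the set time
              passed = False
              while time.monotonic() < deadline:
                  if check_condition(condition):
                      passed = True                    # verification passed
                      break
                  time.sleep(0.03)                     # poll roughly per frame
              results.append(passed)                   # failures still recorded
          return all(results)                          # living body only if all passed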
  • At least one target action condition can be detected according to the change of each feature value in the feature value set.
  • for example, if the target action condition is an eye open-close action, the maximum and minimum closure values of the eyes can be determined from the closure values stored in the closure value set matching the eye region; if, in addition, the change from the maximum closure value to the minimum closure value (or from the minimum to the maximum) is monotonic, the detection result of the eye open-close action is determined to be verification passed.
  • detecting at least one target action condition on the feature value sets in real time includes: if it is determined that the at least one feature value set matching a target action condition satisfies the target action condition, determining that the detection result of the target action condition is verification passed.
  • the feature value set that needs to be detected can be determined according to the target action condition currently being detected, to avoid detecting all feature value sets at the same time, reduce the amount of detected data, and improve the efficiency of target action detection.
  • the target actions in liveness detection include at least one of the following: opening and closing the eyes, opening and closing the mouth, nodding, and shaking the head; generally, only one action can be detected based on one feature value set.
  • the timing of the user's actions during liveness detection cannot be fully controlled; therefore, while the liveness detection prompts for a target action, the user may exhibit irrelevant actions unrelated to that target action, beyond the user's natural continuous actions (for example, blinking).
  • generally, when undergoing liveness detection, a user will only perform the one action indicated by the liveness detection prompt.
  • for example, the liveness detection prompts the user to make a nodding action, but an open-close action is detected instead.
  • in such cases, additional feature value sets need to be detected. For example, when a reasonable action other than the specified action is detected, the target action condition may still be determined as verification passed; when an unreasonable action other than the specified action is detected, it is determined that the user is not a living body. It can be understood that, for one target action condition, multiple feature value sets need to be detected, both to determine whether the user has made the target action and to determine whether the user has made an unreasonable action. When it is determined from the feature value set matching the unreasonable-action condition within the target action condition that the unreasonable-action condition is satisfied, the detection result of the target action condition is determined to be verification failed.
  • determining that the at least one feature value set matching the target action condition satisfies the target action condition includes: if the maximum feature value and the minimum feature value in each feature value set matching the target action condition satisfy the maximum value condition matching that feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between them satisfy the monotonic change condition, determining that the at least one feature value set matching the target action condition satisfies the target action condition.
  • the maximum value condition refers to a condition that limits the maximum value threshold and the minimum value threshold of the feature value set.
  • the maximum value threshold and the minimum value threshold defined in the maximum value condition of different feature value sets may be different or the same.
  • the monotonic change condition may refer to a condition in which the change in the size of continuous feature values within a set range is monotonic.
  • the maximum feature value, the minimum feature value, and at least one feature value between them satisfying the monotonic change condition means that the feature values decrease monotonically from the maximum feature value to the minimum feature value, or increase monotonically from the minimum feature value to the maximum feature value.
  • when both the maximum value condition and the monotonic change condition are satisfied, it is determined that the feature value set contains a maximum feature value and a minimum feature value connected by a continuous monotonic change, and therefore that the user has made a continuous, standard-compliant target action.
  • for example, the maximum feature value is the feature value when the eyes are open and the distance between the upper and lower eyelids is largest, and the minimum feature value is the feature value when the eyes are closed and that distance is smallest; if the distance between the upper and lower eyelids decreases monotonically from the maximum feature value to the minimum feature value, it is determined that the closure value set satisfies the eye open-close action.
  • by requiring the maximum value condition and the monotonic change condition to be satisfied simultaneously, it is ensured both that the user has made the action prompted by the target action condition and that the action is continuous, thereby achieving accurate detection of the target action condition.
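  • Putting the maximum value condition and the monotonic change condition together, a minimal sketch of the check on one feature value set, taken in queue order (oldest to newest); the thresholds are placeholders chosen for illustration:

      def satisfies_target_action(feature_set, max_threshold, min_threshold):
          # Check the maximum value condition and the monotonic change
          # condition on one feature value set (oldest value first).
          # The thresholds are illustrative placeholders.
          values = list(feature_set)
          if len(values) < 2:
              return False
          hi, lo = max(values), min(values)
          # Maximum value condition: both extremes must clear their thresholds.
          if hi < max_threshold or lo > min_threshold:
              return False
          i, j = values.index(hi), values.index(lo)
          start, end = min(i, j), max(i, j)
          segment = values[start:end + 1]
          # Monotonic change condition: values between the extremes change
          # monotonically (max -> min decreasing, or min -> max increasing).
          decreasing = all(a >= b for a, b in zip(segment, segment[1:]))
          increasing = all(a <= b for a, b in zip(segment, segment[1:]))
          return decreasing or increasing

      # Example with eye closure values for a blink (open -> closed):
      # satisfies_target_action([0.32, 0.30, 0.21, 0.12, 0.05],
      #                         max_threshold=0.25, min_threshold=0.10)  # True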
  • in this embodiment, the image frames of the video recording the user's face image are acquired in real time; for each image frame, occlusion judgment is first performed on the organ regions, and only when all organ regions satisfy the unoccluded condition are the feature value sets corresponding to at least one organ region updated in real time; at least one target action condition is then detected according to the updated feature value sets, and whether the user is a living body is judged according to the detection results of the at least one target action condition. This solves the related-art problem of liveness detection being passed with the occluded-photo method, reduces misjudgment in liveness detection, and improves its accuracy, thereby improving the security of identity authentication.
  • FIG. 2 is a flowchart of a living body detection method according to Embodiment 2 of the present disclosure.
  • the method of this embodiment may include S210 to S2120.
  • the living body detection, video, image frame, organ area, feature value, feature value set, unoccluded condition, living body detection end condition and detection result in this embodiment can all refer to the description in the above embodiment.
  • each organ region corresponds to at least one feature value set, and each feature value set includes at least one feature value, where the organ region includes at least one of the following: an eye region, a mouth region, and a nose region; the feature value set corresponding to the eye region includes a closure value set; the feature value set corresponding to the mouth region includes a mouth opening and closing degree set; and the feature value set corresponding to the nose region includes a pitch angle set and/or a rotation angle set.
  • S260: Use the image frame as a target image frame, calculate the feature values matching each organ region identified in the target image frame, update and store the calculated feature values into the feature value sets corresponding to the organ regions to which they belong, and execute S280.
  • determining that the at least one feature value set matching the target action condition satisfies the target action condition may include: if the maximum feature value and the minimum feature value in each feature value set matching the target action condition satisfy the maximum value condition matching that feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between them satisfy the monotonic change condition, determining that the at least one feature value set matching the target action condition satisfies the target action condition.
  • if the at least one feature value set matching the target action condition does not satisfy the target action condition, whether the target action condition is satisfied continues to be judged according to the at least one feature value set updated in real time; if the detection time of the target action condition ends and the at least one feature value set matching it never satisfied the condition, the detection result of the target action condition is determined to be verification failed.
  • the time end condition may refer to a condition limiting the detection duration of the currently detected target action condition. If, within the detection duration corresponding to the currently detected target action condition, the user never makes the facial action gesture defined by that condition, the detection result of the currently detected target action condition is determined to be verification failed. If the image frame acquired in real time at the current moment is still within the detection duration corresponding to the currently detected target action condition, detection of that condition continues according to the feature value sets updated for the next image frame acquired in real time.
  • FIG. 3 is a schematic structural diagram of a living body detection device according to an embodiment of the present disclosure. This embodiment can be applied to detect whether a user in a recorded video is a living body.
  • the device may be implemented in software and/or hardware, and the device may be configured in an electronic device.
  • the apparatus may include: an image frame acquisition module 310, an organ region identification module 320, a feature value set update module 330, and a living body detection module 340.
  • the image frame acquisition module 310 is configured to acquire the image frames in the video in real time when it is determined that the liveness detection start condition is satisfied;
  • the organ region identification module 320 is configured to identify at least one organ region of the user in the image frame, each organ region corresponds to at least one feature value set, and each feature value set includes at least one feature value;
  • the feature value set update module 330 is configured to determine an image frame in which the at least one organ region all satisfy the unoccluded condition as the target image frame, calculate the feature value matching each organ region identified in the target image frame, and update and store the calculated feature values into the feature value sets corresponding to the organ regions to which they belong;
  • the liveness detection module 340 is configured to detect at least one target action condition on the feature value sets in real time, and when it is determined that the liveness detection end condition is satisfied, determine that the user is a living body if the detection results of the at least one target action condition are all verification passed.
  • in this embodiment, the image frames of the video recording the user's face image are acquired in real time; occlusion judgment is first performed on the organ regions of each image frame, and only when all organ regions satisfy the unoccluded condition are the feature value sets corresponding to at least one organ region updated in real time; at least one target action condition is then detected according to the updated feature value sets, and whether the user is a living body is judged according to the detection results. This solves the related-art problem of liveness detection being passed with the occluded-photo method, reduces misjudgment in liveness detection, improves its accuracy, and thereby improves the security of identity authentication.
  • the feature value set update module 330 includes: an occlusion judgment module configured to input the image frame into pre-trained occlusion judgment network models each matching one organ region, and obtain, as the output of the occlusion judgment network models, the occlusion judgment result for each organ region in the image frame; an unoccluded condition judgment module configured to determine that the at least one organ region satisfy the unoccluded condition when the occlusion judgment result for every organ region is unoccluded; and a target image frame determination module configured to determine the image frame as the target image frame when it is recognized that the at least one organ region in the image frame satisfy the unoccluded condition.
  • the liveness detection module 340 includes a target action condition detection module configured to determine that the detection result of a target action condition is verification passed if it is determined that the at least one feature value set matching the target action condition satisfies the target action condition.
  • the target action condition detection module is configured to: if the maximum feature value and the minimum feature value in each feature value set matching the target action condition satisfy the maximum value condition matching that feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between them satisfy the monotonic change condition, determine that the at least one feature value set matching the target action condition satisfies the target action condition.
  • the organ region includes an eye region, and the feature value of the organ region includes a closure value of the eye region.
  • the organ region includes a mouth region, and the feature value of the organ region includes a degree of opening and closing of the mouth region.
  • the organ region includes a nose region, and the feature value of the organ region includes a pitch angle and/or a rotation angle of the nose region.
  • the liveness detection apparatus provided by the embodiments of the present disclosure belongs to the same concept as the liveness detection method provided in Embodiment 1; for technical details not described in detail here, refer to Embodiment 1. This embodiment has the same effects as Embodiment 1.
  • An embodiment of the present disclosure provides an electronic device.
  • FIG. 4 shows a schematic structural diagram of an electronic device (e.g., a terminal device or server) 400 suitable for implementing the embodiments of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Multimedia Player, PMP), and in-vehicle terminals (for example, car navigation terminals), as well as fixed terminals such as digital televisions (TV), desktop computers, and so on.
  • the electronic device shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 400 may include a processing device (such as a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes based on a program stored in a read-only memory (Read-Only Memory, ROM) 402 or a program loaded from a storage device 408 into a random access memory (Random Access Memory, RAM) 403.
  • the RAM 403 also stores various programs and data necessary for the operation of the electronic device 400.
  • the processing device 401, ROM 402, and RAM 403 are connected to each other via a bus 404.
  • An input/output (Input/Output, I/O) interface 405 is also connected to the bus 404.
  • the following devices can be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (Liquid Crystal Display, LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication device 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 4 shows an electronic device 400 having various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 409, or from the storage device 408, or from the ROM 402.
  • when the computer program is executed by the processing device 401, the above-described functions defined in the liveness detection method of the embodiments of the present disclosure are executed.
  • Embodiments of the present disclosure also provide a computer-readable storage medium.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium, or a combination of the two.
  • the computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of the above.
  • computer-readable storage media may include: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried.
  • This propagated data signal can take many forms, including electromagnetic signals, optical signals, or suitable combinations of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electric wires, optical cables, radio frequency (RF), etc., or a suitable combination of the foregoing.
  • the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: acquires the image frames in a video in real time when it is determined that the liveness detection start condition is satisfied; identifies at least one organ region of the user in the image frame, where each organ region corresponds to at least one feature value set and each feature value set includes at least one feature value; determines an image frame in which the at least one organ region all satisfy the unoccluded condition as a target image frame, calculates the feature value matching each organ region identified in the target image frame, and updates and stores the calculated feature values into the feature value sets corresponding to the organ regions to which they belong; and performs detection of at least one target action condition on the feature value sets in real time, and when it is determined that the liveness detection end condition is satisfied, determines that the user is a living body if the detection results of the at least one target action condition are all verification passed.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
  • the above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN) or a wide area network (Wide Area Network, WAN), or it can be connected to an external computer (for example, using an Internet service provider to connect through the Internet).
  • each block in the flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or a part of code contains one or more executable instructions for implementing a prescribed logical function.
  • the functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two blocks represented in succession may actually be executed in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • the modules described in the embodiments of the present disclosure may be implemented in software or hardware.
  • the name of the module does not constitute a limitation on the module itself under certain circumstances.
  • the image frame acquisition module may also be described as "a module that acquires image frames in a video in real time when it is determined that the liveness detection start condition is satisfied".

Abstract

A liveness detection method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining an image frame in a video in real time upon determining that a liveness detection starting condition is met (S110); recognizing at least one organ area of a user in the image frame, wherein each organ area corresponds to at least one feature value set, and each feature value set comprises at least one feature value (S120); determining an image frame in which the at least one organ area meets a condition of being uncovered as a target image frame, calculating a feature value matching each organ area recognized in the target image frame, updating and storing the calculated feature value to a feature value set corresponding to the organ area to which the feature value belongs (S140); and performing detection of at least one target action condition on the feature value set in real time, and when it is determined that a liveness detection ending condition is met, determining that the user is a live person if the detection result of the at least one target action condition is "verification succeeds" (S160).

Description

Liveness detection method and apparatus, electronic device, and storage medium
This application claims priority to Chinese patent application No. 201811549518.5, filed with the China Patent Office on December 18, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to image processing technology, for example, to a liveness detection method and apparatus, an electronic device, and a storage medium.
Background
People frequently need to perform identity authentication in a variety of activities to ensure the security of information. With the development of computers and networks, the popularity of Internet electronic devices has made identity authentication efficient. Usually, identity authentication needs to determine that the photographed subject is a living body.
Liveness judgment is often required in financial systems and face recognition systems. In this process, the user is generally required to complete basic facial actions, for example, a blinking action. Related technologies determine whether a user is a live user by detecting changes in the pose parameters of set face parts (for example, the eyes or the mouth) across multiple photos that include the user's face image, and use techniques such as face key point localization and face tracking to verify that the user is operating in person.
However, the above approach is easily defeated by photo attacks. For example, by holding a photo of a legitimate user, quickly placing a black pen over the user's eyes, and quickly removing it, an attacker can simulate the legitimate user's eye-closing action, causing the photo to be misjudged as a living body. The related-art methods therefore cannot guarantee the accuracy of liveness detection and reduce the security of identity authentication.
Summary
Embodiments of the present disclosure provide a liveness detection method and apparatus, an electronic device, and a storage medium, which can accurately identify a living body and improve the security of identity authentication.
In an embodiment, an embodiment of the present disclosure provides a liveness detection method, including:
acquiring image frames in a video in real time when it is determined that a liveness detection start condition is satisfied;
identifying at least one organ region of a user in the image frame, where each organ region corresponds to at least one feature value set, and each feature value set includes at least one feature value;
determining an image frame in which the at least one organ region all satisfy an unoccluded condition as a target image frame, calculating a feature value matching each organ region identified in the target image frame, and updating and storing the calculated feature value into the feature value set corresponding to the organ region to which the feature value belongs; and
performing detection of at least one target action condition on the feature value sets in real time, and, when it is determined that a liveness detection end condition is satisfied, determining that the user is a living body if the detection results of the at least one target action condition are all verification passed.
In an embodiment, an embodiment of the present disclosure further provides a liveness detection apparatus, including:
an image frame acquisition module configured to acquire image frames in a video in real time when it is determined that a liveness detection start condition is satisfied;
an organ region identification module configured to identify at least one organ region of a user in the image frame, where each organ region corresponds to at least one feature value set, and each feature value set includes at least one feature value;
a feature value set update module configured to determine an image frame in which the at least one organ region all satisfy an unoccluded condition as a target image frame, calculate a feature value matching each organ region identified in the target image frame, and update and store the calculated feature value into the feature value set corresponding to the organ region to which the feature value belongs; and
a liveness detection module configured to perform detection of at least one target action condition on the feature value sets in real time, and, when it is determined that a liveness detection end condition is satisfied, determine that the user is a living body if the detection results of the at least one target action condition are all verification passed.
In an embodiment, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors; and
a memory configured to store one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in the foregoing embodiments.
In an embodiment, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the foregoing embodiments.
Brief Description of the Drawings
FIG. 1 is a flowchart of a liveness detection method provided in Embodiment 1 of the present disclosure;
FIG. 2 is a flowchart of a liveness detection method provided in Embodiment 2 of the present disclosure;
FIG. 3 is a schematic structural diagram of a liveness detection apparatus provided in Embodiment 3 of the present disclosure;
FIG. 4 is a schematic structural diagram of an electronic device provided in Embodiment 4 of the present disclosure.
Detailed Description
The present disclosure is described below with reference to the drawings and embodiments. The embodiments described here serve only to explain the present disclosure, not to limit it. For ease of description, the drawings show only the parts related to the present disclosure rather than the complete structure.
Embodiment 1
FIG. 1 is a flowchart of a liveness detection method provided in Embodiment 1 of the present disclosure. This embodiment is applicable to detecting whether a user in a recorded video is a living body. The method may be performed by a liveness detection apparatus, which may be implemented in software and/or hardware and configured in an electronic device, typically a computer. As shown in FIG. 1, the method includes S110 to S160.
S110: When it is determined that a liveness detection start condition is satisfied, acquire image frames in a video in real time.
The liveness detection start condition may refer to a condition for determining to start performing the liveness detection operation. Exemplarily, when a liveness detection start instruction is received, it is determined that the liveness detection start condition is satisfied.
Generally speaking, a video is formed by a series of static image frames displayed continuously at a very fast speed. A video can thus be split into a series of image frames, each of which can serve as an image. In the embodiments of the present disclosure, an image frame is an image that includes the user's face. During liveness detection, a video including the user's face is recorded, so that information about the user's face can be obtained from the image frames of the video.
Liveness detection is usually performed in real time. The video is therefore a video being recorded in real time, and every recorded image frame can be acquired and processed in real time, ensuring the timeliness of liveness detection.
S120: Identify at least one organ region of the user in the image frame, where each organ region corresponds to at least one feature value set and each feature value set includes at least one feature value.
An organ region may refer to the region where an organ of the user's face is located; for example, the organ region includes at least one of the following: an eye region, a mouth region, and a nose region.
In liveness detection it is usually necessary to judge whether the face pose captured in the video conforms to the pose characteristics of a living body. In an embodiment, face pose action detection can be achieved through one or more key points of the face, and whether the user is a living body is judged based on the detected face pose actions. A key point may refer to a point of the face that has an identifying function; for example, the key points may include a left-eyeball key point, mouth-corner key points, nostril key points, brow-tail key points, and face-contour key points. The key points may be identified by a pre-trained machine learning model, or by other methods such as an Active Shape Model (ASM)-based method, which is not specifically limited in the embodiments of the present disclosure.
According to the key points identified in each organ region, the feature value of each organ region can be calculated. Optionally, if the organ region is an eye region, the feature value of the organ region includes a closure value of the eye region; if the organ region is a mouth region, the feature value includes a degree of opening and closing of the mouth region; if the organ region is a nose region, the feature value includes a pitch angle and/or a rotation angle of the nose region. In addition, the feature values of the nose region may also include a yaw angle. A feature value may be determined from the relative positions of multiple key points; for example, the closure value of the eye region may be the distance between the highest eyeball key point and the lowest eyeball key point. A corresponding machine learning model may also be trained for each feature value to compute it, or the method of computing a feature value from the key points may be determined as required, which is not specifically limited in the embodiments of the present disclosure.
The calculated feature value of each organ region in the image frame is added to the at least one feature value set matching the organ region to which it belongs, so the feature value set corresponding to each organ region is updated in real time. Each feature value set can store a fixed number of feature values, but the fixed number may differ between sets. In an embodiment, each feature value set may be regarded as a queue: redundant feature values are deleted at the head of the queue and new feature values are inserted at the tail, and the storage space of the queues may be the same or different.
In an embodiment, the feature value of the eye region is the closure value and the corresponding feature value set is a closure value set; the feature value of the mouth region is the degree of opening and closing and the corresponding set is a mouth opening and closing degree set; the feature values of the nose region are the pitch angle, rotation angle, and yaw angle, corresponding to a pitch angle set, a rotation angle set, and a yaw angle set, respectively. Exemplarily, the closure value set stores up to 5 feature values, the pitch angle set up to 15 feature values, and the rotation angle set up to 15 feature values.
S130: Judge whether every organ region identified in the image frame satisfies the unoccluded condition, until all image frames acquired from the video in real time have been judged; if so, execute S140; otherwise, execute S150.
The unoccluded condition refers to a condition requiring that the organ regions identified in the image frame are not covered or occluded by other objects. Occlusion judgment is performed on the acquired image frames in real time; after the judgment of the current image frame is completed, if no further image frame is acquired, the judgment ends, and when another image frame is acquired, occlusion judgment continues on that frame.
Optionally, determining an image frame in which the at least one organ region satisfies the unoccluded condition as the target image frame may include: inputting the image frame into a pre-trained occlusion judgment network model matching each organ region, and obtaining the occlusion judgment result for each organ region in the image frame output by the occlusion judgment network model; in a case where the occlusion judgment result for each organ region is unoccluded, determining that the at least one organ region satisfies the unoccluded condition; and in a case where each organ region identified in the image frame satisfies the unoccluded condition, determining that the image frame is the target image frame.
The occlusion judgment network model may be a machine learning model. In an embodiment, for each organ region, occlusion images corresponding to that organ region are obtained as training samples to train an occlusion judgment network model matching the organ region. Inputting an image frame into the occlusion judgment network model matching the organ region yields the occlusion judgment result output by the model for that organ region in the image frame. In a case where the occlusion judgment results for all organ regions identified in the image frame are unoccluded, it is determined that each organ region identified in the image frame satisfies the unoccluded condition, and the image frame is determined as the target image frame.
Performing occlusion judgment on each organ region in the image frame through the occlusion judgment network model can improve the accuracy of the occlusion judgment for each organ region, so that liveness detection is performed only when every organ region is unoccluded, thereby improving the accuracy of liveness detection.
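As a rough sketch of this per-organ occlusion gate, the fragment below assumes one pre-trained binary classifier per organ region exposing a hypothetical predict_unoccluded call, and a hypothetical cropping helper passed in by the caller; neither name is an API of the disclosure.

```python
def is_target_frame(frame, organ_models, crop_organ_region):
    """Return True only if every identified organ region is judged unoccluded."""
    for organ, model in organ_models.items():
        region = crop_organ_region(frame, organ)   # hypothetical cropping helper
        if not model.predict_unoccluded(region):   # hypothetical model interface
            return False   # one occluded region disqualifies the whole frame
    return True
```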
S140: Use the image frame as the target image frame, respectively calculate the feature value matching each organ region identified in the target image frame, update and store the calculated feature values into the feature value sets corresponding to the organ regions to which the feature values belong, and execute S160.
In an embodiment, every time an image frame that passes the occlusion judgment is acquired, all of the feature value sets are updated once. As the video is recorded in real time, image frames are continuously acquired, so the feature value sets are continuously updated, which keeps the liveness detection timely.
S150: Acquire the next image frame in real time.
S160: Perform detection of at least one target action condition on the feature value sets in real time, and, when it is determined that the liveness detection end condition is satisfied and the detection results of the at least one target action condition are all verification passed, determine that the user is a living body.
The target action condition may refer to a condition defining an action used to detect that the photographed user is a living body. Exemplarily, the target action condition may include at least one of the following: an eye opening and closing action, a mouth opening and closing action, a nodding action, and a head shaking action.
The liveness detection end condition may refer to a condition defining the end of target action condition detection. Exemplarily, it may be determined that the liveness detection end condition is satisfied when the detection time of the target action conditions ends, when the detection of the target action conditions succeeds, or when a liveness detection stop instruction is received. The detection results include verification passed and verification failed.
Usually, the user is prompted on the display screen of the electronic device to perform, within a set time, an action matching the target action condition. If the user is detected performing the action, detection continues with the next target action condition; if the user is not detected performing the action within the set time, the detection result of that target action condition is determined to be verification failed, and detection continues with the next target action condition. When the detection of all target action conditions is completed, the liveness detection ends. When the detection result of every target action condition is verification passed, that is, the user is detected performing all of the target actions defined by the at least one target action condition, it is determined that the user passes the liveness detection, that is, the user is a living body.
The detection of the at least one target action condition can be implemented according to how each feature value in a feature value set changes. For example, if the target action condition is the eye opening and closing action, the maximum closure value and the minimum closure value of the eyes can be determined from the closure values stored in the closure value set matching the eye region; if, at the same time, the change from the maximum closure value to the minimum closure value, or from the minimum closure value to the maximum closure value, is determined to be monotonic, the detection result of the eye opening and closing action is verification passed.
Optionally, the performing detection of at least one target action condition on the feature value sets in real time includes: if it is determined that the at least one feature value set matching the target action condition satisfies the target action condition, determining that the detection result of the target action condition is verification passed.
In an embodiment, the feature value sets that need to be detected can be determined according to the target action condition currently being detected, which avoids detecting all of the feature value sets at the same time, reduces the amount of data to be detected, and improves the efficiency of target action detection.
In an embodiment, the liveness detection includes at least one of the following: an eye opening and closing action, a mouth opening and closing action, a nodding action, and a head shaking action. Usually, an action can be detected based on only one feature value set. When an illegal person uses a photo or video of a legitimate user to simulate and display the actions in liveness detection, the timing of action detection in the liveness detection cannot be determined; as a result, when the liveness detection prompts for a target action, an irrelevant action unrelated to the target action may be displayed, and this irrelevant action is not a natural continuous action of the user (for example, blinking).
For example, during the detection of any action, blinking actions occur continually.
For another example, when undergoing liveness detection, a user will, according to the liveness detection prompt, make only the one action matching the liveness detection. The liveness detection prompts the user to nod, but it is detected that the user makes a mouth opening and closing action.
Therefore, in addition to the feature value set matching the action specified by the liveness detection, multiple additional feature value sets also need to be detected. For example, when a reasonable action other than the specified action is detected, the target action condition may still be determined as verification passed; for another example, when an unreasonable action other than the specified action is detected, it is determined that the user is not a living body. It can be understood that multiple feature value sets need to be detected under a target action condition, to judge whether the user performs the target action and, at the same time, whether the user performs an unreasonable action. When the feature value set matching an unreasonable action condition within the target action condition is determined to satisfy the unreasonable action condition, the detection result of the target action condition is determined to be verification failed.
By detecting the target action condition according to the feature value sets matching the target action condition, detecting all of the feature value sets at the same time can be avoided, the amount of data to be detected is reduced, and the efficiency of target action detection is improved; at the same time, passing the liveness detection while an unreasonable action is detected alongside the target action can be avoided, thereby improving the accuracy of the liveness detection.
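One possible reading of this check, sketched under assumptions: actions recognized from the feature value sets are reduced to labels, natural involuntary actions such as blinking are tolerated, and any other off-prompt action fails the condition. The label names and the three-way result are invented for illustration, not taken from the disclosure.

```python
NATURAL_ACTIONS = {"blink"}  # assumed set of tolerated involuntary actions

def check_prompted_action(prompted, observed_actions):
    """Pass the prompted action, tolerate natural actions, fail anything else."""
    for action in observed_actions:
        if action != prompted and action not in NATURAL_ACTIONS:
            return "verification_failed"   # unreasonable action detected
    return "verification_passed" if prompted in observed_actions else "pending"
```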
Optionally, the determining that the at least one feature value set matching the target action condition satisfies the target action condition includes: if the maximum feature value and the minimum feature value in each feature value set matching the target action condition satisfy the extremum condition matching each feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between the maximum feature value and the minimum feature value satisfy the monotonic change condition, determining that the at least one feature value set matching the target action condition satisfies the target action condition.
The extremum condition refers to a condition defining a maximum value threshold and a minimum value threshold for a feature value set. The maximum value thresholds and minimum value thresholds defined in the extremum conditions of different feature value sets may be different or the same. In a case where the maximum feature value determined from the feature value set is greater than the maximum value threshold and the minimum feature value is less than the minimum value threshold, it is determined that the feature value set satisfies the extremum condition.
The monotonic change condition may refer to a condition requiring that the variation in magnitude of consecutive feature values within a set range is monotonic. That the maximum feature value, the minimum feature value, and at least one feature value between them satisfy the monotonic change condition means that the magnitudes of the feature values from the maximum feature value to the minimum feature value decrease monotonically, or that the magnitudes of the feature values from the minimum feature value to the maximum feature value increase monotonically.
According to the extremum condition and the monotonic change condition, it is determined that the feature value set contains a maximum feature value and a minimum feature value and that the values between them change continuously and monotonically, which establishes that the user performed a continuous, standard-compliant target action. Exemplarily, in the eye opening and closing action, the maximum feature value is the feature value when the eyes are open and the distance between the upper and lower eyelids is largest, and the minimum feature value is the feature value when the eyes are closed and the distance between the upper and lower eyelids is smallest; if the variation from the maximum feature value to the minimum feature value corresponds to the distance between the upper and lower eyelids decreasing monotonically, it is determined that the closure value set satisfies the eye opening and closing action.
By determining that the target action condition is satisfied only when both the extremum condition and the monotonic change condition are satisfied, it is ensured both that the user performed the action prompted by the target action condition and that the user performed it continuously, thereby achieving accurate detection of the target action condition.
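A minimal sketch of the combined check, assuming a feature value set is handed over as an ordered list of its recent values and that the two thresholds are supplied per set; the concrete threshold values below are invented, as the disclosure does not specify them.

```python
def satisfies_target_condition(values, max_threshold, min_threshold):
    """Extremum condition plus monotonic change between the two extremes."""
    if len(values) < 2:
        return False
    vmax, vmin = max(values), min(values)
    # Extremum condition: the maximum must exceed its threshold and the
    # minimum must fall below its threshold.
    if not (vmax > max_threshold and vmin < min_threshold):
        return False
    i, j = values.index(vmax), values.index(vmin)
    span = values[min(i, j):max(i, j) + 1]
    # Monotonic change condition: the values between the two extremes must be
    # monotonically decreasing (max -> min) or increasing (min -> max).
    return all(a >= b for a, b in zip(span, span[1:])) or \
           all(a <= b for a, b in zip(span, span[1:]))
```

For instance, eye closure values such as [0.8, 0.6, 0.3, 0.1] pass with an assumed maximum threshold of 0.7 and minimum threshold of 0.2, while a jittery sequence such as [0.8, 0.4, 0.6, 0.1] fails the monotonicity check.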
In the embodiments of the present disclosure, the image frames in the video recording the user's face image are acquired in real time; occlusion judgment of the organ regions is performed first for each image frame; when all of the organ regions satisfy the unoccluded condition, the feature value set corresponding to at least one organ region is updated in real time; at least one target action condition is detected according to the updated feature value sets; and whether the user is a living body is determined according to the detection results of the at least one target action condition. This solves the problem in the related art of passing liveness detection by means of occluded photos, reduces misjudgment in liveness detection, and improves the accuracy of liveness detection, thereby improving the security of identity authentication.
Embodiment 2
FIG. 2 is a flowchart of a liveness detection method according to Embodiment 2 of the present disclosure.
The method of this embodiment may include S210 to S2120.
S210: When it is determined that the liveness detection start condition is satisfied, acquire the image frames in the video in real time.
For the liveness detection, video, image frames, organ regions, feature values, feature value sets, unoccluded condition, liveness detection end condition, detection results, and the like in this embodiment, reference may be made to the description in the above embodiment.
S220: Identify at least one organ region of the user in the image frame, where each organ region corresponds to at least one feature value set and each feature value set includes at least one feature value. The organ region includes at least one of the following: an eye region, a mouth region, and a nose region; the feature value set corresponding to the eye region includes the eye closure value set; the feature value set corresponding to the mouth region includes the mouth opening and closing degree set; and the feature value set corresponding to the nose region includes a pitch angle set and/or a rotation angle set.
S230: Input the image frame into the pre-trained occlusion judgment network model matching each organ region, and obtain the occlusion judgment result for each organ region in the image frame output by the occlusion judgment network model.
S240: In a case where the occlusion judgment result of each organ region is unoccluded, determine that the at least one organ region satisfies the unoccluded condition.
S250: Determine whether each organ region identified in the image frame satisfies the unoccluded condition, until all of the image frames acquired in real time from the video have been judged; if yes, execute S260; otherwise, execute S270.
S260: Use the image frame as the target image frame, respectively calculate the feature value matching each organ region identified in the target image frame, update and store the calculated feature values into the feature value sets corresponding to the organ regions to which the feature values belong, and execute S280.
S270: Acquire the next image frame in real time, and return to execute S250.
S280: Determine the currently detected target action condition from the at least one target action condition, and judge in real time whether the at least one feature value set matching the currently detected target action condition satisfies the target action condition; if yes, execute S290; otherwise, execute S2100.
Optionally, the determining that the at least one feature value set matching the target action condition satisfies the target action condition may include: if the maximum feature value and the minimum feature value in each feature value set matching the target action condition satisfy the extremum condition matching each feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between the maximum feature value and the minimum feature value satisfy the monotonic change condition, determining that the at least one feature value set matching the target action condition satisfies the target action condition.
In an embodiment, if the at least one feature value set matching the target action condition does not satisfy the target action condition, whether the target action condition is satisfied continues to be judged according to the at least one feature value set updated in real time. If the detection time of the target action condition ends and the at least one feature value set matching the target action condition has never fully satisfied the target action condition, the detection result of the target action condition is determined to be failure.
S290: Determine that the detection result of the target action condition is verification passed.
S2100: Judge whether the time end condition corresponding to the currently detected target action condition is satisfied; if yes, execute S2110; otherwise, execute S280.
The time end condition may refer to a condition limiting the detection duration of the currently detected target action condition. If the user never performs the facial action posture defined by the target action condition within the detection duration corresponding to the currently detected target action condition, the detection result of the currently detected target action condition is determined to be failure. If the image frame acquired in real time at the current moment is still within the detection duration corresponding to the currently detected target action condition, detection of the currently detected target action condition continues according to the multiple feature value sets correspondingly updated with the next image frame acquired in real time.
S2110: Determine that the detection result of the target action condition is verification failed.
S2120: When it is determined that the liveness detection end condition is satisfied, determine whether the user is a living body according to the detection results of the at least one target action condition.
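Taken together, S210 through S2120 can be summarized in a compact sketch. All helpers are passed in as parameters and are hypothetical (the occlusion gate and the condition check correspond to the sketches above); the 10-second per-condition timeout is likewise invented for illustration.

```python
import time

def liveness_detection(conditions, feature_sets,
                       get_frame, identify_regions, passes_occlusion, compute_feature,
                       timeout_s=10.0):
    results = {}
    for cond in conditions:                         # one prompted target action at a time
        deadline = time.monotonic() + timeout_s     # time end condition (S2100)
        results[cond.name] = "verification_failed"  # default on timeout (S2110)
        while time.monotonic() < deadline:
            frame = get_frame()                     # S210 / S270: real-time acquisition
            regions = identify_regions(frame)       # S220: eye / mouth / nose regions
            if not passes_occlusion(frame, regions):    # S230-S250: occlusion judgment
                continue                            # skip frames with occluded regions
            for region in regions:                  # S260: refresh the feature value sets
                feature_sets.update(region.name, compute_feature(frame, region))
            if cond.check(feature_sets):            # S280: target action condition met?
                results[cond.name] = "verification_passed"   # S290
                break
    # S2120: the user is a living body only if every condition was verified.
    return all(r == "verification_passed" for r in results.values())
```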
Embodiment 3
FIG. 3 is a schematic structural diagram of a liveness detection apparatus according to an embodiment of the present disclosure. This embodiment is applicable to detecting whether a user in a recorded video is a living body. The apparatus may be implemented in software and/or hardware, and may be configured in an electronic device. As shown in FIG. 3, the apparatus may include an image frame acquisition module 310, an organ region identification module 320, a feature value set update module 330, and a liveness detection module 340.
The image frame acquisition module 310 is configured to acquire the image frames in the video in real time when it is determined that the liveness detection start condition is satisfied;
the organ region identification module 320 is configured to identify at least one organ region of the user in the image frame, where each organ region corresponds to at least one feature value set and each feature value set includes at least one feature value;
the feature value set update module 330 is configured to determine an image frame in which the at least one organ region satisfies the unoccluded condition as the target image frame, respectively calculate the feature value matching each organ region identified in the target image frame, and update and store the calculated feature values into the feature value sets corresponding to the organ regions to which the feature values belong;
the liveness detection module 340 is configured to perform detection of at least one target action condition on the feature value sets in real time and, when it is determined that the liveness detection end condition is satisfied and the detection results of the at least one target action condition are all verification passed, determine that the user is a living body.
In the embodiments of the present disclosure, the image frames in the video recording the user's face image are acquired in real time; occlusion judgment of the organ regions is performed first for each image frame; when all of the organ regions satisfy the unoccluded condition, the feature value set corresponding to at least one organ region is updated in real time; at least one target action condition is detected according to the updated feature value sets; and whether the user is a living body is determined according to the detection results of the at least one target action condition. This solves the problem in the related art of passing liveness detection by means of occluded photos, reduces misjudgment in liveness detection, and improves the accuracy of liveness detection, thereby improving the security of identity authentication.
In an embodiment, the feature value set update module 330 includes: an occlusion judgment module configured to input the image frame into the pre-trained occlusion judgment network model matching each organ region, and obtain the occlusion judgment result for each organ region in the image frame output by the occlusion judgment network model; an unoccluded condition judgment module configured to determine that the at least one organ region satisfies the unoccluded condition in a case where the occlusion judgment result of each organ region is unoccluded; and a target image frame determination module configured to determine that the image frame is the target image frame in a case where it is recognized that the at least one organ region in the image frame satisfies the unoccluded condition.
In an embodiment, the liveness detection module 340 includes a target action condition detection module configured to determine that the detection result of the target action condition is verification passed if it is determined that the at least one feature value set matching the target action condition satisfies the target action condition.
In an embodiment, the target action condition detection module is configured to: if the maximum feature value and the minimum feature value in each feature value set matching the target action condition satisfy the extremum condition matching each feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between the maximum feature value and the minimum feature value satisfy the monotonic change condition, determine that the at least one feature value set matching the target action condition satisfies the target action condition.
In an embodiment, the organ region includes an eye region, and the feature value of the organ region includes the closure value of the eye region.
In an embodiment, the organ region includes a mouth region, and the feature value of the organ region includes the opening and closing degree of the mouth region.
In an embodiment, the organ region includes a nose region, and the feature value of the organ region includes the pitch angle and/or the rotation angle of the nose region.
The liveness detection apparatus provided by the embodiments of the present disclosure belongs to the same concept as the liveness detection method provided in Embodiment 1. For technical details not described in detail in the embodiments of the present disclosure, reference may be made to Embodiment 1, and the embodiments of the present disclosure have the same effects as Embodiment 1.
Embodiment 4
An embodiment of the present disclosure provides an electronic device. Referring to FIG. 4, a schematic structural diagram of an electronic device (e.g., a terminal device or a server) 400 suitable for implementing the embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Personal Multimedia Player, PMP), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (television, TV) and desktop computers. The electronic device shown in FIG. 4 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 4, the electronic device 400 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (Read-Only Memory, ROM) 402 or a program loaded from a storage apparatus 408 into a random access memory (Random Access Memory, RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing apparatus 401, the ROM 402, and the RAM 403 are connected to one another through a bus 404. An input/output (Input/Output, I/O) interface 405 is also connected to the bus 404.
Generally, the following apparatuses may be connected to the I/O interface 405: an input apparatus 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 407 including, for example, a liquid crystal display (Liquid Crystal Display, LCD), a speaker, and a vibrator; a storage apparatus 408 including, for example, a magnetic tape and a hard disk; and a communication apparatus 409. The communication apparatus 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data. Although FIG. 4 shows an electronic device 400 having various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown. More or fewer apparatuses may alternatively be implemented or provided.
According to the embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product including a computer program carried on a computer-readable medium, where the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 409, installed from the storage apparatus 408, or installed from the ROM 402. When the computer program is executed by the processing apparatus 401, the above functions defined in the liveness detection method of the embodiments of the present disclosure are executed.
Embodiment 5
An embodiment of the present disclosure further provides a computer-readable storage medium. The computer-readable medium may be a computer-readable signal medium, a computer-readable storage medium, or a combination of the two. The computer-readable storage medium may include, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of the above. More specific examples of the computer-readable storage medium may include: an electrical connection having one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or a suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take multiple forms, including an electromagnetic signal, an optical signal, or a suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: an electric wire, an optical cable, radio frequency (Radio Frequency, RF), etc., or a suitable combination of the above.
The computer-readable medium may be included in the electronic device, or may exist independently without being assembled into the electronic device.
The computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquire the image frames in the video in real time when it is determined that the liveness detection start condition is satisfied; identify at least one organ region of the user in the image frame, where each organ region corresponds to at least one feature value set and each feature value set includes at least one feature value; determine an image frame in which the at least one organ region satisfies the unoccluded condition as the target image frame, respectively calculate the feature value matching each organ region identified in the target image frame, and update and store the calculated feature values into the feature value sets corresponding to the organ regions to which the feature values belong; and perform detection of at least one target action condition on the feature value sets in real time and, when it is determined that the liveness detection end condition is satisfied and the detection results of the at least one target action condition are all verification passed, determine that the user is a living body.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; the programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN) or a wide area network (Wide Area Network, WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to one or more embodiments of the present disclosure. Each block in a flowchart or block diagram may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing a specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not, in some cases, constitute a limitation on the module itself; for example, the image frame acquisition module may also be described as "a module that acquires the image frames in the video in real time when it is determined that the liveness detection start condition is satisfied".

Claims (16)

  1. A liveness detection method, comprising:
    when it is determined that a liveness detection start condition is satisfied, acquiring image frames in a video in real time;
    identifying at least one organ region of a user in the image frames, wherein each organ region corresponds to at least one feature value set, and each feature value set comprises at least one feature value;
    determining an image frame in which the at least one organ region satisfies an unoccluded condition as a target image frame, respectively calculating a feature value matching each organ region identified in the target image frame, and updating and storing the calculated feature value into the feature value set corresponding to the organ region to which the feature value belongs; and
    performing detection of at least one target action condition on the feature value sets in real time and, when it is determined that a liveness detection end condition is satisfied and detection results of the at least one target action condition are all verification passed, determining that the user is a living body.
  2. The method according to claim 1, wherein the determining an image frame in which the at least one organ region satisfies the unoccluded condition as the target image frame comprises:
    inputting the image frame into a pre-trained occlusion judgment network model matching each organ region, and obtaining an occlusion judgment result for each organ region in the image frame output by the occlusion judgment network model;
    in a case where the occlusion judgment result of each organ region is unoccluded, determining that the at least one organ region satisfies the unoccluded condition; and
    in a case where it is recognized that the at least one organ region in the image frame satisfies the unoccluded condition, determining that the image frame is the target image frame.
  3. The method according to claim 1, wherein the performing detection of at least one target action condition on the feature value sets in real time comprises:
    in response to determining that at least one feature value set matching the target action condition satisfies the target action condition, determining that a detection result of the target action condition is verification passed.
  4. The method according to claim 3, wherein the determining that at least one feature value set matching the target action condition satisfies the target action condition comprises:
    in response to a maximum feature value and a minimum feature value in each feature value set matching the target action condition satisfying an extremum condition matching each feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between the maximum feature value and the minimum feature value satisfying a monotonic change condition, determining that the at least one feature value set matching the target action condition satisfies the target action condition.
  5. The method according to claim 1, wherein the organ region comprises an eye region, and the feature value of the organ region comprises a closure value of the eye region.
  6. The method according to claim 1, wherein the organ region comprises a mouth region, and the feature value of the organ region comprises an opening and closing degree of the mouth region.
  7. The method according to claim 1, wherein the organ region comprises a nose region, and the feature value of the organ region comprises at least one of the following: a pitch angle of the nose region and a rotation angle of the nose region.
  8. A liveness detection apparatus, comprising:
    an image frame acquisition module configured to acquire image frames in a video in real time when it is determined that a liveness detection start condition is satisfied;
    an organ region identification module configured to identify at least one organ region of a user in the image frames, wherein each organ region corresponds to at least one feature value set, and each feature value set comprises at least one feature value;
    a feature value set update module configured to determine an image frame in which the at least one organ region satisfies an unoccluded condition as a target image frame, respectively calculate a feature value matching each organ region identified in the target image frame, and update and store the calculated feature value into the feature value set corresponding to the organ region to which the feature value belongs; and
    a liveness detection module configured to perform detection of at least one target action condition on the feature value sets in real time and, when it is determined that a liveness detection end condition is satisfied and detection results of the at least one target action condition are all verification passed, determine that the user is a living body.
  9. The apparatus according to claim 8, wherein the feature value set update module comprises:
    an occlusion judgment module configured to input the image frame into a pre-trained occlusion judgment network model matching each organ region, and obtain an occlusion judgment result for each organ region in the image frame output by the occlusion judgment network model;
    an unoccluded condition judgment module configured to determine that the at least one organ region satisfies the unoccluded condition in a case where the occlusion judgment result of each organ region is unoccluded; and
    a target image frame determination module configured to determine that the image frame is the target image frame in a case where it is recognized that the at least one organ region in the image frame satisfies the unoccluded condition.
  10. The apparatus according to claim 8, wherein the liveness detection module comprises:
    a target action condition detection module configured to determine, in response to determining that at least one feature value set matching the target action condition satisfies the target action condition, that a detection result of the target action condition is verification passed.
  11. The apparatus according to claim 10, wherein the target action condition detection module is configured to:
    in response to a maximum feature value and a minimum feature value in each feature value set matching the target action condition satisfying an extremum condition matching each feature value set, and the maximum feature value, the minimum feature value, and at least one feature value between the maximum feature value and the minimum feature value satisfying a monotonic change condition, determine that the at least one feature value set matching the target action condition satisfies the target action condition.
  12. The apparatus according to claim 8, wherein the organ region comprises an eye region, and the feature value of the organ region comprises a closure value of the eye region.
  13. The apparatus according to claim 8, wherein the organ region comprises a mouth region, and the feature value of the organ region comprises an opening and closing degree of the mouth region.
  14. The apparatus according to claim 8, wherein the organ region comprises a nose region, and the feature value of the organ region comprises at least one of the following: a pitch angle of the nose region and a rotation angle of the nose region.
  15. An electronic device, comprising:
    one or more processors; and
    a memory configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
  16. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
PCT/CN2019/095081 2018-12-18 2019-07-08 Liveness detection method and apparatus, electronic device, and storage medium WO2020124994A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811549518.5A CN109684974A (en) 2018-12-18 2018-12-18 Biopsy method, device, electronic equipment and storage medium
CN201811549518.5 2018-12-18

Publications (1)

Publication Number Publication Date
WO2020124994A1 (en) 2020-06-25

Family

ID=66186223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/095081 WO2020124994A1 (en) 2018-12-18 2019-07-08 Liveness detection method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN109684974A (en)
WO (1) WO2020124994A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684974A (en) * 2018-12-18 2019-04-26 北京字节跳动网络技术有限公司 Biopsy method, device, electronic equipment and storage medium
CN110334637A (en) * 2019-06-28 2019-10-15 百度在线网络技术(北京)有限公司 Human face in-vivo detection method, device and storage medium
CN112183173B (en) * 2019-07-05 2024-04-09 北京字节跳动网络技术有限公司 Image processing method, device and storage medium
CN114973347B (en) * 2021-04-22 2023-07-21 中移互联网有限公司 Living body detection method, device and equipment
CN113971841A (en) * 2021-10-28 2022-01-25 北京市商汤科技开发有限公司 Living body detection method and device, computer equipment and storage medium


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9985963B2 (en) * 2015-02-15 2018-05-29 Beijing Kuangshi Technology Co., Ltd. Method and system for authenticating liveness face, and computer program product thereof
CN107330914B (en) * 2017-06-02 2021-02-02 广州视源电子科技股份有限公司 Human face part motion detection method and device and living body identification method and system
WO2019127365A1 (en) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Face living body detection method, electronic device and computer program product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875452A (en) * 2017-05-11 2018-11-23 北京旷视科技有限公司 Face identification method, device, system and computer-readable medium
CN108415653A (en) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 Screen locking method and device for terminal device
CN108446651A (en) * 2018-03-27 2018-08-24 百度在线网络技术(北京)有限公司 Face identification method and device
CN109684974A (en) * 2018-12-18 2019-04-26 北京字节跳动网络技术有限公司 Liveness detection method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109684974A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
WO2020124994A1 (en) Liveness detection method and apparatus, electronic device, and storage medium
US10635893B2 (en) Identity authentication method, terminal device, and computer-readable storage medium
WO2020124993A1 (en) Liveness detection method and apparatus, electronic device, and storage medium
US20220076000A1 (en) Image Processing Method And Apparatus
US20220405986A1 (en) Virtual image generation method, device, terminal and storage medium
WO2017185630A1 (en) Emotion recognition-based information recommendation method and apparatus, and electronic device
WO2016169432A1 (en) Identity authentication method and device, and terminal
US20210342427A1 (en) Electronic device for performing user authentication and operation method therefor
CN109993150B (en) Method and device for identifying age
CN109670444B (en) Attitude detection model generation method, attitude detection device, attitude detection equipment and attitude detection medium
WO2021083069A1 (en) Method and device for training face swapping model
CN111783626B (en) Image recognition method, device, electronic equipment and storage medium
WO2018103416A1 (en) Method and device for detecting facial image
US11922721B2 (en) Information display method, device and storage medium for superimposing material on image
US20200275271A1 (en) Authentication of a user based on analyzing touch interactions with a device
WO2022095674A1 (en) Method and apparatus for operating mobile device
WO2021179719A1 (en) Face detection method, apparatus, medium, and electronic device
CN109934191A (en) Information processing method and device
CN110276313B (en) Identity authentication method, identity authentication device, medium and computing equipment
WO2020244160A1 (en) Terminal device control method and apparatus, computer device, and readable storage medium
KR20190109654A (en) Electronic device and method for measuring heart rate
WO2020007191A1 (en) Method and apparatus for living body recognition and detection, and medium and electronic device
WO2023006033A1 (en) Speech interaction method, electronic device, and medium
WO2021073204A1 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN111079472A (en) Image comparison method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19899480

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04/10/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19899480

Country of ref document: EP

Kind code of ref document: A1