WO2020015477A1 - 一种人脸识别方法及终端设备 (A face recognition method and terminal device) - Google Patents

一种人脸识别方法及终端设备 (A face recognition method and terminal device)

Info

Publication number
WO2020015477A1
WO2020015477A1 · PCT/CN2019/090705 · CN2019090705W
Authority
WO
WIPO (PCT)
Prior art keywords
face image
detection
face
specified
occlusion
Prior art date
Application number
PCT/CN2019/090705
Other languages
English (en)
French (fr)
Inventor
徐崴
李亮
Original Assignee
阿里巴巴集团控股有限公司
Priority date
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司
Publication of WO2020015477A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/38 Payment protocols; Details thereof
    • G06Q 20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401 Transaction verification
    • G06Q 20/4014 Identity check for transactions
    • G06Q 20/40145 Biometric identity checks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Definitions

  • the embodiments of the present specification relate to the technical field of face recognition, and in particular, to a face recognition method and a terminal device.
  • Face payment is an emerging electronic payment method.
  • the payment method consists of two parts: face recognition to log in to the user's wallet account, and deduction from the wallet to complete the payment process.
  • the process of face recognition and login to the user's wallet account is to scan and/or photograph the user's face, and compare the captured face picture with the reference picture in the user's wallet account to complete the identification and verification of the user's identity.
  • in the current face payment method, during the process of scanning and/or photographing the user's face, many factors may degrade the captured face picture, thereby affecting the success rate of user identity authentication and payment.
  • the embodiments of the present specification provide a face recognition method and a terminal device for identifying factors affecting a face image, and ensuring the success rate of using the face image for user identity authentication and face payment.
  • a face recognition method including:
  • the specified detection operation including at least one of glasses detection, occlusion detection, and face quality evaluation detection;
  • a terminal device including:
  • a first execution module configured to perform a specified detection operation on the face image, where the specified detection operation includes at least one of glasses detection, occlusion detection, and face quality evaluation detection;
  • a second execution module is configured to execute a reminder operation that matches the detection result when the detection result of the specified detection operation is abnormal.
  • a terminal device including: a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • the computer program, when executed by the processor, implements the following steps:
  • the specified detection operation including at least one of glasses detection, occlusion detection, and face quality evaluation detection;
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the following steps are implemented:
  • the specified detection operation including at least one of glasses detection, occlusion detection, and face quality evaluation detection;
  • a specified detection operation is performed on a face image, and the specified detection operation includes at least one of glasses detection, occlusion detection, and face quality evaluation detection.
  • when the detection result of the specified detection operation is abnormal, the factor affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to exclude the factor affecting the face image, ensuring the success rate of subsequent user identity authentication and face payment using the face image.
  • FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present specification
  • FIG. 2 is a schematic diagram of an actual application scenario of a face recognition method provided by an embodiment of the present specification
  • FIG. 3 is a schematic flowchart of an actual application scenario of a face recognition method provided by an embodiment of the present specification
  • FIG. 4 is a block diagram of a system for implementing an actual application scenario of a face recognition method according to an embodiment of the present specification
  • FIG. 5 is a first structural block diagram of a terminal device provided by an embodiment of the present specification.
  • FIG. 6 is a second structural block diagram of a terminal device according to an embodiment of the present specification.
  • Embodiments of the present invention provide a face recognition method and a terminal device, which are used to identify factors affecting a face image and ensure the success rate of using the face image for user identity authentication and face payment.
  • An embodiment of the present invention provides a face recognition method.
  • the execution subject of the method may be, but is not limited to, a terminal device or a device or system that can be configured to execute the method provided by the embodiment of the present invention.
  • the following describes the method by taking a terminal device capable of executing it as the execution subject. It can be understood that using a terminal device as the execution subject is only an exemplary description and should not be construed as limiting the method.
  • FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present invention.
  • the method in FIG. 1 may be executed by a terminal device. As shown in FIG. 1, the method may include:
  • Step 110 Obtain a face image to be identified.
  • An implementation manner of obtaining the face image to be identified may be to obtain the face image to be identified by scanning, or to obtain the face image to be identified by shooting.
  • the embodiments of the present invention are not specifically limited.
  • Step 120 Perform a specified detection operation on the face image.
  • the specified detection operation includes at least one of glasses detection, occlusion detection, and face quality evaluation detection.
  • the glasses detection can be understood as glasses reflection detection and / or large-frame glasses detection.
  • the glasses detection may also be the detection of any glasses that can affect image collection in the prior art, which is not specifically limited in the embodiment of the present invention.
  • the occlusion detection can be understood as the detection of face occlusion.
  • the face quality evaluation detection can be understood as the detection of the blur degree and light intensity of the face image.
  • Step 130: When the detection result of the specified detection operation is abnormal, perform a reminder operation that matches the detection result.
  • the detection result needs to be interpreted according to the specified detection operation.
  • if the specified detection operation is glasses detection, the detection result may be a reflection detection result; if the specified detection operation is occlusion detection, the detection result may be an occlusion detection result; if the specified detection operation is face quality evaluation detection, the detection result may be a face quality evaluation detection result.
  • that the detection result of the specified detection operation is abnormal can be understood as follows: if the detection result is a numerical value and the value is greater than a threshold, the detection result is determined to be abnormal; otherwise, it is determined to be normal.
  • for example, if the detection result is a reflection detection result expressed as a reflection probability, and the reflection probability is greater than the threshold, the reflection detection result is determined to be abnormal; if the detection result is an occlusion detection result expressed as an occlusion probability, and the occlusion probability is greater than the threshold, the occlusion detection result is determined to be abnormal; if the detection result is a face quality evaluation detection result expressed as a quality problem probability, and that probability is greater than the threshold, the face quality evaluation detection result is determined to be abnormal.
  • the threshold may be determined according to the actual situation of the actual application scenario, which is not limited in the embodiment of the present invention.
  • the reminder operation needs to be determined according to the detection result.
  • if the detection result is a reflection detection result, the reminder operation may be an operation to remind the user to remove the glasses; if the detection result is an occlusion detection result, the reminder operation may be an operation to remind the user to remove the occlusion; if the detection result is a face quality evaluation detection result, the reminder operation may be an operation to remind the user to adjust the image acquisition angle.
  • for example, if the face quality evaluation detection result is out-of-focus blur, the user is reminded to focus when taking the picture; if it is motion blur, the user is reminded not to shake when taking the picture; if it is insufficient light, the user is reminded to turn on an illumination light or choose a well-lit location to take the picture.
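As a minimal sketch of the threshold logic and reminder matching above (the threshold value, detection-type keys, and reminder texts are illustrative assumptions; the specification leaves the threshold to the actual application scenario):

```python
# Hypothetical sketch of step 130: deciding whether a detection result is
# abnormal and choosing a matching reminder. All values below are
# illustrative assumptions, not values from the specification.

REMINDERS = {
    "reflection": "Please take off your glasses.",
    "occlusion": "Please remove the object blocking your face.",
    "defocus_blur": "Please hold still and let the camera focus.",
    "motion_blur": "Please keep the device steady while shooting.",
    "insufficient_light": "Please turn on a light or move to a brighter spot.",
}

def remind_if_abnormal(detection_type, probability, threshold=0.5):
    """Return the matching reminder text if the result is abnormal, else None."""
    if probability > threshold:          # abnormal: probability exceeds threshold
        return REMINDERS[detection_type]
    return None                          # normal: no reminder needed
```

A normal result (probability at or below the threshold) yields no reminder, which is what lets the flow proceed to the next specified detection operation.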
  • a specified detection operation is performed on a face image, and the specified detection operation includes at least one of glasses detection, occlusion detection, and face quality evaluation detection.
  • when the detection result of the specified detection operation is abnormal, the factor affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to exclude the factor affecting the face image, ensuring the success rate of subsequent user identity authentication and face payment using the face image.
  • step 120 may be specifically implemented as:
  • the reflection detection model is obtained based on a predetermined number of face image samples with reflection and / or face image samples without reflection.
  • the face image sample with reflection may include at least one of a face image sample with glasses reflection and a face image sample with black-frame glasses; the face image sample without reflection may include at least one of a face image sample wearing ordinary glasses and a face image sample without glasses.
  • for example, the face image samples with reflection include face image samples with glasses reflection and face image samples with black-frame glasses, and the face image samples without reflection include face image samples wearing ordinary glasses and face image samples without glasses.
  • the reflection detection model can be obtained as follows: first, the training data includes four types of face image samples, namely face image samples with reflective glasses, face image samples with black-rimmed glasses, face image samples wearing ordinary glasses, and face image samples without glasses, with one thousand images selected for each category; then, the reflection detection model is obtained by training on the one thousand face image samples of each of the four categories. How to train a reflection detection model on these samples belongs to the prior art and is not described in the embodiments of the present invention.
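As a toy illustration of this training setup, the sketch below stands in for the reflection detection model with a nearest-centroid classifier over invented feature vectors, using the four sample categories and one thousand samples per category described above; the feature representation and classifier are assumptions, since the specification treats the actual training as prior art:

```python
# Toy stand-in for the reflection detection model: four classes of face
# image samples, each reduced to an (invented) feature vector, fitted with
# a nearest-centroid classifier. Purely illustrative.
import random

CLASSES = ["reflective_glasses", "black_rimmed_glasses",
           "ordinary_glasses", "no_glasses"]

def make_samples(label, center, n=1000):
    """Simulate n noisy feature vectors for one sample category."""
    return [(label, [c + random.gauss(0, 0.1) for c in center]) for _ in range(n)]

def fit_centroids(samples):
    """Compute one mean feature vector (centroid) per class."""
    sums, counts = {}, {}
    for label, vec in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    """Classify a feature vector by its nearest class centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], vec))
```

In practice the patent presumes a trained image classifier; any standard model fitted on the four labelled sample sets would play the same role, and the same scheme carries over to the occlusion and face quality models described later.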
  • a reflection detection model is obtained by training on a predetermined number of face image samples with reflection and/or without reflection, and the face image is then used as the input of the reflection detection model to obtain the output reflection detection result.
  • the presence or absence of reflection factors in the face image is determined according to the reflection detection result, which effectively prevents the collected face image from being affected by reflection, thereby ensuring the success rate of subsequent user identity authentication and face payment using the face image.
  • the specified detection operation is occlusion detection
  • step 120 may be specifically implemented as:
  • the occlusion detection model is obtained based on a predetermined number of face image samples with occlusion and / or face image samples without occlusion.
  • the face image sample with occlusion may include at least one of a face image sample with a hand blocking the face, a face image sample with bangs blocking the face, a face image sample with a hat blocking the face, and a face image sample with a mask blocking the face.
  • the face image samples with occlusion include face image samples with hands blocking faces, face image samples with bangs blocking faces, face image samples with hat blocking faces, and face image samples with mask blocking faces.
  • the occlusion detection model can be obtained as follows: first, the training data includes five types of face image samples, namely face image samples with a hand blocking the face, face image samples with bangs blocking the face, face image samples with a hat blocking the face, face image samples with a mask blocking the face, and face image samples with no occlusion, with one thousand images selected for each category; then, the occlusion detection model is obtained by training on the one thousand face image samples of each of the five categories. How to train an occlusion detection model on these samples belongs to the prior art and is not described in the embodiments of the present invention.
  • an occlusion detection model is obtained by training on a predetermined number of face image samples with occlusion and/or without occlusion, and the face image is then used as the input of the occlusion detection model to obtain the output occlusion detection result.
  • the presence or absence of occlusion factors in the face image is determined according to the occlusion detection result, which effectively prevents the collected face image from being affected by occlusion, thereby ensuring the success rate of subsequent user identity authentication and face payment using the face image.
  • step 120 may be specifically implemented as:
  • the face quality evaluation detection model is obtained based on a predetermined number of blurred face image samples and/or clear face image samples.
  • the blurred face image sample may include at least one of an out-of-focus blurred face image sample, a motion-blurred face image sample, and an insufficiently-lit face image sample.
  • the blurred face image samples include out-of-focus blurred face image samples, motion-blurred face image samples, and insufficiently-lit face image samples.
  • the face quality evaluation detection model can be obtained as follows: first, the training data includes four types of face image samples, namely out-of-focus blurred face image samples, motion-blurred face image samples, insufficiently lit face image samples, and clear face image samples, with one thousand images selected for each category; then, the face quality evaluation detection model is obtained by training on the one thousand face image samples of each of the four categories. How to train a face quality evaluation detection model on these samples belongs to the prior art and is not described in the embodiments of the present invention.
  • a face quality evaluation detection model is obtained by training on a predetermined number of blurred face image samples and/or clear face image samples, and the face image is then used as the input of the face quality evaluation detection model to obtain the output face quality evaluation detection result.
  • whether the face image suffers from insufficient light, motion blur, or out-of-focus blur is determined based on the face quality evaluation detection result, which effectively prevents the collected face image from being affected by these factors, thereby ensuring the success rate of subsequent user identity authentication and face payment using the face image.
  • step 110 may be specifically implemented as:
  • a first step of determining that the collected face image is located in the framing frame of the display interface on the terminal device;
  • a second step of determining that the face image is the face image to be identified if the ratio of the area of the face image in the framing frame to the entire display interface meets a threshold.
  • the threshold may be set according to actual requirements, and is not specifically limited in the embodiment of the present invention.
  • the threshold value may be the same as or different from the threshold value described in the foregoing embodiment.
  • the first step may be specifically implemented as follows: a face detection model is trained in advance based on face image samples; the face image is used as the input of the face detection model to obtain the output face detection result; if the face detection result is normal, it is determined that the face image is located in the framing frame of the display interface on the terminal device; if the face detection result is abnormal, the user is reminded to re-acquire the face image.
  • the face detection result may be the area coordinates of the face image.
  • in this case, the area coordinates of the region where the face image is located are obtained, and, based on the area coordinates and the size of the entire display interface, the proportion of the region where the face image is located in the entire display interface is determined.
  • by determining that the collected face image is located in the framing frame of the display interface on the terminal device, and determining that the face image is the face image to be identified if the ratio of its area in the framing frame to the entire display interface meets a threshold, a prerequisite is provided for the subsequent specified detection operations on the face image, and the quality of the face image to be recognized is ensured.
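The area-ratio check in the second step can be sketched as follows, assuming the face detection model returns the face region as (x1, y1, x2, y2) pixel coordinates; the coordinate format and the 0.2 threshold are assumptions, not values from the specification:

```python
# Hedged sketch of step 110: given the area coordinates returned by the
# face detection model and the size of the display interface, compute the
# proportion the face occupies and compare it against a threshold.

def face_area_ratio(face_box, display_size):
    """face_box = (x1, y1, x2, y2) pixels; display_size = (width, height)."""
    x1, y1, x2, y2 = face_box
    face_area = max(0, x2 - x1) * max(0, y2 - y1)
    display_area = display_size[0] * display_size[1]
    return face_area / display_area

def is_image_to_identify(face_box, display_size, threshold=0.2):
    """The face image qualifies when its area ratio meets the threshold."""
    return face_area_ratio(face_box, display_size) >= threshold
```

A face box of 100×100 pixels on a 200×200 display occupies a quarter of the interface and would pass the assumed 0.2 threshold.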
  • the face recognition method provided by the embodiment of the present invention may further include:
  • performing the next specified detection operation after the current one can be understood as follows: when the reflection detection result corresponding to glasses detection is normal, occlusion detection can be performed; when the occlusion detection result corresponding to occlusion detection is normal, face quality evaluation detection can be performed.
  • the detection order of the glasses detection, occlusion detection, and face quality evaluation detection may be arbitrary, which is not limited in the embodiment of the present invention. or,
  • Sending the face image to be recognized to the recognition terminal device can be understood as sending the face image to be recognized to the recognition terminal device when the detection result of the designated detection operation is normal.
  • the recognition terminal device can compare the face image with a pre-stored face image, and if the similarity value between the two is greater than a predetermined value, it is determined that the user identity authentication is passed and the payment is deducted from the wallet to complete the payment operation.
  • the predetermined value needs to be set according to actual requirements, which is not specifically limited in the embodiment of the present invention.
  • the recognition terminal device compares the face image with the pre-stored face image, which can be specifically implemented as: obtaining the image information of the face area of the face image and the image information of the face area of the pre-stored face image, comparing the two pieces of image information, and determining the similarity value between the face image and the pre-stored face image based on the similar features in the two pieces of image information.
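A hedged sketch of this comparison step: the feature extraction itself is treated as prior art by the specification, so here two face-region feature vectors are simply compared by cosine similarity against a predetermined value (both the similarity measure and the 0.8 default are illustrative assumptions):

```python
# Illustrative comparison between the captured face image and the
# pre-stored one: feature extraction is assumed to have happened upstream,
# yielding one numeric feature vector per face region.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(live_features, stored_features, predetermined_value=0.8):
    """Pass authentication when the similarity exceeds the predetermined value."""
    return cosine_similarity(live_features, stored_features) > predetermined_value
```

When `authenticate` returns True, the flow in the specification proceeds to deduct the payment from the wallet; otherwise the user identity authentication fails.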
  • the pre-stored face image may be a face image corresponding to a user wallet account stored in the terminal device in advance, or may be a face obtained on the official website system according to a user ID number corresponding to the user wallet account. image.
  • performing the next specified detection operation after the current specified detection operation effectively eliminates the influencing factors in the face image to be identified and ensures the quality of the face image to be identified, guaranteeing the success rate of subsequent user identity authentication and face payment using the face image.
  • the face image to be recognized is sent to the recognition terminal device, and the recognition terminal device performs user identity authentication and face payment based on the face image to be recognized, ensuring the use of a face The success rate of images for user authentication and face payment.
  • FIG. 3 shows a flowchart of a face recognition method provided in an embodiment of the present invention in an actual application scenario
  • FIG. 4 shows a system block diagram of a face recognition method provided in an embodiment of the present invention in an actual application scenario
  • the process in which the user logs in to the user's wallet account through face recognition and performs face payment is shown in FIG. 3 and FIG. 4:
  • the user is prompted on the terminal device 1 to enter the user's mobile phone number.
  • the terminal device 1 sends the user's mobile phone number to the identification terminal device.
  • the identification terminal device 2 receives the user's mobile phone number and finds the user's wallet account based on the user's mobile phone number. If found, step 330 is performed; otherwise, step 340 is performed.
  • the identification terminal device 2 prompts the user to register a new user.
  • the terminal device 1 collects a face image.
  • the terminal device 1 determines whether the face image is a face image to be identified; if yes, step 360 is performed; if not, step 330 is performed.
  • the terminal device 1 determines whether the face image is a face image to be identified; for the specific implementation, reference may be made to the related content in the above embodiments, which is not repeated in this embodiment of the present invention.
  • the terminal device 1 performs a specified detection operation on the face image, and the specified detection operation is glasses detection.
  • the detection result of the specified detection operation is abnormal, step 361 is performed; when the detection result of the specified detection operation is normal, step 370 or 390 is performed.
  • the terminal device 1 performs a reminder operation that matches the detection result. For example, the user is reminded to take off his glasses.
  • the terminal device 1 performs a specified detection operation on the face image, the specified detection operation being occlusion detection.
  • the detection result of the specified detection operation is abnormal, step 371 is performed; when the detection result of the specified detection operation is normal, step 380 or 390 is performed.
  • the terminal device 1 performs a reminder operation that matches the detection result. For example, the terminal device 1 reminds the user to remove occlusion.
  • the terminal device 1 performs a specified detection operation on the face image, the specified detection operation being a face quality evaluation detection.
  • the detection result of the specified detection operation is abnormal, step 381 is performed; when the detection result of the specified detection operation is normal, step 390 is performed.
  • the terminal device 1 performs a reminder operation that matches the detection result. For example, the reminder adjusts the image acquisition angle.
  • the recognition terminal device 2 receives the face image to be recognized sent by the terminal device 1 and compares it with the pre-stored face image; if the similarity between the two is greater than a predetermined value, step 391 is performed; if not, step 330 is performed.
  • the user is authenticated and debited from the wallet to complete the payment operation.
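The detection chain of FIG. 3 (steps 360 through 390) can be condensed into a short sketch: each specified detection runs in turn, the first abnormal result stops the flow with its matching reminder, and only when all results are normal is the image sent to the recognition terminal device. The function names and threshold below are illustrative assumptions:

```python
# Compact sketch of the detection chain: glasses detection, then occlusion
# detection, then face quality evaluation detection, each returning an
# abnormality probability for the face image.

def run_detection_chain(face_image, detectors, threshold=0.5):
    """detectors: ordered list of (name, detect_fn) pairs; detect_fn takes
    the face image and returns an abnormality probability."""
    for name, detect in detectors:
        if detect(face_image) > threshold:
            return ("remind", name)       # abnormal: remind the user and stop
    return ("send_to_recognition", None)  # all normal: proceed to payment
```

With stub detectors, an abnormal occlusion result would short-circuit the chain before face quality evaluation detection ever runs, matching the step order in FIG. 3.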
  • a specified detection operation is performed on a face image, and the specified detection operation includes at least one of glasses detection, occlusion detection, and face quality evaluation detection.
  • when the detection result of the specified detection operation is abnormal, the factor affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to exclude the factor affecting the face image, ensuring the success rate of subsequent user identity authentication and face payment using the face image.
  • FIG. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present invention. As shown in FIG. 5, the terminal device 500 may include:
  • An obtaining module 510 configured to obtain a face image to be identified
  • a first execution module 520 configured to perform a specified detection operation on the face image, where the specified detection operation includes at least one of glasses detection, occlusion detection, and face quality evaluation detection;
  • the second execution module 530 is configured to execute a reminder operation matching the detection result when the detection result of the specified detection operation is abnormal.
  • the first execution module 520 may include:
  • a first input unit configured to use the face image as an input of a reflection detection model to obtain an output reflection detection result
  • the reflection detection model is obtained based on a predetermined number of face image samples with reflection and / or face image samples without reflection.
  • the reflective face image sample includes at least one of a reflective face image sample of glasses and a facial image sample of black frame glasses; the non-reflective face image sample includes At least one of a face image sample wearing ordinary glasses and a face image sample without glasses.
  • the first execution module 520 may include:
  • a second input unit configured to use the face image as an input of an occlusion detection model to obtain an output occlusion detection result
  • the occlusion detection model is obtained based on a predetermined number of face image samples with occlusion and / or face image samples without occlusion.
  • the face image sample with occlusion includes a face image sample with a hand blocking a face, a face image sample with a bang blocking a face, a face image sample with a hat blocking a face, and a mask blocking a person At least one of a face image sample of a face.
  • the first execution module 520 may include:
  • a third input unit configured to use the face image as an input of a face quality assessment detection model to obtain an output face quality assessment detection result
  • the face quality evaluation detection model is obtained based on a predetermined number of blurred face image samples and/or clear face image samples.
  • the blurred face image sample includes at least one of an out-of-focus blurred face image sample, a motion-blurred face image sample, and an insufficiently-lit face image sample.
  • the obtaining module 510 may include:
  • a first determining unit configured to determine that the collected face image is located in a framing frame of a display interface on a terminal device
  • a second determining unit is configured to determine that the face image is a face image to be identified if a ratio of a region of the face image in the framing frame to the entire display interface meets a threshold.
  • the obtaining module 510 may further include:
  • An obtaining unit configured to obtain area coordinates of an area where the face image is located
  • a third determining unit is configured to determine, based on the coordinates of the area and the size of the entire display interface, a proportion of the area where the face image is located in the entire display interface.
  • the terminal device may further include:
  • a third execution module 540 configured to execute the next specified detection operation after the current one when the detection result of the specified detection operation is normal; or,
  • a sending module 550 configured to send the face image to be recognized to a recognition terminal device.
  • a specified detection operation is performed on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection.
  • when the detection result of the specified detection operation is abnormal, the factors affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to remove those factors, ensuring the success rate of subsequent user identity authentication and face payment based on the face image.
  • FIG. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present specification. Please refer to FIG. 6.
  • the terminal device includes a processor, and optionally includes an internal bus, a network interface, and a memory.
  • the memory may include internal memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk storage.
  • the terminal device may also include hardware required by other services.
  • the processor, the network interface, and the memory can be connected to each other through an internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, and so on.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one two-way arrow is used in FIG. 6, but this does not mean that there is only one bus or one type of bus.
  • the program may include program code, and the program code includes computer operation instructions.
  • the memory may include internal memory and non-volatile memory, and provides instructions and data to the processor.
  • the processor reads the corresponding computer program from the non-volatile memory into the internal memory and runs it, forming the face recognition apparatus at the logical level.
  • the processor executes the program stored in the memory, and is specifically configured to perform the following operations:
  • acquiring a face image to be recognized;
  • performing a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection;
  • when the detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
  • a specified detection operation is performed on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection.
  • when the detection result of the specified detection operation is abnormal, the factors affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to remove those factors, ensuring the success rate of subsequent user identity authentication and face payment based on the face image.
  • the above face recognition method disclosed in the embodiment shown in FIG. 1 of this specification may be applied to, or implemented by, a processor.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in a processor or an instruction in a form of software.
  • the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in one or more embodiments of this specification may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the foregoing method in combination with its hardware.
  • the terminal device may also execute the face recognition method of FIG. 1, which is not repeated here.
  • the terminal device in this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution body of the processing flow is not limited to logical units and may also be a hardware or logic device.
  • the embodiments of the present specification also provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • the computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
  • the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input / output interfaces, network interfaces, and memory.
  • Memory may include non-persistent storage in computer-readable media, in the form of random-access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and can store information by any method or technology.
  • The information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.


Abstract

Embodiments of the present invention provide a face recognition method and a terminal device. The face recognition method includes: acquiring a face image to be recognized; performing a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection; and, when a detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.

Description

A Face Recognition Method and Terminal Device
Technical Field
The embodiments of this specification relate to the technical field of face recognition, and in particular to a face recognition method and a terminal device.
Background
With the rapid development of various payment technologies, face payment has emerged to greatly simplify the payment process. Face payment is a new form of electronic payment that consists of two parts: logging in to the user's wallet account through face recognition, and deducting money from the wallet to complete the payment. Logging in through face recognition means scanning and/or photographing a picture of the user's face and comparing it with the reference picture stored with the user's wallet account to identify and verify the user's identity, after which money is deducted from the wallet to complete the payment. In current face payment schemes, however, many factors can degrade the user's face picture while it is being scanned and/or photographed, which lowers the success rate of user identity authentication and payment.
Summary
The embodiments of this specification provide a face recognition method and a terminal device for identifying the factors that affect a face image, thereby ensuring the success rate of user identity authentication and face payment based on the face image.
The embodiments of this specification adopt the following technical solutions.
In a first aspect, a face recognition method is provided, including:
acquiring a face image to be recognized;
performing a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection; and
when a detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
In a second aspect, a terminal device is provided, including:
an obtaining module configured to acquire a face image to be recognized;
a first execution module configured to perform a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection; and
a second execution module configured to, when a detection result of the specified detection operation is abnormal, perform a reminder operation matching the detection result.
In a third aspect, a terminal device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the following steps:
acquiring a face image to be recognized;
performing a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection; and
when a detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, implements the following steps:
acquiring a face image to be recognized;
performing a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection; and
when a detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
At least one of the above technical solutions adopted in the embodiments of the present invention can achieve the following beneficial effects.
The embodiments of the present invention perform a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection. When the detection result of the specified detection operation is abnormal, the factors affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to remove those factors, ensuring the success rate of subsequent user identity authentication and face payment based on the face image.
In addition, performing a reminder operation matching an abnormal detection result precisely guides the user to reduce or even eliminate the factors affecting the face image, so that the user can smoothly complete the entire face payment flow and the full-link pass rate is improved. It is also a process that helps users learn to use face payment: after experiencing its intelligence and distinctive user experience, users are more likely to adopt it, which promotes the popularization of face payment.
Brief Description of the Drawings
The drawings described here provide a further understanding of the present invention and constitute a part of it; the illustrative embodiments of the invention and their descriptions explain the invention and do not unduly limit it. In the drawings:
FIG. 1 is a flowchart of a face recognition method provided by an embodiment of this specification;
FIG. 2 is a schematic diagram of a practical application scenario of the face recognition method provided by an embodiment of this specification;
FIG. 3 is a schematic flowchart of a practical application scenario of the face recognition method provided by an embodiment of this specification;
FIG. 4 is a system block diagram of a practical application scenario of the face recognition method provided by an embodiment of this specification;
FIG. 5 is a first structural block diagram of a terminal device provided by an embodiment of this specification;
FIG. 6 is a second structural block diagram of a terminal device provided by an embodiment of this specification.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions are described clearly and completely below with reference to specific embodiments of the invention and the corresponding drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
The embodiments of the present invention provide a face recognition method and a terminal device for identifying the factors that affect a face image, thereby ensuring the success rate of user identity authentication and face payment based on the face image. The execution body of the face recognition method may be, but is not limited to, a terminal device, or any apparatus or system that can be configured to execute the method provided by the embodiments of the present invention.
For ease of description, the implementation of the method is introduced below with a terminal device capable of executing it as the execution body. It should be understood that taking the terminal device as the execution body is only an illustrative explanation and should not be construed as limiting the method.
FIG. 1 is a flowchart of the face recognition method provided by an embodiment of the present invention; the method of FIG. 1 may be executed by a terminal device. As shown in FIG. 1, the method may include the following steps.
Step 110: acquire a face image to be recognized.
The face image to be recognized may be acquired by scanning or by photographing; the embodiments of the present invention do not specifically limit this.
Step 120: perform a specified detection operation on the face image.
The specified detection operation includes at least one of glasses detection, occlusion detection, and face quality assessment detection.
Glasses detection may be understood as glasses reflection detection and/or large-framed glasses detection.
Of course, glasses detection may also be detection of any type of glasses in the prior art that can affect image capture; the embodiments of the present invention do not specifically limit this.
Occlusion detection may be understood as detection of occlusion of the face.
Face quality assessment detection may be understood as detection of factors such as the blurriness of the face and the light intensity.
Step 130: when the detection result of the specified detection operation is abnormal, perform a reminder operation matching the detection result.
The detection result is determined according to the specified detection operation.
For example, if the specified detection operation is glasses detection, the detection result may be a reflection detection result; if it is occlusion detection, the detection result may be an occlusion detection result; if it is face quality assessment detection, the detection result may be a face quality assessment detection result.
An abnormal detection result can be understood as follows: if the detection result is a numerical value and that value is greater than a threshold, the detection result is determined to be abnormal; otherwise, it is determined to be normal.
Illustratively, following the above example: if the detection result is a reflection detection result expressed as a reflection probability and that probability is greater than a threshold, the reflection detection result is determined to be abnormal; if the detection result is an occlusion detection result expressed as an occlusion probability and that probability is greater than a threshold, the occlusion detection result is determined to be abnormal; if the detection result is a face quality assessment detection result expressed as a quality-problem probability and that probability is greater than a threshold, the face quality assessment detection result is determined to be abnormal. The threshold may be determined according to the actual application scenario, and is not limited here.
The reminder operation is determined according to the detection result.
Following the above example: if the detection result is a reflection detection result, the reminder operation may remind the user to take off the glasses; if it is an occlusion detection result, the reminder operation may remind the user to remove the occlusion; if it is a face quality assessment detection result, the reminder operation may remind the user to adjust the image capture angle.
Illustratively, if the face quality assessment detection result is out-of-focus blur, the user is reminded to focus properly when taking the picture; if it is motion blur, the user is reminded not to move; if it is insufficient light, the user is reminded to turn on a light or choose a well-lit position for the picture.
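The threshold comparison and matching reminder described above can be sketched as a small dispatch table. The probability thresholds and reminder strings are illustrative assumptions; the embodiments leave their exact values to the implementer:

```python
# Assumed per-detection thresholds; a result above its threshold is abnormal.
THRESHOLDS = {"glasses": 0.5, "occlusion": 0.5, "quality": 0.5}

# Assumed reminder strings matching each abnormal detection result.
REMINDERS = {
    "glasses": "Please remove your glasses.",
    "occlusion": "Please uncover your face.",
    "quality": "Please adjust the capture angle or lighting.",
}

def check_and_remind(detection, probability):
    """Return the matching reminder if the result is abnormal, else None."""
    if probability > THRESHOLDS[detection]:
        return REMINDERS[detection]
    return None  # normal result: no reminder needed
```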
The embodiments of the present invention perform a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection. When the detection result of the specified detection operation is abnormal, the factors affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to remove those factors, ensuring the success rate of subsequent user identity authentication and face payment based on the face image.
In addition, performing a reminder operation matching an abnormal detection result precisely guides the user to reduce or even eliminate the factors affecting the face image, so that the user can smoothly complete the entire face payment flow and the full-link pass rate is improved. It is also a process that helps users learn to use face payment: after experiencing its intelligence and distinctive user experience, users are more likely to adopt it, which promotes the popularization of face payment.
Optionally, as an embodiment, if the specified detection operation is glasses detection, step 120 may be specifically implemented as:
taking the face image as the input of a reflection detection model to obtain the output reflection detection result;
where the reflection detection model is trained on a predetermined number of face image samples with reflection and/or face image samples without reflection.
The face image samples with reflection may include at least one of samples with reflective glasses and samples with black-framed glasses; the face image samples without reflection may include at least one of samples wearing ordinary glasses and samples without glasses.
Assume the samples with reflection include samples with reflective glasses and samples with black-framed glasses, and the samples without reflection include samples wearing ordinary glasses and samples without glasses.
In this step, the reflection detection model may be obtained as follows. First, the training data contains roughly four classes of face image samples: samples with reflective glasses, samples with black-framed glasses, samples wearing ordinary glasses, and samples without glasses, with one thousand images selected for each class. Then the reflection detection model is trained on the one thousand images of each of the four classes. How to train the reflection detection model from these samples belongs to the prior art and is not repeated here.
In the embodiments of the present invention, a reflection detection model is trained on a predetermined number of face image samples with and/or without reflection, and the face image is then fed to the model to obtain the output reflection detection result. Whether a reflection factor exists in the face image is determined from this result, which effectively prevents the captured face image from being affected by reflection and thus ensures the success rate of subsequent user identity authentication and face payment based on the face image.
Optionally, as an embodiment, if the specified detection operation is occlusion detection, step 120 may be specifically implemented as:
taking the face image as the input of an occlusion detection model to obtain the output occlusion detection result;
where the occlusion detection model is trained on a predetermined number of face image samples with occlusion and/or face image samples without occlusion.
The face image samples with occlusion may include at least one of samples with a hand blocking the face, samples with bangs blocking the face, samples with a hat blocking the face, and samples with a mask blocking the face.
Assume the samples with occlusion include all four of these types.
In this step, the occlusion detection model may be obtained as follows. First, the training data contains roughly five classes of face image samples: samples with a hand blocking the face, samples with bangs blocking the face, samples with a hat blocking the face, samples with a mask blocking the face, and samples without occlusion, with one thousand images selected for each class. Then the occlusion detection model is trained on the one thousand images of each of the five classes. How to train the occlusion detection model from these samples belongs to the prior art and is not repeated here.
In the embodiments of the present invention, an occlusion detection model is trained on a predetermined number of face image samples with and/or without occlusion, and the face image is then fed to the model to obtain the output occlusion detection result. Whether an occlusion factor exists in the face image is determined from this result, which effectively prevents the captured face image from being affected by occlusion and thus ensures the success rate of subsequent user identity authentication and face payment based on the face image.
Optionally, as an embodiment, if the specified detection operation is face quality assessment detection, step 120 may be specifically implemented as:
taking the face image as the input of a face quality assessment detection model to obtain the output face quality assessment detection result;
where the face quality assessment detection model is trained on a predetermined number of blurred face image samples and/or clear face image samples.
The blurred face image samples may include at least one of out-of-focus blurred samples, motion-blurred samples, and insufficiently lit samples.
Assume the blurred samples include all three of these types.
In this step, the face quality assessment detection model may be obtained as follows. First, the training data contains roughly four classes of face image samples: out-of-focus blurred samples, motion-blurred samples, insufficiently lit samples, and clear samples, with one thousand images selected for each class. Then the face quality assessment detection model is trained on the one thousand images of each of the four classes. How to train the face quality assessment detection model from these samples belongs to the prior art and is not repeated here.
In the embodiments of the present invention, a face quality assessment detection model is trained on a predetermined number of blurred and/or clear face image samples, and the face image is then fed to the model to obtain the output face quality assessment detection result. Whether factors such as insufficient light, motion, or defocus exist in the face image is determined from this result, which effectively prevents the captured face image from being affected by these factors and thus ensures the success rate of subsequent user identity authentication and face payment based on the face image.
Optionally, as an embodiment, step 110 may be specifically implemented as follows.
First, determine that the collected face image is located in the framing frame of the display interface on the terminal device.
Second, if the proportion of the region where the face image is located in the framing frame relative to the entire display interface meets a threshold, determine that the face image is the face image to be recognized.
The threshold may be set according to actual requirements and is not specifically limited by the embodiments of the present invention; it may be the same as or different from the thresholds described in the above embodiments.
In a specific implementation, the first step may be implemented as follows: a face detection model is trained in advance on face image samples; the face image is taken as the input of the face detection model to obtain the output face detection result; if the face detection result is normal, it is determined that the face image is located in the framing frame of the display interface on the terminal device; if the face detection result is abnormal, the user is reminded to re-collect the face image.
Illustratively, if the face detection result is the region coordinates of the face image, it is judged whether those coordinates fall within the coordinate set corresponding to the preset framing frame; if so, it is determined that the face image is located in the framing frame of the display interface on the terminal device; if not, the user is reminded to place the face in the framing frame and re-collect the face image, as shown in FIG. 2.
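The containment test in this example can be sketched as a simple bounding-box comparison, where both the face region and the framing frame are (left, top, right, bottom) tuples; the coordinate convention is an assumption:

```python
def face_inside_frame(face_box, frame_box):
    """True if the detected face region lies entirely within the framing frame."""
    fl, ft, fr, fb = face_box    # face region (left, top, right, bottom)
    gl, gt, gr, gb = frame_box   # framing frame on the display interface
    return gl <= fl and gt <= ft and fr <= gr and fb <= gb
```

If this returns False, the terminal device would remind the user to move the face into the framing frame and re-collect the image.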
Before the second step is executed, the method further includes:
a third step of obtaining the region coordinates of the region where the face image is located; and
a fourth step of determining, based on the region coordinates and the size of the entire display interface, the proportion of the region where the face image is located relative to the entire display interface.
It should be added here that if the proportion of the region where the face image is located in the framing frame relative to the entire display interface does not meet the threshold, the user is reminded to perform an adjustment operation.
In the embodiments of the present invention, it is determined that the collected face image is located in the framing frame of the display interface on the terminal device, and if the proportion of the region of the face image in the framing frame relative to the entire display interface meets a threshold, the face image is determined to be the face image to be recognized. This provides the premise for the subsequent specified detection operation and ensures the quality of the face image to be recognized.
Optionally, as an embodiment, when the detection result of the specified detection operation is normal, the face recognition method provided by the embodiments of the present invention may further include:
executing the next specified detection operation after the current one. This can be understood as follows: when the reflection detection result corresponding to glasses detection is normal, occlusion detection may be executed; when the occlusion detection result corresponding to occlusion detection is normal, face quality assessment detection may be executed. The order of glasses detection, occlusion detection, and face quality assessment detection may be arbitrary and is not limited by the embodiments of the present invention. Alternatively,
the face image to be recognized is sent to a recognition terminal device. This can be understood as follows: when the detection result of the specified detection operation is normal, the face image to be recognized is sent to a recognition terminal device, which may compare the face image with a pre-stored face image; if the similarity value between the two is greater than a predetermined value, it is determined that the user's identity authentication passes and money is deducted from the wallet to complete the payment. The predetermined value is set according to actual requirements and is not specifically limited here.
Illustratively, the recognition terminal device may compare the face image with the pre-stored face image as follows: obtain the image information of the face region of the face image and the image information of the face region of the pre-stored face image, compare the two, and determine the similarity value between the face image and the pre-stored face image based on the similar features in the two pieces of image information. The pre-stored face image may be a face image stored in the recognition terminal device in advance and corresponding to the user's wallet account, or a face image obtained from the official government system according to the user's ID card number corresponding to the wallet account.
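One plausible way to realize the similarity comparison is cosine similarity over extracted face feature vectors, as sketched below; the feature extraction step is omitted, and the 0.8 threshold is an assumed value rather than one fixed by the specification:

```python
import math

def face_similarity(feat_a, feat_b):
    # Cosine similarity between two feature vectors: 1.0 means identical
    # direction, 0.0 means orthogonal (completely dissimilar features).
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return dot / (norm_a * norm_b)

def authentication_passes(feat_a, feat_b, threshold=0.8):
    # Assumed predetermined value; payment proceeds only above it.
    return face_similarity(feat_a, feat_b) > threshold
```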
In the embodiments of the present invention, when the detection result of the specified detection operation is normal, the next specified detection operation is executed, which effectively removes the influencing factors present in the face image to be recognized, ensures the quality of the face image to be recognized, and safeguards the success rate of subsequent user identity authentication and face payment based on the face image.
In addition, when the detection result of the specified detection operation is normal, the face image to be recognized is sent to the recognition terminal device, which performs user identity authentication and face payment based on it, ensuring the success rate of user identity authentication and face payment based on the face image.
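The two branches above, running the next check on a normal result or forwarding the image once every check passes, can be sketched as a sequential pipeline. The detector callables and their order are assumptions for the sketch; the embodiments state that the order of the three detections is arbitrary:

```python
def run_checks(face_image, detectors, send):
    """Run detections in order; stop with a reminder on the first abnormal one.

    detectors: list of (name, detect, reminder) where detect(image) returns
               True when the detection result is abnormal.
    send:      callable that forwards the image to the recognition device.
    """
    for name, detect, reminder in detectors:
        if detect(face_image):
            return reminder      # matching reminder so the user can adjust
    return send(face_image)      # all results normal: forward for recognition
```

For example, with stub detectors for glasses, occlusion, and quality, the pipeline returns the occlusion reminder as soon as that check fails, and only calls `send` when every check passes.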
The method of the embodiments of the present invention is further described below with reference to a specific embodiment.
FIG. 3 shows a flowchart of the face recognition method provided by an embodiment of the present invention in a practical application scenario; FIG. 4 shows a system block diagram of the face recognition method provided by an embodiment of the present invention in a practical application scenario.
Illustratively, the user logs in to the wallet account through face recognition to make a face payment, as shown in FIGS. 3 and 4.
At 310, terminal device 1 prompts the user to enter a mobile phone number. After the user enters the number on terminal device 1, terminal device 1 sends it to the recognition terminal device.
At 320, recognition terminal device 2 receives the mobile phone number and looks up the user's wallet account based on it; if found, step 330 is executed; otherwise, step 340 is executed.
At 340, recognition terminal device 2 prompts the user to register as a new user.
At 330, terminal device 1 collects a face image.
At 350, terminal device 1 determines whether the face image is a face image to be recognized; if so, step 360 is executed; if not, step 330 is executed.
How terminal device 1 determines whether the face image is a face image to be recognized can be implemented as described in the above embodiments and is not repeated here.
At 360, terminal device 1 performs a specified detection operation on the face image, namely glasses detection. When the detection result is abnormal, step 361 is executed; when it is normal, step 370 or 390 is executed.
At 361, terminal device 1 performs a reminder operation matching the detection result, for example reminding the user to take off the glasses.
At 370, terminal device 1 performs a specified detection operation on the face image, namely occlusion detection. When the detection result is abnormal, step 371 is executed; when it is normal, step 380 or 390 is executed.
At 371, terminal device 1 performs a reminder operation matching the detection result, for example reminding the user to remove the occlusion.
At 380, terminal device 1 performs a specified detection operation on the face image, namely face quality assessment detection. When the detection result is abnormal, step 381 is executed; when it is normal, step 390 is executed.
At 381, terminal device 1 performs a reminder operation matching the detection result, for example reminding the user to adjust the image capture angle.
At 390, recognition terminal device 2 receives the face image to be recognized sent by terminal device 1 and compares it with a pre-stored face image; if the similarity between the two is greater than a predetermined value, step 391 is executed; if not, step 330 is executed.
At 391, the user's identity authentication passes and money is deducted from the wallet to complete the payment.
The embodiments of the present invention perform a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection. When the detection result of the specified detection operation is abnormal, the factors affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to remove those factors, ensuring the success rate of subsequent user identity authentication and face payment based on the face image.
In addition, performing a reminder operation matching an abnormal detection result precisely guides the user to reduce or even eliminate the factors affecting the face image, so that the user can smoothly complete the entire face payment flow and the full-link pass rate is improved. It is also a process that helps users learn to use face payment: after experiencing its intelligence and distinctive user experience, users are more likely to adopt it, which promotes the popularization of face payment.
The face recognition method of the embodiments of the present invention has been described in detail above with reference to FIGS. 1 to 4; the terminal device of the embodiments of the present invention is described in detail below with reference to FIG. 5.
FIG. 5 shows a schematic structural diagram of the terminal device provided by an embodiment of the present invention. As shown in FIG. 5, the terminal device 500 may include:
an obtaining module 510 configured to acquire a face image to be recognized;
a first execution module 520 configured to perform a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection; and
a second execution module 530 configured to, when the detection result of the specified detection operation is abnormal, perform a reminder operation matching the detection result.
In an embodiment, if the specified detection operation is glasses detection, the first execution module 520 may include:
a first input unit configured to take the face image as the input of a reflection detection model to obtain the output reflection detection result;
where the reflection detection model is trained on a predetermined number of face image samples with reflection and/or face image samples without reflection.
In an embodiment, the face image samples with reflection include at least one of samples with reflective glasses and samples with black-framed glasses; the face image samples without reflection include at least one of samples wearing ordinary glasses and samples without glasses.
In an embodiment, if the specified detection operation is occlusion detection, the first execution module 520 may include:
a second input unit configured to take the face image as the input of an occlusion detection model to obtain the output occlusion detection result;
where the occlusion detection model is trained on a predetermined number of face image samples with occlusion and/or face image samples without occlusion.
In an embodiment, the face image samples with occlusion include at least one of samples with a hand blocking the face, samples with bangs blocking the face, samples with a hat blocking the face, and samples with a mask blocking the face.
In an embodiment, if the specified detection operation is face quality assessment detection, the first execution module 520 may include:
a third input unit configured to take the face image as the input of a face quality assessment detection model to obtain the output face quality assessment detection result;
where the face quality assessment detection model is trained on a predetermined number of blurred face image samples and/or clear face image samples.
In an embodiment, the blurred face image samples include at least one of out-of-focus blurred samples, motion-blurred samples, and insufficiently lit samples.
In an embodiment, the obtaining module 510 may include:
a first determining unit configured to determine that the collected face image is located in the framing frame of the display interface on the terminal device; and
a second determining unit configured to determine that the face image is the face image to be recognized if the proportion of the region of the face image in the framing frame relative to the entire display interface meets a threshold.
In an embodiment, the obtaining module 510 may further include:
an obtaining unit configured to obtain the region coordinates of the region where the face image is located; and
a third determining unit configured to determine, based on the region coordinates and the size of the entire display interface, the proportion of the region where the face image is located relative to the entire display interface.
In an embodiment, the terminal device may further include:
a third execution module 540 configured to execute the next specified detection operation after the current one when the detection result of the specified detection operation is normal; or,
a sending module 550 configured to send the face image to be recognized to a recognition terminal device.
The embodiments of the present invention perform a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection. When the detection result of the specified detection operation is abnormal, the factors affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to remove those factors, ensuring the success rate of subsequent user identity authentication and face payment based on the face image.
In addition, performing a reminder operation matching an abnormal detection result precisely guides the user to reduce or even eliminate the factors affecting the face image, so that the user can smoothly complete the entire face payment flow and the full-link pass rate is improved. It is also a process that helps users learn to use face payment: after experiencing its intelligence and distinctive user experience, users are more likely to adopt it, which promotes the popularization of face payment.
FIG. 6 is a schematic structural diagram of a terminal device provided by an embodiment of this specification. Referring to FIG. 6, at the hardware level the terminal device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include internal memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk storage. Of course, the terminal device may also include hardware required by other services.
The processor, the network interface, and the memory can be connected to each other through the internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, and so on. The bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one two-way arrow is used in FIG. 6, but this does not mean that there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include internal memory and non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and runs it, forming the face recognition apparatus at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
acquiring a face image to be recognized;
performing a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection; and
when the detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
The embodiments of the present invention perform a specified detection operation on the face image, the specified detection operation including at least one of glasses detection, occlusion detection, and face quality assessment detection. When the detection result of the specified detection operation is abnormal, the factors affecting the face image can be identified and a reminder operation matching the detection result can be performed, so that the user can adjust according to the reminder to remove those factors, ensuring the success rate of subsequent user identity authentication and face payment based on the face image.
In addition, performing a reminder operation matching an abnormal detection result precisely guides the user to reduce or even eliminate the factors affecting the face image, so that the user can smoothly complete the entire face payment flow and the full-link pass rate is improved. It is also a process that helps users learn to use face payment: after experiencing its intelligence and distinctive user experience, users are more likely to adopt it, which promotes the popularization of face payment.
The face recognition method disclosed in the embodiment shown in FIG. 1 of this specification may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical block diagrams disclosed in one or more embodiments of this specification can be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in one or more embodiments of this specification may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The terminal device can also execute the face recognition method of FIG. 1, which is not repeated here.
Of course, apart from the software implementation, the terminal device of this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution body of the processing flow is not limited to logical units and may also be a hardware or logic device.
The embodiments of this specification also provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements each process of the above method embodiments and achieves the same technical effect; this is not repeated here to avoid redundancy. The computer-readable storage medium is, for example, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a system for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
Memory may include non-persistent storage in computer-readable media, in the form of random-access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above descriptions are merely embodiments of the present invention and are not intended to limit it. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of the claims of the present invention.

Claims (13)

  1. A face recognition method, comprising:
    acquiring a face image to be recognized;
    performing a specified detection operation on the face image, the specified detection operation comprising at least one of glasses detection, occlusion detection, and face quality assessment detection; and
    when a detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
  2. The method according to claim 1, wherein, if the specified detection operation is glasses detection, performing the specified detection operation on the face image comprises:
    taking the face image as an input of a reflection detection model to obtain an output reflection detection result;
    wherein the reflection detection model is trained on a predetermined number of face image samples with reflection and/or face image samples without reflection.
  3. The method according to claim 2, wherein
    the face image samples with reflection comprise at least one of face image samples with reflective glasses and face image samples with black-framed glasses; and the face image samples without reflection comprise at least one of face image samples wearing ordinary glasses and face image samples without glasses.
  4. The method according to claim 1, wherein, if the specified detection operation is occlusion detection, performing the specified detection operation on the face image comprises:
    taking the face image as an input of an occlusion detection model to obtain an output occlusion detection result;
    wherein the occlusion detection model is trained on a predetermined number of face image samples with occlusion and/or face image samples without occlusion.
  5. The method according to claim 4, wherein the face image samples with occlusion comprise at least one of face image samples with a hand blocking the face, face image samples with bangs blocking the face, face image samples with a hat blocking the face, and face image samples with a mask blocking the face.
  6. The method according to claim 1, wherein, if the specified detection operation is face quality assessment detection, performing the specified detection operation on the face image comprises:
    taking the face image as an input of a face quality assessment detection model to obtain an output face quality assessment detection result;
    wherein the face quality assessment detection model is trained on a predetermined number of blurred face image samples and/or clear face image samples.
  7. The method according to claim 6, wherein the blurred face image samples comprise at least one of out-of-focus blurred face image samples, motion-blurred face image samples, and insufficiently lit face image samples.
  8. The method according to claim 1, wherein acquiring the face image to be recognized comprises:
    determining that the collected face image is located in a framing frame of a display interface on the terminal device; and
    if the proportion of the region of the face image in the framing frame relative to the entire display interface meets a threshold, determining that the face image is the face image to be recognized.
  9. The method according to claim 8, before the proportion of the region of the face image in the framing frame relative to the entire display interface meets the threshold, further comprising:
    obtaining the region coordinates of the region where the face image is located; and
    determining, based on the region coordinates and the size of the entire display interface, the proportion of the region where the face image is located relative to the entire display interface.
  10. The method according to claim 1, further comprising:
    when the detection result of the specified detection operation is normal, executing the next specified detection operation after the specified detection operation; or,
    sending the face image to be recognized to a recognition terminal device.
  11. A terminal device, comprising:
    an obtaining module configured to acquire a face image to be recognized;
    a first execution module configured to perform a specified detection operation on the face image, the specified detection operation comprising at least one of glasses detection, occlusion detection, and face quality assessment detection; and
    a second execution module configured to, when a detection result of the specified detection operation is abnormal, perform a reminder operation matching the detection result.
  12. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the following steps:
    acquiring a face image to be recognized;
    performing a specified detection operation on the face image, the specified detection operation comprising at least one of glasses detection, occlusion detection, and face quality assessment detection; and
    when a detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
  13. A computer-readable storage medium storing a computer program that, when executed by a processor, implements the following steps:
    acquiring a face image to be recognized;
    performing a specified detection operation on the face image, the specified detection operation comprising at least one of glasses detection, occlusion detection, and face quality assessment detection; and
    when a detection result of the specified detection operation is abnormal, performing a reminder operation matching the detection result.
PCT/CN2019/090705 2018-07-16 2019-06-11 一种人脸识别方法及终端设备 WO2020015477A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810777461.8A CN109063604A (zh) 2018-07-16 2018-07-16 一种人脸识别方法及终端设备
CN201810777461.8 2018-07-16

Publications (1)

Publication Number Publication Date
WO2020015477A1 true WO2020015477A1 (zh) 2020-01-23

Family

ID=64816523

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090705 WO2020015477A1 (zh) 2018-07-16 2019-06-11 一种人脸识别方法及终端设备

Country Status (3)

Country Link
CN (1) CN109063604A (zh)
TW (1) TWI786291B (zh)
WO (1) WO2020015477A1 (zh)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414879A (zh) * 2020-03-26 2020-07-14 北京字节跳动网络技术有限公司 人脸遮挡程度识别方法、装置、电子设备及可读存储介质
CN111428594A (zh) * 2020-03-13 2020-07-17 北京三快在线科技有限公司 身份验证方法、装置、电子设备和存储介质
CN111428628A (zh) * 2020-03-23 2020-07-17 北京每日优鲜电子商务有限公司 人脸检测方法、装置、设备及存储介质
CN111598038A (zh) * 2020-05-22 2020-08-28 深圳市瑞立视多媒体科技有限公司 脸部特征点检测方法、装置、设备及存储介质
CN111783598A (zh) * 2020-06-24 2020-10-16 北京百度网讯科技有限公司 一种人脸识别模型训练方法、装置、设备及介质
CN111860566A (zh) * 2020-04-24 2020-10-30 北京嘀嘀无限科技发展有限公司 遮挡物识别模型训练方法、识别方法、装置及存储介质
CN111914812A (zh) * 2020-08-20 2020-11-10 腾讯科技(深圳)有限公司 图像处理模型训练方法、装置、设备及存储介质
CN111914628A (zh) * 2020-06-19 2020-11-10 北京百度网讯科技有限公司 人脸识别模型的训练方法和装置
CN112084902A (zh) * 2020-08-26 2020-12-15 武汉普利商用机器有限公司 人脸图像获取方法、装置、电子设备及存储介质
CN112766208A (zh) * 2021-01-28 2021-05-07 北京三快在线科技有限公司 一种模型训练的方法及装置
CN113674178A (zh) * 2021-08-26 2021-11-19 上海明略人工智能(集团)有限公司 一种去遮挡物的方法及装置
CN113963183A (zh) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 模型训练、人脸识别方法、电子设备及存储介质
CN115810218A (zh) * 2022-12-20 2023-03-17 山东交通学院 基于机器视觉和目标检测的人员异常行为检测方法及系统

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063604A (zh) * 2018-07-16 2018-12-21 阿里巴巴集团控股有限公司 一种人脸识别方法及终端设备
CN110163104B (zh) * 2019-04-18 2023-02-17 创新先进技术有限公司 人脸检测方法、装置和电子设备
CN112446849A (zh) * 2019-08-13 2021-03-05 杭州海康威视数字技术股份有限公司 一种处理图片的方法及装置
CN110782602A (zh) * 2019-11-13 2020-02-11 北京三快在线科技有限公司 资源转移方法、装置、系统、设备及存储介质
CN111126098B (zh) * 2019-12-24 2023-11-07 京东科技控股股份有限公司 证件图像采集方法、装置、设备及存储介质
CN111915307A (zh) * 2020-07-02 2020-11-10 浙江恒科实业有限公司 一种无接触式移动支付系统及方法
CN111815790A (zh) * 2020-07-10 2020-10-23 成都智元汇信息技术股份有限公司 一种基于人脸识别的地铁乘车检票方法
TWI795724B (zh) * 2021-02-02 2023-03-11 神達數位股份有限公司 身分驗證方法及身分驗證系統
CN113240430B (zh) * 2021-06-16 2024-06-28 中国银行股份有限公司 移动支付验证方法及装置
CN113435400B (zh) * 2021-07-14 2022-08-30 世邦通信股份有限公司 无屏人脸识别校准方法、装置、无屏人脸识别设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262727A (zh) * 2011-06-24 2011-11-30 常州锐驰电子科技有限公司 客户采集终端人脸图像质量实时监控方法
CN104091156A (zh) * 2014-07-10 2014-10-08 深圳市中控生物识别技术有限公司 一种身份识别方法及装置
CN107808120A (zh) * 2017-09-30 2018-03-16 平安科技(深圳)有限公司 眼镜定位方法、装置及存储介质
CN107909065A (zh) * 2017-12-29 2018-04-13 百度在线网络技术(北京)有限公司 用于检测人脸遮挡的方法及装置
CN109063604A (zh) * 2018-07-16 2018-12-21 阿里巴巴集团控股有限公司 一种人脸识别方法及终端设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101241625B1 (ko) * 2012-02-28 2013-03-11 인텔 코오퍼레이션 얼굴 인식 환경 통지 방법, 장치, 및 이 방법을 실행하기 위한 컴퓨터 판독 가능한 기록 매체
CN103150553A (zh) * 2013-02-06 2013-06-12 北京中科虹霸科技有限公司 实现多模态身份特征识别的移动终端以及方法
CN103116749A (zh) * 2013-03-12 2013-05-22 上海洪剑智能科技有限公司 一种基于自建图像库的近红外人脸识别方法
CN105550862A (zh) * 2015-06-26 2016-05-04 宇龙计算机通信科技(深圳)有限公司 移动支付身份认证方法、认证终端及支付终端
CN105046231A (zh) * 2015-07-27 2015-11-11 小米科技有限责任公司 人脸检测方法和装置
CN105631439B (zh) * 2016-02-18 2019-11-08 北京旷视科技有限公司 人脸图像处理方法和装置
CN105813021A (zh) * 2016-05-30 2016-07-27 维沃移动通信有限公司 一种移动终端寻回方法、移动终端及服务器
CN107516070B (zh) * 2017-07-28 2021-04-06 Oppo广东移动通信有限公司 生物识别方法及相关产品
CN107644159B (zh) * 2017-09-12 2021-04-09 Oppo广东移动通信有限公司 人脸识别方法及相关产品
CN107766824A (zh) * 2017-10-27 2018-03-06 广东欧珀移动通信有限公司 人脸识别方法、移动终端以及计算机可读存储介质
CN107895108B (zh) * 2017-10-27 2021-02-26 维沃移动通信有限公司 一种操作管理方法和移动终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262727A (zh) * 2011-06-24 2011-11-30 常州锐驰电子科技有限公司 客户采集终端人脸图像质量实时监控方法
CN104091156A (zh) * 2014-07-10 2014-10-08 深圳市中控生物识别技术有限公司 一种身份识别方法及装置
CN107808120A (zh) * 2017-09-30 2018-03-16 平安科技(深圳)有限公司 眼镜定位方法、装置及存储介质
CN107909065A (zh) * 2017-12-29 2018-04-13 百度在线网络技术(北京)有限公司 用于检测人脸遮挡的方法及装置
CN109063604A (zh) * 2018-07-16 2018-12-21 阿里巴巴集团控股有限公司 一种人脸识别方法及终端设备

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428594A (zh) * 2020-03-13 2020-07-17 北京三快在线科技有限公司 身份验证方法、装置、电子设备和存储介质
CN111428628A (zh) * 2020-03-23 2020-07-17 北京每日优鲜电子商务有限公司 人脸检测方法、装置、设备及存储介质
CN111414879A (zh) * 2020-03-26 2020-07-14 北京字节跳动网络技术有限公司 人脸遮挡程度识别方法、装置、电子设备及可读存储介质
CN111414879B (zh) * 2020-03-26 2023-06-09 抖音视界有限公司 人脸遮挡程度识别方法、装置、电子设备及可读存储介质
CN111860566A (zh) * 2020-04-24 2020-10-30 北京嘀嘀无限科技发展有限公司 遮挡物识别模型训练方法、识别方法、装置及存储介质
CN111598038A (zh) * 2020-05-22 2020-08-28 深圳市瑞立视多媒体科技有限公司 脸部特征点检测方法、装置、设备及存储介质
CN111914628A (zh) * 2020-06-19 2020-11-10 北京百度网讯科技有限公司 人脸识别模型的训练方法和装置
CN111914628B (zh) * 2020-06-19 2023-06-20 北京百度网讯科技有限公司 人脸识别模型的训练方法和装置
CN111783598A (zh) * 2020-06-24 2020-10-16 北京百度网讯科技有限公司 一种人脸识别模型训练方法、装置、设备及介质
CN111783598B (zh) * 2020-06-24 2023-08-08 北京百度网讯科技有限公司 一种人脸识别模型训练方法、装置、设备及介质
CN111914812B (zh) * 2020-08-20 2022-09-16 腾讯科技(深圳)有限公司 图像处理模型训练方法、装置、设备及存储介质
CN111914812A (zh) * 2020-08-20 2020-11-10 腾讯科技(深圳)有限公司 图像处理模型训练方法、装置、设备及存储介质
CN112084902A (zh) * 2020-08-26 2020-12-15 武汉普利商用机器有限公司 人脸图像获取方法、装置、电子设备及存储介质
CN112084902B (zh) * 2020-08-26 2024-05-14 武汉普利商用机器有限公司 人脸图像获取方法、装置、电子设备及存储介质
CN112766208A (zh) * 2021-01-28 2021-05-07 北京三快在线科技有限公司 一种模型训练的方法及装置
CN113674178A (zh) * 2021-08-26 2021-11-19 上海明略人工智能(集团)有限公司 一种去遮挡物的方法及装置
CN113963183B (zh) * 2021-12-22 2022-05-31 合肥的卢深视科技有限公司 模型训练、人脸识别方法、电子设备及存储介质
CN113963183A (zh) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 模型训练、人脸识别方法、电子设备及存储介质
CN115810218A (zh) * 2022-12-20 2023-03-17 山东交通学院 基于机器视觉和目标检测的人员异常行为检测方法及系统

Also Published As

Publication number Publication date
TW202006595A (zh) 2020-02-01
CN109063604A (zh) 2018-12-21
TWI786291B (zh) 2022-12-11

Similar Documents

Publication Publication Date Title
WO2020015477A1 (zh) 一种人脸识别方法及终端设备
US20200160040A1 (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
TWI716008B (zh) 人臉識別方法及裝置
US10817705B2 (en) Method, apparatus, and system for resource transfer
US11093773B2 (en) Liveness detection method, apparatus and computer-readable storage medium
WO2022111512A1 (zh) 人脸活体检测方法、装置及设备
CN109086734B (zh) 一种对人眼图像中瞳孔图像进行定位的方法及装置
CN108280332B (zh) 移动终端的生物特征认证识别检测方法、装置和设备
US20230030267A1 (en) Method and apparatus for selecting face image, device, and storage medium
TWI754818B (zh) 支付方法、裝置及系統
US11281939B2 (en) Method and apparatus for training an object identification neural network, and computer device
CN112333356B (zh) 一种证件图像采集方法、装置和设备
CN113505682B (zh) 活体检测方法及装置
WO2022105919A1 (zh) 虚拟现实设备的局部透视方法、装置及虚拟现实设备
CN110033424A (zh) 图像处理的方法、装置、电子设备及计算机可读存储介质
WO2022105677A1 (zh) 虚拟现实设备的键盘透视方法、装置及虚拟现实设备
JP2008287355A (ja) 登録装置、照合装置、プログラム及びデータ構造
CN110688878A (zh) 活体识别检测方法、装置、介质及电子设备
CN112906571A (zh) 活体识别方法、装置及电子设备
CN109376585B (zh) 一种人脸识别的辅助方法、人脸识别方法及终端设备
KR102151851B1 (ko) 적외선 영상 기반 얼굴 인식 방법 및 이를 위한 학습 방법
CN109118506A (zh) 一种确定人眼图像中瞳孔图像边缘点的方法及装置
CN110163104B (zh) 人脸检测方法、装置和电子设备
CN111860285B (zh) 用户注册方法、装置、电子设备及存储介质
KR102466084B1 (ko) 영상 기반의 동공 검출 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19836982

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19836982

Country of ref document: EP

Kind code of ref document: A1