WO2021031522A1 - A payment method and device - Google Patents

A payment method and device

Info

Publication number
WO2021031522A1
Authority
WO
WIPO (PCT)
Prior art keywords: information, user, payment, sample, face image
Prior art date
Application number
PCT/CN2020/071363
Other languages
English (en)
French (fr)
Inventor
曹佳炯
Original Assignee
创新先进技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 创新先进技术有限公司
Priority to US16/888,817 (published as US11263634B2)
Publication of WO2021031522A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks

Definitions

  • This document relates to the field of intelligent identification, and in particular to a payment method and device.
  • Face recognition is a biometric identification technology that authenticates identity based on a person's facial feature information.
  • Completing payment by face-swiping brings users real convenience: it requires the user to scan no code, carry no tool, and enter no other identification information (for example, a mobile phone number, payment account number, or payment password), which has won it the favor of many users.
  • A face-recognition-based payment method in the related art mainly considers that stolen or mistaken face-swipes may threaten the security of the user's assets, and therefore requires strong user interaction to complete a payment.
  • Specifically, the user must first tap a relevant button to trigger the start of the face-swiping process and, after that process has started, tap a relevant button again to trigger the payment confirmation link; that is, the user must participate in at least two button-tap touch operations.
  • The purpose of one or more embodiments of this specification is to provide a payment method and device that not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid stolen or mistaken deductions of user funds, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under weak user interaction.
  • one or more embodiments of this specification provide a payment method, including:
  • Extracting feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
  • one or more embodiments of this specification provide a payment device, including:
  • the face image acquisition module is used to acquire the first face image information of the target user
  • the key feature extraction module is configured to extract feature information from the first face image information, where the feature information includes: user head posture information and/or user gaze information;
  • the willingness to pay judgment module is configured to judge whether the target user has the willingness to pay according to the user's head posture information and/or the user gaze information;
  • the payment trigger module is used to complete the payment operation based on the face recognition function if the judgment result is yes.
  • one or more embodiments of this specification provide a payment device, including:
  • a processor and a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
  • Extracting feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
  • one or more embodiments of this specification provide a storage medium for storing computer-executable instructions that, when executed by a processor, implement the following methods:
  • Extracting feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
  • The payment method and device in one or more embodiments of this specification acquire the first face image information of a target user; extract feature information from the first face image information; judge, according to that feature information, whether the target user has a willingness to pay; and, if the judgment result is yes, complete the payment operation based on the face recognition function.
  • By collecting the face image information of the target user and extracting the required feature information from it, whether the target user has a willingness to pay is identified on the basis of that feature information, and whether to start the payment function is then determined;
  • Figure 1 is a schematic diagram of the first flow of the payment method provided by one or more embodiments of this specification;
  • FIG. 2 is a schematic diagram of the second flow of the payment method provided by one or more embodiments of this specification.
  • FIG. 3 is a schematic diagram of the third process of the payment method provided by one or more embodiments of this specification.
  • FIG. 4 is a schematic diagram of the specific implementation principle of the payment method provided by one or more embodiments of this specification.
  • FIG. 5 is a schematic diagram of the module composition of the payment device provided by one or more embodiments of the specification.
  • Fig. 6 is a schematic structural diagram of a payment device provided by one or more embodiments of this specification.
  • One or more embodiments of this specification provide a payment method and device that collect the face image information of a target user, extract the required feature information from that face image information, identify on that basis whether the target user has a willingness to pay, and then decide whether to start the payment function. This not only simplifies the user's interaction steps, but also improves the accuracy of payment recognition, avoids stolen or mistaken deductions of user funds, improves the security of user funds, and ensures accurate identification of the user's willingness to pay under weak user interaction.
  • Figure 1 is a first schematic flowchart of the payment method provided by one or more embodiments of this specification.
  • The execution subject of the method in Figure 1 may be a terminal device provided with a payment-willingness recognition apparatus, or the backend server of such an apparatus.
  • As shown in Figure 1, the method includes at least the following steps:
  • S101: Acquire the first face image information of the target user, collected by a camera device;
  • S102: Extract feature information from the acquired first face image information, where the feature information includes user head posture information and/or user gaze information;
  • Specifically, after the first face image is captured by the camera device, feature extraction is performed on the first face image information using a preset image recognition method to obtain the required feature information, which may include user head posture information, user gaze information, or both;
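  • As a concrete illustration of this extraction step, the following minimal Python sketch runs two pretrained recognizers over one captured frame; the names extract_features, pose_model, and gaze_model are illustrative assumptions, not interfaces defined by this document:

        def extract_features(face_image, pose_model, gaze_model):
            # User head posture information: rotation angles in the preset
            # directions (pitch, yaw, roll), as produced by a pretrained model.
            pitch, yaw, roll = pose_model(face_image)
            # User gaze information: probability that the eyes are fixed on
            # the payment screen, as produced by a pretrained gaze model.
            p_focus = gaze_model(face_image)
            return {"pose": (pitch, yaw, roll), "p_focus": p_focus}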
  • S103: Judge, according to the extracted user head posture information and/or user gaze information, whether the target user has a willingness to pay;
  • Specifically, after the required feature information has been extracted from the first face image, whether the target user has a willingness to pay is identified according to that feature information; that is, image feature extraction is used to recognize whether the user is willing to pay, and thus whether to trigger the payment start or payment confirmation link. When the feature information satisfies the preset conditions, the target user is deemed to have a willingness to pay, i.e., the target user expects to complete payment by face; correspondingly, where the feature information includes both user head posture information and user gaze information, the target user is deemed to have a willingness to pay, i.e., a payment demand, if the user head posture information satisfies a first preset condition and the user gaze information satisfies a second preset condition;
  • If the judgment result is yes, then S104: complete the payment operation based on the face recognition function; specifically, if the feature information extracted from the first face image information establishes that the target user has a willingness to pay, face payment is completed automatically;
  • Using image feature extraction to recognize whether the user is willing to pay can solve the problem of user funds being stolen or mistakenly deducted. For example, suppose user A and user B queue at the same time for face-swiping payment: user A needs to pay by face, but user B stands in front of user A. Even if the collected first face image information is user B's, user B currently has no willingness to pay, so the feature information extracted from that first face image information indicates that the target user has no willingness to pay; step S104 is then not executed, which avoids the phenomenon of user A purchasing while user B's account is debited.
  • In one or more embodiments of this specification, by collecting the face image information of the target user, extracting the required feature information from it, identifying on that basis whether the target user has a willingness to pay, and then deciding whether to start the payment function, the user's interaction steps are simplified and the accuracy of payment recognition is improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
  • To further improve the accuracy of payment recognition, the first face image information is used to decide whether to trigger the face-swiping payment start link, and the second face image information is then used to decide whether to actually enter the face-swiping payment link.
  • This double face-feature recognition step confirms the user's willingness to pay twice and determines separately whether to enter the start link and the payment link, further improving the accuracy of payment recognition. On this basis, as shown in Figure 2, S104 above, completing the payment operation based on the face recognition function, includes:
  • S1041: Trigger the face-swiping payment start operation, so as to acquire second face image information based on the face recognition function;
  • Specifically, after a willingness to pay has been established from the first face image information, the face-swiping payment start link is triggered first.
  • At this point the payment deduction link is not performed; instead, the currently collected second face image information is obtained through the camera device,
  • where the first and second face image information are collected successively by the camera device, and the collection time of the second face image information is later than that of the first;
  • S1042: Judge whether the feature information extracted from the acquired second face image information indicates a willingness to pay;
  • the process of extracting feature information from the second face image information is the same as that of extracting feature information from the first face image information; for the specific process, see the detailed steps of S102;
  • likewise, the process of judging willingness to pay from the feature information extracted from the second face image information is the same as that of judging willingness to pay from the feature information extracted from the first face image information; for the specific process, see the detailed steps of S103;
  • If the judgment result is yes, then S1043: trigger the face-swiping payment confirmation operation; the face payment link is entered, that is, the required payment amount is deducted from the corresponding payment account. In other words, the face-swiping payment link is entered only when the feature information extracted from the first face image information and the feature information extracted from the second face image information both satisfy the preset conditions.
  • Considering that the target user may for some reason abandon the final payment deduction and leave midway, the current user corresponding to the second face image information collected after the start link may not be the same person as the target user, and continuing into the payment link would then cause a mistaken deduction. On this basis, as shown in Figure 3, S1042 above, judging whether the feature information extracted from the acquired second face image information indicates a willingness to pay, includes:
  • S10421: Judge whether the current user corresponding to the acquired second face image information is consistent with the target user;
  • only when the current user corresponding to the collected second face image information is confirmed to be the same person as the target user does the flow continue with the step of judging willingness to pay again from the feature information extracted from the second face image information, and then determine whether to enter the payment link.
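  • The document does not specify how this consistency check is performed. One common approach, sketched below under that assumption, is to compare face embeddings from the two captures by cosine similarity; the embedding model and the 0.6 threshold are illustrative placeholders:

        import numpy as np

        def same_person(first_embedding, second_embedding, threshold=0.6):
            # Cosine similarity between embeddings of the first and second
            # face images; continue with the payment link only if it is high.
            sim = float(np.dot(first_embedding, second_embedding) /
                        (np.linalg.norm(first_embedding) *
                         np.linalg.norm(second_embedding)))
            return sim >= threshold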
  • extracting feature information from the acquired first face image information specifically includes:
  • The aforementioned head posture recognition model may be a machine learning model with a neural network structure, obtained in advance by training with a machine learning method on a first sample data set;
  • the first face image information is used as the input of the head posture recognition model, and the neural network structure in the model performs feature extraction on it.
  • The output of the head posture recognition model is the head posture information of the target user, which includes the rotation angle in each preset direction, for example the head rotation angles in the three directions pitch, yaw, and roll; pitch refers to rotation around the preset X axis, also called the pitch angle; yaw refers to rotation around the preset Y axis, also called the yaw angle; and roll refers to rotation around the preset Z axis, also called the roll angle. The magnitude of the rotation angle in each preset direction is directly related to the user's willingness to pay by face;
  • the user gaze information includes at least one of the probability value that the user's eyes are gazing at the payment screen and the probability value that they are not;
  • The above-mentioned gaze information recognition model may be a machine learning model with a neural network structure, obtained in advance by training with a machine learning method on a second sample data set;
  • the first face image information is used as the input of the gaze information recognition model, and the neural network structure in the model performs feature extraction on it.
  • The output of the gaze information recognition model is the user gaze information of the target user, which may include the probability value that the user's eyes are gazing at the payment screen, i.e., in the direction of the interactive screen; the greater this probability value, the more likely it is that the user's eyes are fixed on the camera device and the interactive screen, and correspondingly, the stronger the user's willingness to pay by face.
  • Where the feature information includes user head posture information, step S102 specifically includes process (1) above; where it includes user gaze information, process (2) above; and where it includes both, processes (1) and (2).
  • Since extracting the user head posture information requires the head posture recognition model, that model must be trained in advance on a sample data set. Specifically, the head posture recognition model is trained as follows (a hedged training sketch follows the steps below):
  • Step 1: Obtain a first sample data set, where the first sample data set includes a plurality of first sample data, each of which includes a correspondence between a sample face image and head posture information;
  • Specifically, the first sample data set includes a plurality of labeled first sample data, that is, a plurality of sample face images labeled with head posture information;
  • the head posture information includes the rotation angle, in each preset direction, of the head in the sample face image, for example the head rotation angles in the three directions pitch, yaw, and roll:
  • pitch refers to rotation around the preset X axis, also called the pitch angle;
  • yaw refers to rotation around the preset Y axis, also called the yaw angle;
  • roll refers to rotation around the preset Z axis, also called the roll angle.
  • Step 2: Determine the mean image data and variance image data of the multiple sample face images, where the multiple sample face images in the first sample data set are averaged to obtain their mean image data, and variance processing is performed on them to obtain their variance image data;
  • Step 3: For each first sample data, based on the determined mean image data and variance image data, preprocess the sample face image contained in that first sample data to obtain a preprocessed sample face image;
  • the preprocessing of the sample face image may include dividing the difference between the original sample face image and the mean image data by the variance image data, to obtain a plurality of labeled preprocessed sample face images;
  • Step 4: Determine the preprocessed sample face images and the corresponding head posture information as the final first model training samples;
  • Step 5: Using a machine learning method and based on the multiple first model training samples, train the head posture recognition model;
  • specifically, the model parameters in a preset first machine learning model are optimized; where the head posture information comprises the rotation angles of the head in the three directions pitch, yaw, and roll, the first machine learning model correspondingly includes three independent regression loss functions whose parameters are network-trained, each regression loss function corresponding one-to-one to the rotation angle in one preset direction.
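  • The following PyTorch sketch illustrates Steps 2 to 5 under stated assumptions: the (image - mean) / variance preprocessing follows the text, while the backbone architecture and the use of mean-squared error for the three independent regression losses are assumptions, since the document specifies neither:

        import torch
        import torch.nn as nn

        def preprocess(images):
            # Steps 2-3: per-pixel mean and variance over the sample set
            # (images: numpy array of shape (N, H, W[, C])), then
            # (image - mean) / variance as described above; a small epsilon
            # avoids division by zero.
            mean_img = images.mean(axis=0)
            var_img = images.var(axis=0) + 1e-8
            return (images - mean_img) / var_img

        class HeadPoseNet(nn.Module):
            # Illustrative backbone with three independent regression heads,
            # one per rotation direction (pitch, yaw, roll).
            def __init__(self):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.pitch = nn.Linear(32, 1)
                self.yaw = nn.Linear(32, 1)
                self.roll = nn.Linear(32, 1)

            def forward(self, x):
                f = self.backbone(x)
                return self.pitch(f), self.yaw(f), self.roll(f)

        def pose_loss(pred, target):
            # Step 5: three independent regression losses, one per preset
            # direction, corresponding one-to-one to the rotation angles;
            # L2 regression is an assumption, the text only requires that
            # the three losses be independent.
            mse = nn.MSELoss()
            return (mse(pred[0], target[:, 0:1]) +
                    mse(pred[1], target[:, 1:2]) +
                    mse(pred[2], target[:, 2:3]))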
  • Since extracting the user gaze information from the first face image information requires the gaze information recognition model, that model must likewise be trained in advance on a sample data set.
  • Specifically, the gaze information recognition model is trained as follows (a matching sketch follows the steps below):
  • Step 1: Obtain a second sample data set, where the second sample data set includes a plurality of second sample data, each of which includes a correspondence between a sample eye image and gaze information;
  • specifically, the second sample data set includes a plurality of labeled second sample data, that is, a plurality of sample eye images labeled with gaze information;
  • the gaze information corresponding to a sample eye image represents whether the user's eyes are gazing in the direction of the interactive screen;
  • Step 2: Determine the mean image data and variance image data of the multiple sample eye images, where the multiple sample eye images in the second sample data set are averaged to obtain their mean image data, and variance processing is performed on them to obtain their variance image data;
  • Step 3: For each second sample data, based on the determined mean image data and variance image data, preprocess the sample eye image contained in that second sample data to obtain a preprocessed sample eye image;
  • the preprocessing of the sample eye image may include dividing the difference between the original sample eye image and the mean image data by the variance image data, to obtain a plurality of labeled preprocessed sample eye images;
  • Step 4: Determine the preprocessed sample eye images and the corresponding gaze information as the final second model training samples;
  • Step 5: Using a machine learning method and based on the multiple second model training samples, train the gaze information recognition model;
  • specifically, using a machine learning method and based on the multiple second model training samples, the model parameters in a preset second machine learning model are optimized, and the second machine learning model with the best parameters is determined as the trained gaze information recognition model; the second machine learning model includes a two-class classification loss function whose parameters are network-trained,
  • the two classes being eyes gazing at the interactive screen and eyes not gazing at the interactive screen.
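  • A matching sketch for the gaze information recognition model, with the same caveats: the eye-crop backbone is an assumption, and only the two-class loss over "gazing at the interactive screen" versus "not gazing" comes from the text:

        import torch
        import torch.nn as nn

        class GazeNet(nn.Module):
            # Binary classifier over preprocessed eye-region crops.
            def __init__(self):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.head = nn.Linear(16, 1)  # logit for "gazing at screen"

            def forward(self, eye_crop):
                return self.head(self.backbone(eye_crop))

        criterion = nn.BCEWithLogitsLoss()  # the two-class loss named above
        # At inference time, P_focus = torch.sigmoid(model(eye_crop)).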
  • Step S103 above, judging whether the target user has a willingness to pay according to the extracted user head posture information and/or user gaze information, includes the following (a small decision-function sketch follows):
  • assume the user head posture information includes the rotation angle A_pitch in the pitch direction, A_yaw in the yaw direction, and A_roll in the roll direction, with corresponding preset angle thresholds T_pitch, T_yaw, and T_roll; when the face plane and the interactive-screen plane are parallel, all three rotation angles are zero. Accordingly, if A_pitch < T_pitch and A_yaw < T_yaw and A_roll < T_roll, the user head posture information is determined to satisfy the first preset condition;
  • assume the user gaze information includes the probability value P_focus that the user is gazing at the payment screen, with preset probability threshold T_focus; accordingly, if P_focus > T_focus, the user gaze information is determined to satisfy the second preset condition;
  • where the feature information includes only user head posture information: if A_pitch < T_pitch and A_yaw < T_yaw and A_roll < T_roll, the target user is determined to have a willingness to pay;
  • where the feature information includes only user gaze information: if P_focus > T_focus, the target user is determined to have a willingness to pay.
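  • These threshold comparisons reduce to a small pure function. In the sketch below, the threshold values are placeholders, since the document leaves T_pitch, T_yaw, T_roll, and T_focus unspecified; taking absolute angles reflects the statement that all three rotation angles are zero when the face plane is parallel to the screen:

        def has_payment_intent(pose=None, p_focus=None,
                               t_pitch=15.0, t_yaw=15.0, t_roll=15.0,
                               t_focus=0.5):
            # pose: (A_pitch, A_yaw, A_roll), or None when head posture
            # information is not part of the feature information;
            # p_focus: P_focus, or None when gaze information is absent.
            pose_ok, gaze_ok = True, True
            if pose is not None:
                a_pitch, a_yaw, a_roll = (abs(a) for a in pose)
                pose_ok = (a_pitch < t_pitch and a_yaw < t_yaw
                           and a_roll < t_roll)
            if p_focus is not None:
                gaze_ok = p_focus > t_focus
            return pose_ok and gaze_ok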
  • In one application scenario, the target user is the user currently captured by the camera device installed on a merchandise vending cabinet; the camera device captures the face image of the target user located in the shooting area;
  • the face image information may be collected by the camera device, and the first face image information is transmitted to the face recognition system;
  • The payment method in one or more embodiments of this specification acquires the first face image information of a target user; extracts feature information from the first face image information; judges, according to that feature information, whether the target user has a willingness to pay; and, if the judgment result is yes, completes the payment operation based on the face recognition function.
  • By collecting the face image information of the target user and extracting the required feature information from it, whether the target user has a willingness to pay is identified on the basis of that feature information, and whether to start the payment function is then determined;
  • Figure 5 is a schematic diagram of the module composition of the payment device provided by one or more embodiments of this specification. The device is used to implement the payment methods described in Figures 1 to 4; as shown in Figure 5, the device includes:
  • the face image acquisition module 501 is used to acquire the first face image information of the target user
  • the key feature extraction module 502 is configured to extract feature information from the first face image information, where the feature information includes: user head posture information and/or user gaze information;
  • the willingness to pay judgment module 503 is configured to judge whether the target user has the willingness to pay according to the user's head posture information and/or the user gaze information;
  • the payment trigger module 504 is configured to complete the payment operation based on the face recognition function if the judgment result is yes.
  • In one or more embodiments of this specification, by collecting the face image information of the target user, extracting the required feature information from it, identifying on that basis whether the target user has a willingness to pay, and then deciding whether to start the payment function, the user's interaction steps are simplified and the accuracy of payment recognition is improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
  • the payment trigger module 504 is specifically configured to:
  • the execution of the payment confirmation operation by swiping the face is triggered to complete the payment based on the payment account information corresponding to the target user.
  • the payment trigger module 504 is further specifically configured to:
  • the key feature extraction module 502 is specifically configured to:
  • the user head posture information includes: the rotation angle in a preset direction;
  • the user gaze information includes at least one of the probability value that the user gazes at the payment screen and the probability value that the user does not gaze at the payment screen.
  • the head posture recognition model is obtained by training in the following manner:
  • the first sample data set includes a plurality of first sample data, and each of the first sample data includes a correspondence between a sample face image and head posture information;
  • for each of the first sample data, based on the mean image data and the variance image data, preprocess the sample face image contained in that first sample data to obtain a preprocessed sample face image;
  • a head posture recognition model is trained.
  • the gaze information recognition model is obtained by training in the following manner:
  • the second sample data set includes a plurality of second sample data, and each of the second sample data includes a correspondence between a sample eye image and gaze information;
  • for each of the second sample data, based on the mean image data and the variance image data, preprocess the sample eye image included in that second sample data to obtain a preprocessed sample eye image;
  • a gaze information recognition model is trained.
  • the willingness to pay judgment module 503 is specifically configured to:
  • if the feature information includes user head posture information and user gaze information, it is determined whether the rotation angle in each preset direction is less than a preset angle threshold, and whether the probability value of the user gazing at the payment screen is greater than a preset probability threshold;
  • if the feature information includes only user head posture information, it is determined whether the rotation angle in each preset direction is less than a preset angle threshold;
  • if the feature information includes only user gaze information, it is determined whether the probability value of the user gazing at the payment screen is greater than a preset probability threshold.
  • The payment device in one or more embodiments of this specification acquires the first face image information of a target user; extracts feature information from the first face image information; judges, according to that feature information, whether the target user has a willingness to pay; and, if the judgment result is yes, completes the payment operation based on the face recognition function.
  • By collecting the face image information of the target user and extracting the required feature information from it, whether the target user has a willingness to pay is identified on the basis of that feature information, and whether to start the payment function is then determined;
  • one or more embodiments thus realize that not only can the user's interaction steps be simplified, but the accuracy of payment recognition can also be improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
  • Based on the same technical concept, one or more embodiments of this specification also provide a payment device, used to execute the above payment method, as shown in FIG. 6.
  • Payment devices may vary considerably in configuration or performance and may include one or more processors 601 and a memory 602; the memory 602 may store one or more application programs or data, and may be short-term storage or persistent storage.
  • the application program stored in the memory 602 may include one or more modules (not shown in the figure), and each module may include a series of computer executable instructions for the payment device.
  • the processor 601 may be configured to communicate with the memory 602, and execute a series of computer-executable instructions in the memory 602 on the payment device.
  • the payment device may also include one or more power sources 603, one or more wired or wireless network interfaces 604, one or more input and output interfaces 605, one or more keyboards 606, and so on.
  • the payment device includes a memory and one or more programs, where the one or more programs are stored in the memory and may include one or more modules, each module may include a series of computer-executable instructions for the payment device, and the one or more programs configured to be executed by one or more processors include the following computer-executable instructions:
  • Extracting feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
  • In one or more embodiments of this specification, by collecting the face image information of the target user, extracting the required feature information from it, identifying on that basis whether the target user has a willingness to pay, and then deciding whether to start the payment function, the user's interaction steps are simplified and the accuracy of payment recognition is improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
  • the completion of the payment operation based on the face recognition function includes:
  • the execution of the payment confirmation operation by swiping the face is triggered to complete the payment based on the payment account information corresponding to the target user.
  • the judging of whether the feature information extracted from the second face image information indicates a willingness to pay includes:
  • the extracting feature information from the first face image information includes:
  • the user head posture information includes: the rotation angle in a preset direction;
  • the user gaze information includes at least one of the probability value that the user gazes at the payment screen and the probability value that the user does not gaze at the payment screen.
  • the head posture recognition model is obtained by training in the following manner:
  • the first sample data set includes a plurality of first sample data, and each of the first sample data includes a correspondence between a sample face image and head posture information;
  • for each of the first sample data, based on the mean image data and the variance image data, preprocess the sample face image contained in that first sample data to obtain a preprocessed sample face image;
  • a head posture recognition model is trained.
  • the gaze information recognition model is obtained by training in the following manner:
  • the second sample data set includes a plurality of second sample data, and each of the second sample data includes a correspondence between a sample eye image and gaze information;
  • for each of the second sample data, based on the mean image data and the variance image data, preprocess the sample eye image included in that second sample data to obtain a preprocessed sample eye image;
  • a gaze information recognition model is trained.
  • the judging whether the target user has the willingness to pay according to the user's head posture information and/or the user's gaze information includes:
  • if the feature information includes user head posture information and user gaze information, it is determined whether the rotation angle in each preset direction is less than a preset angle threshold, and whether the probability value of the user gazing at the payment screen is greater than a preset probability threshold;
  • if the feature information includes only user head posture information, it is determined whether the rotation angle in each preset direction is less than a preset angle threshold;
  • if the feature information includes only user gaze information, it is determined whether the probability value of the user gazing at the payment screen is greater than a preset probability threshold.
  • The payment device in one or more embodiments of this specification acquires the first face image information of a target user; extracts feature information from the first face image information; judges, according to that feature information, whether the target user has a willingness to pay; and, if the judgment result is yes, completes the payment operation based on the face recognition function.
  • By collecting the face image information of the target user and extracting the required feature information from it, whether the target user has a willingness to pay is identified on the basis of that feature information, and whether to start the payment function is then determined;
  • one or more embodiments thus realize that not only can the user's interaction steps be simplified, but the accuracy of payment recognition can also be improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
  • One or more embodiments of this specification also provide a storage medium for storing computer-executable instructions; in a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored in it, when executed by a processor, realize the following process:
  • Extracting feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
  • In one or more embodiments of this specification, by collecting the face image information of the target user, extracting the required feature information from it, identifying on that basis whether the target user has a willingness to pay, and then deciding whether to start the payment function, the user's interaction steps are simplified and the accuracy of payment recognition is improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
  • the completion of the payment operation based on the facial recognition function includes:
  • the execution of the payment confirmation operation by swiping the face is triggered to complete the payment based on the payment account information corresponding to the target user.
  • the judging of whether the feature information extracted from the second face image information indicates a willingness to pay includes:
  • the extracting feature information from the first face image information includes:
  • the user head posture information includes: the rotation angle in a preset direction;
  • the user gaze information includes at least one of the probability value that the user gazes at the payment screen and the probability value that the user does not gaze at the payment screen.
  • the head posture recognition model is obtained by training in the following manner:
  • the first sample data set includes a plurality of first sample data, and each of the first sample data includes a correspondence between a sample face image and head posture information;
  • for each of the first sample data, based on the mean image data and the variance image data, preprocess the sample face image contained in that first sample data to obtain a preprocessed sample face image;
  • a head posture recognition model is trained.
  • the gaze information recognition model is obtained by training in the following manner:
  • the second sample data set includes a plurality of second sample data, and each of the second sample data includes a correspondence between a sample eye image and gaze information;
  • for each of the second sample data, based on the mean image data and the variance image data, preprocess the sample eye image included in that second sample data to obtain a preprocessed sample eye image;
  • a gaze information recognition model is trained.
  • the judging of whether the target user has a willingness to pay according to the user head posture information and/or the user gaze information includes:
  • if the feature information includes user head posture information and user gaze information, it is determined whether the rotation angle in each preset direction is less than a preset angle threshold, and whether the probability value of the user gazing at the payment screen is greater than a preset probability threshold;
  • if the feature information includes only user head posture information, it is determined whether the rotation angle in each preset direction is less than a preset angle threshold;
  • if the feature information includes only user gaze information, it is determined whether the probability value of the user gazing at the payment screen is greater than a preset probability threshold.
  • When the computer-executable instructions stored in the storage medium in one or more embodiments of this specification are executed by a processor, the first face image information of the target user is acquired; feature information is extracted from the first face image information; whether the target user has a willingness to pay is judged according to that feature information; and, if the judgment result is yes, the payment operation is completed based on the face recognition function.
  • By collecting the face image information of the target user and extracting the required feature information from it, whether the target user has a willingness to pay is identified on the basis of that feature information, and whether to start the payment function is then determined;
  • one or more embodiments thus realize that not only can the user's interaction steps be simplified, but the accuracy of payment recognition can also be improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
  • A Programmable Logic Device (PLD) (for example, a Field Programmable Gate Array (FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device.
  • Such programming is mostly done in a hardware description language (HDL), of which there is not just one but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog.
  • the controller can be implemented in any suitable manner.
  • For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
  • Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the memory's control logic.
  • Beyond implementing the controller purely as computer-readable program code, it is entirely possible to program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for implementing various functions can be regarded as structures within the hardware component, or even as both software modules implementing the method and structures within the hardware component.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cell phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or Any combination of these devices.
  • One or more embodiments of this specification can be provided as a method, a system, or a computer program product. Accordingly, they may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects; moreover, they may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in that memory produce an article of manufacture including an instruction device, which implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPU), input/output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory in computer readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
  • Those skilled in the art should understand that one or more embodiments of this specification can be provided as a method, a system, or a computer program product. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects; moreover, they may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • Program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types.
  • One or more embodiments of this specification may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network; in a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A payment method and device. The method includes: acquiring first face image information of a target user (S101); extracting feature information from the first face image information; judging, according to the feature information, whether the target user has a willingness to pay; and, if the judgment result is yes, completing a payment operation based on a face recognition function (S104). The method further includes: collecting the face image information of the target user, extracting the required feature information from it, identifying on that basis whether the target user has a willingness to pay, and then deciding whether to start the payment function.

Description

A payment method and device
Technical Field
This document relates to the field of intelligent identification, and in particular to a payment method and device.
Background
At present, with the rapid development of Internet technology and the growing adoption of face recognition and its applications, face recognition, a biometric identification technology that authenticates identity based on a person's facial feature information, has become widespread. Because completing payment by face-swiping brings users real convenience (it requires no code scanning, no carried tools, and no entry of other identity information such as a mobile phone number, payment account number, or payment password), it has won the favor of many users.
A face-recognition-based payment method provided in the related art mainly considers that stolen or mistaken face-swipes may threaten the security of the user's assets, and therefore requires strong user interaction to complete a payment. Specifically, the user must first tap a relevant button to trigger the start of the face-swiping process; then, after that process has started, the user must tap a relevant button again to trigger entry into the payment confirmation link. The user thus participates in at least two button-tap touch operations.
It follows that a payment method with simple user interaction and high accuracy is needed.
Summary
The purpose of one or more embodiments of this specification is to provide a payment method and device that not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid stolen or mistaken deductions of user funds, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under weak user interaction.
To solve the above technical problem, one or more embodiments of this specification are implemented as follows.
In a first aspect, one or more embodiments of this specification provide a payment method, including:
acquiring first face image information of a target user;
extracting feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
judging, according to the user head posture information and/or the user gaze information, whether the target user has a willingness to pay;
and, if the judgment result is yes, completing a payment operation based on a face recognition function.
In a second aspect, one or more embodiments of this specification provide a payment device, including:
a face image acquisition module, configured to acquire first face image information of a target user;
a key feature extraction module, configured to extract feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
a willingness-to-pay judgment module, configured to judge, according to the user head posture information and/or the user gaze information, whether the target user has a willingness to pay;
and a payment trigger module, configured to complete a payment operation based on a face recognition function if the judgment result is yes.
In a third aspect, one or more embodiments of this specification provide a payment apparatus, including:
a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
acquire first face image information of a target user;
extract feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
judge, according to the user head posture information and/or the user gaze information, whether the target user has a willingness to pay;
and, if the judgment result is yes, complete a payment operation based on a face recognition function.
In a fourth aspect, one or more embodiments of this specification provide a storage medium for storing computer-executable instructions that, when executed by a processor, implement the following method:
acquiring first face image information of a target user;
extracting feature information from the first face image information, where the feature information includes user head posture information and/or user gaze information;
judging, according to the user head posture information and/or the user gaze information, whether the target user has a willingness to pay;
and, if the judgment result is yes, completing a payment operation based on a face recognition function.
The payment method and device in one or more embodiments of this specification acquire the first face image information of a target user; extract feature information from the first face image information; judge, according to that feature information, whether the target user has a willingness to pay; and, if the judgment result is yes, complete the payment operation based on the face recognition function. By collecting the face image information of the target user, extracting the required feature information from it, identifying on that basis whether the target user has a willingness to pay, and then deciding whether to start the payment function, one or more embodiments of this specification not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid stolen or mistaken deductions of user funds, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under weak user interaction.
Brief Description of the Drawings
To explain the technical solutions in one or more embodiments of this specification or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a first schematic flowchart of the payment method provided by one or more embodiments of this specification;
Figure 2 is a second schematic flowchart of the payment method provided by one or more embodiments of this specification;
Figure 3 is a third schematic flowchart of the payment method provided by one or more embodiments of this specification;
Figure 4 is a schematic diagram of the specific implementation principle of the payment method provided by one or more embodiments of this specification;
Figure 5 is a schematic diagram of the module composition of the payment device provided by one or more embodiments of this specification;
Figure 6 is a schematic structural diagram of the payment apparatus provided by one or more embodiments of this specification.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in one or more embodiments of this specification are described below clearly and completely with reference to the drawings in those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of this document.
One or more embodiments of this specification provide a payment method and device that collect the face image information of a target user, extract the required feature information from that face image information, identify on that basis whether the target user has a willingness to pay, and then decide whether to start the payment function. This not only simplifies the user's interaction steps, but also improves the accuracy of payment recognition, avoids stolen or mistaken deductions of user funds, improves the security of user funds, and ensures accurate identification of the user's willingness to pay under weak user interaction.
Figure 1 is a first schematic flowchart of the payment method provided by one or more embodiments of this specification. The execution subject of the method in Figure 1 may be a terminal device provided with a payment-willingness recognition apparatus, or the backend server of such an apparatus. As shown in Figure 1, the method includes at least the following steps.
S101: Acquire first face image information of a target user, where the target user is the user currently captured by a camera device; the first face image information may be collected by the camera device and transmitted to the face recognition system.
Specifically, for application scenarios that support face-swiping payment, for example merchandise vending cabinets set up in shopping malls and other public places, or self-service payment devices at supermarket checkouts or in restaurants, the camera device installed on the vending cabinet or self-service payment device collects the first face image information of the target user.
S102: Extract feature information from the acquired first face image information, where the feature information includes user head posture information and/or user gaze information.
Specifically, after the first face image is captured by the camera device, feature extraction is performed on the first face image information using a preset image recognition method to obtain the required feature information, which may include user head posture information, user gaze information, or both.
S103: Judge, according to the extracted user head posture information and/or user gaze information, whether the target user has a willingness to pay.
Specifically, after the required feature information has been extracted from the first face image, whether the target user has a willingness to pay is identified according to that feature information; that is, image feature extraction is used to recognize whether the user is willing to pay, and thus whether to trigger the payment start or payment confirmation link. When the feature information satisfies the preset conditions, the target user is deemed to have a willingness to pay, i.e., the target user expects to complete payment by face. Correspondingly, where the feature information includes both user head posture information and user gaze information, if the head posture information satisfies a first preset condition and the gaze information satisfies a second preset condition, the target user is deemed to have a willingness to pay, i.e., a payment demand.
If the judgment result is yes, then S104: complete the payment operation based on the face recognition function. Specifically, if the feature information extracted from the first face image information establishes that the target user has a willingness to pay, face payment is completed automatically.
Using image feature extraction to recognize whether the user is willing to pay can solve the problem of user funds being stolen or mistakenly deducted. For example, suppose user A and user B queue at the same time for face-swiping payment: user A needs to pay by face, but user B stands in front of user A. Even if the collected first face image information is user B's, user B currently has no willingness to pay, so the feature information extracted from that first face image information indicates that the target user has no willingness to pay, and step S104 is not executed, avoiding the mistaken-deduction phenomenon of user A purchasing while user B's account is debited.
In one or more embodiments of this specification, by collecting the face image information of the target user, extracting the required feature information from it, identifying on that basis whether the target user has a willingness to pay, and then deciding whether to start the payment function, the user's interaction steps are simplified and the accuracy of payment recognition is improved; the problem of user funds being stolen or mistakenly deducted is avoided, the security of user funds is improved, and accurate identification of the user's willingness to pay is ensured under weak user interaction.
It should be noted that the numbering of steps S101 to S104 above does not limit the order of the concrete implementation steps.
To further improve the accuracy of payment recognition, the first face image information is used to decide whether to trigger the face-swiping payment start link, and the second face image information is then used to decide whether to actually enter the face-swiping payment link; introducing a double face-feature recognition step confirms the user's willingness to pay twice and determines separately whether to enter the start link and the payment link, further improving the accuracy of payment recognition. On this basis, as shown in Figure 2, S104 above, completing the payment operation based on the face recognition function, specifically includes:
S1041: Trigger the face-swiping payment start operation, so as to acquire second face image information based on the face recognition function.
Specifically, after a willingness to pay has been established from the first face image information through steps S101 to S103, the face-swiping payment start link is entered first; at this point the payment deduction link is not performed, and instead the currently collected second face image information is obtained through the camera device. The first and second face image information are collected successively by the camera device, and the collection time of the second face image information is later than that of the first.
S1042: Judge whether the feature information extracted from the acquired second face image information indicates a willingness to pay.
Specifically, after the second face image is acquired, feature extraction is likewise performed on the second face image information using the preset image recognition method to obtain the required feature information, which again may include user head posture information, user gaze information, or both; the process of extracting feature information from the second face image information is the same as that for the first face image information (for the specific process, see the detailed steps of S102 above).
Specifically, after the feature information is extracted from the second face image information, whether the current user corresponding to the second face image information has a willingness to pay is likewise judged according to that feature information; the judgment process is the same as that based on the feature information extracted from the first face image information (for the specific process, see the detailed steps of S103 above).
If the judgment result is yes, then S1043: trigger the face-swiping payment confirmation operation, so as to complete payment based on the payment account information corresponding to the target user.
Specifically, if the feature information extracted from the second face image information establishes that the current user has a willingness to pay, the face-swiping payment link is entered, that is, the required payment amount is deducted from the corresponding payment account. In other words, the face-swiping payment link is entered only when the feature information extracted from the first face image information and the feature information extracted from the second face image information both satisfy the preset conditions.
Further, the target user may for some reason abandon the final payment deduction and leave midway, so that after the face-swiping payment start link is entered, the current user corresponding to the collected second face image information is not the same person as the target user; continuing into the face-swiping payment link at that point would cause a mistaken deduction. On this basis, as shown in Figure 3, S1042 above, judging whether the feature information extracted from the acquired second face image information indicates a willingness to pay, specifically includes:
S10421: Judge whether the current user corresponding to the acquired second face image information is consistent with the target user.
Specifically, since the target user may leave midway, the user corresponding to the second face image information collected for the payment confirmation may not be the same person as the target user; therefore, within one face-swiping payment flow, it must be judged whether the first face that triggered the payment start operation and the second face that triggers the payment confirmation operation are the same face.
If they are consistent, then S10422: judge whether the feature information extracted from the acquired second face image information indicates a willingness to pay.
Specifically, only when the current user corresponding to the collected second face image information is confirmed to be the same person as the target user does the flow continue with the step of judging willingness to pay again based on the feature information extracted from the second face image information, and then determine whether to enter the face-swiping payment link.
For the process of extracting feature information from the first face image information, in order to improve the accuracy of feature extraction and thus of the willingness-to-pay judgment, corresponding recognition models are trained in advance and then used to extract the feature information from the face image information. On this basis, S102 above, extracting feature information from the acquired first face image information, specifically includes:
(1) Using the pre-trained head posture recognition model and based on the acquired first face image information, determine the user head posture information of the target user, where the user head posture information includes the rotation angle in each preset direction.
The head posture recognition model may be a machine learning model with a neural network structure, obtained in advance by training with a machine learning method on a first sample data set.
Specifically, the first face image information is used as the input of the head posture recognition model, whose neural network structure performs feature extraction on it; the output of the model is the user head posture information of the target user, which includes the rotation angle in each preset direction, for example the head rotation angles in the three directions pitch, yaw, and roll. Pitch refers to rotation around the preset X axis, also called the pitch angle; yaw refers to rotation around the preset Y axis, also called the yaw angle; and roll refers to rotation around the preset Z axis, also called the roll angle. The magnitude of the rotation angle in each preset direction is directly related to the user's willingness to pay by face.
(2) Using the pre-trained gaze information recognition model and based on the eye-region features in the acquired first face image information, determine the user gaze information of the target user, where the user gaze information includes at least one of the probability value that the user's eyes are gazing at the payment screen and the probability value that they are not.
The gaze information recognition model may be a machine learning model with a neural network structure, obtained in advance by training with a machine learning method on a second sample data set.
Specifically, the first face image information is used as the input of the gaze information recognition model, whose neural network structure performs feature extraction on it; the output of the model is the user gaze information of the target user, which may include the probability value that the user's eyes are gazing at the payment screen, i.e., in the direction of the interactive screen. The greater this probability value, the more likely it is that the user's eyes are fixed on the camera device and the interactive screen, and correspondingly, the stronger the user's willingness to pay by face.
It should be noted that where the feature information includes user head posture information, step S102 specifically includes process (1) above; where it includes user gaze information, process (2) above; and where it includes both, processes (1) and (2).
Since the process of extracting user head pose information from the first face image information uses the head pose recognition model, the model needs to be trained in advance on a sample data set. Specifically, the head pose recognition model is trained in the following manner:
Step 1: acquire a first sample data set, where the first sample data set includes a plurality of first sample data, each first sample data including a correspondence between a sample face image and head pose information;
Specifically, the first sample data set includes a plurality of labeled first sample data, i.e., a plurality of sample face images labeled with head pose information. The first sample data set is a set of labeled head region samples X = {x_1, x_2, ..., x_n}, where x_n denotes the labeled head region sample with index n (i.e., the sample face image with index n);
The head pose information includes: the rotation angles, in the preset directions, of the head in the sample face image, for example the rotation angles of the head in the pitch, yaw, and roll directions, where pitch is rotation around a preset X axis, also called the pitch angle; yaw is rotation around a preset Y axis, also called the yaw angle; and roll is rotation around a preset Z axis, also called the roll angle;
Step 2: determine mean image data and variance image data of the plurality of sample face images, where the plurality of sample face images in the first sample data set are averaged to obtain the mean image data of the plurality of sample face images, and the variance of the plurality of sample face images in the first sample data set is computed to obtain the variance image data of the plurality of sample face images;
In specific implementation, to improve the feature extraction accuracy of the final head pose recognition model, the original sample face images in the first sample data set need to be preprocessed; therefore, the mean image data and variance image data of the plurality of sample face images need to be determined first;
Step 3: for each first sample data, preprocess the sample face image contained in that first sample data based on the determined mean image data and variance image data, to obtain a preprocessed sample face image;
Specifically, the preprocessing of a sample face image may include: dividing the difference between the original sample face image and the mean image data by the variance image data, to obtain a plurality of labeled preprocessed sample face images (a sketch of this normalization follows step 5 below);
Step 4: determine the preprocessed sample face images and the corresponding head pose information as the final first model training samples;
Step 5: train the head pose recognition model by using a machine learning method and based on the plurality of first model training samples;
Specifically, using the machine learning method and the plurality of first model training samples, the model parameters of a preset first machine learning model are optimized through training, and the first machine learning model with the optimal model parameters is determined as the trained head pose recognition model. If the head pose information includes the rotation angles of the head in the sample face image in the pitch, yaw, and roll directions, the first machine learning model correspondingly includes three independent regression loss functions whose parameters are trained over the network, the regression loss functions corresponding one-to-one to the rotation angles in the preset directions.
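Steps 2 to 5 above might be sketched as follows; the CNN backbone, the MSE regression loss, and the epsilon guard in the normalization are assumptions, since this specification only fixes the mean/variance preprocessing and the three independent, direction-wise regression losses.

```python
import numpy as np
import torch
import torch.nn as nn

def normalize_samples(images: np.ndarray, eps: float = 1e-8):
    """Steps 2-3 sketch: per-pixel mean and variance over the sample set,
    then (image - mean) / variance as described; images has an assumed
    shape (n_samples, H, W, C), and eps is an added divide-by-zero guard."""
    mean_img = images.mean(axis=0)   # mean image data
    var_img = images.var(axis=0)     # variance image data
    return (images - mean_img) / (var_img + eps), mean_img, var_img

class HeadPoseNet(nn.Module):
    """Step 5 sketch of the 'first machine learning model': a small CNN
    backbone with one regression head per preset direction (assumed)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(3)])

    def forward(self, x):
        f = self.backbone(x)
        # Concatenate the pitch, yaw and roll heads into a (batch, 3) output.
        return torch.cat([head(f) for head in self.heads], dim=1)

def train_step(model, optimizer, images, pitch, yaw, roll):
    """One optimization step with three independent regression losses,
    matching the one-to-one loss/direction correspondence; MSE assumed."""
    out = model(images)
    mse = nn.MSELoss()
    loss = mse(out[:, 0], pitch) + mse(out[:, 1], yaw) + mse(out[:, 2], roll)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```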
Since the process of extracting user gaze information from the first face image information uses the gaze information recognition model, the model needs to be trained in advance on a sample data set. Specifically, the gaze information recognition model is trained in the following manner:
Step 1: acquire a second sample data set, where the second sample data set includes a plurality of second sample data, each second sample data including a correspondence between a sample eye image and gaze information;
Specifically, the second sample data set includes a plurality of labeled second sample data, i.e., a plurality of sample eye images labeled with gaze information. The second sample data set is a set of labeled eye region samples E = {e_1, e_2, ..., e_n}, where e_n denotes the labeled eye region sample with index n (i.e., the sample eye image with index n); the gaze information corresponding to a sample eye image indicates whether the user's eyes gaze toward the interaction screen;
Step 2: determine mean image data and variance image data of the plurality of sample eye images, where the plurality of sample eye images in the second sample data set are averaged to obtain the mean image data of the plurality of sample eye images, and the variance of the plurality of sample eye images in the second sample data set is computed to obtain the variance image data of the plurality of sample eye images;
In specific implementation, to improve the feature extraction accuracy of the final gaze information recognition model, the original sample eye images in the second sample data set need to be preprocessed; therefore, the mean image data and variance image data of the plurality of sample eye images need to be determined first;
Step 3: for each second sample data, preprocess the sample eye image contained in that second sample data based on the determined mean image data and variance image data, to obtain a preprocessed sample eye image;
Specifically, the preprocessing of a sample eye image may include: dividing the difference between the original sample eye image and the mean image data by the variance image data, to obtain a plurality of labeled preprocessed sample eye images;
Step 4: determine the preprocessed sample eye images and the corresponding gaze information as the final second model training samples;
Step 5: train the gaze information recognition model by using a machine learning method and based on the plurality of second model training samples;
Specifically, using the machine learning method and the plurality of second model training samples, the model parameters of a preset second machine learning model are optimized through training, and the second machine learning model with the optimal model parameters is determined as the trained gaze information recognition model. The second machine learning model includes a two-class classification loss function whose parameters are trained over the network, the two classes being 'eyes gazing at the interaction screen' and 'eyes not gazing at the interaction screen'.
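The gaze model training differs from the head pose training only in its labels and loss; the `normalize_samples` sketch above can be reused for steps 2 and 3. A sketch of step 5 follows, assuming cross-entropy as the two-class loss and label 1 for "gazing at the interaction screen"; both choices are illustrative assumptions.

```python
import torch.nn as nn

def gaze_train_step(model: nn.Module, optimizer, eye_images, labels):
    """Step 5 sketch for the 'second machine learning model': a two-class
    classifier over preprocessed eye crops. labels: 1 = eyes gazing at the
    interaction screen, 0 = not gazing (ordering is an assumption)."""
    criterion = nn.CrossEntropyLoss()   # assumed two-class loss
    logits = model(eye_images)          # expected shape (batch, 2)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```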
After the feature information is extracted from the first face image information through the specific implementation process above, the user's willingness to pay needs to be identified. On this basis, step S103 above, judging whether the target user has a willingness to pay according to the extracted user head pose information and/or user gaze information, specifically includes:
(1) For the case where the feature information includes both user head pose information and user gaze information: judge whether the rotation angle in each preset direction is smaller than the preset angle threshold, and judge whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
If all judgment results are yes, determine that the target user has a willingness to pay;
Specifically, suppose the user head pose information includes the rotation angle A_pitch in the pitch direction, the rotation angle A_yaw in the yaw direction, and the rotation angle A_roll in the roll direction, and that the corresponding preset angle thresholds are T_pitch in the pitch direction, T_yaw in the yaw direction, and T_roll in the roll direction. When the face plane and the interaction screen plane are parallel to each other, all three rotation angles are zero. Correspondingly, if A_pitch < T_pitch, A_yaw < T_yaw, and A_roll < T_roll, it is determined that the user head pose information satisfies the first preset condition;
Correspondingly, suppose the user gaze information includes the probability P_focus that the user gazes at the payment screen, and that the preset probability threshold is T_focus; if P_focus > T_focus, it is determined that the user gaze information satisfies the second preset condition;
If the user head pose information satisfies the first preset condition and the user gaze information satisfies the second preset condition, it is determined that the target user has a willingness to pay;
(2) For the case where the feature information includes only user head pose information: judge whether the rotation angle in each preset direction is smaller than the preset angle threshold;
If the judgment result is yes, determine that the target user has a willingness to pay;
Specifically, suppose the user head pose information includes the rotation angle A_pitch in the pitch direction, the rotation angle A_yaw in the yaw direction, and the rotation angle A_roll in the roll direction, and that the corresponding preset angle thresholds are T_pitch in the pitch direction, T_yaw in the yaw direction, and T_roll in the roll direction. When the face plane and the interaction screen plane are parallel to each other, all three rotation angles are zero. Correspondingly, if A_pitch < T_pitch, A_yaw < T_yaw, and A_roll < T_roll, it is determined that the target user has a willingness to pay;
(3) For the case where the feature information includes only user gaze information: judge whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
If the judgment result is yes, determine that the target user has a willingness to pay;
Specifically, suppose the user gaze information includes the probability P_focus that the user gazes at the payment screen, and that the preset probability threshold is T_focus; if P_focus > T_focus, it is determined that the target user has a willingness to pay.
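Cases (1) to (3) can be summarized in one decision routine. The 15-degree and 0.8 threshold values below are illustrative assumptions (this specification only requires each rotation angle to be below its preset angle threshold and P_focus to exceed T_focus), and the absolute value accounts for rotations in either direction away from the screen-parallel pose.

```python
from typing import Dict, Optional

ANGLE_THRESHOLDS = {"pitch": 15.0, "yaw": 15.0, "roll": 15.0}  # assumed values
T_FOCUS = 0.8                                                  # assumed value

def willing_to_pay(head_pose: Optional[Dict[str, float]],
                   p_focus: Optional[float]) -> bool:
    """S103 sketch covering cases (1)-(3): pass None for whichever kind of
    feature information was not extracted."""
    if head_pose is None and p_focus is None:
        return False  # no feature information, no willingness decision
    pose_ok = head_pose is None or all(
        abs(head_pose[d]) < ANGLE_THRESHOLDS[d] for d in ("pitch", "yaw", "roll"))
    gaze_ok = p_focus is None or p_focus > T_FOCUS
    return pose_ok and gaze_ok
```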
In a specific embodiment, taking as an example the application scenario of a goods vending cabinet set up in a public place such as a shopping mall, FIG. 4 shows a schematic diagram of the specific implementation principle of the payment method, which specifically includes:
(1) Acquiring the first face image information of the target user, where the target user is the user currently captured by the camera device provided on the vending cabinet; the camera device captures the face image of the target user located in the shooting area, and the first face image information may be captured by the camera device and transmitted to the face recognition system;
(2) Extracting first user head pose information from the first face image information by using the pre-trained head pose recognition model, and extracting first user gaze information from the first face image information by using the pre-trained gaze information recognition model;
(3) Judging, according to the first user head pose information and the first user gaze information, whether the target user has a willingness to pay;
(4) If the judgment result of (3) above is yes, triggering execution of the face-swiping payment start operation, and acquiring second face image information based on the face recognition function;
(5) Judging whether the current user corresponding to the acquired second face image information is consistent with the target user, i.e., determining whether the current user and the target user are the same user;
(6) If the judgment result of (5) above is "consistent", extracting second user head pose information from the second face image information by using the pre-trained head pose recognition model, and extracting second user gaze information from the second face image information by using the pre-trained gaze information recognition model;
(7) Judging again, according to the second user head pose information and the second user gaze information, whether the target user has a willingness to pay;
(8) If the judgment result of (7) above is yes, triggering execution of the face-swiping payment confirmation operation, to complete payment based on the payment account information corresponding to the target user.
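Tying steps (1) to (8) together, an orchestration sketch might read as follows; `capture_face`, `extract_eye_crop`, `embed`, and `account_service.deduct` are hypothetical helpers standing in for the camera device, eye-region cropping, face embedding, and the deduction backend, none of which this specification prescribes.

```python
def face_swipe_payment_flow(camera, pose_model, gaze_model, account_service) -> bool:
    """End-to-end sketch of the vending-cabinet embodiment, steps (1)-(8)."""
    first_img = capture_face(camera)                            # (1)
    pose1 = estimate_head_pose(pose_model, first_img)           # (2)
    gaze1 = estimate_gaze_probability(gaze_model, extract_eye_crop(first_img))
    if not willing_to_pay(pose1, gaze1):                        # (3)
        return False
    second_img = capture_face(camera)                           # (4) start link
    if not same_person(embed(first_img), embed(second_img)):    # (5)
        return False
    pose2 = estimate_head_pose(pose_model, second_img)          # (6)
    gaze2 = estimate_gaze_probability(gaze_model, extract_eye_crop(second_img))
    if not willing_to_pay(pose2, gaze2):                        # (7)
        return False
    return account_service.deduct(embed(second_img))            # (8) confirmation
```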
In the payment method of one or more embodiments of this specification, first face image information of the target user is acquired; feature information is extracted from the first face image information; whether the target user has a willingness to pay is judged according to the feature information; and if the judgment result is yes, the payment operation is completed based on the face recognition function. By capturing the face image information of the target user, extracting the required feature information from it, identifying whether the target user is willing to pay based on that feature information, and then determining whether to start the payment function, one or more embodiments of this specification not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid the problem of user funds being stolen or deducted by mistake, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under the premise of weak user interaction.
Corresponding to the payment method described above with reference to FIG. 1 to FIG. 4 and based on the same technical concept, one or more embodiments of this specification further provide a payment device. FIG. 5 is a schematic diagram of the module composition of the payment device provided by one or more embodiments of this specification; the device is configured to execute the payment method described with reference to FIG. 1 to FIG. 4. As shown in FIG. 5, the device includes:
a face image acquisition module 501, configured to acquire first face image information of a target user;
a key feature extraction module 502, configured to extract feature information from the first face image information, where the feature information includes: user head pose information and/or user gaze information;
a payment willingness judgment module 503, configured to judge, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay;
a payment trigger module 504, configured to: if the judgment result is yes, complete a payment operation based on a face recognition function.
In one or more embodiments of this specification, the face image information of the target user is captured, the required feature information is extracted from it, whether the target user is willing to pay is identified based on that feature information, and it is then determined whether to start the payment function; one or more embodiments of this specification thereby not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid the problem of user funds being stolen or deducted by mistake, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under the premise of weak user interaction.
Optionally, the payment trigger module 504 is specifically configured to:
trigger execution of a face-swiping payment start operation, to acquire second face image information based on the face recognition function;
judge whether feature information extracted from the second face image information indicates a willingness to pay;
if the judgment result is yes, trigger execution of a face-swiping payment confirmation operation, to complete payment based on the payment account information corresponding to the target user.
Optionally, the payment trigger module 504 is further specifically configured to:
judge whether the current user corresponding to the second face image information is consistent with the target user;
if consistent, judge whether the feature information extracted from the second face image information indicates a willingness to pay.
Optionally, the key feature extraction module 502 is specifically configured to:
determine the user head pose information of the target user by using the pre-trained head pose recognition model and based on the first face image information, where the user head pose information includes: rotation angles in preset directions;
and/or,
determine the user gaze information of the target user by using the pre-trained gaze information recognition model and based on the eye region features in the first face image information, where the user gaze information includes at least one of: the probability that the user gazes at the payment screen and the probability that the user does not gaze at the payment screen.
Optionally, the head pose recognition model is trained in the following manner:
acquiring a first sample data set, where the first sample data set includes a plurality of first sample data, each of the first sample data including a correspondence between a sample face image and head pose information;
determining mean image data and variance image data of the plurality of sample face images;
for each of the first sample data, preprocessing the sample face image contained in that first sample data based on the mean image data and the variance image data, to obtain a preprocessed sample face image;
determining the preprocessed sample face images and the corresponding head pose information as the final first model training samples;
training the head pose recognition model by using a machine learning method and based on the plurality of first model training samples.
Optionally, the gaze information recognition model is trained in the following manner:
acquiring a second sample data set, where the second sample data set includes a plurality of second sample data, each of the second sample data including a correspondence between a sample eye image and gaze information;
determining mean image data and variance image data of the plurality of sample eye images;
for each of the second sample data, preprocessing the sample eye image contained in that second sample data based on the mean image data and the variance image data, to obtain a preprocessed sample eye image;
determining the preprocessed sample eye images and the corresponding gaze information as the final second model training samples;
training the gaze information recognition model by using a machine learning method and based on the plurality of second model training samples.
Optionally, the payment willingness judgment module 503 is specifically configured to:
if the feature information includes user head pose information and user gaze information, judge whether the rotation angle in each preset direction is smaller than the preset angle threshold, and judge whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
if all judgment results are yes, determine that the target user has a willingness to pay;
or,
if the feature information includes user head pose information, judge whether the rotation angle in each preset direction is smaller than the preset angle threshold;
if the judgment result is yes, determine that the target user has a willingness to pay;
or,
if the feature information includes user gaze information, judge whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
if the judgment result is yes, determine that the target user has a willingness to pay.
The payment device of one or more embodiments of this specification acquires first face image information of the target user; extracts feature information from the first face image information; judges, according to the feature information, whether the target user has a willingness to pay; and if the judgment result is yes, completes the payment operation based on the face recognition function. By capturing the face image information of the target user, extracting the required feature information from it, identifying whether the target user is willing to pay based on that feature information, and then determining whether to start the payment function, one or more embodiments of this specification not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid the problem of user funds being stolen or deducted by mistake, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under the premise of weak user interaction.
It should be noted that the embodiments concerning the payment device in this specification and the embodiments concerning the payment method in this specification are based on the same inventive concept; therefore, for the specific implementation of these embodiments, reference may be made to the implementation of the corresponding payment method described above, and repeated parts are not described again.
Further, corresponding to the methods shown in FIG. 1 to FIG. 4 above and based on the same technical concept, one or more embodiments of this specification further provide a payment apparatus, configured to execute the payment method described above, as shown in FIG. 6.
The payment apparatus may vary considerably depending on configuration or performance, and may include one or more processors 601 and a memory 602, where the memory 602 may store one or more applications or data. The memory 602 may be transient storage or persistent storage. An application stored in the memory 602 may include one or more modules (not shown in the figure), and each module may include a series of computer-executable instructions for the payment apparatus. Further, the processor 601 may be configured to communicate with the memory 602 and execute, on the payment apparatus, the series of computer-executable instructions in the memory 602. The payment apparatus may further include one or more power supplies 603, one or more wired or wireless network interfaces 604, one or more input/output interfaces 605, one or more keyboards 606, and the like.
In a specific embodiment, the payment apparatus includes a memory and one or more programs, where the one or more programs are stored in the memory, the one or more programs may include one or more modules, each module may include a series of computer-executable instructions for the payment apparatus, and the one or more programs, configured to be executed by one or more processors, include computer-executable instructions for performing the following:
acquiring first face image information of a target user;
extracting feature information from the first face image information, where the feature information includes: user head pose information and/or user gaze information;
judging, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay;
if the judgment result is yes, completing a payment operation based on a face recognition function.
In one or more embodiments of this specification, the face image information of the target user is captured, the required feature information is extracted from it, whether the target user is willing to pay is identified based on that feature information, and it is then determined whether to start the payment function; one or more embodiments of this specification thereby not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid the problem of user funds being stolen or deducted by mistake, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under the premise of weak user interaction.
Optionally, when the computer-executable instructions are executed, completing the payment operation based on the face recognition function includes:
triggering execution of a face-swiping payment start operation, to acquire second face image information based on the face recognition function;
judging whether feature information extracted from the second face image information indicates a willingness to pay;
if the judgment result is yes, triggering execution of a face-swiping payment confirmation operation, to complete payment based on the payment account information corresponding to the target user.
Optionally, when the computer-executable instructions are executed, judging whether the feature information extracted from the second face image information indicates a willingness to pay includes:
judging whether the current user corresponding to the second face image information is consistent with the target user;
if consistent, judging whether the feature information extracted from the second face image information indicates a willingness to pay.
Optionally, when the computer-executable instructions are executed, extracting feature information from the first face image information includes:
determining the user head pose information of the target user by using the pre-trained head pose recognition model and based on the first face image information, where the user head pose information includes: rotation angles in preset directions;
and/or,
determining the user gaze information of the target user by using the pre-trained gaze information recognition model and based on the eye region features in the first face image information, where the user gaze information includes at least one of: the probability that the user gazes at the payment screen and the probability that the user does not gaze at the payment screen.
Optionally, when the computer-executable instructions are executed, the head pose recognition model is trained in the following manner:
acquiring a first sample data set, where the first sample data set includes a plurality of first sample data, each of the first sample data including a correspondence between a sample face image and head pose information;
determining mean image data and variance image data of the plurality of sample face images;
for each of the first sample data, preprocessing the sample face image contained in that first sample data based on the mean image data and the variance image data, to obtain a preprocessed sample face image;
determining the preprocessed sample face images and the corresponding head pose information as the final first model training samples;
training the head pose recognition model by using a machine learning method and based on the plurality of first model training samples.
Optionally, when the computer-executable instructions are executed, the gaze information recognition model is trained in the following manner:
acquiring a second sample data set, where the second sample data set includes a plurality of second sample data, each of the second sample data including a correspondence between a sample eye image and gaze information;
determining mean image data and variance image data of the plurality of sample eye images;
for each of the second sample data, preprocessing the sample eye image contained in that second sample data based on the mean image data and the variance image data, to obtain a preprocessed sample eye image;
determining the preprocessed sample eye images and the corresponding gaze information as the final second model training samples;
training the gaze information recognition model by using a machine learning method and based on the plurality of second model training samples.
Optionally, when the computer-executable instructions are executed, judging, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay includes:
if the feature information includes user head pose information and user gaze information, judging whether the rotation angle in each preset direction is smaller than the preset angle threshold, and judging whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
if all judgment results are yes, determining that the target user has a willingness to pay;
or,
if the feature information includes user head pose information, judging whether the rotation angle in each preset direction is smaller than the preset angle threshold;
if the judgment result is yes, determining that the target user has a willingness to pay;
or,
if the feature information includes user gaze information, judging whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
if the judgment result is yes, determining that the target user has a willingness to pay.
The payment apparatus of one or more embodiments of this specification acquires first face image information of the target user; extracts feature information from the first face image information; judges, according to the feature information, whether the target user has a willingness to pay; and if the judgment result is yes, completes the payment operation based on the face recognition function. By capturing the face image information of the target user, extracting the required feature information from it, identifying whether the target user is willing to pay based on that feature information, and then determining whether to start the payment function, one or more embodiments of this specification not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid the problem of user funds being stolen or deducted by mistake, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under the premise of weak user interaction.
It should be noted that the embodiments concerning the payment apparatus in this specification and the embodiments concerning the payment method in this specification are based on the same inventive concept; therefore, for the specific implementation of these embodiments, reference may be made to the implementation of the corresponding payment method described above, and repeated parts are not described again.
Further, corresponding to the methods shown in FIG. 1 to FIG. 4 above and based on the same technical concept, one or more embodiments of this specification further provide a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like; when the computer-executable instructions stored on the storage medium are executed by a processor, the following flow can be implemented:
acquiring first face image information of a target user;
extracting feature information from the first face image information, where the feature information includes: user head pose information and/or user gaze information;
judging, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay;
if the judgment result is yes, completing a payment operation based on a face recognition function.
In one or more embodiments of this specification, the face image information of the target user is captured, the required feature information is extracted from it, whether the target user is willing to pay is identified based on that feature information, and it is then determined whether to start the payment function; one or more embodiments of this specification thereby not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid the problem of user funds being stolen or deducted by mistake, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under the premise of weak user interaction.
Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, completing the payment operation based on the face recognition function includes:
triggering execution of a face-swiping payment start operation, to acquire second face image information based on the face recognition function;
judging whether feature information extracted from the second face image information indicates a willingness to pay;
if the judgment result is yes, triggering execution of a face-swiping payment confirmation operation, to complete payment based on the payment account information corresponding to the target user.
Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, judging whether the feature information extracted from the second face image information indicates a willingness to pay includes:
judging whether the current user corresponding to the second face image information is consistent with the target user;
if consistent, judging whether the feature information extracted from the second face image information indicates a willingness to pay.
Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, extracting feature information from the first face image information includes:
determining the user head pose information of the target user by using the pre-trained head pose recognition model and based on the first face image information, where the user head pose information includes: rotation angles in preset directions;
and/or,
determining the user gaze information of the target user by using the pre-trained gaze information recognition model and based on the eye region features in the first face image information, where the user gaze information includes at least one of: the probability that the user gazes at the payment screen and the probability that the user does not gaze at the payment screen.
Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, the head pose recognition model is trained in the following manner:
acquiring a first sample data set, where the first sample data set includes a plurality of first sample data, each of the first sample data including a correspondence between a sample face image and head pose information;
determining mean image data and variance image data of the plurality of sample face images;
for each of the first sample data, preprocessing the sample face image contained in that first sample data based on the mean image data and the variance image data, to obtain a preprocessed sample face image;
determining the preprocessed sample face images and the corresponding head pose information as the final first model training samples;
training the head pose recognition model by using a machine learning method and based on the plurality of first model training samples.
Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, the gaze information recognition model is trained in the following manner:
acquiring a second sample data set, where the second sample data set includes a plurality of second sample data, each of the second sample data including a correspondence between a sample eye image and gaze information;
determining mean image data and variance image data of the plurality of sample eye images;
for each of the second sample data, preprocessing the sample eye image contained in that second sample data based on the mean image data and the variance image data, to obtain a preprocessed sample eye image;
determining the preprocessed sample eye images and the corresponding gaze information as the final second model training samples;
training the gaze information recognition model by using a machine learning method and based on the plurality of second model training samples.
Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, judging, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay includes:
if the feature information includes user head pose information and user gaze information, judging whether the rotation angle in each preset direction is smaller than the preset angle threshold, and judging whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
if all judgment results are yes, determining that the target user has a willingness to pay;
or,
if the feature information includes user head pose information, judging whether the rotation angle in each preset direction is smaller than the preset angle threshold;
if the judgment result is yes, determining that the target user has a willingness to pay;
or,
if the feature information includes user gaze information, judging whether the probability that the user gazes at the payment screen is larger than the preset probability threshold;
if the judgment result is yes, determining that the target user has a willingness to pay.
When the computer-executable instructions stored on the storage medium of one or more embodiments of this specification are executed by a processor, first face image information of the target user is acquired; feature information is extracted from the first face image information; whether the target user has a willingness to pay is judged according to the feature information; and if the judgment result is yes, the payment operation is completed based on the face recognition function. By capturing the face image information of the target user, extracting the required feature information from it, identifying whether the target user is willing to pay based on that feature information, and then determining whether to start the payment function, one or more embodiments of this specification not only simplify the user's interaction steps but also improve the accuracy of payment recognition, avoid the problem of user funds being stolen or deducted by mistake, improve the security of user funds, and ensure accurate identification of the user's willingness to pay under the premise of weak user interaction.
It should be noted that the embodiments concerning the storage medium in this specification and the embodiments concerning the payment method in this specification are based on the same inventive concept; therefore, for the specific implementation of these embodiments, reference may be made to the implementation of the corresponding payment method described above, and repeated parts are not described again.
Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). However, with the development of technology, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with a hardware entity module. For example, a programmable logic device (PLD) (for example, a field programmable gate array (FPGA)) is such an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program on their own to "integrate" a digital system onto a single PLD, without needing to ask a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must also be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by slightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, besides implementing the controller purely as computer-readable program code, it is entirely possible to logic-program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for implementing various functions may also be regarded as structures within the hardware component. Or even, the means for implementing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, device, module, or unit set forth in the above embodiments may specifically be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above device is described by dividing its functions into various units. Of course, when implementing one or more embodiments of this specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
One or more embodiments of this specification are described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to one or more embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in computer-readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include" and "comprise", or any other variants thereof, are intended to cover non-exclusive inclusion, such that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
One or more embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. One or more embodiments of this specification may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network. In distributed computing environments, program modules may be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, it is described relatively simply; for relevant parts, reference may be made to the description of the method embodiment.
The above descriptions are merely one or more embodiments of this specification and are not intended to limit this specification. For those skilled in the art, one or more embodiments of this specification may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall fall within the scope of the claims of this specification.

Claims (16)

  1. A payment method, comprising:
    acquiring first face image information of a target user;
    extracting feature information from the first face image information, wherein the feature information comprises: user head pose information and/or user gaze information;
    judging, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay;
    if a judgment result is yes, completing a payment operation based on a face recognition function.
  2. The method according to claim 1, wherein the completing a payment operation based on a face recognition function comprises:
    triggering execution of a face-swiping payment start operation, to acquire second face image information based on the face recognition function;
    judging whether feature information extracted from the second face image information indicates a willingness to pay;
    if a judgment result is yes, triggering execution of a face-swiping payment confirmation operation, to complete payment based on payment account information corresponding to the target user.
  3. The method according to claim 2, wherein the judging whether feature information extracted from the second face image information indicates a willingness to pay comprises:
    judging whether a current user corresponding to the second face image information is consistent with the target user;
    if consistent, judging whether the feature information extracted from the second face image information indicates a willingness to pay.
  4. The method according to claim 1, wherein the extracting feature information from the first face image information comprises:
    determining the user head pose information of the target user by using a pre-trained head pose recognition model and based on the first face image information, wherein the user head pose information comprises: rotation angles in preset directions;
    and/or,
    determining the user gaze information of the target user by using a pre-trained gaze information recognition model and based on eye region features in the first face image information, wherein the user gaze information comprises at least one of: a probability that the user gazes at a payment screen and a probability that the user does not gaze at the payment screen.
  5. The method according to claim 4, wherein the head pose recognition model is trained in the following manner:
    acquiring a first sample data set, wherein the first sample data set comprises a plurality of first sample data, each of the first sample data comprising a correspondence between a sample face image and head pose information;
    determining mean image data and variance image data of a plurality of the sample face images;
    for each of the first sample data, preprocessing the sample face image contained in the first sample data based on the mean image data and the variance image data, to obtain a preprocessed sample face image;
    determining the preprocessed sample face image and the corresponding head pose information as a final first model training sample;
    training the head pose recognition model by using a machine learning method and based on a plurality of the first model training samples.
  6. The method according to claim 4, wherein the gaze information recognition model is trained in the following manner:
    acquiring a second sample data set, wherein the second sample data set comprises a plurality of second sample data, each of the second sample data comprising a correspondence between a sample eye image and gaze information;
    determining mean image data and variance image data of a plurality of the sample eye images;
    for each of the second sample data, preprocessing the sample eye image contained in the second sample data based on the mean image data and the variance image data, to obtain a preprocessed sample eye image;
    determining the preprocessed sample eye image and the corresponding gaze information as a final second model training sample;
    training the gaze information recognition model by using a machine learning method and based on a plurality of the second model training samples.
  7. The method according to claim 4, wherein the judging, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay comprises:
    if the feature information comprises user head pose information and user gaze information, judging whether the rotation angle in each preset direction is smaller than a preset angle threshold, and judging whether the probability that the user gazes at the payment screen is larger than a preset probability threshold;
    if all judgment results are yes, determining that the target user has a willingness to pay;
    or,
    if the feature information comprises user head pose information, judging whether the rotation angle in each preset direction is smaller than a preset angle threshold;
    if a judgment result is yes, determining that the target user has a willingness to pay;
    or,
    if the feature information comprises user gaze information, judging whether the probability that the user gazes at the payment screen is larger than a preset probability threshold;
    if a judgment result is yes, determining that the target user has a willingness to pay.
  8. A payment device, comprising:
    a face image acquisition module, configured to acquire first face image information of a target user;
    a key feature extraction module, configured to extract feature information from the first face image information, wherein the feature information comprises: user head pose information and/or user gaze information;
    a payment willingness judgment module, configured to judge, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay;
    a payment trigger module, configured to: if a judgment result is yes, complete a payment operation based on a face recognition function.
  9. The device according to claim 8, wherein the payment trigger module is specifically configured to:
    trigger execution of a face-swiping payment start operation, to acquire second face image information based on the face recognition function;
    judge whether feature information extracted from the second face image information indicates a willingness to pay;
    if a judgment result is yes, trigger execution of a face-swiping payment confirmation operation, to complete payment based on payment account information corresponding to the target user.
  10. The device according to claim 9, wherein the payment trigger module is further specifically configured to:
    judge whether a current user corresponding to the second face image information is consistent with the target user;
    if consistent, judge whether the feature information extracted from the second face image information indicates a willingness to pay.
  11. The device according to claim 8, wherein the key feature extraction module is specifically configured to:
    determine the user head pose information of the target user by using a pre-trained head pose recognition model and based on the first face image information, wherein the user head pose information comprises: rotation angles in preset directions;
    and/or,
    determine the user gaze information of the target user by using a pre-trained gaze information recognition model and based on eye region features in the first face image information, wherein the user gaze information comprises at least one of: a probability that the user gazes at a payment screen and a probability that the user does not gaze at the payment screen.
  12. The device according to claim 11, wherein the head pose recognition model is trained in the following manner:
    acquiring a first sample data set, wherein the first sample data set comprises a plurality of first sample data, each of the first sample data comprising a correspondence between a sample face image and head pose information;
    determining mean image data and variance image data of a plurality of the sample face images;
    for each of the first sample data, preprocessing the sample face image contained in the first sample data based on the mean image data and the variance image data, to obtain a preprocessed sample face image;
    determining the preprocessed sample face image and the corresponding head pose information as a final first model training sample;
    training the head pose recognition model by using a machine learning method and based on a plurality of the first model training samples.
  13. The device according to claim 11, wherein the gaze information recognition model is trained in the following manner:
    acquiring a second sample data set, wherein the second sample data set comprises a plurality of second sample data, each of the second sample data comprising a correspondence between a sample eye image and gaze information;
    determining mean image data and variance image data of a plurality of the sample eye images;
    for each of the second sample data, preprocessing the sample eye image contained in the second sample data based on the mean image data and the variance image data, to obtain a preprocessed sample eye image;
    determining the preprocessed sample eye image and the corresponding gaze information as a final second model training sample;
    training the gaze information recognition model by using a machine learning method and based on a plurality of the second model training samples.
  14. The device according to claim 11, wherein the payment willingness judgment module is specifically configured to:
    if the feature information comprises user head pose information and user gaze information, judge whether the rotation angle in each preset direction is smaller than a preset angle threshold, and judge whether the probability that the user gazes at the payment screen is larger than a preset probability threshold;
    if all judgment results are yes, determine that the target user has a willingness to pay;
    or,
    if the feature information comprises user head pose information, judge whether the rotation angle in each preset direction is smaller than a preset angle threshold;
    if a judgment result is yes, determine that the target user has a willingness to pay;
    or,
    if the feature information comprises user gaze information, judge whether the probability that the user gazes at the payment screen is larger than a preset probability threshold;
    if a judgment result is yes, determine that the target user has a willingness to pay.
  15. A payment apparatus, comprising:
    a processor; and
    a memory arranged to store computer-executable instructions, wherein the executable instructions, when executed, cause the processor to:
    acquire first face image information of a target user;
    extract feature information from the first face image information, wherein the feature information comprises: user head pose information and/or user gaze information;
    judge, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay;
    if a judgment result is yes, complete a payment operation based on a face recognition function.
  16. A storage medium for storing computer-executable instructions, wherein the executable instructions, when executed by a processor, implement the following method:
    acquiring first face image information of a target user;
    extracting feature information from the first face image information, wherein the feature information comprises: user head pose information and/or user gaze information;
    judging, according to the user head pose information and/or the user gaze information, whether the target user has a willingness to pay;
    if a judgment result is yes, completing a payment operation based on a face recognition function.
PCT/CN2020/071363 2019-08-16 2020-01-10 Payment method and device WO2021031522A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/888,817 US11263634B2 (en) 2019-08-16 2020-05-31 Payment method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910758703.3 2019-08-16
CN201910758703.3A CN110570200B (zh) 2019-08-16 2019-08-16 Payment method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/888,817 Continuation US11263634B2 (en) 2019-08-16 2020-05-31 Payment method and device

Publications (1)

Publication Number Publication Date
WO2021031522A1 true WO2021031522A1 (zh) 2021-02-25

Family

ID=68775665

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071363 2019-08-16 2020-01-10 Payment method and device

Country Status (2)

Country Link
CN (2) CN110570200B (zh)
WO (1) WO2021031522A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495384A (zh) * 2023-11-07 2024-02-02 广州准捷电子科技有限公司 KTV face-swiping payment method based on AI face recognition technology

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570200B (zh) * 2019-08-16 2020-08-25 阿里巴巴集团控股有限公司 Payment method and device
US11263634B2 (en) 2019-08-16 2022-03-01 Advanced New Technologies Co., Ltd. Payment method and device
CN111160251B (zh) * 2019-12-30 2023-05-02 支付宝实验室(新加坡)有限公司 Liveness recognition method and device
CN111382691A (zh) * 2020-03-05 2020-07-07 甄十信息科技(上海)有限公司 Method for page-turning of on-screen content, and mobile terminal
CN111291737B (zh) * 2020-05-09 2020-08-28 支付宝(杭州)信息技术有限公司 Face image acquisition method and device, and electronic apparatus
CN111292092B (zh) * 2020-05-09 2020-12-04 支付宝(杭州)信息技术有限公司 Face-swiping payment method and device, and electronic apparatus
CN111539740B (zh) * 2020-05-15 2022-11-18 支付宝(杭州)信息技术有限公司 Payment method, device and apparatus
CN112215084A (zh) * 2020-09-17 2021-01-12 中国银联股份有限公司 Recognition object determination method, device, apparatus and storage medium
CN112116355A (zh) * 2020-09-18 2020-12-22 支付宝(杭州)信息技术有限公司 Method, system and device for confirming whether a payment is completed based on willingness recognition
CN112396004B (zh) * 2020-11-23 2023-06-09 支付宝(杭州)信息技术有限公司 Method, device and computer-readable storage medium for face recognition
US11803831B1 (en) 2020-12-02 2023-10-31 Wells Fargo Bank, N.A. Systems and methods for utilizing a user expression map configured to enable contactless human to device interactions
US11587055B1 (en) 2020-12-02 2023-02-21 Wells Fargo Bank, N.A. Systems and methods for generating a user expression map configured to enable contactless human to device interactions
CN112560768A (zh) * 2020-12-25 2021-03-26 深圳市商汤科技有限公司 Gate passage control method and device, computer apparatus and storage medium
CN112580553A (zh) * 2020-12-25 2021-03-30 深圳市商汤科技有限公司 Switch control method and device, computer apparatus and storage medium
CN112560775A (zh) * 2020-12-25 2021-03-26 深圳市商汤科技有限公司 Switch control method and device, computer apparatus and storage medium
CN112734437B (zh) * 2021-01-11 2022-08-16 支付宝(杭州)信息技术有限公司 Face-swiping payment method and device
CN117078241A (zh) * 2021-05-27 2023-11-17 支付宝(杭州)信息技术有限公司 Payment processing method and device
CN113516481B (zh) * 2021-08-20 2024-05-14 支付宝(杭州)信息技术有限公司 Method and device for confirming face-swiping willingness, and face-swiping apparatus
CN114187628A (zh) * 2021-11-24 2022-03-15 支付宝(杭州)信息技术有限公司 Privacy-protection-based identity authentication method, device and apparatus
CN114511909A (zh) * 2022-02-25 2022-05-17 支付宝(杭州)信息技术有限公司 Face-swiping payment willingness recognition method, device and apparatus
CN114898431A (zh) * 2022-05-10 2022-08-12 支付宝(杭州)信息技术有限公司 Face-swiping payment willingness recognition method, device and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615159A (zh) * 2018-05-03 2018-10-02 百度在线网络技术(北京)有限公司 Access control method and device based on gaze point detection
CN109409894A (zh) * 2018-09-20 2019-03-01 百度在线网络技术(北京)有限公司 Control method, device, apparatus and storage medium for face payment
CN109461003A (zh) * 2018-11-30 2019-03-12 阿里巴巴集团控股有限公司 Multi-view-based risk prevention and control method and device for face-swiping payment in multi-face scenarios
CN109711827A (zh) * 2018-12-27 2019-05-03 武汉市天蝎科技有限公司 New retail payment method and payment system for a near-eye display device
US20190180364A1 (en) * 2017-12-13 2019-06-13 Acorns Grow Incorporated Method and system for efficient switching of direct deposit payment destination account
CN110570200A (zh) * 2019-08-16 2019-12-13 阿里巴巴集团控股有限公司 Payment method and device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215891B1 (en) * 1997-03-26 2001-04-10 Oki Electric Industry Co., Ltd. Eye image recognition method eye image selection method and system therefor
HK1160574A2 (en) * 2012-04-13 2012-07-13 King Hei Francis Kwong Secure electronic payment system and process
CN103824068B (zh) * 2014-03-19 2018-06-01 上海看看智能科技有限公司 Face payment authentication system and method
KR101676782B1 (ko) * 2015-07-10 2016-11-21 주식회사 엑스큐어넷 Document security management system using MDM, mobile virtualization, and short-range and angle-identifying facial recognition technologies
CN105184553B (zh) * 2015-09-06 2019-01-22 宁波大学 Cinema mobile payment method based on near-field communication
US20180075443A1 (en) * 2016-09-09 2018-03-15 Hans-Peter Fischer Mobile Payment for Goods and Services
CN108009465B (zh) * 2016-10-31 2021-08-27 杭州海康威视数字技术股份有限公司 Face recognition method and device
CN106803829A (zh) * 2017-03-30 2017-06-06 北京七鑫易维信息技术有限公司 Authentication method, device and system
US10762335B2 (en) * 2017-05-16 2020-09-01 Apple Inc. Attention detection
EP3724813A1 (en) * 2017-12-13 2020-10-21 Humanising Autonomy Limited Systems and methods for predicting pedestrian intent
CN108053218A (zh) * 2017-12-29 2018-05-18 宁波大学 Secure mobile payment method
CN108460599B (zh) * 2018-01-30 2021-03-23 维沃移动通信有限公司 Mobile payment method and mobile terminal
CN109905595B (zh) * 2018-06-20 2021-07-06 成都市喜爱科技有限公司 Method, device, apparatus and medium for shooting and playback
CN208969723U (zh) * 2018-09-30 2019-06-11 深圳市伊世科技有限公司 Facial recognition payment device
CN109583348A (zh) * 2018-11-22 2019-04-05 阿里巴巴集团控股有限公司 Face recognition method, device, apparatus and system
CN110008673B (zh) * 2019-03-06 2022-02-18 创新先进技术有限公司 Identity authentication method and device based on face recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190180364A1 (en) * 2017-12-13 2019-06-13 Acorns Grow Incorporated Method and system for efficient switching of direct deposit payment destination account
CN108615159A (zh) * 2018-05-03 2018-10-02 百度在线网络技术(北京)有限公司 Access control method and device based on gaze point detection
CN109409894A (zh) * 2018-09-20 2019-03-01 百度在线网络技术(北京)有限公司 Control method, device, apparatus and storage medium for face payment
CN109461003A (zh) * 2018-11-30 2019-03-12 阿里巴巴集团控股有限公司 Multi-view-based risk prevention and control method and device for face-swiping payment in multi-face scenarios
CN109711827A (zh) * 2018-12-27 2019-05-03 武汉市天蝎科技有限公司 New retail payment method and payment system for a near-eye display device
CN110570200A (zh) * 2019-08-16 2019-12-13 阿里巴巴集团控股有限公司 Payment method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495384A (zh) * 2023-11-07 2024-02-02 广州准捷电子科技有限公司 KTV face-swiping payment method based on AI face recognition technology
CN117495384B (zh) * 2023-11-07 2024-04-26 广州准捷电子科技有限公司 KTV face-swiping payment method based on AI face recognition technology

Also Published As

Publication number Publication date
CN112258193A (zh) 2021-01-22
CN110570200B (zh) 2020-08-25
CN112258193B (zh) 2024-01-30
CN110570200A (zh) 2019-12-13

Similar Documents

Publication Publication Date Title
WO2021031522A1 (zh) Payment method and device
US11514430B2 (en) User interfaces for transfer accounts
US11100498B2 (en) User interfaces for transfer accounts
US10913463B2 (en) Gesture based control of autonomous vehicles
US11263634B2 (en) Payment method and device
TWI753271B (zh) Resource transfer method, device and system
CN111539740B (zh) Payment method, device and apparatus
EP3543936A1 (en) Systems and methods for translating a gesture to initiate a financial transaction
CN111292092B (zh) Face-swiping payment method and device, and electronic apparatus
KR102092931B1 (ko) Gaze tracking method and user terminal for performing the same
TW202006630A (zh) Payment method, device and system
TWI743427B (zh) Data processing method, terminal device and data processing system
Gwon et al. Robust eye and pupil detection method for gaze tracking
CN116034334A (zh) User input interface
EP4163854A1 (en) Systems and methods for conducting remote user authentication
CN110598555B (zh) Image processing method, device and apparatus
US11250242B2 (en) Eye tracking method and user terminal performing same
CN112200070B (zh) User recognition and service processing method, device, apparatus and medium
CN114511909A (zh) Face-swiping payment willingness recognition method, device and apparatus
CN110807395A (zh) Information interaction method, device and apparatus based on user behavior

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20853721

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20853721

Country of ref document: EP

Kind code of ref document: A1