CN111553706A - Face brushing payment method, device and equipment - Google Patents

Face brushing payment method, device and equipment

Info

Publication number
CN111553706A
Authority
CN
China
Prior art keywords
information
voice
voice information
preset
face image
Prior art date
Legal status
Pending
Application number
CN202010661783.3A
Other languages
Chinese (zh)
Inventor
方硕
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010661783.3A
Publication of CN111553706A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiments of this specification disclose a face-brushing payment method, device, and equipment. The scheme includes the following steps: acquiring face image information; acquiring voice information that meets a preset condition, the voice information being collected during the face-brushing payment service; determining, from an account database, first account information corresponding to the face image information; extracting voiceprint feature information from the voice information; judging whether the voiceprint feature information is consistent with registered voiceprint feature information corresponding to the first account information, to obtain a first judgment result; and when the first judgment result shows that the voiceprint feature information is consistent with the registered voiceprint feature information corresponding to the first account information, completing the face-brushing payment service based on the first account information.

Description

Face brushing payment method, device and equipment
Technical Field
The application relates to the technical field of computers, in particular to a face brushing payment method, device and equipment.
Background
As face-brushing payment products become increasingly popular, users can enjoy the convenience of paying by face without entering a mobile phone number. However, although face-brushing payment is convenient, it carries potential safety hazards. For example, when paying at a self-service checkout terminal in a supermarket or shopping mall, the image acquisition device may capture the face of a user who is queuing behind or merely passing by. The collected face image is then not that of the paying user, a mistaken face swipe occurs, and the payment account of the bystander suffers a loss.
Therefore, a method for face-brushing payment with higher security is needed to improve the user experience.
Disclosure of Invention
The embodiments of this specification provide a face-brushing payment method, device, and equipment, aiming to solve the problems of low security and poor user experience in existing face-brushing payment methods.
In order to solve the above technical problem, the embodiments of the present specification are implemented as follows:
in a first aspect, a method for face-brushing payment provided in an embodiment of the present specification includes:
acquiring face image information;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
determining first account information corresponding to the face image information from an account database;
extracting voiceprint characteristic information of the voice information;
judging whether the voiceprint characteristic information is consistent with registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judgment result;
and when the first judgment result shows that the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information, finishing the face brushing payment service based on the first account information.
In a second aspect, a face-brushing payment method provided in an embodiment of the present specification includes:
acquiring face image information;
forwarding the facial image information to a server;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
forwarding the voice information to the server;
obtaining a payment result fed back by the server based on the face image information and the voice information;
and outputting payment result information according to the payment result.
In a third aspect, a face brushing payment device provided in an embodiment of the present specification includes:
the face image information acquisition module is used for acquiring face image information;
the voice information acquisition module is used for acquiring voice information meeting preset conditions, wherein the voice information is acquired in the process of carrying out face-brushing payment service;
the first account information determining module is used for determining first account information corresponding to the face image information from an account database;
the voiceprint characteristic information extraction module is used for extracting voiceprint characteristic information of the voice information;
the first result judging module is used for judging whether the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judging result;
and the service payment module is used for finishing the face brushing payment service based on the first account information when the first judgment result shows that the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information.
In a fourth aspect, a face brushing payment device provided in an embodiment of the present specification includes:
the face image information acquisition module is used for acquiring face image information;
the face image information forwarding module is used for forwarding the face image information to a server;
the voice information acquisition module is used for acquiring voice information meeting preset conditions, wherein the voice information is acquired in the process of carrying out face-brushing payment service;
the voice information forwarding module is used for forwarding the voice information to the server;
the payment result acquisition module is used for acquiring a payment result based on the face image information and the voice information fed back by the server;
and the payment result output module is used for outputting payment result information according to the payment result.
In a fifth aspect, a face-brushing payment device provided in an embodiment of the present specification includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring face image information;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
determining first account information corresponding to the face image information from an account database;
extracting voiceprint characteristic information of the voice information;
judging whether the voiceprint characteristic information is consistent with registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judgment result;
and when the first judgment result shows that the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information, finishing the face brushing payment service based on the first account information.
In a sixth aspect, a face payment device provided in an embodiment of the present specification includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring face image information;
forwarding the facial image information to a server;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
forwarding the voice information to the server;
obtaining a payment result fed back by the server based on the face image information and the voice information;
and outputting payment result information according to the payment result.
In a seventh aspect, embodiments of the present specification provide a computer-readable medium having computer-readable instructions stored thereon, where the computer-readable instructions are executable by a processor to implement a face-brushing payment method.
One embodiment of this specification achieves the following advantageous effects: by collecting both face image information and voice information and jointly determining the target payment account from them, the occurrence of mistaken face swipes is reduced, the security of face-brushing payment is improved, and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is an overall flowchart of a face-brushing payment method provided in an embodiment of the present specification;
fig. 2 is a schematic flowchart of a face-brushing payment method executed at a face-brushing payment terminal according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a face payment device corresponding to fig. 1 provided in an embodiment of the present specification;
fig. 4 is a schematic structural diagram of a face brushing payment device corresponding to fig. 2 provided in an embodiment of the present specification;
fig. 5 is a schematic structural diagram of a face-brushing payment device provided in an embodiment of the present specification.
Detailed Description
To make the objects, technical solutions and advantages of one or more embodiments of the present disclosure more apparent, the technical solutions of one or more embodiments of the present disclosure will be described in detail and completely with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present specification, and not all embodiments. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without making any creative effort fall within the protection scope of one or more embodiments of the present disclosure.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
While face-brushing payment brings great convenience, it also carries potential safety hazards. For example, when face-brushing payment is used at a self-service checkout terminal in a supermarket or shopping mall, the environment is complex and many people are moving around, so a mistaken face swipe easily occurs, that is, face image information of a user who has no intention to pay is acquired. If payment is made according to that face image information, another person's property is lost. Even if the collected face image information is displayed to the paying user for confirmation, the paying user may, for various reasons, confirm a mistakenly captured image, and the loss still occurs. If secondary verification such as entering a mobile phone number is added, the convenience of face-brushing payment is lost and the user experience suffers. Therefore, existing face-brushing methods carry a significant potential safety hazard, and a face-brushing payment method with high security and good user experience needs to be provided.
To solve the problems of existing face-brushing payment methods, this scheme adds a voice recognition operation on top of the original face image acquisition: voice recognition is used to determine whether the account corresponding to the voice and the account corresponding to the face image information are the same account; payment proceeds if they are the same and is refused if they are different. In the database that stores payment accounts, each payment account needs to have the user's voice information reserved in advance. In this way, the user's intention to pay is determined by combining the face image information with the voice information; obtaining information from different sources improves payment accuracy and makes face-brushing payment more secure for the user.
In order to solve the defects in the prior art, the scheme provides the following embodiments:
fig. 1 is an overall flowchart of a face-brushing payment method provided in an embodiment of the present specification. From the viewpoint of a program, the execution subject of the flow may be a program installed in an application server or an application client. From the perspective of the execution subject, the execution subject of the method shown in fig. 1 may be a server of a face-brushing payment terminal, and may also be the face-brushing payment terminal. When the execution main body is the face-brushing payment terminal, the account database is stored in the face-brushing payment terminal and local identification is adopted; and when the execution main body is a server, the account database is stored in the server and is identified by a cloud terminal.
As shown in fig. 1, the process may include the following steps:
step 102: and acquiring face image information, wherein the face image information is acquired in the process of carrying out face brushing payment business.
Step 104: and acquiring voice information meeting preset conditions, wherein the voice information is acquired in the process of carrying out face brushing payment service.
The voice information obtained in this step meets the preset condition. The preset condition may specify the acquisition timing, require the voice information to contain specific syllables, or require the voice information to reach a certain volume.
It should be noted that step 104 and step 102 have no chronological order, and may be executed simultaneously, or step 104 may be executed first, or step 102 may be executed first. In addition, step 104 may be performed at any time during the face-brushing payment transaction.
Step 106: and determining first account information corresponding to the face image information from an account database.
It should be noted that the account database includes a plurality of pieces of registered account information, and each piece of registered account information includes a reserved face image. The acquired face image information can be compared with the reserved face image information in each piece of registered account information, and the registered account information whose reserved face image satisfies the matching condition is determined as the first account information.
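For illustration only, the lookup described above can be sketched as follows, assuming that face images have already been converted into fixed-length embedding vectors by some face-recognition model; the database layout and the similarity threshold are assumptions of this sketch, not part of the disclosure.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def find_first_account(face_embedding: np.ndarray,
                       account_db: dict,
                       threshold: float = 0.8):
    """Return the ID of the registered account whose reserved face embedding
    best matches the captured face embedding, or None if no account clears
    the (illustrative) threshold.

    `account_db` maps account_id -> {"face": np.ndarray, "voiceprint": np.ndarray}.
    """
    best_id, best_score = None, threshold
    for account_id, record in account_db.items():
        score = cosine_similarity(face_embedding, record["face"])
        if score >= best_score:
            best_id, best_score = account_id, score
    return best_id
```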
Step 108: and extracting the voiceprint characteristic information of the voice information.
A voiceprint is not only specific to a person but also relatively stable. After adulthood, a person's voice remains relatively stable for a long time. Experiments show that whether a speaker deliberately imitates another person's voice and tone or speaks in a whisper, the speaker's own voiceprint remains distinct even when the imitation is vivid. Based on these two characteristics, a person can be identified from voiceprint information. Therefore, after the voice information is acquired, this embodiment extracts the voiceprint feature information of the voice information.
Step 110: and judging whether the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information or not, and obtaining a first judgment result.
In the scheme, the registered account information in the account database can also comprise registered voiceprint feature information besides the reserved facial image information. Here, the voiceprint feature information may be extracted from voice information previously entered by the user.
In step 110, the voiceprint feature information is compared only with the registered voiceprint feature information corresponding to the first account information, and it is determined whether the degree of similarity between the two meets a preset threshold. There is no need to compare the voiceprint feature information against the registered voiceprint feature information of every account in the account database to determine second account information and then compare that second account information with the first account information. The method and device perform only one voiceprint comparison, which saves resources.
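A minimal sketch of this single comparison, under the same assumption that voiceprints are stored as embedding vectors; the similarity measure and threshold are illustrative, not prescribed by the disclosure.

```python
import numpy as np

def voiceprints_match(captured: np.ndarray,
                      registered: np.ndarray,
                      threshold: float = 0.75) -> bool:
    """First judgment result: True when the similarity between the captured
    voiceprint and the registered voiceprint of the first account meets the
    preset threshold.  Only this one comparison is performed; the account
    database is not scanned a second time."""
    similarity = float(np.dot(captured, registered) /
                       (np.linalg.norm(captured) * np.linalg.norm(registered) + 1e-12))
    return similarity >= threshold
```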
Step 112: and when the first judgment result shows that the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information, finishing the face brushing payment service based on the first account information.
When the first judgment result is yes, the voiceprint feature information of the acquired voice information is consistent with the registered voiceprint feature information of the first account information, that is, the acquired face image information and the voice information belong to the same person. This reduces the occurrence of mistaken face swipes, improves the security of face-brushing payment, and improves the user experience.
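Putting steps 102 to 112 together, one possible end-to-end sketch reuses the hypothetical helpers above, with `pay_backend` standing in for whatever settlement interface the terminal or server actually uses.

```python
def face_payment_flow(face_embedding, voice_embedding, account_db, pay_backend) -> str:
    """Sketch of steps 102-112: look up the account by face, compare the
    captured voiceprint with the registered one, then either complete or
    refuse the face-brushing payment service."""
    account_id = find_first_account(face_embedding, account_db)      # step 106
    if account_id is None:
        return "refused: no account matches the face image"
    registered = account_db[account_id]["voiceprint"]
    if voiceprints_match(voice_embedding, registered):               # step 110
        pay_backend.charge(account_id)                               # step 112 (hypothetical backend)
        return f"paid with account {account_id}"
    return "refused: voiceprint does not match the first account"
```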
It should be understood that the order of some steps in the method described in one or more embodiments of the present disclosure may be interchanged according to actual needs, or some steps may be omitted or deleted.
Based on the process of fig. 1, some specific embodiments of the process are also provided in the examples of this specification, which are described below.
The following three cases, distinguished by the point in the face-brushing payment service at which the voice information is collected, are described in detail below.
In the first case:
the acquiring of the voice information meeting the preset condition may specifically include: and acquiring voice information matched with the preset awakening voice.
In this embodiment, the acquired voice information is voice information that matches a preset wake-up voice. This voice information can be used both to help determine the account information of the user swiping their face and to wake up the face-brushing terminal to start the face image acquisition program. Since the wake-up voice is actively uttered by the face-brushing payment user, it indicates that this user is ready for face image acquisition, for example, is already standing in the image acquisition area. Face image information of users who have no intention to pay is therefore unlikely to be acquired.
Specifically, the acquiring of the voice information matched with the preset wake-up voice may specifically include:
receiving first voice information;
judging whether the first voice information is a preset awakening voice or not to obtain a second judgment result, wherein the preset awakening voice is a preset voice for triggering the face brushing payment service;
the acquiring of the face image information may specifically include:
and when the second judgment result shows that the first voice information is the preset awakening voice, acquiring the face image information.
In this embodiment, the execution subject is a face-brushing payment terminal in which an account database is stored. When the step of receiving the first voice information is executed, the face-brushing payment terminal may be configured to continuously receive sound information and determine whether each piece of sound information is the preset wake-up voice. If so, it starts the face image acquisition function and thus acquires the face image information.
The preset wake-up voice may include phrases related to face-brushing payment, such as 'face brushing payment', 'face scanning payment', or 'pay by face', and may also be specific sentences, for example prompting the user to read aloud text displayed on the display of the face-brushing payment terminal.
In addition, to avoid wasting resources, the sound acquisition function may be started only after the user selects face-brushing payment at the face-brushing payment terminal; the sound information is then collected and checked against the wake-up voice. In this way the first voice information is acquired in a targeted manner and the amount of sound information to be processed is reduced.
In addition, a time period for acquiring the sound information may be set, for example, a set time period after the face brushing payment is selected, such as 1 minute, 5 minutes, or the like.
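A small sketch of such a bounded listening window; `record_chunk` and `is_wake_voice` are placeholders for the terminal's audio capture and wake-voice matching, and the default window length is only an example.

```python
import time

def listen_for_wake_voice(record_chunk, is_wake_voice, window_seconds: float = 60.0):
    """Poll for sound only during a limited window after the user selects
    face-brushing payment, so that unrelated sound is not processed.

    `record_chunk()` returns one short audio chunk or None (placeholder);
    `is_wake_voice(chunk)` decides whether it matches the preset wake-up
    voice (placeholder).  Returns the matching chunk, or None on timeout."""
    deadline = time.monotonic() + window_seconds
    while time.monotonic() < deadline:
        chunk = record_chunk()
        if chunk is not None and is_wake_voice(chunk):
            return chunk
        time.sleep(0.1)  # avoid a tight busy-wait between chunks
    return None
```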
When the main execution body of the "acquiring the voice information matched with the preset wake-up voice" is the server, the method may specifically include:
and acquiring second voice information sent by the terminal, wherein the second voice information is determined by the terminal based on the received voice information and is matched with the preset awakening voice.
In this embodiment, the second voice message is a voice message conforming to a preset wake-up voice determined by the terminal. The terminal can analyze each piece of acquired sound information in real time, determine whether the sound information is the awakening voice, and if the sound information is the awakening voice, determine the sound information as second voice information and send the second voice information to the server. In addition, the terminal can also acquire all sound information in a preset time period, then determines second voice information matched with the preset awakening voice from the sound information, and then sends the second voice information to the server.
In some cases, if there are multiple pieces of sound information matching the preset wake-up voice, they may be further filtered according to other conditions, for example by comparing sound intensity and selecting the loudest as the second voice information. The greater the sound intensity, the closer the speaker is to the face-brushing payment terminal, and the more likely the sound comes from the user who intends to swipe their face.
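One simple, purely illustrative way to compare sound intensity is RMS energy over the raw samples:

```python
import numpy as np

def loudest_candidate(candidates):
    """Given several audio signals (1-D sample arrays) that all matched the
    preset wake-up voice, keep the one with the highest RMS energy, on the
    assumption that a louder speaker is closer to the payment terminal."""
    def rms(signal: np.ndarray) -> float:
        return float(np.sqrt(np.mean(np.square(signal.astype(np.float64)))))
    return max(candidates, key=rms)
```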
There are various methods for determining whether the first voice message is a preset wake-up voice, and two methods are specifically described below:
a: determining text information of the first voice information; and judging whether the text information is consistent with the text information of the preset awakening voice.
This method uses text-information comparison to determine whether the first voice information is the preset wake-up voice. If the text corresponding to the first voice information is 'face payment' and the preset wake-up voice is also related to 'face payment', the first voice information can be determined to match the preset wake-up voice.
B: determining the semantics of the first voice information by adopting a semantic recognition model; and judging whether the semantics are consistent with the semantics of the preset awakening voice.
The method adopts a semantic recognition model based on voice to determine the semantics of the first voice information, and if the recognized semantics is 'start face brushing payment', the semantics can be determined to be the same as the semantics of the preset awakening voice.
The semantic recognition model can adopt a neural network model and can also adopt other models.
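A minimal sketch of the two checks; the ASR transcriber and the semantic model are passed in as placeholder callables (no particular library is implied), and the wake texts and intent label are examples only.

```python
WAKE_TEXTS = {"face brushing payment", "face payment", "pay by face"}  # examples only

def is_wake_by_text(audio, asr_transcribe) -> bool:
    """Method A: transcribe the first voice information with an ASR engine
    (placeholder callable) and compare the text with the preset wake-up texts."""
    text = asr_transcribe(audio).strip().lower()
    return text in WAKE_TEXTS

def is_wake_by_semantics(audio, semantic_model, wake_intent: str = "start_face_payment") -> bool:
    """Method B: let a semantic recognition model (placeholder callable) map
    the utterance to an intent label and compare it with the intent of the
    preset wake-up voice."""
    return semantic_model(audio) == wake_intent
```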
In the second case:
The voice information may be acquired at other stages of the face-brushing payment service, for example at the stage of confirming the acquired face image information, where voice confirmation replaces key-press confirmation. In fact, any selection or confirmation operation that would normally be done with keys may instead be completed by voice.
Specifically, the acquiring of the voice information meeting the preset condition may include: and acquiring third voice information conforming to a first preset syllable, wherein the first preset syllable is a syllable used for representing the determination of the face image information.
The first preset syllable may be an affirmative syllable such as 'yes' or 'confirm', or another preset syllable. In addition, since too few syllables are not conducive to extracting voiceprint information, more syllables may be required: for example, 'yes' or 'confirm' may be read several times, or a specific text may be displayed on the screen that the user must read aloud as confirmation if the face image information is correct.
When the execution subject is a server, the acquiring the voice information meeting the preset condition may specifically include:
and acquiring third voice information which is sent by a terminal and accords with a first preset syllable, wherein the first preset syllable is a syllable used for representing the determination of the face image information.
When the execution subject is the face-brushing payment terminal, the acquiring of the voice information meeting the preset condition may specifically include:
displaying the face image information on a display screen, and prompting a user to perform voice confirmation on the face image information;
and acquiring third voice information conforming to a first preset syllable, wherein the first preset syllable is a syllable used for representing the determination of the face image information.
In the third case:
the acquiring of the voice information meeting the preset condition may specifically include: and acquiring fourth voice information conforming to a second preset syllable, wherein the second preset syllable is a syllable used for representing that the first account information is determined.
In this embodiment, the voice information is acquired at the stage of confirming the first account information. The only difference from the second case is the object being confirmed, so the details are not repeated here.
When the execution subject is a server, because the first account information is determined by the server, the first account information needs to be sent to the terminal for display, and the acquiring the voice information meeting the preset condition specifically may include:
sending the first account information to a terminal so that the terminal can display the first account information;
and acquiring fourth voice information which is sent by the terminal and accords with a second preset syllable, wherein the second preset syllable is a syllable used for indicating that the first account information is determined.
Wherein, when the execution subject is a face-brushing payment terminal, because the first account information is confirmed by using a local account database, the obtaining of the voice information meeting the preset condition may specifically include:
displaying the first account information on a display screen, and prompting a user to perform voice confirmation on the first account information;
and acquiring fourth voice information conforming to a second preset syllable, wherein the second preset syllable is a syllable used for representing that the first account information is determined.
It should be noted that, for the steps of 'obtaining third voice information conforming to the first preset syllable' and 'obtaining fourth voice information conforming to the second preset syllable', reference may be made to the steps for obtaining the voice information matched with the preset wake-up voice, which are not repeated here. The terms 'first', 'second', 'third' and 'fourth' are used only for distinction and carry no other meaning. The 'first preset syllable' and the 'second preset syllable' may be the same or different, and the 'third voice information' and the 'fourth voice information' may be the same or different.
Optionally, the method may further include:
and when the first judgment result shows that the voiceprint characteristic information is inconsistent with the registered voiceprint characteristic information corresponding to the first account information, rejecting a face-brushing payment service aiming at the first account information.
The foregoing embodiment describes only the case where the extracted voiceprint feature information is consistent with the registered voiceprint feature information corresponding to the first account information. In this embodiment, if the two are not consistent, the user corresponding to the acquired face image information did not intend to swipe their face, that is, a mistaken face swipe has occurred. The face-brushing payment service therefore needs to be refused to avoid causing property loss to others.
In addition, some embodiments of this specification further provide a method for extracting voiceprint feature information. A voiceprint cannot be observed as intuitively as an image; it may be visualized as a sound waveform or a spectrogram.
Specifically, the extracting of the voiceprint feature information of the speech information may include:
carrying out voice enhancement processing on the voice information;
extracting effective voice information of the processed voice information;
and extracting the voiceprint characteristic information of the effective voice information.
Since the face-brushing payment terminal operates in a complex environment, the obtained voice information is mixed with background sound or other noise. Speech enhancement therefore needs to be applied: the useful speech is amplified and the background sound is attenuated so that the effective speech can be extracted conveniently. The voiceprint features are then extracted from the effective speech, and various voiceprint feature extraction models may be used.
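A highly simplified, numpy-only sketch of the three stages (enhancement, effective-speech extraction, feature extraction); a real terminal would use a proper noise-suppression front end and a trained speaker-embedding model rather than these toy substitutes.

```python
import numpy as np

def enhance(signal: np.ndarray) -> np.ndarray:
    """Toy speech enhancement: remove the DC offset and normalise amplitude."""
    signal = signal.astype(np.float64) - np.mean(signal)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

def effective_speech(signal: np.ndarray, frame: int = 400, threshold: float = 0.05) -> np.ndarray:
    """Toy voice-activity detection: keep frames whose RMS energy exceeds a
    threshold, discarding frames that contain only background sound."""
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame + 1, frame)]
    kept = [f for f in frames if np.sqrt(np.mean(f ** 2)) > threshold]
    return np.concatenate(kept) if kept else signal

def voiceprint_features(signal: np.ndarray, dim: int = 64) -> np.ndarray:
    """Toy voiceprint: log-magnitude spectrum averaged into `dim` bins."""
    spectrum = np.abs(np.fft.rfft(signal))
    bins = np.array_split(spectrum, dim)
    return np.log1p(np.array([b.mean() if b.size else 0.0 for b in bins]))

def extract_voiceprint(raw_signal: np.ndarray) -> np.ndarray:
    """Enhancement -> effective speech -> voiceprint feature extraction."""
    return voiceprint_features(effective_speech(enhance(raw_signal)))
```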
Corresponding to the face-brushing payment method in which the execution subject is a server, this specification also provides a face-brushing payment method for the client (face-brushing payment terminal) that interacts with the server. In this embodiment, both the determination of the first account information and the voiceprint comparison are completed by the server. The client may not store an account database, or may store one but not use it for any operation. In one possible case, the client's account database contains less information than the server's, so the face image and voiceprint comparisons are performed against the server's account database to improve accuracy. In another possible case, the client's account database cannot, for some reason, support the face image and voiceprint comparisons. Other situations are not listed here.
Fig. 2 is a schematic flowchart of a face-brushing payment method executed at a face-brushing payment terminal according to an embodiment of the present disclosure, where an execution subject of the method is the face-brushing payment terminal, that is, a client corresponding to a server. As shown in fig. 2, the method includes:
step 202: and acquiring the face image information. The face image information is acquired in the process of carrying out face brushing payment business.
Step 204: and forwarding the facial image information to a server. The server can compare the reserved face image in the account database with the acquired face image information, so as to determine the account information corresponding to the face image information.
Step 206: and acquiring voice information meeting preset conditions, wherein the voice information is acquired in the process of carrying out face brushing payment service.
The voice information may be acquired at any stage of the face-brushing payment service, either before or after the face image information is acquired; the order of step 202 and step 206 is therefore not limited.
Step 208: and forwarding the voice information to the server. The server can compare the reserved voice print characteristic information in the account database with the acquired voice information, so as to determine whether the face image information is consistent with the account information corresponding to the voice information.
Step 210: and obtaining a payment result fed back by the server based on the face image information and the voice information. And if the face image information is consistent with the account information corresponding to the voice information, completing payment, and if the face image information is inconsistent with the account information corresponding to the voice information, refusing payment.
Step 212: and outputting payment result information according to the payment result. And after receiving the payment result, the client can display the payment result on the display screen.
Wherein, step 212 may specifically include the following steps:
when the payment result represents that the payment is successful, determining first account information in the payment result, wherein the first account information is determined based on the face image information and the voice information;
and displaying the first account information and the payment success information on a display screen.
If the payment is successful, payment success information can be displayed on the display screen, and the payment account can also be displayed. The first account information may be included in the payment result sent by the server, or the server may send it separately after the client sends a request to acquire the payment account information.
If the payment fails, there are many possible reasons, such as insufficient account amount, inconsistent account information corresponding to the face image information and the voice information, and the like. Therefore, the reason for the payment failure can be displayed at the same time of displaying the payment failure, so that the user can take corresponding remedial measures.
In the method shown in fig. 2, the client acquires the face image information and the voice information and sends both to the server to confirm the face-brushing account information. Confirming the account through two different types of information effectively reduces the rate of mistaken face swipes, improves the security of face-brushing payment, and thereby improves the user experience.
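A sketch of this client-side flow, with audio/image capture, display, and transport left as placeholders; the server URL and endpoint paths are invented for illustration and are not part of the disclosure.

```python
import json
from urllib import request

SERVER_URL = "http://payment-server.example/api"  # illustrative endpoint only

def post_json(path: str, payload: dict) -> dict:
    """Minimal JSON POST helper (placeholder transport)."""
    req = request.Request(SERVER_URL + path,
                          data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def run_terminal_payment(capture_face, capture_voice, display):
    """Steps 202-212: capture and forward the face image information, capture
    and forward the qualifying voice information, then display the payment
    result fed back by the server.  capture_face, capture_voice and display
    are hardware-dependent placeholder callables."""
    face = capture_face()                                   # step 202
    post_json("/face", {"face_image": face})                # step 204
    voice = capture_voice()                                 # step 206
    post_json("/voice", {"voice": voice})                   # step 208
    result = post_json("/result", {})                       # step 210
    if result.get("status") == "success":                   # step 212
        display(f"Paid with account {result.get('account_id')}")
    else:
        display(f"Payment refused: {result.get('reason', 'unknown')}")
```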
Optionally, the acquiring of the voice information meeting the preset condition may specifically include:
receiving first voice information;
judging whether the voice information is voice awakening voice or not to obtain a first judgment result, wherein the voice awakening voice is preset voice for triggering the face brushing payment service;
the acquiring of the face image information specifically includes:
when the first judgment result shows that the voice information is the voice awakening voice, acquiring face image information;
the forwarding the voice information to the server may specifically include:
and forwarding the first voice information to the server.
Corresponding to the first case, since the execution subject is the client (face-brushing payment terminal), a voice wake-up step can be placed before face image collection, so that the face image collection mode starts only when the user speaks the specified phrase. In this mode the user knowingly submits to face image collection, so the collected face image information very likely belongs to a user who intends to swipe their face.
Optionally, the acquiring of the voice information meeting the preset condition may specifically include:
displaying the face image information on a display screen;
displaying first prompt information on the display screen, wherein the first prompt information is used for inquiring whether the face image information is the face image information of a user to be paid;
after the first prompt message is displayed, third voice message which accords with a first preset syllable is obtained, wherein the first preset syllable is a syllable used for representing the determination of the face image message;
the forwarding the voice information to the server specifically includes:
forwarding the third voice information to the server.
Corresponding to the second case described above, the voice information is acquired when the face image information is confirmed. After the face image information is collected, it can be displayed on the display screen and the user is prompted to confirm it. Asking the user to confirm by voice serves two purposes: it keeps the user's hands free, and it provides voice information from which the face-brushing payment account can be further confirmed.
It should be noted that if no third voice information meeting the first preset condition is acquired within the preset time period, the face-brushing payment service may be terminated. This may happen because the user has not responded for a long time or has left the face-brushing payment terminal, or because the user noticed that the displayed face image is not their own and gave a negative response. Moreover, if the user responds negatively to the face image information, the face-brushing payment service can be terminated as soon as that voice information is received, without waiting for the preset time period to end.
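A sketch of that confirmation wait, terminating when the preset period expires and terminating immediately on a negative response; the utterance source and the confirm/negative classifiers are placeholders.

```python
import time

def wait_for_confirmation(next_utterance, is_confirm, is_negative,
                          timeout_seconds: float = 30.0) -> str:
    """Return 'confirmed' if voice matching the first preset syllable arrives
    within the preset period, otherwise 'terminated'.  A negative response
    ends the face-brushing payment service immediately, without waiting for
    the period to expire.  All three callables are placeholders."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        utterance = next_utterance()
        if utterance is None:
            time.sleep(0.1)  # nothing heard yet; keep waiting
            continue
        if is_negative(utterance):
            return "terminated"   # user indicated the image/account is not theirs
        if is_confirm(utterance):
            return "confirmed"
    return "terminated"           # no qualifying response within the preset period
```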
Optionally, the acquiring of the voice information meeting the preset condition may specifically include:
receiving first account information which is sent by the server and determined based on the face image information;
displaying the first account information on a display screen;
displaying second prompt information on the display screen, wherein the second prompt information is used for inquiring whether the first account information is account information of a user to be paid or not;
after the second prompt message is displayed, fourth voice message conforming to a second preset syllable is obtained, wherein the second preset syllable is a syllable used for representing that the first account message is determined;
the forwarding the voice information to the server may specifically include:
forwarding the fourth voice information to the server.
Corresponding to the third case, after determining the first account information according to the acquired face image information, the server may send the first account information to the terminal for displaying for the sake of security, so that the user can confirm the first account information. Other parts of the embodiment can refer to the description of relevant parts of other embodiments, and the description is not repeated here.
Based on the same idea, the embodiment of the present specification further provides a device corresponding to the above method. Fig. 3 is a schematic structural diagram of a face brushing payment device corresponding to fig. 1 provided in an embodiment of the present specification. As shown in fig. 3, the apparatus may include:
a face image information obtaining module 302, configured to obtain face image information;
a voice information obtaining module 304, configured to obtain voice information meeting a preset condition, where the voice information is obtained in a process of performing a face-brushing payment service;
a first account information determining module 306, configured to determine first account information corresponding to the facial image information from an account database;
a voiceprint feature information extraction module 308, configured to extract voiceprint feature information of the voice information;
a first result determining module 310, configured to determine whether the voiceprint feature information is consistent with the registered voiceprint feature information corresponding to the first account information, to obtain a first determination result;
and a service payment module 312, configured to, when the first determination result indicates that the voiceprint feature information is consistent with the registered voiceprint feature information corresponding to the first account information, complete the face-brushing payment service based on the first account information.
The apparatus of fig. 3 collects face image information and voice information and uses both jointly to determine the target payment account, which reduces mistaken face swipes, improves the security of face-brushing payment, and improves the user experience.
The examples of this specification also provide some specific embodiments of the process based on the apparatus of fig. 3, which is described below.
Optionally, the voice information obtaining module 304 may be specifically configured to obtain the voice information matched with the preset wake-up voice.
Optionally, the voice information obtaining module 304 may specifically include:
the first voice information receiving unit is used for receiving first voice information;
a second judging unit, configured to judge whether the first voice information is a preset wake-up voice, to obtain a second judgment result, where the preset wake-up voice is a preset voice for triggering the face-brushing payment service;
the face image information obtaining module may be specifically configured to obtain the face image information when the second determination result indicates that the first voice information is the preset wake-up voice.
Optionally, the voice information obtaining module 304 may be specifically configured to obtain second voice information sent by the terminal, where the second voice information is voice information that is determined by the terminal based on the received voice information and matches with a preset wake-up voice.
Optionally, the second determining unit may specifically include:
a text information determining subunit, configured to determine text information of the first speech information;
and the text information judging subunit is used for judging whether the text information is consistent with the text information of the preset awakening voice.
Optionally, the second determining unit may specifically include:
the semantic determining subunit is used for determining the semantics of the first voice information by adopting a semantic recognition model;
and the semantic judging subunit is used for judging whether the semantics are consistent with the semantics of the preset awakening voice.
Optionally, the voice information obtaining module 304 may be specifically configured to obtain third voice information that conforms to a first preset syllable, where the first preset syllable is a syllable used for representing that determination is performed on the face image information.
Optionally, the voice information obtaining module 304 may be specifically configured to: and acquiring fourth voice information conforming to a second preset syllable, wherein the second preset syllable is a syllable used for representing that the first account information is determined.
Optionally, the voiceprint feature information extracting module 308 may specifically include:
the voice enhancement processing unit is used for carrying out voice enhancement processing on the voice information;
the effective voice information extraction unit is used for extracting effective voice information of the processed voice information;
and the voiceprint characteristic information extraction unit is used for extracting the voiceprint characteristic information of the effective voice information.
Optionally, the apparatus may further include:
and the service rejection module is used for rejecting the face brushing payment service aiming at the first account information when the first judgment result shows that the voiceprint characteristic information is inconsistent with the registered voiceprint characteristic information corresponding to the first account information.
Based on the same idea, the embodiment of the present specification further provides a device corresponding to the above method. Fig. 4 is a schematic structural diagram of a face brushing payment device corresponding to fig. 2 provided in an embodiment of the present specification. As shown in fig. 4, the apparatus may include:
a face image information obtaining module 402, configured to obtain face image information;
a facial image information forwarding module 404, configured to forward the facial image information to a server;
a voice information obtaining module 406, configured to obtain voice information meeting a preset condition, where the voice information is obtained in a process of performing a face-brushing payment service;
a voice message forwarding module 408, configured to forward the voice message to the server;
a payment result obtaining module 410, configured to obtain a payment result based on the facial image information and the voice information, which is fed back by the server;
a payment result output module 412, configured to output payment result information according to the payment result.
The apparatus of fig. 4 collects face image information and voice information and uses both jointly to determine the target payment account, which reduces mistaken face swipes, improves the security of face-brushing payment, and improves the user experience.
The examples of this specification also provide some specific embodiments of the process based on the apparatus of fig. 4, which is described below.
Optionally, the voice information obtaining module 406 may specifically include:
the first voice information receiving unit is used for receiving first voice information;
the first result judging unit is used for judging whether the voice information is voice awakening voice or not to obtain a first judging result, wherein the voice awakening voice is preset voice used for triggering the face brushing payment service;
the facial image information obtaining module 402 may be specifically configured to collect facial image information when the first determination result indicates that the voice information is the voice wake-up voice;
the voice information forwarding module is specifically configured to forward the first voice information to the server.
Optionally, the voice information obtaining module 406 may specifically include:
the face image information display unit is used for displaying the face image information on a display screen;
the first prompt information display unit is used for displaying first prompt information on the display screen, and the first prompt information is used for inquiring whether the face image information is the face image information of the user to be paid;
a third voice information obtaining unit configured to obtain third voice information that conforms to a first preset syllable, where the first preset syllable is a syllable used for indicating that determination is performed on the face image information;
the voice information forwarding module 408 may be specifically configured to forward the third voice information to the server.
Optionally, the voice information obtaining module 406 may specifically include:
the first account information receiving unit is used for receiving first account information which is sent by the server and determined based on the face image information;
the first account information display unit is used for displaying the first account information on a display screen;
the second prompt information display unit is used for displaying second prompt information on the display screen, and the second prompt information is used for inquiring whether the first account information is account information of the user to be paid or not;
a fourth voice information obtaining unit, configured to obtain fourth voice information that conforms to a second preset syllable, where the second preset syllable is a syllable used for indicating that determination is performed on the first account information;
the voice information forwarding module 408 is specifically configured to forward the fourth voice information to the server.
Optionally, the payment result output module 412 may specifically include:
a first account information determination unit, configured to determine first account information in the payment result when the payment result indicates that payment is successful, where the first account information is determined based on the face image information and the voice information;
and the first account information and payment success information display unit is used for displaying the first account information and payment success information on a display screen.
Based on the same idea, the embodiment of the present specification further provides a device corresponding to the above method.
Fig. 5 is a schematic structural diagram of a face-brushing payment device provided in an embodiment of the present specification. As shown in fig. 5, the apparatus 500 may include:
at least one processor 510; and
a memory 530 communicatively coupled to the at least one processor; wherein
the memory 530 stores instructions 520 executable by the at least one processor 510, the instructions 520 being executable by the at least one processor 510 to enable the at least one processor 510 to:
acquiring face image information;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
determining first account information corresponding to the face image information from an account database;
extracting voiceprint characteristic information of the voice information;
judging whether the voiceprint characteristic information is consistent with registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judgment result;
when the first judgment result shows that the voiceprint feature information is consistent with the registered voiceprint feature information corresponding to the first account information, finishing the face brushing payment service based on the first account information;
alternatively, the instructions 520 are executable by the at least one processor 510 to enable the at least one processor 510 to:
acquiring face image information;
forwarding the facial image information to a server;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
forwarding the voice information to the server;
obtaining a payment result fed back by the server based on the face image information and the voice information;
and outputting payment result information according to the payment result.
The device in fig. 5 collects the face image information and the voice information, and determines the target payment account according to the face image information and the voice information, so that the occurrence of mistaken brushing is reduced, the safety of brushing the face payment is improved, and the user experience is improved.
Based on the same idea, the embodiment of the present specification further provides a computer-readable medium corresponding to the above method. The computer readable medium has computer readable instructions stored thereon that are executable by a processor to implement the method of:
acquiring face image information;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
determining first account information corresponding to the face image information from an account database;
extracting voiceprint characteristic information of the voice information;
judging whether the voiceprint characteristic information is consistent with registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judgment result;
when the first judgment result shows that the voiceprint feature information is consistent with the registered voiceprint feature information corresponding to the first account information, finishing the face brushing payment service based on the first account information;
alternatively, the computer-readable instructions may be executed by a processor to implement the following method:
acquiring face image information;
forwarding the facial image information to a server;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
forwarding the voice information to the server;
obtaining a payment result fed back by the server based on the face image information and the voice information;
and outputting payment result information according to the payment result.
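The terminal-side variant listed above, in which the terminal only captures and forwards data while the server performs the matching, can be pictured in the same style. Again, the transport callables and the payment-result handler are assumptions made for illustration, not part of the specification; output_result could, for example, be the output_payment_result sketch shown earlier.

from typing import Any, Callable

def terminal_face_payment(
    capture_face_image: Callable[[], Any],
    capture_qualifying_voice: Callable[[], Any],    # voice information meeting the preset condition
    forward_to_server: Callable[[str, Any], None],  # forwards a labelled payload to the server
    fetch_payment_result: Callable[[], dict],       # payment result fed back by the server
    output_result: Callable[[dict], None],          # outputs payment result information
) -> None:
    face_image = capture_face_image()
    forward_to_server("face_image", face_image)     # forward the face image information

    voice = capture_qualifying_voice()
    forward_to_server("voice", voice)               # forward the voice information

    # The server determines the payment result from the face image information
    # and the voice information; the terminal only outputs it.
    output_result(fetch_payment_result())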
The embodiments in the present specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device shown in Fig. 5 is substantially similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
In the 1990s, an improvement to a technology could be clearly distinguished as either an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented by a physical hardware module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to a software compiler used in program development, and the source code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320 microcontrollers; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as a random access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (29)

1. A face-brushing payment method comprising:
acquiring face image information;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
determining first account information corresponding to the face image information from an account database;
extracting voiceprint characteristic information of the voice information;
judging whether the voiceprint characteristic information is consistent with registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judgment result;
and when the first judgment result shows that the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information, finishing the face brushing payment service based on the first account information.
2. The method according to claim 1, wherein the acquiring of the voice information meeting the preset condition specifically includes:
and acquiring voice information matched with the preset awakening voice.
3. The method according to claim 2, wherein the acquiring the voice information matched with the preset wake-up voice specifically comprises:
receiving first voice information;
judging whether the first voice information is a preset awakening voice or not to obtain a second judgment result, wherein the preset awakening voice is a preset voice for triggering the face brushing payment service;
the acquiring of the face image information specifically includes:
and when the second judgment result shows that the first voice information is the preset awakening voice, acquiring the face image information.
4. The method according to claim 2, wherein the acquiring the voice information matched with the preset wake-up voice specifically comprises:
and acquiring second voice information sent by the terminal, wherein the second voice information is determined by the terminal based on the received voice information and is matched with the preset awakening voice.
5. The method according to claim 3, wherein the determining whether the first voice message is a preset wake-up voice specifically includes:
determining text information of the first voice information;
and judging whether the text information is consistent with the text information of the preset awakening voice.
6. The method according to claim 3, wherein the determining whether the first voice message is a preset wake-up voice specifically includes:
determining the semantics of the first voice information by adopting a semantic recognition model;
and judging whether the semantics are consistent with the semantics of the preset awakening voice.
7. The method according to claim 1, wherein the acquiring of the voice information meeting the preset condition specifically includes:
and acquiring third voice information conforming to a first preset syllable, wherein the first preset syllable is a syllable used for representing the determination of the face image information.
8. The method according to claim 1, wherein the acquiring of the voice information meeting the preset condition specifically includes:
and acquiring fourth voice information conforming to a second preset syllable, wherein the second preset syllable is a syllable used for representing that the first account information is determined.
9. The method according to claim 1, wherein the extracting voiceprint feature information of the speech information specifically includes:
carrying out voice enhancement processing on the voice information;
extracting effective voice information of the processed voice information;
and extracting the voiceprint characteristic information of the effective voice information.
10. The method of claim 1, further comprising:
and when the first judgment result shows that the voiceprint characteristic information is inconsistent with the registered voiceprint characteristic information corresponding to the first account information, rejecting a face-brushing payment service aiming at the first account information.
11. A face-brushing payment method comprising:
acquiring face image information;
forwarding the facial image information to a server;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
forwarding the voice information to the server;
obtaining a payment result fed back by the server based on the face image information and the voice information;
and outputting payment result information according to the payment result.
12. The method according to claim 11, wherein the acquiring of the voice information meeting the preset condition specifically includes:
receiving first voice information;
judging whether the voice information is voice awakening voice or not to obtain a first judgment result, wherein the voice awakening voice is preset voice for triggering the face brushing payment service;
the acquiring of the face image information specifically includes:
when the first judgment result shows that the voice information is the voice awakening voice, acquiring face image information;
the forwarding the voice information to the server specifically includes:
and forwarding the first voice information to the server.
13. The method according to claim 11, wherein the acquiring of the voice information meeting the preset condition specifically includes:
displaying the face image information on a display screen;
displaying first prompt information on the display screen, wherein the first prompt information is used for inquiring whether the face image information is the face image information of a user to be paid;
acquiring third voice information conforming to a first preset syllable, wherein the first preset syllable is a syllable used for representing the determination of the face image information;
the forwarding the voice information to the server specifically includes:
forwarding the third voice information to the server.
14. The method according to claim 11, wherein the acquiring of the voice information meeting the preset condition specifically includes:
receiving first account information which is sent by the server and determined based on the face image information;
displaying the first account information on a display screen;
displaying second prompt information on the display screen, wherein the second prompt information is used for inquiring whether the first account information is account information of a user to be paid or not;
acquiring fourth voice information conforming to a second preset syllable, wherein the second preset syllable is a syllable used for indicating that the first account information is determined;
the forwarding the voice information to the server specifically includes:
forwarding the fourth voice information to the server.
15. The method according to claim 11, wherein the outputting payment result information according to the payment result specifically includes:
when the payment result represents that the payment is successful, determining first account information in the payment result, wherein the first account information is determined based on the face image information and the voice information;
and displaying the first account information and the payment success information on a display screen.
16. A face-brushing payment device, comprising:
the face image information acquisition module is used for acquiring face image information;
the voice information acquisition module is used for acquiring voice information meeting preset conditions, wherein the voice information is acquired in the process of carrying out face-brushing payment service;
the first account information determining module is used for determining first account information corresponding to the face image information from an account database;
the voiceprint characteristic information extraction module is used for extracting voiceprint characteristic information of the voice information;
the first result judging module is used for judging whether the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judging result;
and the service payment module is used for finishing the face brushing payment service based on the first account information when the first judgment result shows that the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information.
17. The apparatus according to claim 16, wherein the voice information obtaining module is specifically configured to obtain the voice information matching a preset wake-up voice.
18. The apparatus according to claim 17, wherein the voice information obtaining module specifically includes:
the first voice information receiving unit is used for receiving first voice information;
a second judging unit, configured to judge whether the first voice information is a preset wake-up voice, to obtain a second judgment result, where the preset wake-up voice is a preset voice for triggering the face-brushing payment service;
the face image information obtaining module is specifically configured to obtain face image information when the second determination result indicates that the first voice information is the preset wake-up voice.
19. The apparatus according to claim 17, wherein the voice information obtaining module is specifically configured to obtain second voice information sent by the terminal, where the second voice information is voice information that is determined by the terminal based on the received voice information and matches a preset wake-up voice.
20. The apparatus according to claim 18, wherein the second determining unit specifically includes:
a text information determining subunit, configured to determine text information of the first speech information;
and the text information judging subunit is used for judging whether the text information is consistent with the text information of the preset awakening voice.
21. The apparatus according to claim 18, wherein the second determining unit specifically includes:
the semantic determining subunit is used for determining the semantics of the first voice information by adopting a semantic recognition model;
and the semantic judging subunit is used for judging whether the semantics are consistent with the semantics of the preset awakening voice.
22. A face-brushing payment device, comprising:
the face image information acquisition module is used for acquiring face image information;
the face image information forwarding module is used for forwarding the face image information to a server;
the voice information acquisition module is used for acquiring voice information meeting preset conditions, wherein the voice information is acquired in the process of carrying out face-brushing payment service;
the voice information forwarding module is used for forwarding the voice information to the server;
the payment result acquisition module is used for acquiring a payment result fed back by the server based on the face image information and the voice information;
and the payment result output module is used for outputting payment result information according to the payment result.
23. The apparatus according to claim 22, wherein the voice information obtaining module specifically includes:
the first voice information receiving unit is used for receiving first voice information;
the first result judging unit is used for judging whether the voice information is voice awakening voice or not to obtain a first judging result, wherein the voice awakening voice is preset voice used for triggering the face brushing payment service;
the face image information acquisition module is specifically used for acquiring face image information when the first judgment result shows that the voice information is the voice awakening voice;
the voice information forwarding module is specifically configured to forward the first voice information to the server.
24. The apparatus according to claim 22, wherein the voice information obtaining module specifically includes:
the face image information display unit is used for displaying the face image information on a display screen;
the first prompt information display unit is used for displaying first prompt information on the display screen, and the first prompt information is used for inquiring whether the face image information is the face image information of the user to be paid;
a third voice information obtaining unit configured to obtain third voice information that conforms to a first preset syllable, where the first preset syllable is a syllable used for indicating that determination is performed on the face image information;
the voice information forwarding module is specifically configured to forward the third voice information to the server.
25. The apparatus according to claim 22, wherein the voice information obtaining module specifically includes:
the first account information receiving unit is used for receiving first account information which is sent by the server and determined based on the face image information;
the first account information display unit is used for displaying the first account information on a display screen;
the second prompt information display unit is used for displaying second prompt information on the display screen, and the second prompt information is used for inquiring whether the first account information is account information of the user to be paid or not;
a fourth voice information obtaining unit, configured to obtain fourth voice information that conforms to a second preset syllable, where the second preset syllable is a syllable used for indicating that determination is performed on the first account information;
the voice information forwarding module is specifically configured to forward the fourth voice information to the server.
26. The apparatus according to claim 22, wherein the payment result output module specifically includes:
a first account information determination unit, configured to determine first account information in the payment result when the payment result indicates that payment is successful, where the first account information is determined based on the face image information and the voice information;
and the first account information and payment success information display unit is used for displaying the first account information and payment success information on a display screen.
27. A face-brushing payment device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring face image information;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
determining first account information corresponding to the face image information from an account database;
extracting voiceprint characteristic information of the voice information;
judging whether the voiceprint characteristic information is consistent with registered voiceprint characteristic information corresponding to the first account information or not to obtain a first judgment result;
and when the first judgment result shows that the voiceprint characteristic information is consistent with the registered voiceprint characteristic information corresponding to the first account information, finishing the face brushing payment service based on the first account information.
28. A face-brushing payment device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring face image information;
forwarding the facial image information to a server;
acquiring voice information meeting preset conditions, wherein the voice information is acquired in the face-brushing payment service process;
forwarding the voice information to the server;
obtaining a payment result fed back by the server based on the face image information and the voice information;
and outputting payment result information according to the payment result.
29. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the face-brushing payment method of any one of claims 1 to 15.
CN202010661783.3A 2020-07-10 2020-07-10 Face brushing payment method, device and equipment Pending CN111553706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010661783.3A CN111553706A (en) 2020-07-10 2020-07-10 Face brushing payment method, device and equipment

Publications (1)

Publication Number Publication Date
CN111553706A 2020-08-18

Family

ID=72002228

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206480042U (en) * 2016-11-02 2017-09-08 重庆中科云丛科技有限公司 Face payment system
CN108846676A (en) * 2018-08-02 2018-11-20 平安科技(深圳)有限公司 Biological characteristic assistant payment method, device, computer equipment and storage medium
CN109325742A (en) * 2018-09-26 2019-02-12 平安普惠企业管理有限公司 Business approval method, apparatus, computer equipment and storage medium
CN110362290A (en) * 2019-06-29 2019-10-22 华为技术有限公司 A kind of sound control method and relevant apparatus
CN110472980A (en) * 2019-08-19 2019-11-19 广州织点智能科技有限公司 A kind of brush face method of payment, device, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150740A (en) * 2020-09-10 2020-12-29 福建创识科技股份有限公司 Non-inductive secure payment system and method
CN113240428A (en) * 2021-05-27 2021-08-10 支付宝(杭州)信息技术有限公司 Payment processing method and device
CN114822554A (en) * 2022-04-28 2022-07-29 支付宝(杭州)信息技术有限公司 Interactive processing method and device based on voice
CN114822554B (en) * 2022-04-28 2022-11-22 支付宝(杭州)信息技术有限公司 Interactive processing method and device based on voice

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination