WO2019109153A2 - A mobile wallet solution for facilitating proximity payments - Google Patents


Info

Publication number
WO2019109153A2
Authority
WO
WIPO (PCT)
Prior art keywords
user
application
computing device
request
transaction
Prior art date
Application number
PCT/BN2018/050001
Other languages
French (fr)
Other versions
WO2019109153A3 (en)
Inventor
Mohammad Satria Alam Shah MOHD. SURIA
Original Assignee
Mohd Suria Mohammad Satria Alam Shah
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mohd Suria Mohammad Satria Alam Shah filed Critical Mohd Suria Mohammad Satria Alam Shah
Publication of WO2019109153A2 publication Critical patent/WO2019109153A2/en
Publication of WO2019109153A3 publication Critical patent/WO2019109153A3/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00: Payment architectures, schemes or protocols
    • G06Q20/30: Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32: Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/327: Short range or proximity payments by means of M-devices
    • G06Q20/3272: Short range or proximity payments by means of M-devices using an audio code
    • G06Q20/36: Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes
    • G06Q20/367: Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes involving electronic purses or money safes

Definitions

  • the invention is directed to systems and methods for improving user convenience in transactions by implementing a mobile wallet solution that facilitates electronic transactions based on data-over-audio and authorizes payments based on voice and face authentication.
  • Examples of the disclosure describe a mobile wallet solution that facilitates proximity payments.
  • a merchant uses a mobile device (iOS or Android) to send a payment request to a customer's mobile device (iOS or Android) in the form of either an audible, near-ultrasonic or ultrasonic audio signal that is transmitted by the speakers of the merchant's mobile device and picked up by the microphone of the customer's mobile device.
  • the customer's device with the mobile app pre-installed, then prompts the customer to either approve or reject the payment request.
  • the proximity payments are made using the mobile devices only without using additional hardware.
  • the proximity payments are facilitated by using voice and face authentication login without having to explicitly type the passwords for login.
  • FIG. 1 is an exemplary block diagram illustrating a mobile computing device with image capturing capabilities.
  • FIG. 2 is an exemplary block diagram illustrating a system for facilitating proximity payments between a merchant and a customer.
  • FIG. 3a illustrates an exemplary screenshot showing splash activity for a mobile application.
  • FIG. 3b illustrates an exemplary flowchart showing a method for splash activity for the mobile application.
  • FIGS. 4a, 4b, and 4c illustrate exemplary screenshots of the mobile application showing registration of the merchant or customer with a third party server for enabling the transaction through the mobile application.
  • FIGS. 4d, 4e, and 4f are parts of an exemplary flowchart illustrating an operation of registration of a merchant or customer with a third party server for enabling a transaction through the mobile application.
  • FIGS. 5a, 5b, 5c, and 5d illustrate exemplary screenshots of the mobile application showing login of the merchant or customer in the mobile application.
  • FIG. 5e is an exemplary flowchart illustrating an operation of login of the merchant or customer with the mobile application.
  • FIG. 6a illustrates an exemplary screenshot of the mobile application showing the main activities available to the merchant or customer in the mobile application.
  • FIG. 6b is an exemplary flowchart illustrating initiating main operations for transactions through the mobile application.
  • FIGS. 7a and 7b illustrate exemplary screenshots of the mobile application showing a merchant sending a request for a payment through the mobile application.
  • FIGS. 7c and 7d are parts of an exemplary flowchart illustrating an operation of sending a request for payment using audio signals from the mobile application executing on the merchant mobile device.
  • FIGS. 8a, 8b, 8c, and 8d illustrate exemplary screenshots of the mobile application showing making a payment in response to the payment request.
  • FIGS. 8e, 8f, and 8g are parts of an exemplary flowchart illustrating an operation of making a payment in response to the payment request.
  • FIG. 9a illustrates an exemplary screenshot of the mobile application showing a history of transactions made through the mobile application.
  • FIG. 9b is an exemplary flowchart illustrating an operation of determining the history of transactions made through the mobile application.
  • FIG. 10a illustrates an exemplary screenshot of the mobile application showing the start of the face enrolment operation.
  • FIG. 10b is an exemplary flowchart illustrating an operation of face enrolment through the mobile application for secure transactions.
  • FIGS. 10c, 10d, 10e, and 10f are parts of an exemplary flowchart illustrating an operation of face enrolment through the mobile application for secure transactions.
  • FIG. 11a illustrates an exemplary screenshot of the mobile application showing the start of the face authentication operation.
  • FIG. 11b is an exemplary flowchart illustrating an operation of face authentication of a user through the mobile application for secure transactions.
  • FIGS. 11c, 11d, 11e, and 11f are parts of an exemplary flowchart illustrating an operation of face authentication through the mobile application for secure transactions.
  • FIGS. 12a, 12b, and 12c are parts of another exemplary flowchart illustrating an operation of sending a request for a transaction using audio signals from the mobile application executing on the user mobile device.
  • FIGS. 13a, 13b, 13c, and 13d are parts of another exemplary flowchart illustrating an operation for performing the transaction based on the received request.
  • the invention is advantageous in that it provides a mobile application solution for automatic payment or conducting transactions via at least one of audible audio signals, near-ultrasonic audio signals or ultrasonic audio signals over existing infrastructure and devices.
  • the audible audio signals have a frequency range of 20 Hz to 16,999 Hz, the near-ultrasonic audio signals have a frequency range of 17,000 Hz to 19,999 Hz, and the ultrasonic audio signals have a frequency beyond 20,000 Hz.
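These band boundaries can be captured in a small helper function. The sketch below is illustrative only; the name `classify_signal` and the "infrasonic" label for frequencies below 20 Hz are assumptions, not part of the disclosure, while the cut-off values are taken directly from the ranges above.

```python
def classify_signal(freq_hz: float) -> str:
    """Classify a frequency into the bands used by the disclosure:
    audible (20 Hz - 16,999 Hz), near-ultrasonic (17,000 - 19,999 Hz),
    ultrasonic (20,000 Hz and beyond)."""
    if 20 <= freq_hz < 17_000:
        return "audible"
    if 17_000 <= freq_hz < 20_000:
        return "near-ultrasonic"
    if freq_hz >= 20_000:
        return "ultrasonic"
    return "infrasonic"  # below the audible range stated in the disclosure

print(classify_signal(18_500))  # near-ultrasonic
```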
  • a method comprises receiving a request, at an application executing on a computing device, from another proximity computing device, for payment to a merchant/other user having the proximity computing device.
  • the request comprises at least one of the audible audio signal, the near-ultrasonic audio signal or the ultrasonic audio signal.
  • the request includes a transaction number.
  • the mobile application decodes the request to extract the transaction number.
  • the method comprises retrieving the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number.
  • the method further comprises enabling the customer to make the payment or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
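The receive-decode-retrieve-respond steps of this method can be sketched end to end. Everything below is illustrative scaffolding: the `PAY:<transaction number>` message format, the in-memory `SERVER_DB` standing in for the third party server, and the function names are assumptions, not the disclosed API.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    number: str
    merchant: str
    amount: float

# Hypothetical in-memory stand-in for the third party server's storage.
SERVER_DB = {"TX-1001": Transaction("TX-1001", "Coffee Shop", 4.50)}

def decode_request(audio_payload: str) -> str:
    """Extract the transaction number from a demodulated audio message.
    The payload is assumed to already be text of the form 'PAY:<number>'."""
    prefix, _, tx_number = audio_payload.partition(":")
    if prefix != "PAY":
        raise ValueError("not a payment request")
    return tx_number

def fetch_transaction(tx_number: str) -> Transaction:
    """Stand-in for the request the app sends to the server for the details."""
    return SERVER_DB[tx_number]

def respond(tx: Transaction, accept: bool) -> str:
    """Send a payment acceptance or rejection request to the server (stubbed)."""
    return f"{'acceptPayment' if accept else 'rejectPayment'}:{tx.number}"

tx = fetch_transaction(decode_request("PAY:TX-1001"))
print(respond(tx, accept=True))  # acceptPayment:TX-1001
```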
  • a computing device comprising at least one processor, and at least one memory including computer program code for one or more computer programs, the at least one memory and the computer program code configured to, with the at least one processor, cause, at least in part, the computing device to receive a request, at the computing device, for payment to a merchant or other user, the request comprising at least one of the audible audio signals, the near-ultrasonic audio signals and the ultrasonic audio signals, the request including a transaction number.
  • the computing device is caused to retrieve the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number.
  • the computing device is caused to make the payment or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
  • a computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, the computing device to receive a request, at the computing device, for payment to a merchant or other user, the request including a transaction number.
  • the request comes in the form of at least one of the audible audio signals, the near-ultrasonic audio signals, and the ultrasonic audio signals.
  • the computing device is caused to retrieve the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number.
  • the computing device is caused to make the payment or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
  • a computing device comprises means to receive a request, at the computing device, for payment to a merchant or other user, the request comprising at least one of the audible audio signals, the near-ultrasonic audio signals and the ultrasonic audio signals, the request including a transaction number.
  • the computing device comprises means to retrieve the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number.
  • the computing device comprises means to make the payment or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
  • FIG. 1 is a diagram of exemplary components of a computing device (e.g., handset) for communications, which is capable of operating in the system of FIG. 2, according to one embodiment.
  • the computing device represents any device executing instructions (e.g., as application programs, operating system functionality, or both) to implement the operations and functionality described herein.
  • the computing device may include a mobile computing device or any other portable device associated with a user e.g. a customer or a merchant.
  • the computing device includes a mobile telephone, laptop, desktop computer, tablet, and computing pad.
  • the computing device has at least one processor.
  • the processor includes any quantity of processing units and is programmed to execute computer-executable instructions for implementing aspects of the disclosure.
  • the instructions may be performed by the processor or by multiple processors executing within the computing device, or performed by a processor external to the computing device.
  • the processor is programmed to execute instructions such as those illustrated in the figures.
  • the computing device includes one or more image sensors/cameras for capturing the images.
  • the computing device includes one or more computer-readable media such as the memory.
  • the memory may be internal or external to the computing device or both.
  • the memory stores, among other data, one or more applications and the image data.
  • the applications when executed by the processor, operate to perform functionality on the computing device. Exemplary applications include at least a mobile app for making the proximity transactions.
  • the applications may communicate with counterpart applications or services such as web services accessible via a network.
  • the applications may represent downloaded client-side applications that communicate with server-side services executing in the cloud.
  • the computing device may communicate with another device via a network.
  • Exemplary networks include wired and wireless networks.
  • Exemplary wireless networks include one or more of wireless fidelity (Wi-Fi) networks, BLUETOOTH brand networks, cellular networks, and satellite networks.
  • the other device is remote from the computing device.
  • the other device is local to the computing device.
  • the computing device constitutes other means for performing one or more steps of providing an audio message based payment system.
  • a radio receiver is often defined in terms of front-end and back-end characteristics.
  • the front-end of the receiver encompasses all of the Radio Frequency (RF) circuitry whereas the back-end encompasses all of the baseband processing circuitry.
  • RF Radio Frequency
  • circuitry refers to both: (1) hardware-only implementations (such as implementations in only analog and/or digital circuitry), and (2) to combinations of circuitry and software (and/or firmware) (such as, if applicable to the particular context, to a combination of processor(s), including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions).
  • This definition of “circuitry” applies to all uses of this term in this application, including in any claims.
  • the term “circuitry” would also cover an implementation of merely a processor (or multiple processors) and its (or their) accompanying software and/or firmware.
  • the term “circuitry” would also cover if applicable to the particular context, for example, a baseband integrated circuit or applications processor integrated circuit in a mobile phone or a similar integrated circuit in a cellular network device or other network devices.
  • Pertinent internal components of the telephone include a Main Control Unit (MCU) and a main display unit.
  • the main display unit provides a display to the user in support of various applications and mobile terminal functions that perform or support the steps of providing an audio token based payment system.
  • the display includes display circuitry configured to display at least a portion of a user interface of the mobile terminal (e.g., mobile telephone). Additionally, the display and display circuitry are configured to facilitate user control of at least some functions of the mobile terminal.
  • An audio function circuitry includes a microphone and microphone amplifier that amplifies the speech signal output from the microphone. The amplified speech signal output from the microphone is fed to a coder/decoder (CODEC).
  • a radio section amplifies power and converts frequency in order to communicate with a base station, which is included in a mobile communication system, via antenna.
  • the power amplifier (PA) and the transmitter/modulation circuitry are operationally responsive to the MCU, with an output from the PA coupled to the duplexer or circulator or antenna switch, as known in the art.
  • the PA also couples to a battery interface and power control unit.
  • In FIG. 2, an exemplary block diagram illustrates a system in which a mobile computing device makes transactions using the mobile application installed on the device.
  • the mobile computing device comprises a speaker, microphone, camera, and the mobile application for performing the proximity-based transactions.
  • the speaker of the merchant’s device is used for outputting audio signals.
  • the microphone of the customer’s device is used to receive the audio signals.
  • the third party API Server is operable to receive registration information from each mobile device, store registration information into a storage and send a list of registered users and devices to each mobile device.
  • the mobile computing devices may be coupled using one or more of transmitted audible audio signals, the near-ultrasonic audio signals, or the ultrasonic audio signals.
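One plausible way to couple the devices over audio is to map each digit of a reference number to its own near-ultrasonic tone and recover the digit by counting zero crossings. The disclosure does not specify a modulation scheme; the tone spacing, symbol length, and digit-per-tone mapping below are illustrative assumptions, not the disclosed encoding.

```python
import math

SAMPLE_RATE = 44_100    # samples per second
SYMBOL_SECONDS = 0.05   # duration of one digit tone (assumed)
BASE_FREQ = 17_000      # start of the near-ultrasonic band per the disclosure
STEP = 100              # hypothetical spacing between digit tones

def encode_digits(digits: str) -> list[float]:
    """Encode each decimal digit as a short near-ultrasonic tone
    (digit d -> 17,000 + 100*d Hz)."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    samples = []
    for d in digits:
        freq = BASE_FREQ + STEP * int(d)
        samples.extend(
            math.sin(2 * math.pi * freq * i / SAMPLE_RATE + 0.3)
            for i in range(n)
        )
    return samples

def decode_digits(samples: list[float]) -> str:
    """Recover the digits: zero crossings over time T approximate 2*f*T,
    so the per-symbol crossing count yields the tone frequency."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    out = []
    for start in range(0, len(samples), n):
        sym = samples[start:start + n]
        crossings = sum(1 for a, b in zip(sym, sym[1:]) if (a < 0) != (b < 0))
        freq = crossings / (2 * SYMBOL_SECONDS)
        out.append(str(round((freq - BASE_FREQ) / STEP)))
    return "".join(out)

print(decode_digits(encode_digits("58203")))  # 58203
```

With a 100 Hz step, a crossing-count error of a few counts shifts the frequency estimate by only tens of hertz, so rounding to the nearest digit stays robust.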
  • FIG. 3b shows a process flow for a splash activity.
  • Splash activity shows a screen that is displayed for a set time when the app is starting and after the set time period, the user is redirected to application login activity.
  • the splash activity starts with launching the payment application and a display screen as in FIG. 3a is displayed for a predefined interval of time showing the name of the application and the company associated with the application.
  • the login screen is displayed to the user where the user can start the login activity.
  • the splash activity ends with starting of the login activity.
  • Users can make the transactions by means of the mobile application running on the mobile device.
  • the mobile application provides the convenience of making the transaction by using the audible audio signals, the near ultrasonic audio signals, or the ultrasonic audio signals.
  • the mobile application provides security by employing voice and/or face recognition. Before being able to make the transactions using the mobile application, users register with the third party server.
  • FIGS. 4a, 4b, and 4c present registration interfaces for registering a new user in accordance with some embodiments.
  • the interface includes various fields in which the user enters identifying information.
  • the identifying information includes the email ID and country of the user.
  • a second registration interface corresponding to FIG. 4b is displayed.
  • the interface in FIG. 4b further includes various fields in which the user enters another set of identifying information.
  • the second set of identifying information also includes the user's name and the telephone number of the mobile device.
  • a third interface corresponding to FIG. 4c is displayed.
  • the third interface allows the user to capture a user identity document, wherein the user identity document comprises one or more images of an identity document provided by the user.
  • the user identity document comprises a photo identity document.
  • the user identity document comprises a short video. The short video is a recording of the user communicating in-person his/her intent to register.
  • FIGS. 4d, 4e, and 4f provide an exemplary flowchart for an online registration process.
  • the application window shows a 'LOGIN' button and a 'Sign Up' button.
  • the registration activity can be canceled by clicking the back button of the mobile device. On clicking the back button, the user is allowed to perform the login activity and the registration activity ends.
  • a registration form is displayed in which the user is allowed to fill in his or her personal information.
  • An error message is displayed if the form is submitted with the required fields not properly filled.
  • a registration request is sent to the server on submitting the form with all required fields properly filled.
  • the application waits for the response to the registration request.
  • An error message is displayed if the response does not meet the predefined requirements, and the user has to repeat the form submission steps by clicking the 'SUBMIT' button.
  • the user's identity document is captured and uploaded to the third party server on determining that the response satisfies the requirements.
  • a response is received based on the request.
  • the application verifies the response and if it is not ok, an error message is displayed.
  • One or more images of the identity document are captured and uploaded to the third party server again till the response is ok.
  • a short video of predefined duration (e.g. 15 seconds) is captured and uploaded to the server on determining that the response is ok.
  • the short video is a recording of the user communicating in-person his/her intent to register.
  • a response is received from the third party server on uploading the video and if the response is not ok, then an error message is displayed.
  • a short video is captured again and uploaded to the server until the received response of uploading the video is ok.
  • the identity of the user is verified by third party staff by checking that the received image or images of the one or more identity documents are images of a valid identity document or documents of the person in the video.
  • An enrolment button is displayed when the response received after uploading the video is ok.
  • the application stores at least the username/user ID for next activities on the mobile device.
  • the face and/or voice enrolment activity is performed on activation of the enrolment button and after successful completion of the face or voice enrolment activity, the registration activity ends.
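The retry-until-ok structure of the registration steps above (form submission, identity-document upload, video upload, then the enrolment button) can be sketched as a loop over the three uploads. `run_registration` and the `server_responses` feed are hypothetical test scaffolding, not the disclosed API.

```python
def run_registration(server_responses):
    """Walk the registration steps, repeating each submission until the
    server answers 'ok'. `server_responses` is a sequence of hypothetical
    server replies, letting the flow be exercised without a network."""
    replies = iter(server_responses)
    log = []
    for step in ("form", "id_document", "video"):
        while True:
            log.append(f"submit:{step}")
            if next(replies) == "ok":
                break
            log.append(f"error:{step}")  # error dialog shown, then retry
    log.append("show_enrolment_button")  # all uploads accepted
    return log

# One rejected ID-document upload, then success on retry:
print(run_registration(["ok", "fail", "ok", "ok"]))
```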
  • FIGS. 5a, 5b, and 5c present login interfaces for logging in to the mobile application in accordance with some embodiments.
  • the interface includes a field with user ID prompting the user to enter the user ID.
  • a second interface corresponding to FIG. 5b is displayed.
  • the interface in FIG. 5b displays that login is in progress and allows the user to cancel the login by pressing the 'CANCEL' button.
  • the interface in FIG. 5c is displayed when the user presses the back button of the mobile device and allows the user to logout from the mobile application.
  • FIG. 5d provides an exemplary flowchart for the login process.
  • the display screen after the launch of the application displays at least two options for the user interaction with the payment application.
  • the options include login button and sign up button.
  • the user can also interact with the application by using the mobile computing device back button.
  • the application waits for further processing until the user interacts with one of the options or the back button of the mobile computing device.
  • When the user presses the login button, the user is prompted to provide the user ID, which is sent to the third party service for validation.
  • a request is sent to the third party server to determine whether the user ID exists in the storage of the third party server.
  • An alert is displayed to the user which displays 'Logging in' and allows the user to cancel the request.
  • the request is canceled and the display screen displays the two options for user interaction with the payment application, i.e. the login button and the sign up button.
  • the application receives a response based on the request when the user does not cancel the request.
  • the application determines the user ID for next activities.
  • the next activity is face authentication. Based on the authentication of the user’s face, the user is allowed to carry out further operations with the application and the login activity ends after user is successfully authenticated using face authentication.
  • the user is allowed to register with the payment service. After the registration of the user with the service, the login activity ends. When the user does not want to login and presses the phone back button, then the login activity also ends.
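The login branches above (unknown ID leads to registration; a known ID is gated by face authentication before the main activity) reduce to a small decision function. The names below are illustrative assumptions, not taken from the disclosure.

```python
def login(user_id: str, known_users: set[str], face_ok: bool) -> str:
    """Sketch of the login decision: the ID is checked against the
    server's user store, then face authentication gates entry to the
    main activity."""
    if user_id not in known_users:
        return "start_registration"        # ID not found on the server
    if not face_ok:
        return "retry_face_authentication" # known ID, face check failed
    return "start_main_activity"           # authenticated successfully

print(login("alice", {"alice"}, face_ok=True))  # start_main_activity
```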
  • FIG. 6a presents an interface after the login for performing various transactions in accordance with some embodiments.
  • the interface includes three buttons: PAYMENT, REQUEST, and HISTORY.
  • On pressing the PAYMENT button, the user is redirected to the payment interface as explained below.
  • On pressing the REQUEST button, the user is redirected to the request interface as described below.
  • On pressing the HISTORY button, the user is redirected to the HISTORY interface as explained below.
  • FIG. 6b shows an exemplary process flow for main activities performed by the mobile application.
  • When the main activity is started, the username is retrieved from the previous activity.
  • the display screen shows multiple options for different sub-activities.
  • the displayed options include at least a payment button, a history button, and a request button.
  • the application waits for further processing until one of the displayed buttons is clicked or the back button of the mobile device is clicked.
  • When one of the displayed buttons is clicked, the corresponding activity is started. For example, on clicking the payment button, the payment activity is started; on clicking the request button, the request activity is started; on clicking the history button, the history activity is started.
  • After successful completion of the corresponding activity, the main activity ends.
  • the username is stored for next activity before starting the corresponding activity.
  • a logout dialog is displayed. Based on the user confirmation for logout, login activity is started.
  • the main activity ends.
  • FIGS. 7a and 7b present request interfaces for sending a payment request from the merchant mobile device to the customer mobile device in accordance with some embodiments.
  • the interface includes fields such as Invoice ID and amount.
  • the merchant has to provide the invoice ID and amount to be paid by the customer.
  • On pressing the 'SEND' button on the interface, a second interface corresponding to FIG. 7b is displayed.
  • the interface in FIG. 7b displays that the request process is in progress and that the mobile application is waiting for the reply in response to the request.
  • the interface in FIG. 7b allows the user to cancel the request by pressing the 'CANCEL' button.
  • FIGS. 7c and 7d show an exemplary process for making a request for payment from the merchant or a user to another user.
  • When the user/merchant starts the request activity for receiving payment from another user or customer, the username is retrieved from the previous activity. This is followed by initializing the items required for performing the request activity.
  • the application waits for user input for the further processing.
  • the user can click home button displayed on the displayed screen, click a back button of the mobile device, or click a send button for sending the request for obtaining the payment from the customer.
  • the application keeps the username for next activity, starts the main activity, and ends the payment activity.
  • the application verifies whether the amount entered by the user is numeric. If the user fails to enter a numeric amount, an error message is displayed and the process returns to the initialization step. On determining that the amount entered by the user is numeric, the application generates a reference number and sends a transaction request to the third party server. The application waits for the response; if the response is not OK, the request activity ends, or else a message in the form of an audio signal containing the reference number is created. If the created message is OK, the created message containing the reference number is sent to the proximity device and the third party server. If the created message is not OK, the request activity for the payment ends.
  • the application waits for some time and checks whether the user has clicked on the cancel button displayed on the display screen. If the user has pressed the cancel button, a request is sent to the server for canceling the transaction.
  • the application sends a query to the server for obtaining the status of the transaction.
  • the application waits for a response and extracts the status of the transaction from the received response.
  • the application checks the extracted status. If the status includes "unpaid" information, the process returns to the "wait for one second" step. If the status includes "paid" information, a success dialog is displayed and the process returns to the "initialize the necessary items" step. If the status includes "cancelled" or "rejected" information, a failure dialog is displayed and the process returns to the "initialize the necessary items" step.
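The merchant-side wait loop (query the status each second, keep waiting on "unpaid", branch to a success or failure dialog otherwise) can be sketched without a real network by feeding it a sequence of hypothetical server answers. `poll_transaction` and the `status_feed` parameter are illustrative names, not the disclosed API.

```python
def poll_transaction(status_feed) -> str:
    """Merchant-side polling sketch: each item of `status_feed` stands in
    for the server's answer to one status query, issued once per tick."""
    for status in status_feed:
        if status == "unpaid":
            continue  # wait one second, then query the server again
        if status == "paid":
            return "success_dialog"
        if status in ("cancelled", "rejected"):
            return "failure_dialog"
    return "timeout"  # feed exhausted without a final status

print(poll_transaction(["unpaid", "unpaid", "paid"]))  # success_dialog
```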
  • FIGS. 8a-8d present exemplary payment interfaces for making a payment by the customer on the customer mobile device application in response to the request from the merchant mobile device.
  • The interface in FIG. 8a is displayed when the user presses the 'PAYMENT' button on the main activity interface.
  • the interface in FIG. 8a includes fields such as Merchant, Receipt ID, and Amount.
  • the interface in FIG. 8a also displays the balance amount in the customer’s wallet.
  • FIG. 8a shows that the mobile application is in listening mode for receiving the payment request transmitted by the merchant mobile device.
  • the received payment request comes in the form of either an audible audio signal, a near-ultrasonic audio signal, or an ultrasonic audio signal.
  • the customer can either accept or reject the request for payment by pressing ACCEPT or REJECT buttons displayed on the interface in FIG. 8b.
  • the fields in FIG. 8b are automatically populated on receiving the valid signals transmitted by merchant mobile device.
  • the interface in FIG. 8b is displayed when the customer mobile application receives valid information from the merchant mobile device.
  • the interface in FIG. 8b displays a reducing time count by which the user can accept the payment request and beyond which the request for payment is automatically rejected.
  • the interfaces in FIG. 8c and FIG. 8d display the dialogs as to whether the payment is successful or rejected.
  • FIGS. 8e-8g describe a process flowchart for the payment activity.
  • the username is retrieved from the previous activity. This is followed by initializing the items required for performing the payment activity.
  • a request is sent to the third party server for obtaining the balance amount available in the user’s account.
  • the application waits for a response to the request and displays the balance amount in the user's account on receiving the response. After displaying the balance amount, the application is enabled to receive inputs from the user.
  • the application waits for user input for the further processing.
  • the user can click the home button displayed on the screen, click the back button of the mobile device, or let the application listen for an audio signal containing the transaction request. When the user clicks on either the home button or back button of the mobile device, the application stops listening for the audio signal.
  • the application then keeps the username for the next activity, starts the main activity and ends the payment activity.
  • the application stops listening.
  • the application extracts the transaction reference number from the received audio signal and sends a request to the third party server for obtaining the transaction details.
  • the application waits for the response and displays the transaction details based on the response received from the third party server.
  • the application starts a time counter on displaying the transaction details and waits for the user input for a predetermined interval of time for further processing. If the user input is not received within the predetermined interval of time, the transaction is rejected by sending a rejectPayment request to the third party server.
  • the application displays two buttons: one for accepting the transaction and the other for rejecting it.
  • if the reject transaction button is clicked, the transaction is rejected by sending a rejectPayment request to the third party server.
  • if the accept transaction button is clicked, then an acceptPayment request is sent to the third party server.
  • the application waits for the response of the rejectPayment request or the acceptPayment request.
  • the application sends a transaction status query to the third party server for obtaining the status of the transaction.
  • the application receives the transaction status for the transaction and displays a dialog based on the received status. If the status indicates that the transaction is successful, i.e. the status contains “paid”, a dialog box displaying the information for the success of the transaction is shown on the display screen. Based on the success of the transaction, the user is allowed to perform another transaction and the process repeats from the initialization step. If the status indicates that the transaction has failed, i.e. the status contains “failed” or “rejected”, a dialog box containing the information for the failure of the transaction is displayed to the user. Based on the failure or rejection of the transaction, the user is allowed to restart the transaction and the process repeats from the initialization step. The payment activity ends based on the user input that no further transactions are to be made.
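The status-to-dialog handling described above can be condensed into a small dispatch function. This is a sketch only: the status strings “paid”, “failed”, and “rejected” are taken from the description, while the dialog texts are placeholders, not the actual UI strings.

```python
def payment_dialog(status: str) -> str:
    """Choose the dialog to show for a transaction status from the server.

    "paid" -> success dialog (user may start another transaction);
    "failed"/"rejected" -> failure dialog (user may restart);
    anything else (e.g. "unpaid") -> keep waiting for a final status.
    """
    if status == "paid":
        return "Transaction successful"
    if status in ("failed", "rejected"):
        return "Transaction failed"
    return "Awaiting status"
```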
  • FIG. 9a presents an exemplary history interface for showing the various transactions made by the customer or merchant.
  • FIG. 9b shows an exemplary process for displaying the history of the previous transactions made by the user.
  • the application sends a request to the third party server to obtain the history of the previous transactions.
  • the application waits for some time to obtain the response from the server and displays the history of the previous transactions based on the received response.
  • the application waits for the user input for further processing.
  • the user can interact with the application by pressing a cancel button displayed on the screen or back button of the mobile device.
  • the application waits for further processing until the user presses either the home button or the back button of the device.
  • the application stores the username for the next activity and initiates the main activity.
  • the history activity ends on starting the main activity.
  • FIG. 10a presents an exemplary interface for showing the start of the face enrolment process.
  • FIG. 10b discloses an exemplary face enrolment activity using a mobile computing device. An input is received at the mobile computing device for starting the face enrolment activity. Based on the input, the username and user ID are retrieved from one or more previous activities performed by the user. The necessary items and data are initialized after retrieving the username and user ID. The mobile device waits for a trigger from the user. The face enrolment process is executed when the user presses the start button displayed on the mobile screen, or a registration activity is started when a back button of the mobile device is pressed. An error message or success message is displayed on the screen of the mobile device if the face enrolment process fails or succeeds, respectively. The user is allowed to perform a login activity based on the success of the face enrolment process.
  • FIGS. 10c-10f describe a detailed process flowchart for the face enrolment activity.
  • the process starts by getting a token from a third party API server to allow the enrolment process.
  • the face enrolment process comprises an initialization stage, getting a token from a third party API server, getting face samples and setting directions, uploading samples, and performing enrolment.
  • in the initialization step, all the arrows are hidden from the user.
  • the process after the initialization starts by getting a token from the third party API server to allow enrolment of face images.
  • a url (urlToken URL) is set.
  • the header is included with the identity of the application (i.e., appID) and a secret parameter associated with the application.
  • a request is sent to the third party API server for the purpose of getting the token.
  • An error message is displayed on the screen of the mobile device when the token is not received from the third party API server.
  • the user is allowed to perform the next step of getting face samples.
  • a predetermined time duration is allowed to elapse before capturing the face images.
  • a face detection is performed on each captured image.
  • a message “No face found” is displayed when the captured image does not include a face and a variable retryNum is incremented by 1.
  • the user retries and captures an image again. The same process is repeated for the face detection.
  • the user is allowed to retry and capture the images for face detection until the value of variable retryNum reaches a predetermined maximum value.
  • a message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the maximum value.
  • when the face is identified in the image, it is determined whether the image includes multiple faces. If the captured image includes multiple faces, a message “Multiple faces found” is displayed and the variable retryNum is incremented by 1. The user retries and captures the image again. The same process is repeated for the face detection. The user is allowed to retry and capture the images for face detection until the value of variable retryNum reaches a predetermined maximum value. A message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the maximum value.
  • the midpoint of the face is identified and compared with the midpoint of a face in a previously captured image.
  • a message “No motion detected” is displayed when the result of the comparison is true, i.e. the midpoint of the face in the current image matches the midpoint of the face in the previous image, and the variable retryNum is incremented by 1.
  • the user retries and captures image again. The same process is repeated for the face detection.
  • the user is allowed to retry and capture the images for face detection until the value of variable retryNum reaches a predetermined maximum value.
  • a message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the maximum value.
  • the current image sample is encoded to a base64 string based on the comparison.
  • if the result of the comparison is false, i.e. the midpoint of the face in the current image does not match the midpoint of the face in the previous image, the current image sample is encoded to a base64 string.
  • An enrolment image array is created and the current image is included in the array based on the comparison.
  • a value of variable sampleNum is increased by 1 to indicate the number of samples added in the array. If the number of image samples in the array is less than a predefined value, a direction challenge is determined. The next image is captured after a predefined time if the user does not perform the direction challenge.
  • the user is presented with direction sequence options for capturing the images.
  • the application selects a sequence randomly when an input is not received from the user in a predetermined time interval, and a direction is set as the current direction for capturing the image sample.
  • An arrow is shown according to the current direction and the image is captured accordingly.
  • a set of face samples (currently 4 images) is taken at different directions (up / down / right / left / center). The first face sample is always taken without direction.
  • the user is asked to follow an arrow for each sample.
  • the arrows are hidden when the number of samples is equal to a threshold number of samples.
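The capture-and-retry logic above (no face, multiple faces, no motion between consecutive midpoints, and a bounded retryNum) can be sketched as follows. The detector interface (a list of face midpoints per frame) and the retry bound are assumptions; the disclosure only names the messages and the retryNum counter.

```python
MAX_RETRIES = 3   # assumed bound; the disclosure leaves the value unspecified

def validate_sample(faces, prev_midpoint):
    """Apply the three checks described above to one captured frame.

    `faces` is a list of detected face midpoints (x, y); a real face
    detector would supply these. Returns an error message, or None
    when the frame is a usable sample.
    """
    if len(faces) == 0:
        return "No face found"
    if len(faces) > 1:
        return "Multiple faces found"
    if prev_midpoint is not None and faces[0] == prev_midpoint:
        return "No motion detected"     # liveness check against the previous frame
    return None

def capture_samples(frames, needed):
    """Collect up to `needed` valid samples, counting retries on failure."""
    retry_num, prev, accepted = 0, None, []
    for faces in frames:
        error = validate_sample(faces, prev)
        if error:
            retry_num += 1
            if retry_num >= MAX_RETRIES:
                return accepted, "Maximum number of retries reached"
            continue
        prev = faces[0]
        accepted.append(faces[0])
        if len(accepted) == needed:
            break
    return accepted, None
```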
  • each encoded face sample is uploaded to the third party API server.
  • An upload queue is used to upload the samples.
  • An upload URL is set, and authorization information and a token value are inserted in the header.
  • a post request is created to upload the image sample to the third party API server. If a success response is received, then the next image sample is uploaded by repeating the same process. If the uploading of the image sample fails, then a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
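A minimal sketch of the upload stage, assuming a stand-in `post` callable in place of the real HTTP POST (the actual endpoint and payload format are not disclosed); only the base64 encoding and the failed-upload counter come from the description.

```python
import base64

def upload_samples(samples, post):
    """Upload each encoded sample in order, counting failed uploads.

    `post` stands in for an HTTP POST to the third party API server and
    returns True on a success response. Uploads continue to the next
    sample whether or not the previous one succeeded, as described.
    """
    failed = 0
    # Build the upload queue of base64-encoded samples.
    queue = [base64.b64encode(s).decode("ascii") for s in samples]
    for encoded in queue:
        if not post(encoded):
            failed += 1     # record the failure and move on
    return failed

# Stand-in transport that always succeeds:
failures = upload_samples([b"sample-1", b"sample-2"], post=lambda payload: True)
```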
  • a request for face enrolment is sent to the third party API server.
  • a URL (URLEnroll) is set, and an authorization bearer and a token value are inserted in the header.
  • a request for enrolment is created and sent to the third party API server.
  • a response from the third party API server is received in response to the request.
  • An error message is displayed if the enrolment fails.
  • the user is allowed to perform the next or other activities if a success response is received from the third party API server.
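The two kinds of request headers used above can be sketched as plain dictionaries. The description states only that the token request carries the appID and a secret parameter, and that later requests carry an authorization bearer with the token; the field name `appSecret` is an assumption.

```python
def token_request_headers(app_id: str, app_secret: str) -> dict:
    """Headers for the token request sent to urlToken.

    `appID` is named in the description; `appSecret` is an assumed
    name for the secret parameter associated with the application.
    """
    return {"appID": app_id, "appSecret": app_secret}

def enrol_request_headers(token: str) -> dict:
    """Headers for subsequent requests (uploads, enrolment): the token
    obtained earlier is inserted as an authorization bearer."""
    return {"Authorization": "Bearer " + token}
```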
  • an exemplary voice enrolment activity is described in another embodiment using the mobile computing device.
  • An input is received at the mobile computing device for starting the voice enrolment activity.
  • the username and user ID are retrieved from one or more previous activities performed by the user.
  • the necessary items and data are initialized after retrieving the username and user ID.
  • the mobile device waits for a trigger from the user.
  • the voice enrolment process is executed when the user presses the start button displayed on the mobile screen or a registration activity is started when a back button of the mobile device is pressed.
  • An error message or success message is displayed on the screen of the mobile device if the voice enrolment process fails or succeeds, respectively.
  • the user is allowed to perform a login activity based on the success of the voice enrolment process.
  • the voice enrolment process starts by getting a token from a third party API server to allow the enrolment process.
  • the voice enrolment process comprises an initialization stage, getting a token from a third party API server, getting voice samples, uploading samples, and performing enrolment.
  • the initialization starts by getting a token from the third party API server to allow enrolment of voice samples.
  • for the purpose of getting a token from the third party API server, a url (urlToken URL) is set. The header is included with the identity of the application (i.e., appID) and a secret parameter associated with the application.
  • a request is sent to the third party API server for getting the token.
  • An error message is displayed on the screen of the mobile device when the token is not received from the third party API server.
  • the user is allowed to perform the next step of getting voice samples.
  • a predetermined time duration is allowed to elapse before capturing the voice samples.
  • voice recognition is performed on each captured voice sample.
  • a message “No voice sample found” is displayed when the captured sample does not include a voice and a variable retryNum is incremented by 1.
  • the user retries and captures the voice sample again.
  • the same process is repeated for the voice recognition.
  • the user is allowed to retry and capture the voice samples for voice recognition until the value of variable retryNum reaches a predetermined maximum value.
  • a message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the maximum value.
  • when the voice is detected in the sample, it is determined whether the sample includes voice from multiple users. If the captured sample includes voice from multiple users, a message “Voice from multiple users found” is displayed and the variable retryNum is incremented by 1. The user retries and captures the voice sample again. The same process is repeated for the voice recognition. The user is allowed to retry and capture the samples for voice detection until the value of variable retryNum reaches the predetermined maximum value. A message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the maximum value.
  • the current voice sample is encoded to a base64 string based on the comparison.
  • if the result of the comparison is false, i.e. the current voice sample does not match the previous voice sample, the current voice sample is encoded to a base64 string.
  • An enrolment voice samples array is created and the current sample is included in the array based on the comparison.
  • a value of variable sampleNum is increased by 1 to indicate the number of samples added in the array.
  • each encoded voice sample is uploaded to the third party API server.
  • An upload queue is used to upload the samples.
  • For uploading each sample, an upload URL is set, and authorization information and a token value are inserted in the header.
  • a post request is created to upload the voice sample to the third party API server. If a success response is received, then the next voice sample is uploaded by repeating the same process. If the uploading of the voice sample fails, then a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
  • a request for voice enrolment is sent to the third party API server.
  • a URL (URLEnroll) is set, and an authorization bearer and a token value are inserted in the header.
  • a request for enrolment is created and sent to the third party API server.
  • a response from the third party API server is received in response to the request.
  • An error message is displayed if the enrolment fails.
  • the user is allowed to perform the next or other activities if a success response is received from the third party API server.
  • FIG. 11a presents an exemplary interface for showing the start of the face authentication process.
  • FIGS. 11b-11f describe an exemplary face authentication activity using a mobile computing device.
  • the face verification process comprises an initialization stage, getting a token from a third party API server, getting a challenges list, getting face samples and setting directions, uploading samples, and performing verification.
  • the process after the initialization starts by getting a token from the third party API server to allow the verification.
  • a url (urlToken URL) is set.
  • the URL is included with the identity of the application (i.e., appID) and a secret parameter associated with the application.
  • a GET request is sent to the third party API server for getting the token and a response is received based on the request.
  • An error message is displayed on the screen of the mobile device when the token is not included in the response received from the third party API server and the verification process ends.
  • the response is decoded to obtain the challenge list on determining that the response includes a token, and the application is allowed to capture the face images for verification.
  • the received token includes a list of challenges (3 challenges), and each challenge has a set of 3 directions (up / down / right / left).
  • the user is allowed to perform the next step of getting face samples.
  • a predetermined time interval is allowed to elapse after receiving the successful response before capturing the face images.
  • a face detection is performed on each captured image.
  • a message “No face found” is displayed when no face is detected in the captured image and a variable retryNum is incremented by 1.
  • the user retries and captures image again.
  • the same process is repeated for the face detection.
  • the user is allowed to retry and capture the images for face detection until the value of variable retryNum reaches a predetermined maximum value.
  • a message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the predetermined maximum value.
  • when a face is detected in the image, it is determined whether the image contains more than one face. If the captured image contains multiple faces, a message “Multiple faces found” is displayed and the variable retryNum is incremented by 1. The user retries and captures the image again. The same process is repeated for the face detection. The user is allowed to retry and capture the images for face detection until the value of variable retryNum reaches the predetermined maximum value. A message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the predetermined maximum value.
  • the midpoint of the face is identified and compared with the midpoint of a face in a previously captured image.
  • a message “No motion detected” is displayed when the result of the comparison is true, i.e. the midpoint of the face in the current image matches the midpoint of the face in the previous image, and the variable retryNum is incremented by 1.
  • the user retries and captures the image again. The same process is repeated for the face detection.
  • the user is allowed to retry and capture the images for face detection until the value of variable retryNum reaches the predetermined maximum value.
  • a message “Maximum number of retries reached” is displayed when the value of variable retryNum reaches the predetermined maximum value.
  • the current image sample is encoded to a base64 string based on the comparison.
  • if the result of the comparison is false, i.e. the midpoint of the face in the current image does not match the midpoint of the face in the previous image, the current image sample is encoded to a base64 string.
  • An enrolment image array is created and the current image is included in the array based on the comparison.
  • a value of variable sampleNum is increased by 1 to indicate the number of samples added in the array. If the number of image samples in the array is less than a predefined value, a direction challenge is determined. The next image is captured after a predefined time if the user does not perform the direction challenge.
  • the user is presented with direction sequence options for capturing the images.
  • the application selects a sequence randomly when an input is not received from the user in a predetermined time interval, and a direction is set as the current direction for capturing the image sample.
  • An arrow is shown according to the current direction and the image is captured accordingly.
  • the user must turn his or her head in the direction of the arrow.
  • a set of face samples (currently 4 images) is taken at different directions (up / down / right / left / center). The first face sample is always taken without direction.
  • the arrows are hidden when the number of the samples is equal to a threshold number of samples.
  • each encoded face sample is uploaded to the third party API server.
  • An upload queue is used to upload the samples.
  • an upload URL is set, and authorization information and a token value are inserted in the header.
  • a post request is created to upload the image sample to the third party API server. If a success response is received, then the next image sample is uploaded by repeating the same process. If the uploading of the image sample fails, then a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
  • a request for face verification is sent to the third party API server.
  • a URL (URLEnroll) is set, and an authorization bearer and a token value are inserted in the header.
  • a request for verification is created and sent to the third party API server.
  • a response from the third party API server is received in response to the request. If the response received does not satisfy a predetermined criterion, the user is allowed to set a new challenge for capturing the face image. The new challenge is allowed to be set after determining that the retries for the existing challenges have not reached a predefined number. On determining that the retries for the existing challenges have reached the predefined number, an error message “Verification failed” is displayed to show that the verification failed. The user is allowed to perform the next or other activities if a success response is received from the third party API server.
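The challenge-retry structure of the verification can be sketched as below. `run_challenge` stands in for capturing samples for one direction challenge, uploading them, and requesting verification from the server; the retry bound is an assumed value for the "predefined number" mentioned above.

```python
MAX_CHALLENGE_RETRIES = 3   # assumed bound for the "predefined number"

def verify_with_retries(run_challenge, challenges):
    """Try direction challenges until one verifies or the retry bound is hit.

    `run_challenge(challenge)` returns True when the server sends a
    success response for that challenge's samples.
    """
    for attempt, challenge in enumerate(challenges):
        if attempt >= MAX_CHALLENGE_RETRIES:
            break                       # predefined retry count reached
        if run_challenge(challenge):
            return "verified"
        # failure below the bound: fall through and set a new challenge
    return "Verification failed"
```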
  • an exemplary embodiment describes an exemplary voice authentication activity using a mobile computing device.
  • the voice verification process comprises an initialization stage, getting a token from a third party API server, getting a challenges list, getting voice samples, uploading samples, and performing verification.
  • each encoded voice sample is uploaded to the third party API server.
  • An upload queue is used to upload the samples.
  • An upload URL is set, and authorization information and a token value are inserted in the header.
  • a post request is created to upload the voice sample to the third party API server. If a success response is received, then the next voice sample is uploaded by repeating the same process. If the uploading of the voice sample fails, then a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
  • a request for voice verification is sent to the third party API server.
  • a URL (URLEnroll) is set, and an authorization bearer and a token value are inserted in the header.
  • a request for verification is created and sent to the third party API server.
  • a response from the third party API server is received in response to the request. If the response received does not satisfy a predetermined criterion, the user is allowed to set a new challenge for capturing the voice sample. The new challenge is allowed to be set after determining that the retries for the existing challenges have not exceeded a predefined number. On determining that the retries for the existing challenges have exceeded the predefined number, an error message “failed to verify” is displayed to show that the verification failed. The user is allowed to perform the next or other activities if a success response is received from the third party API server.
  • FIGS. 12a, 12b, and 12c are parts of a flowchart illustrating an operation of sending a request for a transaction using audible signals or inaudible signals from the mobile application executing on the user mobile device.
  • when the user starts the request activity for receiving payment from another user or customer, the username is retrieved from the previous activity. This is followed by initializing the items required for performing the request activity.
  • the application waits for user input for the further processing.
  • the application on initialization provides the user with at least three options for entering the input.
  • the user can click the home button displayed on the display screen, click a back button of the mobile device, or click a send button for sending the request for obtaining the payment from the customer.
  • the application keeps the username for the next activity and starts the main activity, which leads to the end of the payment activity after completion of the main activity.
  • the application prompts the user to provide an input for the transaction amount.
  • the input can be provided by speech, or the user can key in the amount to be paid.
  • the application verifies whether the amount entered by the user is numeric. If the user fails to enter a numeric amount, then an error message is displayed and the process returns to the waiting step.
  • on determining that the amount entered by the user is numeric, the application generates an audio signal to send the transaction details. If the created message is OK, then the audio signal is sent to the proximity device and the third party server. If the created signal is not OK, then the request activity for the payment ends.
  • the application waits for some time and checks whether the user has clicked on the cancel button displayed on the display screen. If the user has pressed the cancel button, a request is sent to the server for canceling the transaction.
  • the application sends a query to the server for obtaining the status of the transaction.
  • the application waits for a response and extracts the status of the transaction from the received response.
  • the application checks the status: if the status includes “unpaid” information, the process returns to the “wait for 1 second” step. If the status includes “paid” information, a success dialog is displayed and the process returns to the “initialize the necessary items” step. If the status includes “cancelled” or “rejected” information, a failure dialog is displayed and the process returns to the “initialize the necessary items” step.
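The status-polling loop of FIGS. 12a-12c can be sketched as follows. `get_status` stands in for the status query to the server; a real implementation would sleep about one second between polls, as in the flowchart, and the poll cap here is an assumption added to keep the sketch bounded.

```python
def poll_status(get_status, max_polls=10):
    """Poll the server until the transaction leaves the "unpaid" state.

    "paid" -> success dialog; "cancelled"/"rejected" -> failure dialog;
    "unpaid" -> wait (about 1 second in the flowchart) and query again.
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "paid":
            return "success dialog"
        if status in ("cancelled", "rejected"):
            return "failure dialog"
        # status is "unpaid": loop back to the wait-and-query step
    return "timed out"

# Stand-in server that answers "unpaid" twice, then "paid":
statuses = iter(["unpaid", "unpaid", "paid"])
result = poll_status(lambda: next(statuses))
```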
  • FIGS. 13a-13d describe another process flowchart for performing the payment/transaction activity.
  • the user name is retrieved from the previous activity. This is followed by initializing the items required for performing the payment activity. After initializing the required items, a request is sent to the third party server for obtaining the balance amount available in the user’s account.
  • the application waits for a response to the request and displays the balance amount in the user’s account on receiving the response. After displaying the balance amount, the application is enabled to receive inputs. The application waits for an input for further processing.
  • the application receives input in at least three ways.
  • the user can click the home button displayed on the display screen, click a back button of the mobile device, or the application will receive an audio signal encoded with the transaction request.
  • the application stops listening for any audio inputs.
  • the application keeps the username for the next activity and starts the main activity, which leads to the end of the payment activity.
  • when the application receives the audio signal, the application recognizes the audio signal and then stops listening. The application extracts the transaction details from the received audio signal and displays the extracted transaction details on the display screen. Along with displaying the transaction details, the application displays two buttons for accepting the transaction or rejecting the transaction. The application starts a time counter on displaying the transaction details and waits for the user inputs for a predetermined interval of time for further processing.
  • a request is sent to the third party server for generating a transaction ID.
  • the application waits for the response from the third party server. On receiving the response from the third party server, the application extracts the transaction ID from the response. The application then sends a request to the third party server for rejecting the transaction.
  • when the accept transaction button is clicked, a request is sent to the third party server for generating a transaction ID.
  • the application waits for the response from the third party server. On receiving the response from the third party server, the application extracts the transaction ID from the response. The application then sends a request to the third party server for accepting the transaction.
  • the application waits for the response of the request for rejecting the transaction or the accepting the transaction.
  • the application sends a transaction status query to the third party server for obtaining the status of the transaction.
  • the application receives the transaction status for the transaction and displays a dialog based on the received status. If the status indicates that the transaction is successful, i.e. the status contains “paid”, a dialog box displaying the information for the success of the transaction is shown on the display screen. Based on the success of the transaction, the user is allowed to perform another transaction and the process repeats from the initialization step. If the status indicates that the transaction has failed, i.e. the status contains “failed” or “rejected”, a dialog box containing the information for the failure of the transaction is displayed to the user. Based on the failure or rejection of the transaction, the user is allowed to restart the transaction and the process repeats from the initialization step.
  • the payment activity ends based on the user input that no further transactions are to be made.
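The countdown-based accept/reject decision described above can be sketched as below. The request names acceptPayment/rejectPayment are taken from the earlier payment flow (FIGS. 8e-8g); the 30-second window is an assumed value for the "predetermined interval of time".

```python
ACCEPT_WINDOW_SECONDS = 30   # assumed length of the countdown

def resolve_payment(user_input, elapsed_seconds):
    """Decide which request to send to the third party server.

    If no input arrives within the accept window, the payment is
    rejected automatically, matching the reducing time count shown
    on the payment interface.
    """
    if user_input is None or elapsed_seconds > ACCEPT_WINDOW_SECONDS:
        return "rejectPayment"
    return "acceptPayment" if user_input == "accept" else "rejectPayment"
```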
  • Exemplary computer readable media include flash memory drives, digital versatile discs (DVDs), compact discs (CDs), floppy disks, and tape cassettes.
  • Computer readable media comprise computer storage media and communication media.
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media are tangible and mutually exclusive to communication media.
  • Computer storage media are implemented in hardware and exclude carrier waves and propagated signals.
  • Computer storage media for purposes of this disclosure are not signals per se.
  • Exemplary computer storage media include hard disks, flash drives, and other solid-state memory.
  • communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with aspects of the disclosure include, but are not limited to, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Such systems or devices may accept input from the user in any way, including from input devices such as a keyboard or pointing device, via gesture input, touch input, and/or via voice input.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Finance (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

Examples of the disclosure describe a mobile wallet solution that facilitates proximity payments. Through a mobile application a merchant uses a mobile device (iOS or Android) to send a payment request to a customer's mobile device (iOS or Android) in the form of an audio signal that is transmitted by a speaker of the merchant's mobile device and picked up by a microphone of the customer's mobile device. The customer's device, with the mobile application installed beforehand, then prompts the customer to either approve or reject the payment request. When the customer approves the payment request, the mobile app will send a request to the customer's bank to transfer funds from the customer's bank account to the merchant's bank account. The proximity payments are made using the mobile devices only without using additional hardware. The proximity payments are facilitated by using face recognition login with the option of adding voice recognition login as an added layer of security.

Description

A MOBILE WALLET SOLUTION FOR FACILITATING PROXIMITY PAYMENTS
FIELD OF THE INVENTION
[0001] The invention is directed to systems and methods for improving user convenience in transactions by implementing a mobile wallet solution for facilitating electronic transactions based on data-over-audio and authorization for the payment based on voice and face authentication.
BACKGROUND
[0002] Current applications for making a payment to a merchant require the consumer to identify himself by providing user account identifiers/credentials or other data to the merchant system. This user experience for making transactions and payments is not user-friendly. Further, payments or financial transactions using mobile devices based on data-over-audio have not been fully leveraged by current transaction payment systems. Further, the use of coded information in audio signals also has not been fully explored by current state-of-the-art transaction payment systems. Even further, the state of the art does not disclose the use of a combination of audio signals and authentication based on voice and facial recognition for making mobile payments.
[0003] As for merchants, they need additional hardware in order to receive electronic payments, for example a payment card terminal or a Near-Field Communication (NFC) reader. This is a hassle for small businesses and even more so for street vendors and the like.
[0004] Therefore, an improved solution is required for facilitating and authorizing mobile payments that mitigates the shortcomings and deficiencies of current transaction payment systems. This is necessary if we are to move towards a cashless society. A solution that makes use of devices and features that are already available to the masses is needed. The solution presented here enables anyone with a smartphone to receive and make proximity mobile payments as long as there is an internet connection.
SUMMARY
[0005] Examples of the disclosure describe a mobile wallet solution that facilitates proximity payments. Through a mobile app a merchant uses a mobile device (iOS or Android) to send a payment request to a customer's mobile device (iOS or Android) in the form of either an audible, near-ultrasonic, or ultrasonic audio signal that is transmitted by the speaker of the merchant's mobile device and picked up by the microphone of the customer's mobile device. The customer's device, with the mobile app pre-installed, then prompts the customer to either approve or reject the payment request. The proximity payments are made using the mobile devices only, without using additional hardware. The proximity payments are facilitated by using voice and face authentication login without having to explicitly type the passwords for login.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The present invention will now be described with reference to the accompanying Figures, in which:
[0007] FIG. 1 is an exemplary block diagram illustrating a mobile computing device with image capturing capabilities.
[0008] FIG. 2 is an exemplary block diagram illustrating a system for facilitating proximity payments between a merchant and a customer.
[0009] FIG. 3a illustrates an exemplary screenshot showing splash activity for a mobile application.
[0010] FIG. 3b illustrates an exemplary flowchart showing a method for splash activity for the mobile application.
[0011] FIGS. 4a, 4b, and 4c illustrate exemplary screenshots of the mobile application showing registration of the merchant or customer with a third party server for enabling the transaction through the mobile application.
[0012] FIGS. 4d, 4e, and 4f are parts of an exemplary flowchart illustrating an operation of registration of a merchant or customer with a third party server for enabling a transaction through the mobile application.
[0013] FIGS. 5a, 5b, 5c, and 5d illustrate exemplary screenshots of the mobile application showing login of the merchant or customer in the mobile application.
[0014] FIG. 5e is an exemplary flowchart illustrating an operation of login of the merchant or customer with the mobile application.
[0015] FIG. 6a illustrates an exemplary screenshot of the mobile application showing the main activities available to the merchant or customer in the mobile application.
[0016] FIG. 6b is an exemplary flowchart illustrating initiating main operations for transactions through the mobile application.
[0017] FIGS. 7a and 7b illustrate exemplary screenshots of the mobile application showing a merchant sending a request for a payment through the mobile application.
[0018] FIGS. 7c and 7d are parts of an exemplary flowchart illustrating an operation of sending a request for payment using audio signals from the mobile application executing on the merchant mobile device.
[0019] FIGS. 8a, 8b, 8c, and 8d illustrate exemplary screenshots of the mobile application showing making a payment in response to the payment request.
[0020] FIGS. 8e, 8f, and 8g are parts of an exemplary flowchart illustrating an operation of making a payment in response to the payment request.
[0021] FIG. 9a illustrates an exemplary screenshot of the mobile application showing a history of transactions made through the mobile application.
[0022] FIGS. 9b is an exemplary flowchart illustrating an operation of determining the history of transactions made through the mobile application.
[0023] FIG. 10a illustrates an exemplary screenshot of the mobile application showing the start of the face enrolment operation.
[0024] FIG. 10b is an exemplary flowchart illustrating an operation of face enrolment through the mobile application for secure transactions.
[0025] FIGS. 10c, 10d, 10e, and 10f are parts of an exemplary flowchart illustrating an operation of face enrolment through the mobile application for secure transactions.
[0026] FIG. 11a illustrates an exemplary screenshot of the mobile application showing the start of the face authentication operation.
[0027] FIG. 11b is an exemplary flowchart illustrating an operation of face authentication of a user through the mobile application for secure transactions.
[0028] FIGS. 11c, 11d, 11e, and 11f are parts of an exemplary flowchart illustrating an operation of face authentication through the mobile application for secure transactions.
[0029] FIGS. 12a, 12b, and 12c are parts of another exemplary flowchart illustrating an operation of sending a request for a transaction using audio signals from the mobile application executing on the user mobile device.
[0030] FIGS. 13a, 13b, 13c, and 13d are parts of another exemplary flowchart illustrating an operation for performing the transaction based on the received request.
DETAILED DESCRIPTION
[0031] The invention
[0032] The invention is advantageous in that it provides a mobile application solution for automatic payment or conducting transactions via at least one of audible audio signals, near-ultrasonic audio signals, or ultrasonic audio signals over existing infrastructure and devices. The audible audio signals have a frequency range of 20 Hz to 16,999 Hz, the near-ultrasonic audio signals have a frequency range of 17,000 Hz to 19,999 Hz, and the ultrasonic audio signals have frequencies of 20,000 Hz and beyond.
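By way of a non-limiting illustration, the frequency bands recited above can be expressed as a simple classification routine. The band boundaries are taken from the paragraph above; the function name is illustrative and not part of the claimed implementation:

```python
def classify_signal(freq_hz: float) -> str:
    """Classify an audio carrier frequency into the bands named in the
    disclosure: audible (20 Hz - 16,999 Hz), near-ultrasonic
    (17,000 Hz - 19,999 Hz), or ultrasonic (20,000 Hz and beyond)."""
    if 20 <= freq_hz < 17_000:
        return "audible"
    if 17_000 <= freq_hz < 20_000:
        return "near-ultrasonic"
    if freq_hz >= 20_000:
        return "ultrasonic"
    return "out-of-band"
```

An application could, for example, prefer the near-ultrasonic band so the request is inaudible to most users while remaining within the reproducible range of ordinary smartphone speakers.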
[0033] According to one embodiment, a method comprises receiving a request, at an application executing on a computing device from another proximity computing device, for payment to a merchant or other user having the proximity computing device. The request comprises at least one of the audible audio signal, the near-ultrasonic audio signal, or the ultrasonic audio signal, and includes a transaction number. The mobile application decodes the request to extract the transaction number. The method comprises retrieving the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number. The method further comprises enabling the customer to make or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
[0034] According to another embodiment, a computing device comprises at least one processor and at least one memory including computer program code for one or more computer programs, the at least one memory and the computer program code configured to, with the at least one processor, cause, at least in part, the computing device to receive a request, at the computing device, for payment to a merchant or other user, the request comprising at least one of the audible audio signals, the near-ultrasonic audio signals, and the ultrasonic audio signals, the request including a transaction number. The computing device is caused to retrieve the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number. The computing device is caused to make the payment or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
[0035] According to another embodiment, a computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, the computing device to receive a request, at the computing device, for payment to a merchant or other user, the request including a transaction number. The request comes in the form of at least one of the audible audio signals, the near-ultrasonic audio signals, and the ultrasonic audio signals. The computing device is caused to retrieve the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number. The computing device is caused to make the payment or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
[0036] According to another embodiment, a computing device comprises means to receive a request, at the computing device, for payment to a merchant or other user, the request comprising at least one of the audible audio signals, the near-ultrasonic audio signals, and the ultrasonic audio signals, the request including a transaction number. The computing device comprises means to retrieve the transaction detail from a server in response to a request sent from the computing device to the server based on the transaction number. The computing device comprises means to make the payment or reject the payment to the merchant by sending a payment acceptance or rejection request to the server.
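The steps common to the embodiments above — decode a transaction number from the received signal, retrieve the transaction detail from the server, and send an acceptance or rejection — can be sketched as follows. The server is stubbed with an in-memory dictionary, and names such as `fetch_transaction` are illustrative assumptions, not terms from the disclosure:

```python
# In-memory stand-in for the third party server's transaction storage.
SERVER_DB = {"TX-1001": {"merchant": "Stall 7", "amount": 4.50, "status": "unpaid"}}

def decode_request(audio_payload: str) -> str:
    # The real application would demodulate an audible, near-ultrasonic,
    # or ultrasonic signal; here the payload is already a decoded string.
    return audio_payload.strip()

def fetch_transaction(tx_number: str) -> dict:
    # Retrieve the transaction detail from the (stubbed) server.
    return SERVER_DB[tx_number]

def respond_to_payment(tx_number: str, accept: bool) -> str:
    # Send the acceptance or rejection and return the resulting status.
    SERVER_DB[tx_number]["status"] = "paid" if accept else "rejected"
    return SERVER_DB[tx_number]["status"]

tx = decode_request(" TX-1001 ")
detail = fetch_transaction(tx)            # shown to the customer for review
final_status = respond_to_payment(tx, accept=True)
```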
[0037] FIG. 1 is a diagram of exemplary components of a computing device (e.g., handset) for communications, which is capable of operating in the system of FIG. 2, according to one embodiment. The computing device represents any device executing instructions (e.g., as application programs, operating system functionality, or both) to implement the operations and functionality described herein. The computing device may include a mobile computing device or any other portable device associated with a user, e.g. a customer or a merchant. In some examples, the computing device includes a mobile telephone, laptop, desktop computer, tablet, or computing pad.
[0038] In some examples, the computing device has at least one processor. The processor includes any quantity of processing units and is programmed to execute computer-executable instructions for implementing aspects of the disclosure. The instructions may be performed by the processor or by multiple processors executing within the computing device, or performed by a processor external to the computing device. In some examples, the processor is programmed to execute instructions such as those illustrated in the figures.
[0039] The computing device includes one or more image sensors/cameras for capturing the images. The computing device includes one or more computer-readable media such as the memory. The memory may be internal or external to the computing device or both. The memory stores, among other data, one or more applications and the image data. The applications, when executed by the processor, operate to perform functionality on the computing device. Exemplary applications include at least a mobile app for making the proximity transactions. The applications may communicate with counterpart applications or services such as web services accessible via a network. For example, the applications may represent downloaded client-side applications that communicate with server-side services executing in the cloud.
[0040] The computing device may communicate with another device via a network. Exemplary networks include wired and wireless networks. Exemplary wireless networks include one or more of wireless fidelity (Wi-Fi) networks, BLUETOOTH brand networks, cellular networks, and satellite networks. In some examples, the other device is remote from the computing device. In other examples, the other device is local to the computing device.
[0041] The computing device constitutes other means for performing one or more steps of providing an audio message based payment system. Generally, a radio receiver is often defined in terms of front-end and back-end characteristics. The front-end of the receiver encompasses all of the Radio Frequency (RF) circuitry whereas the back-end encompasses all of the baseband processing circuitry. As used in this application, the term “circuitry” refers to both: (1) hardware-only implementations (such as implementations in only analog and/or digital circuitry), and (2) to combinations of circuitry and software (and/or firmware) (such as, if applicable to the particular context, to a combination of processor(s), including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions). This definition of “circuitry” applies to all uses of this term in this application, including in any claims. As a further example, as used in this application and if applicable to the particular context, the term “circuitry” would also cover an implementation of merely a processor (or multiple processors) and its (or their) accompanying software/or firmware. The term “circuitry” would also cover if applicable to the particular context, for example, a baseband integrated circuit or applications processor integrated circuit in a mobile phone or a similar integrated circuit in a cellular network device or other network devices.
[0042] Pertinent internal components of the telephone include a Main Control Unit (MCU), a Digital Signal Processor (DSP), and a receiver/transmitter unit including a microphone gain control unit and a speaker gain control unit. A main display unit provides a display to the user in support of various applications and mobile terminal functions that perform or support the steps of providing an audio token based payment system. The display includes display circuitry configured to display at least a portion of a user interface of the mobile terminal (e.g., mobile telephone). Additionally, the display and display circuitry are configured to facilitate user control of at least some functions of the mobile terminal. An audio function circuitry includes a microphone and a microphone amplifier that amplifies the speech signal output from the microphone. The amplified speech signal output from the microphone is fed to a coder/decoder (CODEC).
[0043] A radio section amplifies power and converts frequency in order to communicate with a base station, which is included in a mobile communication system, via antenna. The power amplifier (PA) and the transmitter/modulation circuitry are operationally responsive to the MCU, with an output from the PA coupled to the duplexer or circulator or antenna switch, as known in the art. The PA also couples to a battery interface and power control unit.
[0044] Referring to FIG. 2, an exemplary block diagram illustrates a system in which a mobile computing device makes the transactions using the mobile application installed on the device. FIG. 2 shows that the mobile computing device comprises a speaker, microphone, camera, and the mobile application for performing the proximity-based transactions. The speaker of the merchant's device is used for outputting audio signals. The microphone of the customer's device is used to receive the audio signals.
[0045] The third party API server is operable to receive registration information from each mobile device, store the registration information into a storage, and send a list of registered users and devices to each mobile device. The mobile computing devices may be coupled using one or more of the transmitted audible audio signals, the near-ultrasonic audio signals, or the ultrasonic audio signals.
[0046] Referring next to FIGS. 3a and 3b, an exemplary screenshot for splash activity for the mobile application and an exemplary flowchart of a method for splash activity for the mobile application are illustrated. FIG. 3b shows a process flow for a splash activity. The splash activity shows a screen that is displayed for a set time when the app is starting, after which the user is redirected to the application login activity. The splash activity starts with launching the payment application, and a display screen as in FIG. 3a is displayed for a predefined interval of time showing the name of the application and the company associated with the application. After the predefined interval of time, the login screen is displayed to the user, where the user can start the login activity. The splash activity ends with the starting of the login activity.
[0047] Users can make the transactions by means of the mobile application running on the mobile device. The mobile application provides the convenience of making the transaction by using the audible audio signals, the near-ultrasonic audio signals, or the ultrasonic audio signals. The mobile application provides security by employing voice and/or face recognition. Before being able to make the transactions using the mobile application, users register with the third party server.
[0048] FIGS. 4a, 4b, and 4c present registration interfaces for registering a new user in accordance with some embodiments. As shown in FIG. 4a, the interface includes various fields in which the user enters identifying information. The identifying information includes the email ID and country of the user. When the user presses the 'NEXT' button on the interface, a second registration interface corresponding to FIG. 4b is displayed. The interface in FIG. 4b further includes various fields in which the user enters another set of identifying information. The second set of identifying information includes the user's name and the telephone number of the mobile device. When the user submits the information in the interface of FIG. 4b by pressing the 'SUBMIT' button, a third interface corresponding to FIG. 4c is displayed. The third interface allows the user to capture a user identity document, wherein the user identity document comprises one or more images of an identity document provided by the user. In one embodiment, the user identity document comprises a photo identity document. In another embodiment, the user identity document comprises a short video. The short video is a recording of the user communicating in person his/her intent to register.
[0049] FIGS. 4d, 4e, and 4f provide an exemplary flowchart for an online registration process. When the application is launched, the application window shows a 'LOGIN' button and a 'Sign Up' button. The registration activity can be canceled by clicking the back button of the mobile device. On clicking the back button, the user is allowed to perform the login activity and the registration activity ends.
[0050] The registration process starts with the activation of the 'Sign Up' button. A registration form is displayed in which the user fills in his or her personal information. An error message is displayed on submitting the form with the required fields not properly filled. A registration request is sent to the server on submitting the form with all the required fields properly filled. The application waits for the response to the registration request. An error message is displayed if the response does not meet the predefined requirements, and the user has to repeat the form submission steps by clicking the 'SUBMIT' button. The user's identity document is captured and uploaded to the third party server on determining that the response satisfies the requirements. A response is received based on the request. The application verifies the response and, if it is not OK, an error message is displayed. One or more images of the identity document are captured and uploaded to the third party server again until the response is OK. A short video of predefined duration (e.g. 15 seconds) is captured and uploaded to the server on determining that the response is OK. The short video is a recording of the user communicating in person his/her intent to register. A response is received from the third party server on uploading the video and, if the response is not OK, an error message is displayed. A short video is captured again and uploaded to the server until the received response is OK. The identity of the user is verified by third party staff by checking that the received image or images of the one or more identity documents are images of valid identity documents of the person in the video.
[0051] An enrolment button is displayed when the response received after uploading the video is OK. The application stores at least the username/user ID for the next activities on the mobile device. The face and/or voice enrolment activity is performed on activation of the enrolment button, and after successful completion of the face or voice enrolment activity the registration activity ends.
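The repeat-until-OK behaviour of the registration uploads described above (form, identity-document images, short video) can be sketched as a retry loop. The `upload` callable stands in for the third party server, and the retry limit is an assumption of this sketch rather than a value from the disclosure:

```python
def upload_until_ok(upload, payload, max_attempts=5):
    """Repeat an upload step, showing an error message on each failure,
    until the server response is OK. Returns the attempt count."""
    for attempt in range(1, max_attempts + 1):
        if upload(payload):
            return attempt
        print(f"Error: upload failed (attempt {attempt}), please retry")
    raise RuntimeError("registration step failed")

# A fake server that rejects the first two attempts and then accepts.
responses = iter([False, False, True])
attempts_needed = upload_until_ok(lambda p: next(responses), {"doc": "id.jpg"})
```

The same loop applies unchanged to each of the three upload steps, which is why the flowchart in FIGS. 4d-4f repeats the capture-upload-verify pattern.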
[0052] FIGS. 5a, 5b, and 5c present login interfaces for logging in to the mobile application in accordance with some embodiments. As shown in FIG. 5a, the interface includes a field prompting the user to enter the user ID. When the user presses the 'LOGIN' button on the interface, a second interface corresponding to FIG. 5b is displayed. The interface in FIG. 5b displays that login is in progress and allows the user to cancel the login by pressing the 'CANCEL' button. The interface in FIG. 5c is displayed when the user presses the back button of the mobile device and allows the user to log out from the mobile application.
[0053] FIG. 5d provides an exemplary flowchart for the login process. When the application for the payment is launched, all the items required for the login are initialized. The display screen after the launch of the application displays at least two options for the user interaction with the payment application. The options include a login button and a sign up button. The user can also interact with the application by using the mobile computing device back button. The application waits for further processing until the user interacts with one of the options or the back button of the mobile computing device. When the user presses the login button, the user is prompted to provide the user ID, which is sent to the third party service for validation. A request is sent to the third party server to determine whether the user ID exists in the storage of the third party server. An alert is displayed to the user which displays 'Logging in' and allows the user to cancel the request. Based on the user interaction for cancellation of the request, the request is canceled and the display screen displays the two options for the user interaction with the payment application, i.e. the login button and the sign up button. The application receives a response based on the request when the user does not cancel the request. On receiving the response, the application determines the user ID for the next activities. The next activity is face authentication. Based on the authentication of the user's face, the user is allowed to carry out further operations with the application, and the login activity ends after the user is successfully authenticated using face authentication.
[0054] Based on the user interaction with the sign up button, the user is allowed to register with the payment service. After the registration of the user with the service, the login activity ends. When the user does not want to log in and presses the phone back button, the login activity also ends.
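The login check described above — validate the entered user ID against the third party server, keep it for subsequent activities, and hand off to face authentication — can be sketched as follows. The user registry and the returned dictionary shape are illustrative assumptions:

```python
# Stand-in for the third party server's record of registered user IDs.
REGISTERED_USERS = {"alice01", "stall7"}

def login(user_id: str) -> dict:
    """Validate a user ID and report the next activity in the flow."""
    if user_id not in REGISTERED_USERS:
        # Unknown ID: the user may retry or sign up instead.
        return {"ok": False, "next": "sign-up or retry"}
    # The user ID is kept for the next activities; face authentication
    # follows before any transaction is permitted.
    return {"ok": True, "next": "face-authentication", "user_id": user_id}

result = login("alice01")
```

Note that even a successful ID lookup does not complete login: the face (and optionally voice) authentication step gates access to the main activity.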
[0055] FIG. 6a presents an interface after the login for performing various transactions in accordance with some embodiments. As shown in FIG. 6a, the interface includes three buttons: PAYMENT, REQUEST, and HISTORY. When the user interacts with the PAYMENT button, the user is redirected to the payment interface as explained below. When the user interacts with the REQUEST button, the user is redirected to the request interface as described below. When the user interacts with the HISTORY button, the user is redirected to the HISTORY interface as explained below.
[0056] FIG. 6b shows an exemplary process flow for the main activities performed by the mobile application. When the main activity is started, the username is retrieved from the previous activity. The display screen shows multiple options for different sub-activities. The displayed options include at least a payment button, a history button, and a request button. The application waits for further processing until a user input clicking one of the displayed buttons is received or the back button of the mobile device is clicked. On clicking one of the payment, history, and request buttons, a corresponding activity is started: on clicking the payment button, the payment activity is started; on clicking the request button, the request activity is started; and on clicking the history button, the history activity is started. The username is stored for the next activity before the corresponding activity starts. After successful completion of the corresponding activity, the main activity ends. On clicking the back button of the mobile device, a logout dialog is displayed. Based on the user confirmation for logout, the login activity is started. On successful completion of the login activity, the main activity ends.
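The three main-activity buttons described above map naturally onto a dispatch table. The activity names come from the description; the function itself is a non-limiting sketch:

```python
def main_activity(button: str, username: str) -> str:
    """Dispatch a main-activity button press to the corresponding
    sub-activity, or show the logout dialog on the device back button."""
    activities = {"payment": "payment activity",
                  "request": "request activity",
                  "history": "history activity"}
    if button == "back":
        return "logout dialog"
    # The username is kept for the next activity before it starts.
    return f"{activities[button]} for {username}"

started = main_activity("payment", "alice01")
```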
[0057] FIGS. 7a and 7b present request interfaces for sending a payment request from the merchant mobile device to the customer mobile device in accordance with some embodiments. As shown in FIG. 7a, the interface includes fields such as Invoice ID and amount. To send a request for payment to a customer, the merchant has to provide the invoice ID and the amount to be paid by the customer. When the user presses the 'SEND' button on the interface, a second interface corresponding to FIG. 7b is displayed. The interface in FIG. 7b displays that the request process is in progress and that the mobile application is waiting for the reply in response to the request. The interface in FIG. 7b allows the user to cancel the request by pressing the 'CANCEL' button.
[0058] FIGS. 7c and 7d show an exemplary process for making a request for payment from the merchant or a user to another user. When the user/merchant starts the request activity for receiving payment from the other user or customer, the username is retrieved from the previous activity. This is followed by initializing the items required for performing the request activity. The application waits for user input for further processing. The user can click the home button displayed on the screen, click the back button of the mobile device, or click the send button for sending the request for obtaining the payment from the customer.
[0059] When the user clicks either the home button or the back button of the mobile device, the application keeps the username for the next activity, starts the main activity, and ends the payment activity.
[0060] When the user clicks the 'SEND' button, the application verifies whether the amount entered by the user is numeric. If the user fails to enter a numeric amount, an error message is displayed and the process returns to the initialization step. On determining that the amount entered by the user is numeric, the application generates a reference number and sends a transaction request to the third party server. The application waits for the response; if the response is not OK, the request activity ends, or else a message in the form of an audio signal containing the reference number is created. If the created message is OK, the created message containing the reference number is sent to the proximity device and the third party server. If the created message is not OK, the request activity for the payment ends.
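The merchant-side send step above — validate the amount as numeric, generate a reference number, and assemble the request to be carried by the audio message — can be sketched as follows. The reference-number format is an illustrative assumption; the disclosure does not fix a particular encoding:

```python
import uuid

def validate_amount(text: str) -> float:
    """Accept only a numeric amount, mirroring the flowchart check."""
    try:
        return float(text)
    except ValueError:
        raise ValueError("Amount must be numeric")

def build_request(invoice_id: str, amount_text: str) -> dict:
    """Validate the amount and attach a generated reference number.
    The reference is what the audio message actually carries."""
    amount = validate_amount(amount_text)
    reference = uuid.uuid4().hex[:8]   # illustrative reference format
    return {"invoice": invoice_id, "amount": amount, "reference": reference}

req = build_request("INV-42", "12.50")
```

Only the short reference number needs to survive the audio channel; the full transaction detail travels separately to the third party server, which keeps the acoustic payload small.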
[0061] The application waits for some time and checks whether the user has clicked on the cancel button displayed on the display screen. If the user has pressed the cancel button, a request is sent to the server for canceling the transaction.
[0062] If the user has not pressed the cancel button, the application sends a query to the server for obtaining the status of the transaction. The application waits for a response and extracts the status of the transaction from the received response. If the status includes "unpaid" information, the process returns to the "wait for one second" step. If the status includes "paid" information, a success dialog is displayed and the process returns to the "initialize the necessary items" step. If the status includes "cancelled" or "rejected" information, a failure dialog is displayed and the process returns to the "initialize the necessary items" step.
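The status-polling loop above can be sketched as follows: the merchant application repeatedly queries the transaction status, waiting while it is "unpaid" and stopping with a dialog on "paid", "cancelled", or "rejected". The status feed below simulates the third party server's successive responses, and the poll limit is an assumption of the sketch:

```python
def poll_status(status_feed, max_polls=10):
    """Query the transaction status until it resolves, mirroring the
    unpaid/paid/cancelled/rejected branches of the flowchart."""
    for _ in range(max_polls):
        status = next(status_feed)
        if status == "paid":
            return "success dialog"
        if status in ("cancelled", "rejected"):
            return "failure dialog"
        # "unpaid": wait one second (elided in this sketch) and poll again.
    return "timed out"

outcome = poll_status(iter(["unpaid", "unpaid", "paid"]))
```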
[0063] FIGS. 8a-8d present exemplary payment interfaces for making a payment by the customer on the customer mobile device application in response to the request from the merchant mobile device. The interface in FIG. 8a is displayed when the user presses the 'PAYMENT' button on the main activity interface. The interface in FIG. 8a includes fields such as Merchant, Receipt ID, and Amount, and also displays the balance amount in the customer's wallet. FIG. 8a shows that the mobile application is in listening mode for receiving the payment request transmitted by the merchant mobile device. The received payment request comes in the form of either an audible audio signal, a near-ultrasonic audio signal, or an ultrasonic audio signal. The customer can either accept or reject the request for payment by pressing the ACCEPT or REJECT buttons displayed on the interface in FIG. 8b. The fields in FIG. 8b are automatically populated on receiving the valid signals transmitted by the merchant mobile device. The interface in FIG. 8b is displayed when the customer mobile application receives valid information from the merchant mobile device. The interface in FIG. 8b displays a reducing time count within which the user can accept the payment request and beyond which the request for payment is automatically rejected. The interfaces in FIG. 8c and FIG. 8d display the dialogs as to whether the payment is successful or rejected.
[0064] FIGS. 8e-8g describe a process flowchart for the payment activity. When the user starts the payment activity, the username is retrieved from the previous activity. This is followed by initializing the items required for performing the payment activity. After initializing the required items, a request is sent to the third party server for obtaining the balance amount available in the user's account. The application waits for a response to the request and displays the balance amount in the user's account on receiving the response. After displaying the balance amount, the application is enabled to receive inputs from the user. The application waits for user input for further processing. The user can click the home button displayed on the screen, click the back button of the mobile device, or let the application listen for an audio signal containing the transaction request. When the user clicks either the home button or the back button of the mobile device, the application stops listening for the audio signal. The application then keeps the username for the next activity, starts the main activity, and ends the payment activity.
[0065] When the application has completed listening to the audio signal containing a transaction reference number, the application stops listening. The application extracts the transaction reference number from the received audio signal and sends a request to the third party server for obtaining the transaction details. The application waits for the response and displays the transaction details based on the response received from the third party server. The application starts a time counter on displaying the transaction details and waits for user input for a predetermined interval of time for further processing. If the user input is not received within the predetermined interval of time, the transaction is rejected by sending a rejectPayment request to the third party server.
[0066] Along with displaying the transaction details, the application displays two buttons: one for accepting the transaction and the other for rejecting it. When the user clicks the reject transaction button, the transaction is rejected by sending a rejectPayment request to the third party server. On the other hand, when the accept transaction button is clicked, an acceptPayment request is sent to the third party server. The application waits for the response to the rejectPayment or acceptPayment request. On receiving the response, the application sends a transaction status query to the third party server for obtaining the status of the transaction. The application receives the transaction status and displays a dialog based on the received status. If the status indicates that the transaction is successful, i.e. the status contains "paid", a dialog box displaying the information for the success of the transaction is shown on the display screen. Based on the success of the transaction, the user is allowed to perform another transaction and the process repeats from the initialization step. If the status indicates that the transaction has failed, i.e. the status contains "failed" or "rejected", a dialog box containing the information for the failure of the transaction is displayed to the user. Based on the failure or rejection of the transaction, the user is allowed to restart the transaction and the process repeats from the initialization step. The payment activity ends based on the user input that no further transactions are to be made.
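As a non-limiting sketch, the accept/reject decision and the status-driven dialog described in paragraph [0066] may be expressed as follows. The function names, dialog strings, and the injected `send_request` transport are illustrative assumptions; only the request names (acceptPayment, rejectPayment) and the status values ("paid", "failed", "rejected") come from the disclosure.

```python
def dialog_for_status(status: str) -> str:
    # "paid" -> success dialog; "failed"/"rejected" -> failure dialog.
    if status == "paid":
        return "Transaction successful"
    if status in ("failed", "rejected"):
        return "Transaction failed"
    return "Status unknown"


def handle_user_choice(accepted: bool, send_request) -> str:
    # Send acceptPayment or rejectPayment via the injected transport,
    # then query the transaction status and map it to a dialog.
    send_request("acceptPayment" if accepted else "rejectPayment")
    status = send_request("getTransactionStatus")
    return dialog_for_status(status)
```

Injecting the transport keeps the sketch testable without a real third party server.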
[0067] FIG. 9a presents an exemplary history interface for showing the various transactions made by the customer or merchant. FIG. 9b shows an exemplary process for displaying the history of the previous transactions made by the user. When the user starts the history activity, the username is retrieved from the previous activity. This is followed by initializing the items required for performing the request activity. The application sends a request to the third party server to obtain the history of the previous transactions. The application waits for the response from the server and displays the history of the previous transactions based on the received response. After displaying the history of the previous transactions, the application waits for user input for further processing. The user can interact with the application by pressing a cancel button displayed on the screen or the back button of the mobile device.
[0068] The application waits for further processing until the user presses either the cancel button or the back button of the device. When the user presses either of the buttons, the application stores the username for the next activity and initiates the main activity. The history activity ends on starting the main activity.
[0069] FIG. 10a presents an exemplary interface for showing the start of the face enrolment process. FIG. 10b discloses an exemplary face enrolment activity using a mobile computing device. An input is received at the mobile computing device for starting the face enrolment activity. Based on the input, the username and user ID are retrieved from one or more previous activities performed by the user. The necessary items and data are initialized after retrieving the username and user ID. The mobile device waits for a trigger from the user. The face enrolment process is executed when the user presses the start button displayed on the mobile screen, or a registration activity is started when the back button of the mobile device is pressed. An error message or success message is displayed on the screen of the mobile device if the face enrolment process fails or succeeds, respectively. The user is allowed to perform a login activity based on the success of the face enrolment process.
[0070] FIGS. 10c-10f describe a detailed process flowchart for the face enrolment activity. The process starts by getting a token from a third party API server to allow the enrolment process. The face enrolment process comprises an initialization stage, getting a token from a third party API server, getting face samples and setting directions, uploading samples, and performing enrolment.
[0071] In the initialization step, all the arrows are hidden from the user. After initialization, the process starts by getting a token from the third party API server to allow enrolment of face images. For the purpose of getting the token from the third party API server, a URL (urlToken) is set. The header includes the identity of the application (i.e. appID) and a secret parameter associated with the application. A request is sent to the third party API server for the purpose of getting the token. An error message is displayed on the screen of the mobile device when the token is not received from the third party API server.
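A minimal sketch of how the get-token request of paragraph [0071] might be assembled is shown below. The endpoint URL and the header field names (`appID`, `appSecret`) are placeholders assumed for illustration; the actual third party API is not specified in the disclosure, and no network call is made here.

```python
def build_token_request(app_id: str, app_secret: str,
                        url_token: str = "https://thirdparty.example/api/token"):
    # Assemble the get-token request: the urlToken URL plus a header
    # carrying the application identity (appID) and its secret parameter.
    return {
        "url": url_token,
        "headers": {"appID": app_id, "appSecret": app_secret},
    }


token_request = build_token_request("demo-app", "demo-secret")
```

The resulting dictionary would then be handed to whatever HTTP client the application uses; an error message is shown if no token comes back.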
[0072] In the case where the request is successful, the user is allowed to perform the next step of getting face samples. A predetermined time duration is allowed to elapse before capturing the face images. Face detection is performed on each captured image. A message "No face found" is displayed when the captured image does not include a face, and a variable retryNum is incremented by 1. The user retries and captures an image again, and the same process is repeated for the face detection. The user is allowed to retry and capture images for face detection until the value of the variable retryNum reaches a predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the maximum value.
[0073] When a face is identified in the image, it is determined whether the image includes multiple faces. If the captured image includes multiple faces, a message "Multiple faces found" is displayed and the variable retryNum is incremented by 1. The user retries and captures the image again, and the same process is repeated for the face detection. The user is allowed to retry and capture images for face detection until the value of the variable retryNum reaches the predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the maximum value.
[0074] When it is determined that the captured image includes only one face, the midpoint of the face is identified and compared with the midpoint of a face in a previously captured image. A message "No motion detected" is displayed when the result of the comparison is true, i.e. the midpoint of the face in the current image matches the midpoint of the face in the previous image, and the variable retryNum is incremented by 1. The user retries and captures an image again, and the same process is repeated for the face detection. The user is allowed to retry and capture images for face detection until the value of the variable retryNum reaches the predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the maximum value.
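The three per-capture checks of paragraphs [0072]-[0074] (no face, multiple faces, no motion) and the shared retryNum counter can be sketched as one validation loop. The function names and the value of the maximum are assumptions; the error messages and the retryNum behavior follow the flowchart.

```python
MAX_RETRIES = 3  # the "predetermined maximum value" (assumed here)


def check_sample(num_faces, prev_midpoint, cur_midpoint):
    # Return None when the capture is usable, otherwise the message the
    # flowchart displays before incrementing retryNum.
    if num_faces == 0:
        return "No face found"
    if num_faces > 1:
        return "Multiple faces found"
    if prev_midpoint is not None and cur_midpoint == prev_midpoint:
        return "No motion detected"  # liveness check against the prior image
    return None


def capture_with_retries(captures):
    # Walk a list of (num_faces, midpoint) captures, retrying until one
    # passes all three checks or retryNum reaches MAX_RETRIES.
    retry_num, prev_mid = 0, None
    for num_faces, mid in captures:
        if check_sample(num_faces, prev_mid, mid) is None:
            return "sample accepted"
        retry_num += 1
        if retry_num >= MAX_RETRIES:
            return "Maximum number of retries reached"
        prev_mid = mid
    return "out of captures"
```

In the real application the captures would come from the camera and a face detector rather than a precomputed list.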
[0075] The current image sample is encoded to a base64 string based on the comparison. In other words, when the result of the comparison is false, i.e. the midpoint of the face in the current image does not match the midpoint of the face in the previous image, the current image sample is encoded to a base64 string. An enrolment image array is created and the current image is included in the array based on the comparison. The value of a variable sampleNum is increased by 1 to indicate the number of samples added to the array. If the number of image samples in the array is less than a predefined value, a direction challenge is determined. The next image is captured after a predefined time if the user does not perform the direction challenge.
[0076] Based on the user's decision to perform the direction challenge, the user is presented with direction sequence options for capturing the images. The application selects a sequence randomly when an input is not received from the user within a predetermined time interval, and a direction is set as the current direction for capturing the image sample. An arrow is shown according to the current direction and the image is captured accordingly. A set of face samples (currently 4 images) is taken at different directions (up/down/right/left/center). The first face sample is always taken without a direction. The user is asked to follow an arrow for each sample. The arrows are hidden when the number of samples is equal to a threshold number of samples.
[0077] After enough face samples (currently 4 images) are captured, each encoded face sample is uploaded to the third party API server. An upload queue is used to upload the samples. For uploading each sample, an upload URL is set and authorization information and the token value are inserted in the header. A POST request is created to upload the image sample to the third party API server. If a success response is received, the next image sample is uploaded by repeating the same process. If the uploading of the image sample fails, a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
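The upload-queue behavior of paragraph [0077] (continue past failures, counting them) can be sketched with an injected `post` callable standing in for the HTTP POST; the function name and the boolean success convention are assumptions made for illustration.

```python
from collections import deque


def upload_all(samples, post):
    # Drain the upload queue, POSTing each encoded sample through the
    # injected `post` callable (True = success response).  A failed
    # upload increments the failure counter; the next sample is still
    # uploaded, as in the flowchart.
    queue = deque(samples)
    uploads_failed = 0
    while queue:
        if not post(queue.popleft()):
            uploads_failed += 1
    return uploads_failed
```

A real implementation would also attach the upload URL, authorization header, and token to each request.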
[0078] After getting the response for uploading all the samples, a request for face enrolment is sent to the third party API server. For performing the enrolment, a URL (urlEnroll) is set, and an authorization bearer and a token value are inserted in the header. Based on setting the URL for enrolment, a request for enrolment is created and sent to the third party API server.
[0079] A response from the third party API server is received in response to the request. An error message is displayed if the enrolment fails. The user is allowed to perform the next or other activities if a success response is received from the third party API server.
[0080] Similar to the face enrolment described above, an exemplary voice enrolment activity is described in another embodiment using the mobile computing device. An input is received at the mobile computing device for starting the voice enrolment activity. Based on the input, the username and user ID are retrieved from one or more previous activities performed by the user. The necessary items and data are initialized after retrieving the username and user ID. The mobile device waits for a trigger from the user. The voice enrolment process is executed when the user presses the start button displayed on the mobile screen, or a registration activity is started when the back button of the mobile device is pressed. An error message or success message is displayed on the screen of the mobile device if the voice enrolment process fails or succeeds, respectively. The user is allowed to perform a login activity based on the success of the voice enrolment process.
[0081] The voice enrolment process starts by getting a token from a third party API server to allow the enrolment process. The voice enrolment process comprises an initialization stage, getting a token from a third party API server, getting voice samples, uploading samples, and performing enrolment.
[0082] In the initialization step, all the arrows are hidden from the user. The process after initialization starts by getting a token from the third party API server to allow enrolment of voice samples. For the purpose of getting a token from the third party API server, a URL (urlToken) is set. The header includes the identity of the application (i.e. appID) and a secret parameter associated with the application. A request is sent to the third party API server for getting the token. An error message is displayed on the screen of the mobile device when the token is not received from the third party API server.
[0083] In the case where the request is successful, the user is allowed to perform the next step of getting voice samples. A predetermined time duration is allowed to elapse before capturing the voice samples. Voice recognition is performed on each captured voice sample. A message "No voice sample found" is displayed when the captured sample does not include a voice, and a variable retryNum is incremented by 1. The user retries and captures a voice sample again, and the same process is repeated for the voice recognition. The user is allowed to retry and capture voice samples for voice recognition until the value of the variable retryNum reaches a predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the maximum value.
[0084] When a voice is detected in the sample, it is determined whether the sample includes voices from multiple users. If the captured sample includes voices from multiple users, a message "Voice from multiple users found" is displayed and the variable retryNum is incremented by 1. The user retries and captures the voice sample again, and the same process is repeated for the voice recognition. The user is allowed to retry and capture samples for voice recognition until the value of the variable retryNum reaches the predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the maximum value.
[0085] The current voice sample is encoded to a base64 string based on a comparison with the previous sample. In other words, when the result of the comparison is false, i.e. the current voice sample does not match the previous voice sample, the current voice sample is encoded to a base64 string. An enrolment voice sample array is created and the current sample is included in the array based on the comparison. The value of a variable sampleNum is increased by 1 to indicate the number of samples added to the array.
[0086] After enough voice samples are captured, each encoded voice sample is uploaded to the third party API server. An upload queue is used to upload the samples. For uploading each sample, an upload URL is set and authorization information and the token value are inserted in the header. A POST request is created to upload the voice sample to the third party API server. If a success response is received, the next voice sample is uploaded by repeating the same process. If the uploading of the voice sample fails, a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
[0087] After getting the response for uploading all the samples, a request for voice enrolment is sent to the third party API server. For the purpose of performing the enrolment, a URL (urlEnroll) is set, and an authorization bearer and a token value are inserted in the header. Based on setting the URL for enrolment, a request for enrolment is created and sent to the third party API server.
[0088] A response from the third party API server is received in response to the request. An error message is displayed if the enrolment fails. The user is allowed to perform the next or other activities if a success response is received from the third party API server.
[0089] FIG. 11a presents an exemplary interface for showing the start of the face authentication process. FIGS. 11b-11f describe an exemplary face authentication activity using a mobile computing device. The face verification process comprises an initialization stage, getting a token from a third party API server, getting a challenge list, getting face samples and setting directions, uploading samples, and performing verification.
[0090] In the initialization step, all the arrows are hidden. The process after initialization starts by getting a token from the third party API server to allow the verification. For the purpose of getting a token from the third party API server, a URL (urlToken) is set. The URL includes the identity of the application (i.e. appID) and a secret parameter associated with the application. A GET request is sent to the third party API server for getting the token and a response is received based on the request. An error message is displayed on the screen of the mobile device when the token is not included in the response received from the third party API server, and the verification process ends. On determining that the response includes a token, the response is decoded to obtain the challenge list and the user is allowed to capture face images for verification. The received token includes a list of challenges (currently 3), and each challenge has a set of 3 directions (up/down/right/left).
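A sketch of decoding the token response into a challenge list, per paragraph [0090], is given below. The JSON layout (a "token" field plus a "challenges" list of 3 challenges, each holding 3 directions) is an assumed shape for illustration only; the disclosure does not specify the wire format.

```python
import json
import random


def parse_token_response(body):
    # Decode the verification-token response into (token, challenge list).
    data = json.loads(body)
    return data["token"], data["challenges"]


# Hypothetical response body matching the assumed shape.
body = json.dumps({
    "token": "abc123",
    "challenges": [["up", "left", "down"],
                   ["right", "up", "left"],
                   ["down", "right", "up"]],
})
token, challenges = parse_token_response(body)
# When the user picks no sequence in time, one is selected at random.
current_challenge = random.choice(challenges)
```

The selected challenge then drives which arrows are shown during capture.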
[0091] In the case where the request is successful, the user is allowed to perform the next step of getting face samples. A predetermined time interval is allowed to elapse after receiving the successful response before capturing the face images. Face detection is performed on each captured image. A message "No face found" is displayed when no face is detected in the captured image, and a variable retryNum is incremented by 1. The user retries and captures an image again, and the same process is repeated for the face detection. The user is allowed to retry and capture images for face detection until the value of the variable retryNum reaches a predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the predetermined maximum value.
[0092] When a face is detected in the image, it is determined whether the image contains more than one face. If the captured image contains multiple faces, a message "Multiple faces found" is displayed and the variable retryNum is incremented by 1. The user retries and captures the image again, and the same process is repeated for the face detection. The user is allowed to retry and capture images for face detection until the value of the variable retryNum reaches the predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the predetermined maximum value.
[0093] When it is determined that the captured image contains only one face, the midpoint of the face is identified and compared with the midpoint of a face in a previously captured image. A message "No motion detected" is displayed when the result of the comparison is true, i.e. the midpoint of the face in the current image matches the midpoint of the face in the previous image, and the variable retryNum is incremented by 1. The user retries and captures the image again, and the same process is repeated for the face detection. The user is allowed to retry and capture images for face detection until the value of the variable retryNum reaches the predetermined maximum value. A message "Maximum number of retries reached" is displayed when the value of the variable retryNum reaches the predetermined maximum value.
[0094] The current image sample is encoded to a base64 string based on the comparison. In other words, when the result of the comparison is false, i.e. the midpoint of the face in the current image does not match the midpoint of the face in the previous image, the current image sample is encoded to a base64 string. An image array is created and the current image is included in the array based on the comparison. The value of a variable sampleNum is increased by 1 to indicate the number of samples added to the array. If the number of image samples in the array is less than a predefined value, a direction challenge is determined. The next image is captured after a predefined time if the user does not perform the direction challenge.
[0095] Based on the user's decision to perform the direction challenge, the user is presented with direction sequence options for capturing the images. The application selects a sequence randomly when an input is not received from the user within a predetermined time interval, and a direction is set as the current direction for capturing the image sample. An arrow is shown according to the current direction and the image is captured accordingly. The user must turn his or her head in the direction of the arrow. A set of face samples (currently 4 images) is taken at different directions (up/down/right/left/center). The first face sample is always taken without a direction. The arrows are hidden when the number of samples is equal to a threshold number of samples.
[0096] After enough face samples (currently 4 images) are captured, each encoded face sample is uploaded to the third party API server. An upload queue is used to upload the samples. For uploading each sample, an upload URL is set and authorization information and the token value are inserted in the header. A POST request is created to upload the image sample to the third party API server. If a success response is received, the next image sample is uploaded by repeating the same process. If the uploading of the image sample fails, a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
[0097] After getting the response for uploading all the samples, a request for face verification is sent to the third party API server. For performing the verification, a URL (urlEnroll) is set, and an authorization bearer and a token value are inserted in the header. Based on setting the URL, a request for verification is created and sent to the third party API server.
[0098] A response from the third party API server is received in response to the request. If the received response does not satisfy a predetermined criterion, the user is allowed to set a new challenge for capturing the face image. A new challenge is allowed to be set after determining that the retries for the existing challenges have not reached a predefined number. On determining that the retries for the existing challenges have reached the predefined number, an error message "Verification failed" is displayed to show that the verification failed. The user is allowed to perform the next or other activities if a success response is received from the third party API server.
[0099] Similar to the face authentication described above, an exemplary embodiment describes an exemplary voice authentication activity using a mobile computing device. The voice verification process comprises an initialization stage, getting a token from a third party API server, getting a challenge list, getting voice samples, uploading samples, and performing verification.
[00100] After enough voice samples (currently 4 samples) are captured, each encoded voice sample is uploaded to the third party API server. An upload queue is used to upload the samples. For uploading each sample, an upload URL is set and authorization information and the token value are inserted in the header. A POST request is created to upload the voice sample to the third party API server. If a success response is received, the next voice sample is uploaded by repeating the same process. If the uploading of the voice sample fails, a variable indicating the number of failed uploads is incremented by 1 and the next sample is uploaded.
[00101] After getting the response for uploading all the samples, a request for voice verification is sent to the third party API server. For performing the verification, a URL (urlEnroll) is set, and an authorization bearer and a token value are inserted in the header. Based on setting the URL, a request for verification is created and sent to the third party API server.
[00102] A response from the third party API server is received in response to the request. If the received response does not satisfy a predetermined criterion, the user is allowed to set a new challenge for capturing the voice sample. A new challenge is allowed to be set after determining that the retries for the existing challenges have not exceeded a predefined number. On determining that the retries for the existing challenges have exceeded the predefined number, an error message "failed to verify" is displayed to show that the verification failed. The user is allowed to perform the next or other activities if a success response is received from the third party API server.
[00103] In another exemplary embodiment, FIGS. 12a, 12b, and 12c are parts of a flowchart illustrating an operation of sending a request for a transaction using audible or inaudible signals from the mobile application executing on the user mobile device. When the user starts the request activity for receiving payment from another user or customer, the username is retrieved from the previous activity. This is followed by initializing the items required for performing the request activity. The application waits for user input for further processing. On initialization, the application provides the user at least three options for entering the input. The user can click the home button displayed on the display screen, click the back button of the mobile device, or click the send button for sending the request for obtaining the payment from the customer. [00104] When the user clicks either the home button or the back button of the mobile device, the application keeps the username for the next activity and starts the main activity, which leads to the end of the payment activity after completion of the main activity.
[00105] When the user clicks the send button, the application prompts the user to provide an input for the transaction amount. The input can be provided by speech, or the user can key in the amount to be paid. The application verifies whether the amount entered by the user is numeric. If the user fails to enter a numeric amount, an error message is displayed and the process returns to the waiting step. On determining that the amount entered by the user is numeric, the application generates an audio signal to send the transaction details. If the created message is OK, the audio signal is sent to the proximity device and the third party server. If the created signal is not OK, the request activity for the payment ends.
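The numeric-amount check of paragraph [00105] can be sketched as follows; the function names and the returned step labels are illustrative assumptions, while the branch structure (error message and re-prompt versus generating the audio signal) follows the flowchart. The audio modulation itself is outside the scope of this sketch.

```python
def is_numeric_amount(text):
    # The application accepts the keyed-in amount only when it parses
    # as a number; otherwise the error path is taken.
    try:
        float(text)
    except ValueError:
        return False
    return True


def handle_amount(text):
    # Return the next step in the flowchart for a given user entry.
    if not is_numeric_amount(text):
        return "display error message"
    return "generate audio signal with transaction details"
```

Speech input would be converted to text by the platform's recognizer before passing through the same check.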
[00106] The application waits for some time and checks whether the user has clicked the cancel button displayed on the display screen. If the user has pressed the cancel button, a request is sent to the server for cancelling the transaction.
[00107] If the user has not pressed the cancel button, the application sends a query to the server for obtaining the status of the transaction. The application waits for the response and extracts the status of the transaction from the received response. If the application determines that the status includes "unpaid" information, the process returns to the "wait for 1 second" step. If the status includes "paid" information, a success dialog is displayed and the process returns to the "initialize the necessary items" step. If the status includes "cancelled" or "rejected" information, a failure dialog is displayed and the process returns to the "initialize the necessary items" step.
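The polling loop of paragraph [00107] may be sketched with an injected `query` callable and a pluggable `wait` standing in for the 1-second delay; the function name, dialog strings, and the `max_polls` guard are assumptions, while the status values and branching come from the text.

```python
def poll_transaction_status(query, wait=lambda: None, max_polls=10):
    # "unpaid" -> wait and query again; "paid" -> success dialog;
    # "cancelled"/"rejected" -> failure dialog.
    for _ in range(max_polls):
        status = query()
        if status == "paid":
            return "success dialog"
        if status in ("cancelled", "rejected"):
            return "failure dialog"
        wait()  # stands in for the 1-second wait before re-querying
    return "gave up"
```

In production, `wait` would be `lambda: time.sleep(1)` and `query` an HTTP call to the third party server.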
[00108] In another exemplary embodiment, FIGS. 13a-13d describe another process flowchart for performing the payment/transaction activity. When the user starts the payment/transaction activity, the username is retrieved from the previous activity. This is followed by initializing the items required for performing the payment activity. After initializing the required items, a request is sent to the third party server for obtaining the balance amount available in the user's account. The application waits for a response to the request and displays the balance amount in the user's account on receiving the response. After displaying the balance amount, the application is enabled to receive inputs and waits for an input for further processing. The application receives input in at least three ways. The user can click the home button displayed on the display screen, click the back button of the mobile device, or the application will receive an audio signal encoded with the transaction request. When the user clicks either the home button or the back button of the mobile device, the application stops listening for any audio inputs. The application keeps the username for the next activity and starts the main activity, which leads to the end of the payment activity.
[00109] When the application receives the audio signal, the application recognizes the audio signal and then stops listening. The application extracts the transaction details from the received audio signal and displays the extracted transaction details on the display screen. Along with displaying the transaction details, the application displays two buttons for accepting or rejecting the transaction. The application starts a time counter on displaying the transaction details and waits for user input for a predetermined interval of time for further processing.
[00110] If the user input is not received within the predetermined interval of time (15 seconds), or when the user clicks the reject transaction button, a request is sent to the third party server for generating a transaction ID. The application waits for the response from the third party server. On receiving the response from the third party server, the application extracts the transaction ID from the response. The application then sends a request to the third party server for rejecting the transaction. [00111] On the other side, when the accept transaction button is clicked, a request is sent to the third party server for generating a transaction ID. The application waits for the response from the third party server. On receiving the response from the third party server, the application extracts the transaction ID from the response. The application then sends a request to the third party server for accepting the transaction.
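The 15-second timeout of paragraph [00110], after which the transaction falls through to the rejection path, can be sketched as a deadline loop; the function name and the injected `poll_input` callable are illustrative assumptions.

```python
import time


def await_decision(poll_input, timeout_s=15.0):
    # Wait up to timeout_s (15 seconds in the text) for the user to
    # press accept or reject; with no input in time, fall through to
    # the automatic rejection path.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        choice = poll_input()  # returns "accept", "reject", or None
        if choice in ("accept", "reject"):
            return choice
        time.sleep(0.001)
    return "reject"  # timed out: the transaction is auto-rejected
```

Either outcome then triggers the generate-transaction-ID request before the accept or reject request is sent.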
[00112] The application waits for the response to the request for rejecting or accepting the transaction. On receiving the response based on the sent request, the application sends a transaction status query to the third party server for obtaining the status of the transaction. The application receives the transaction status and displays a dialog based on the received status. If the status indicates that the transaction is successful, i.e. the status contains "paid", a dialog box displaying the information for the success of the transaction is shown on the display screen. Based on the success of the transaction, the user is allowed to perform another transaction and the process repeats from the initialization step. If the status indicates that the transaction has failed, i.e. the status contains "failed" or "rejected", a dialog box containing the information for the failure of the transaction is displayed to the user. Based on the failure or rejection of the transaction, the user is allowed to restart the transaction and the process repeats from the initialization step. The payment activity ends based on the user input that no further transactions are to be made.
[00113] Exemplary computer readable media include flash memory drives, digital versatile discs (DVDs), compact discs (CDs), floppy disks, and tape cassettes. By way of example and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media are tangible and mutually exclusive to communication media. Computer storage media are implemented in hardware and exclude carrier waves and propagated signals. Computer storage media for purposes of this disclosure are not signals per se. Exemplary computer storage media include hard disks, flash drives, and other solid-state memory. In contrast, communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media.
[00114] Although described in connection with an exemplary computing system environment, examples of the disclosure are capable of implementation with numerous other general purpose or special purpose computing system environments, configurations, or devices.
[00115] Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with aspects of the disclosure include, but are not limited to, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. Such systems or devices may accept input from the user in any way, including from input devices such as a keyboard or pointing device, via gesture input, touch input, and/or via voice input.
[00116] The order of execution or performance of the operations in examples of the disclosure illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and examples of the disclosure may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure.

[00117] When introducing elements of aspects of the disclosure or the examples thereof, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. The term "exemplary" is intended to mean "an example of." The phrase "one or more of the following: A, B, and C" means "at least one of A and/or at least one of B and/or at least one of C."
[00118] In this manner, exemplary embodiments are provided which enable mobile devices to conduct a transaction between registered users that are within a specified proximity of each other. It should be noted that various modifications and changes may be made without departing from the spirit and scope of the present invention. Consequently, these and other modifications are contemplated to be within the spirit and scope of the following claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the disclosure, it is intended that the matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.

Claims

WHAT IS CLAIMED IS:
1. A method for facilitating or performing an electronic transaction, the method comprising: encoding into an audio signal a request for the transaction using an application executing at a computing device wherein the request comprises the transaction amount, a user ID of a user of the application, the date of the request, and the time of the request; transmitting the audio signal using the application through one or more speakers of the computing device or one or more speakers operably connected to the computing device wherein the connection is wireless or not wireless; and receiving directly in a bank account associated with the user or the computing device funds corresponding to the transaction amount less any applicable fee or not less.
2. The method of Claim 1, further comprising logging in the user to the application using face authentication.
3. The method of Claim 2, wherein logging in to the application involves voice authentication, in addition to face authentication, as an extra layer of security.
4. A method for facilitating or performing an electronic transaction, the method comprising: logging in a first user to an application executing at a first computing device using face authentication; encoding into an audio signal a request for the transaction using the application executing at the first computing device wherein the request comprises the transaction amount, a user ID of the first user of the first computing device, the date of the request, and the time of the request; and transmitting the audio signal using the application through one or more speakers of the first computing device or one or more speakers operably connected to the first computing device wherein the connection is wireless or not wireless.
5. The method of Claim 4, wherein logging in to the application involves voice authentication, in addition to face authentication, as an extra layer of security.
6. A method for facilitating or performing an electronic transaction, the method comprising: receiving a request for the electronic transaction at a first application executing at a first computing device in the form of an audio signal wherein the audio signal is received by one or more microphones of the first computing device or one or more microphones operably connected to the first computing device; decoding the audio signal using the first application to extract details of the request wherein the request comprises the transaction amount, a user ID of a first user of a second application that generated the audio signal at a second computing device, the date of the request, and the time of the request; allowing a second user of the first application to either approve or reject the received request for the electronic transaction; and transferring directly from a bank account associated with the second user or the first computing device funds corresponding to the transaction amount.
7. The method of Claim 6, further comprising logging in the second user to the first application executing at the first computing device using face authentication.
8. The method of Claim 7, wherein logging in to the application involves voice authentication, in addition to face authentication, as an extra layer of security.
9. A method for facilitating or performing an electronic transaction, the method comprising: logging in a first user to a first application executing at a first computing device using face authentication; receiving a request for the electronic transaction at the first application executing at the first computing device in the form of an audio signal wherein the audio signal is received by one or more microphones of the first computing device or one or more microphones operably connected to the first computing device; decoding the audio signal using the first application to extract details of the request wherein the request comprises the transaction amount, a user ID of a second user of a second application that generated the audio signal at a second computing device, the date of the request, and the time of the request; and allowing the first user of the first application to either approve or reject the received request for the electronic transaction.
10. The method of Claim 9, wherein logging in to the application involves voice authentication, in addition to face authentication, as an extra layer of security.
11. A method for facilitating or performing an electronic transaction, the method comprising: encoding into an audio signal a request for the transaction using a first application executing at a first computing device wherein the request comprises a transaction amount, a user ID of a first user of the first application, the date of the request, and the time of the request; transmitting the audio signal using the first application through one or more speakers of the first computing device or one or more speakers operably connected to the first computing device wherein the connection is wireless or not wireless; receiving the audio signal at a second application executing at a second computing device; decoding the audio signal using the second application to extract details of the transaction request; allowing a second user of the second application to either approve or reject the received request for the electronic transaction; transferring directly from a bank account associated with the second user or the second computing device funds corresponding to the transaction amount; and receiving directly in a bank account associated with the first user and/or the first computing device funds corresponding to the transaction amount less any applicable fee or not less.
12. The method of Claim 11, further comprising logging in the users to the applications using face authentication.
13. The method of Claim 12, wherein logging in to at least one of the applications involves voice authentication, in addition to face authentication, as an extra layer of security.
14. A method for facilitating or performing an electronic transaction, the method comprising: logging in a first user to a first application executing at a first computing device using face authentication; logging in a second user to a second application executing at a second computing device using face authentication; encoding into an audio signal a request for the transaction using the first application executing at the first computing device wherein the request comprises a transaction amount, a user ID of the first user of the first application, the date of the request, and the time of the request; transmitting the audio signal using the first application through one or more speakers of the first computing device or one or more speakers operably connected to the first computing device wherein the connection is wireless or not wireless; receiving the audio signal at the second application executing at the second computing device; decoding the audio signal using the second application to extract details of the transaction request; allowing the second user of the second application to either approve or reject the received request for the electronic transaction.
15. The method of Claim 14, wherein logging in to at least one of the applications involves voice authentication, in addition to face authentication, as an extra layer of security.
16. The method of Claim 14, wherein both the first application and the second application are the same application.
17. The method as in any one of Claims 2, 3, 4, 5, 7, 8, 9, 10, 12, 13, 14, 15, or 16, wherein the face authentication comprises at least one method of liveness detection to detect the fraudulent use of a photo and video replay of a face.
18. The method of Claim 17, wherein the liveness detection method comprises instructing the concerned user or users of the first computing device to move his/her head one or more times.
19. The method as in any one of Claims 1-10 inclusive, further comprising an initial step of registering first the concerned user or users with the respective application or applications by uploading: one or more images of one or more identity documents of the concerned user or users; and a video of short duration wherein the concerned user or users communicate in person his or her intent to register with the respective application or applications.
20. The method as in any one of Claims 11-16 inclusive, further comprising the initial steps of: registering first the concerned user or users with the respective application or applications by receiving: one or more images of one or more identity documents of the concerned user or users; and a video of short duration wherein the concerned user or users communicate in person his or her or their intent to register with the respective application or applications; and verifying that the received image or images of the one or more identity documents are image or images of valid identity document or documents of the person in the video.
21. A method for verifying the identity of an online applicant in an initial online registration to register for an account, a service, a test, or any combination of an account, a service, and a test; the method comprising:
receiving one or more images of one or more identity documents of the online applicant; receiving a video of short duration wherein the online applicant communicates in person his or her intent to register for the account, service, or test, or any combination of the account, service, or test; and verifying that the received image or images of the one or more identity documents are image or images of valid identity document or documents of the person in the video.
22. The method of claim 21, wherein the one or more images of the online applicant’s one or more identity documents and the video are uploaded by the applicant via a mobile site on which the registration is being done.
23. The method of claim 21, wherein the one or more images of the online applicant’s one or more identity documents and the video are uploaded by the applicant via a mobile application also used for registration.
24. The method of claim 22, wherein the one or more images and the video are captured using one or more cameras of the mobile device and one or more microphones of the mobile device accessing the mobile site.
25. The method of claim 23, wherein the one or more images and the video are captured using one or more cameras of the mobile device and one or more microphones of the mobile device running the mobile application.
26. A system for performing an electronic transaction, the system comprising: a memory associated with a first mobile computing device, the memory storing an application for performing the electronic transaction with a second mobile computing device in proximity to the first mobile computing device; and a processor programmed to: receive a request for the electronic transaction at the application executing at the first computing device in the form of audio signals; and allow a user of the first computing device to either approve or reject the received request for the electronic transaction.
27. The system of claim 26, wherein the application executing at the first computing device interacts with a third party server to extract a balance amount in an account associated with the first computing device before performing the electronic transaction.
28. The system of claim 27, wherein the application executing at the first computing device interacts with a third party server to extract a history of the previous transaction made by a user of the first computing device.
29. A method for facilitating or performing an electronic transaction, the method comprising: registering a user with a third party API server using a mobile application executing on a mobile computing device for performing the electronic transaction, wherein the registering the user with the third party API server using the mobile application includes enrolling one or more face images of the user with the third party API server;
logging into the mobile application by performing face authentication of the user; and sending a request to the third party API server to verify a transaction number;
generating an audio signal at the mobile computing device based on the verification, wherein the audio signal is of a frequency of either within the human hearing range or not; and sending the audio signal including the transaction number to a proximity mobile computing device.
30. The method of claim 29, further comprising sending a transaction status query to the third party API server.
31. The method of claim 29, wherein the transaction number is generated by the mobile application based on the verification of the transaction amount.
32. The method of claim 29, wherein the face authentication comprises performing direction challenges for capturing face images for authentication.
33. The method of claim 29, wherein the face authentication comprises allowing a user to perform a threshold number of attempts for face detection.
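The claims above recite encoding the transaction amount, user ID, date, and time into an audio signal, but the disclosure does not fix a modulation scheme. The sketch below illustrates one possible approach (binary FSK over two assumed audible tones); the field delimiter, tone frequencies, bit duration, and sample rate are all illustrative assumptions, not part of the claimed method.

```python
import math

SAMPLE_RATE = 44100              # assumed sample rate (Hz)
BIT_DURATION = 0.05              # assumed seconds per bit
FREQ_0, FREQ_1 = 1800.0, 2200.0  # assumed tones for bits 0 and 1 (audible range)

def request_to_bits(amount, user_id, date, time_str):
    """Serialize the request fields recited in the claims into a bit string.

    The pipe delimiter is a hypothetical wire format chosen for illustration.
    """
    payload = f"{amount}|{user_id}|{date}|{time_str}".encode("utf-8")
    return "".join(f"{byte:08b}" for byte in payload)

def bits_to_samples(bits):
    """Render each bit as a fixed-duration sine tone (binary FSK).

    The resulting sample list would be written to the device's speaker;
    the receiving application reverses the process via its microphone.
    """
    samples = []
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    for bit in bits:
        freq = FREQ_1 if bit == "1" else FREQ_0
        samples.extend(
            math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(samples_per_bit)
        )
    return samples
```

A decoder on the second computing device would segment the microphone capture into bit-length windows, estimate the dominant frequency in each window, and reassemble the payload bytes; a claim-conformant implementation could equally use ultrasonic tones, since the claims allow frequencies outside the human hearing range.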
PCT/BN2018/050001 2017-07-28 2018-09-29 A mobile wallet solution for facilitating proximity payments WO2019109153A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762538532P 2017-07-28 2017-07-28
US62/538,532 2017-07-28

Publications (2)

Publication Number Publication Date
WO2019109153A2 true WO2019109153A2 (en) 2019-06-13
WO2019109153A3 WO2019109153A3 (en) 2020-02-13


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18885005

Country of ref document: EP

Kind code of ref document: A2