EP3533015A1 - Method and device for applying artificial intelligence to send money by using voice input - Google Patents

Method and device for applying artificial intelligence to send money by using voice input

Info

Publication number
EP3533015A1
Authority
EP
European Patent Office
Prior art keywords
user
data
voice input
recipient
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP17872539.6A
Other languages
English (en)
French (fr)
Other versions
EP3533015A4 (de)
Inventor
Seung-hak YU
Min-Seo Kim
In-Chul Hwang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2017/013226 external-priority patent/WO2018093229A1/en
Publication of EP3533015A1
Publication of EP3533015A4


Classifications

    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06Q 20/4014 Identity check for transactions
    • G06F 1/163 Wearable computers, e.g. on a belt
    • G06Q 20/108 Remote banking, e.g. home banking
    • G06Q 20/3223 Realising banking transactions through M-devices
    • G06Q 20/40145 Biometric identity checks
    • G06Q 40/02 Banking, e.g. interest calculation or account maintenance
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 17/00 Speaker identification or verification techniques
    • G10L 17/04 Training, enrolment or model building
    • G10L 17/22 Interactive procedures; Man-machine interfaces
    • G10L 17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G06Q 20/34 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using cards, e.g. integrated circuit [IC] cards or magnetic cards
    • G10L 2015/223 Execution procedure of a spoken command

Definitions

  • the present disclosure generally relates to a method and device for sending money using voice input.
  • the present disclosure also relates to an artificial intelligence (AI) system and its application that simulates functions, such as recognition and determinations of the human brain, using a machine learning algorithm.
  • a user may receive various services using a device.
  • a user may input his or her voice to the device, and the device may perform an operation according to the user's voice (e.g., according to commands spoken by the user).
  • a user may access a financial service using a device executing an application provided by a bank. For example, the user may send money to an account of a recipient by using the device. The user may execute the application, input an account number, a password, etc., and send money to the account of the recipient.
  • An AI system is a machine-learning system that learns for itself, makes determinations, and becomes “smarter”, unlike existing rule-based systems.
  • the AI system may provide an improved recognition rate and understand user preferences more accurately as it is used, and thus existing rule-based systems are increasingly being replaced by deep-learning based AI systems.
  • AI technology includes machine learning (e.g., deep learning) and element technologies that utilize machine learning.
  • Machine learning is an algorithm technology in which the machine itself classifies/learns characteristics of input data.
  • Element technology is technology that simulates functions such as recognition and determinations of the human brain using machine-learning algorithms such as deep learning and includes technical fields such as linguistic understanding, visual understanding, inference/prediction, knowledge representation, motion control, etc.
  • AI technology has been applied to various fields.
  • Linguistic understanding is technology for recognizing and applying/processing human language/characters and includes natural language processing, machine translation, dialogue system, query/response, speech recognition/synthesis, and the like.
  • Visual understanding is technology for recognizing and processing objects as human vision does, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image enhancement, and the like.
  • Inference/prediction is technology for determining and logically inferring and predicting information, and includes knowledge/probability-based inference, optimization prediction, preference-based planning, recommendation, and the like.
  • Knowledge representation is technology for automating experience information of humans into knowledge data and includes knowledge building (generation/classification of data), knowledge management (utilization of data), and the like.
  • Motion control is technology for controlling autonomous travel of a vehicle and the motion of a robot, and includes movement control (navigation, collision, and traveling), operation control (behavior control), and the like.
  • a method and a device for sending money to an account of a recipient by using voice are provided.
  • FIG. 1 is a diagram illustrating a method by which a user sends money by using the user's voice, according to an example embodiment.
  • FIG. 2 is a block diagram illustrating a device according to an example embodiment.
  • FIG. 3 is a diagram illustrating a device learning a pattern according to an example embodiment.
  • FIG. 4 is a diagram illustrating a method of approving remittance details, according to an example embodiment.
  • FIG. 5 is a diagram illustrating a method of selecting one of a plurality of recipients, according to an example embodiment.
  • FIG. 6 is a diagram illustrating a method of selecting any of a plurality of banks, according to an example embodiment.
  • FIG. 7 is a flowchart illustrating a method of sending money by using voice, according to an example embodiment.
  • FIG. 8 is a diagram illustrating a method of paying by using voice, according to another example embodiment.
  • FIG. 9 is a diagram illustrating a device learning a payment pattern according to an example embodiment.
  • FIG. 10 is a flowchart illustrating a method of paying using voice, according to an example embodiment.
  • FIG. 11 is a block diagram of a processor according to some example embodiments.
  • FIG. 12 is a block diagram of a data learner according to some example embodiments.
  • FIG. 13 is a block diagram of a data recognizer according to some example embodiments.
  • FIG. 14 is a diagram illustrating an example of learning and recognizing data by interaction between a device and a server according to some example embodiments.
  • FIGS. 15 and 16 are flowcharts of a network system using a data recognition model according to some example embodiments.
  • a device includes a memory configured to store at least one program; a microphone configured to receive a voice input; and at least one processor configured to execute the at least one program to perform operations for sending money to a recipient, wherein the operations include determining a payment intention of a user based on analyzing the received voice input; retrieving contact information from a stored contact list based on a name of the recipient; transmitting the name and the contact information of the recipient to a bank server together with an amount of money specified in the voice input; receiving remittance details from the bank server; and approving the remittance details.
  • a paying method includes receiving a voice input of a user; determining a payment intention of a user based on an analysis of the received voice input; retrieving contact information from a stored contact list based on a name of a recipient specified in the voice input; transmitting the name and the contact information of the recipient to a bank server together with an amount specified in the voice input; receiving remittance details from the bank server; and approving the remittance details.
  • FIG. 1 is a diagram illustrating a method by which a user sends money by using the user’s voice, according to an example embodiment.
  • the user may input his or her voice to a device 10 by speaking (e.g., into a microphone) in order to send the money to a recipient.
  • the user may send the money to a recipient by speaking only the name of the recipient, without speaking or otherwise inputting the recipient's account number.
  • the device 10 may receive voice input from the user.
  • the device 10 may include a microphone, which receives the user's voice.
  • the device 10 may receive the voice input of the user via the microphone by executing, for example, a voice assistant application such as "S Voice" and controlling the executed application.
  • the device 10 may recognize the user's voice as indicated at item 1 in FIG. 1.
  • the device 10 may analyze the voice to determine an intention of the user. For example, if the device receives a voice input of the user saying 'send 100 million won to Samsung', the device 10 may determine from the user’s voice whether the user intends to send money.
  • the device 10 may store the user's entire voice input in memory when the user sends money and use the stored input to learn the pattern of voice inputs used when sending money.
  • the device 10 may determine the intention of the user more accurately through learning. At the beginning of learning, when the user's voice is input, the device 10 may confirm whether to send money. The device 10 may more accurately determine the sending intention of the user through repeated learning.
  • the device 10 may compare a stored voice pattern with the pattern of the input voice to determine the intention of the user.
  • the stored voice pattern can include the pattern of the user’s voice input when the user intends to send money.
  • the device 10 may determine that the user intends to send money if the stored voice pattern is similar or identical to the pattern of the input voice (e.g., the similarity is equal to or exceeds a threshold similarity).
  • the pattern of the stored voice may be updated or added to through learning.
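  • As a minimal sketch of the threshold comparison described above (the cosine-similarity measure, the feature-vector representation, and the 0.85 threshold are illustrative assumptions, not taken from the patent), the stored voice patterns might be compared against a new input as follows:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # assumed value; the text only requires "a threshold similarity"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voice-feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def has_remittance_intention(input_pattern: np.ndarray,
                             stored_patterns: list[np.ndarray]) -> bool:
    """True if the input voice pattern is similar or identical to any
    stored pattern of voice inputs used when sending money."""
    return any(cosine_similarity(input_pattern, p) >= SIMILARITY_THRESHOLD
               for p in stored_patterns)

stored = [np.array([0.9, 0.1, 0.3])]  # learned remittance voice patterns
print(has_remittance_intention(np.array([0.88, 0.12, 0.31]), stored))  # True
```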
  • the device 10 may confirm the name of the recipient or a title and search for the name or the title stored in a contact list. For example, if the user inputs the recipient as 'Samsung', the device 10 may search for 'Samsung' in the contact list. For example, the device 10 may confirm a phone number of 'Samsung' in the contact list.
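  • A minimal sketch of this contact-list lookup (the data structure and function name are hypothetical) could look like this; note that a substring search naturally returns several candidates, which matters for the disambiguation described with FIG. 5 below:

```python
def find_contacts(contact_list: dict[str, str], name: str) -> list[tuple[str, str]]:
    """Return (name, phone number) pairs whose stored name contains the
    recipient name spoken by the user (e.g., 'Samsung')."""
    return [(n, phone) for n, phone in contact_list.items() if name in n]

contacts = {"Samsung": "010-1111-2222", "Samsung Electronics": "02-3333-4444"}
print(find_contacts(contacts, "Samsung"))
# [('Samsung', '010-1111-2222'), ('Samsung Electronics', '02-3333-4444')]
```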
  • the device 10 may transmit user information, recipient information, and an amount of money to a bank server 20 as indicated at item 2 in FIG. 1.
  • the user information includes, without limitation, a name of the user, an account number, and the like.
  • the recipient information includes, without limitation, the name of the recipient, a telephone number, and the like.
  • the recipient information may not include an account number of the recipient.
  • the amount of money indicates an amount of money specified in the user's voice input, and is an amount of money that the user will send to the recipient.
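  • The message to the bank server might be assembled as follows (a sketch; the field names and JSON encoding are assumptions, and the key point from the text is that the recipient's account number is absent):

```python
import json

def build_remittance_request(user: dict, recipient: dict, amount: int) -> str:
    """Assemble the remittance request sent to the bank server. The
    recipient's account number is deliberately not included; the bank
    server looks it up from the recipient's name and phone number."""
    payload = {
        "user": {"name": user["name"], "account_number": user["account_number"]},
        "recipient": {"name": recipient["name"], "phone": recipient["phone"]},
        "amount": amount,  # amount of money specified in the voice input
    }
    return json.dumps(payload, ensure_ascii=False)

print(build_remittance_request(
    {"name": "AAA", "account_number": "11-1111"},
    {"name": "Samsung", "phone": "010-1111-2222"},
    100_000_000))
```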
  • the device 10 may be, without limitation, a smart phone, a tablet PC, a PC, a smart TV, a mobile phone, a personal digital assistant (PDA), a laptop, a media player, a micro server, a global positioning system (GPS) device, an e-book terminal, a digital broadcasting terminal, a navigation system, a kiosk, an MP3 player, a digital camera, consumer electronics, and other mobile or non-mobile computing devices.
  • the device 10 may also be a wearable device, such as, without limitation, a watch, glasses, a hair band, a ring, and the like having a communication function and a data processing function.
  • the device 10 may include any kind of device capable of receiving voice input of a user and providing a reply message to the user.
  • the device 10 may communicate with other devices (not shown) over a network in order to use various types of context information.
  • the network may include a local area network (LAN), a wide area network (WAN), a value added network (VAN), a mobile radio communication network, a satellite communication network, and/or a combination thereof, may be a data communication network in a comprehensive sense for allowing respective network elements to smoothly communicate with each other, and may include wired Internet, wireless Internet, and a mobile wireless communication network.
  • Wireless communication may include, for example, Wi-Fi, Bluetooth, Bluetooth low energy, ZigBee, Wi-Fi Direct (WFD), ultra wideband (UWB), infrared data association (IrDA), Near Field Communication (NFC), and the like, but is not limited thereto.
  • the bank server 20 may receive the user information and the recipient information as indicated at item 3 in FIG. 1.
  • the bank server 20 may search for an account number that matches the user information.
  • the bank server 20 may search for the account number of the user by using, for example, the name of the user and the telephone number.
  • the bank server 20 may search for an account number assigned (or matched) to unique identification information of the device 10.
  • the device 10 may include unique identification information and the bank server 20 may use the unique identification information of the device 10 to search an account database for the account number of the user of the device 10.
  • the bank server 20 may also search for an account number matching the recipient information. For example, the bank server 20 may search for the account number matching the name and the telephone number of the recipient.
  • the bank server 20 may generate remittance details as indicated at item 4 in FIG. 1.
  • the bank server 20 may generate the remittance details including, without limitation, the account number of the user, the name of the recipient, the account number of the recipient, and the amount of money.
  • the bank server 20 may generate remittance details 'send 10 thousand won from bank A, 11-1111 (account number), and AAA (user name) to bank B, 22-2222 (account number), and BBB (recipient name)'.
  • the bank server 20 may transmit the remittance details to the device 10.
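  • On the server side, the lookup and generation of remittance details might be sketched as below (the account database, the keying by name and phone number, and the hardcoded user bank are illustrative assumptions):

```python
ACCOUNT_DB = {  # hypothetical account database keyed by (name, phone number)
    ("BBB", "010-1111-2222"): ("bank B", "22-2222"),
}

def generate_remittance_details(user: dict, recipient: dict, amount: int) -> dict:
    """Look up the recipient's account from the name and phone number and
    assemble the remittance details returned to the device."""
    bank, account = ACCOUNT_DB[(recipient["name"], recipient["phone"])]
    return {
        "from": {"bank": "bank A", "account": user["account_number"], "name": user["name"]},
        "to": {"bank": bank, "account": account, "name": recipient["name"]},
        "amount": amount,
    }

print(generate_remittance_details(
    {"name": "AAA", "account_number": "11-1111"},
    {"name": "BBB", "phone": "010-1111-2222"},
    10_000))  # mirrors the 'send 10 thousand won from bank A ... to bank B ...' example
```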
  • the device 10 may display the remittance details.
  • the device 10 may display the remittance details to allow the user to confirm whether the intention of the user’s voice input and the remittance details coincide with each other.
  • the user may approve the remittance details.
  • the user may input, for example, one or more of voice, a fingerprint, an iris scan, a vein image, a face image, and a password if the user wants to send money according to remittance details.
  • the device 10 may perform authentication by determining whether the input voice, fingerprint, iris scan, vein image, face image, and/or password match the stored personal information of the user. This authentication of the remittance details is shown as item 5 in FIG. 1.
  • the device 10 may transmit an authentication result to the bank server 20 as shown in item 6 in FIG. 1.
  • the bank server 20 may receive the authentication result and send the money to the recipient according to a received authentication result authenticating the remittance details, as shown at item 7 in FIG. 1.
  • the bank server 20 may send the money to the recipient if the user is authenticated as a legitimate user (and optionally send confirmation to the device 10 that the money is sent), and may not send the money and transmit an error message to the device 10 otherwise.
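  • The server-side handling of the authentication result might be sketched as follows (the function names and messages are hypothetical; the transfer itself is stubbed out):

```python
def transfer(src: dict, dst: dict, amount: int) -> None:
    """Stub for the core-banking transfer; a real server would debit src
    and credit dst atomically."""
    print(f"transferred {amount} won from {src['account']} to {dst['account']}")

def handle_authentication_result(authenticated: bool, details: dict) -> str:
    """Send the money only when the device reports that the user was
    authenticated as a legitimate user; otherwise return an error."""
    if not authenticated:
        return "error: authentication failed, money not sent"
    transfer(details["from"], details["to"], details["amount"])
    return "confirmation: money sent"
```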
  • FIG. 2 is a block diagram illustrating the device 10 according to an example embodiment.
  • the device 10 may include a processor 11, a memory 12, a display 13, and a microphone 14.
  • the processor 11 may control the overall operation of the device 10 including the memory 12, the display 13, and the microphone 14.
  • the processor 11 may control storing data in and/or reading data from the memory 12.
  • the processor 11 may determine an image to be displayed on the display 13 and may control the display 13 to display the image.
  • the processor 11 may control turning the microphone 14 on/off and analyze (e.g., by executing a voice analysis application) a voice input through the microphone 14.
  • the memory 12 may store personal information of a user, biological information, and the like.
  • the memory 12 may store, without limitation, a user's voice, fingerprint, iris scan, vein image, facial image, and/or a password.
  • the memory 12 may store samples of the user's voice and/or prior voice input for analyzing a pattern of the user's voice.
  • the display 13 (e.g., an LCD or OLED display) may display images and reproduce video content under control of the processor 11.
  • the microphone 14 may receive voice input.
  • the microphone 14 may include circuitry to convert a sound (e.g., voice input) generated in a periphery of the device 10 into an electric signal and output the electric signal to the processor 11.
  • FIG. 3 is a diagram illustrating the device 10 learning a pattern according to an example embodiment.
  • the device 10 may, for example, execute a voice analysis application for analyzing various types of sentences and learn patterns based thereon.
  • a user may say various types of sentences to send money. For example, in order to send 100 million won from a bank account of the user to Samsung (a recipient), the user may phrase the request in many different ways, such as 'send 100 million won to Samsung'.
  • the device 10 may analyze and learn a pattern of the user’s voice to identify sentences including the user’s intention to send money.
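  • A rule-based stand-in for such learned patterns (the regular expressions below are illustrative assumptions; the patent learns these patterns rather than hand-writing them) could identify remittance sentences and pull out the recipient and amount:

```python
import re

# Hypothetical remittance patterns the device might accumulate through learning.
REMITTANCE_PATTERNS = [
    re.compile(r"send\s+(?P<amount>[\d,]+\s*(?:million\s+)?won)\s+to\s+(?P<recipient>\w+)", re.I),
    re.compile(r"transfer\s+(?P<amount>[\d,]+\s*(?:million\s+)?won)\s+to\s+(?P<recipient>\w+)", re.I),
]

def parse_remittance(sentence: str) -> dict | None:
    """Return {recipient, amount} if the sentence matches a learned
    remittance pattern, else None (no remittance intention detected)."""
    for pattern in REMITTANCE_PATTERNS:
        m = pattern.search(sentence)
        if m:
            return {"recipient": m.group("recipient"), "amount": m.group("amount")}
    return None

print(parse_remittance("Send 100 million won to Samsung"))
# {'recipient': 'Samsung', 'amount': '100 million won'}
```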
  • when the user has a plurality of accounts, the device 10 may confirm with the user the account from which to withdraw money. Once an account is designated, money transfers initiated using the device 10 may withdraw money from the designated account from then on, unless there is a different instruction from the user.
  • FIG. 4 is a diagram illustrating a method of approving remittance details, according to an example embodiment.
  • a user may approve the remittance details by using, without limitation, voice input, fingerprint, vein image, face image or iris scan.
  • the device 10 may receive the remittance details from the bank server 20 and display the remittance details on the display 13 thereof.
  • the remittance details may include, without limitation, an account number of the user, an account number of a recipient, and an amount of money.
  • the user may approve the remittance details after visually confirming the displayed remittance details.
  • the user may use a voice input, fingerprint, vein image, facial image or iris scan.
  • the device 10 may transmit a message indicating that the remittance details are approved to the bank server 20 if the input voice, fingerprint, vein image, facial image, or iris scan matches (e.g., has a similarity equal to or exceeding a predetermined similarity threshold) the corresponding user information stored in the memory 12 of the device.
  • FIG. 5 is a diagram illustrating a method of selecting one of a plurality of recipients according to an example embodiment.
  • a user may select any one of the plurality of recipients through, for example, voice input.
  • the device 10 may search a contact list stored in memory 12 (or some other external memory) for a name identified as a recipient. If a plurality of recipients including the identified name are found in the contact list, the device 10 may display names of the plurality of found recipients on display 13. The user may select any one of the displayed names by voice input.
  • the device 10 may display the two recipients on the display 13.
  • the user may select either a 1st recipient or a 2nd recipient by voice input.
  • the user may select a recipient by providing a voice input such as 'send money to the 1st one' or 'send money to Samsung Electronics'.
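  • Resolving such a spoken selection against the displayed candidates might be sketched as follows (the ordinal parsing and longest-name-first matching are assumptions about one reasonable implementation):

```python
import re

def select_recipient(candidates: list[str], voice_reply: str) -> str | None:
    """Resolve replies such as 'send money to the 1st one' or
    'send money to Samsung Electronics' against the displayed names."""
    reply = voice_reply.lower()
    m = re.search(r"(\d+)(?:st|nd|rd|th)", reply)           # ordinal reference
    if m and 1 <= int(m.group(1)) <= len(candidates):
        return candidates[int(m.group(1)) - 1]
    for name in sorted(candidates, key=len, reverse=True):  # prefer the most specific name
        if name.lower() in reply:
            return name
    return None

names = ["Samsung", "Samsung Electronics"]
print(select_recipient(names, "send money to the 2nd one"))          # Samsung Electronics
print(select_recipient(names, "send money to Samsung Electronics"))  # Samsung Electronics
```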
  • if the display 13 is configured as a touch screen, the user may instead use a touch input to select a recipient.
  • FIG. 6 is a diagram illustrating a method of selecting any of a plurality of banks according to an example embodiment.
  • a user may select any bank (or an account number) from the plurality of banks (or account numbers) using voice input.
  • the bank server 20 may transmit a list of the banks (or account numbers) registered under the name of the recipient when transmitting the remittance details to the device 10. For example, if there are a plurality of account numbers registered under the name of the recipient, the device 10 may display the account numbers on the display 13 so that the user can determine to which account number to send money. As above, the user may select any one of the displayed account numbers by voice or touch input.
  • the device 10 may display the two account numbers on the display 13.
  • the user may select either a 1st or 2nd account number by voice input.
  • the user may select a bank or an account number by speaking, for example, 'Send money to Bank A', 'Send money to the 1st one', or 'Send money to the 55 - 5555 account'.
  • FIG. 7 is a flowchart illustrating a method of sending money by using voice input, according to an example embodiment.
  • a user may input a name of a recipient and an amount of money by voice input and send money to the recipient.
  • the device 10 may receive the user's voice input via the microphone 14.
  • the device 10 may analyze the received voice input to determine an intention of the user to send money. As a result of analyzing the received voice, if it is determined that there is no intention to send money, the device 10 does not perform a process for sending money.
  • the voice input may include the name of the recipient and the amount of money, etc.
  • the device 10 may analyze the voice input and determine that the user intends to send money if an instruction, the name, the amount of money, and the like are included in the voice input.
  • the device 10 may search a stored contact list for a contact corresponding to the name of the recipient. If a contact corresponding to the name of the recipient is not found, the device 10 may display, on the display 13, information indicating that the contact was not found. The user may then input contact information for the recipient by voice input, and the device 10 may store the name of the recipient and the corresponding contact in the contact list based on the input voice.
  • the device 10 may send the name of the recipient and the contact information to the bank server 20 along with the amount of money included in the voice input.
  • the contact information may be searched for by the name of the recipient or entered via voice input by the user.
  • the device 10 may receive remittance details from the bank server 20.
  • the bank server 20 may search for an account number of the recipient by using the name and the contact information of the recipient and transmit to the device 10 the remittance details including, without limitation, the name of the recipient, the account number, and the amount of money.
  • the device 10 may approve (authenticate) the remittance details.
  • the device 10 may approve the remittance details by using, without limitation, the user's voice input, fingerprint, iris scan, vein image, facial image, and/or a password.
  • the user may confirm the remittance details and use voice input to the device 10 to approve the remittance details, or allow the device 10 to recognize the user's iris, fingerprint, and the like.
  • when the user wears a wearable device such as a smart watch, the user may perform authentication by using a vein in the back of the user's hand. For example, the user may manipulate the smart watch to identify the vein in the back of the hand and perform authentication so as to approve the remittance details.
  • FIG. 8 is a diagram illustrating a method for paying using voice input according to another example embodiment. Referring to FIG. 8, a user may pay by using voice input.
  • the device 10 may display a screen on display 13 for paying for goods or services purchased by the user on the Internet. For example, when the user purchases a Galaxy Note 7, the device 10 may display a message 'Would you like to purchase a Galaxy Note 7?'
  • the user may provide voice input to pay. For example, when the user inputs 'pay with Samsung Card', the device 10 may recognize the user's voice as noted at item 1’ in FIG. 8. The user may simply input 'Pay', and the device 10 may proceed with payment by using a card previously used by the user to pay.
  • the device 10 may transmit card information of the user and payment information to a card issuer server 30 as noted by item 2’ in FIG. 8.
  • the card information of the user may include a card number, an expiration date of the card, a password, and the like.
  • the payment information may include the goods or services to be paid for, seller information, and the like.
  • the card issuer server 30 may confirm the card information and proceed with payment as noted by item 3’ in FIG. 8.
  • the card issuer server 30 may transmit a payment completion message to the device 10 when payment is completed.
  • the device 10 may display the payment completion message to notify the user that the payment has been completed normally.
  • the smart watch may automatically perform biometric authentication on the user. For example, if the user is wearing the smart watch, the smart watch may capture a vein of a user's wrist and perform vein authentication through a pattern of the captured vein. Therefore, the user may automatically pay through the smart watch without inputting the voice, a password, etc. separately. More specifically, the device 10 may determine whether the user wears the smart watch when the user touches a payment button over the Internet. If the user is wearing the smart watch, the device 10 may send a signal to the smart watch to perform vein authentication. The smart watch may capture the vein of the user under control of the device 10 and transmit a result of performing vein authentication to the device 10.
  • the smart watch may transmit an image of the captured vein to the device 10, and the device 10 may perform vein authentication.
  • Vein authentication may compare a registered vein image (or vein pattern) with the captured vein image (or vein pattern).
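  • As an illustrative sketch only (real vein matchers use far richer features than this), the comparison between a registered and a captured vein pattern could be modeled as an overlap score over binary feature maps with an assumed threshold:

```python
import numpy as np

MATCH_THRESHOLD = 0.90  # assumed; the text only requires the patterns to match

def vein_patterns_match(registered: np.ndarray, captured: np.ndarray) -> bool:
    """Compare a registered vein pattern with a captured one, modeled
    here as binary feature maps scored by intersection over union."""
    overlap = np.sum(registered & captured) / max(int(np.sum(registered | captured)), 1)
    return overlap >= MATCH_THRESHOLD

registered = np.array([[1, 0, 1], [0, 1, 0]], dtype=bool)
captured = np.array([[1, 0, 1], [0, 1, 0]], dtype=bool)
print(vein_patterns_match(registered, captured))  # True
```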
  • the device 10 may proceed with payment without receiving a separate input from the user.
  • FIG. 9 is a diagram illustrating the device 10 learning a payment pattern according to an example embodiment.
  • the device 10 may learn payment patterns by analyzing various types of sentences. Learning a payment pattern means identifying and recording the types of voice input that a user speaks when paying.
  • the user may say various types of sentences to pay.
  • the device 10 may store in the memory 12 the expressions most often said when the user pays, determine whether the user says a sentence that is the same as or similar to a stored sentence, and proceed with payment accordingly.
  • the device 10 may register card information of the user at the beginning of learning or request the card information from the user in order to obtain the card information that the user mainly uses.
  • the device 10 may proceed with payment by using the previously registered card information of the user.
  • FIG. 10 is a flowchart illustrating a method for paying by voice input according to an example embodiment.
  • a user may pay for goods or services by using voice input.
  • the device 10 may display payment details on display 13.
  • the device 10 may receive the user's voice input via microphone 14.
  • the user may check the payment details, and may express whether or not to pay by providing a voice input. For example, the user may say “Pay” when paying and say “Do not pay” when not paying.
  • the device 10 may analyze the received voice input to determine an intention of the user.
  • the device 10 may analyze the voice input and determine whether the user would like to approve the displayed payment details.
  • the device 10 may perform user authentication by voice input.
  • the device 10 may perform user authentication by determining whether the voice input matches the user's voice (e.g., by comparing to a registered voice sample).
  • the device 10 may determine whether the registered voice sample matches the input voice and, if so, proceed with payment.
  • the device 10 may perform user authentication not only through voice, but also through fingerprints, iris scans, vein images, facial images, or passwords.
  • the device 10 may transmit payment information to a card company. If authentication is successful, the device 10 may transmit the payment information and the card information to the card company.
  • the payment information may include goods, seller information, an amount of money, and the like.
  • the card information may include a card number of the user, a password, an expiration date, and the like.
  • the device 10 may display a payment completion message upon completion of payment.
  • when the user purchases goods or services via the Internet, the user may complete the purchase through voice input.
  • FIG. 11 is a block diagram of a processor 1300 according to some example embodiments.
  • the processor 1300 may include a data learner 1310 and a data recognizer 1320.
  • the data learner 1310 may learn a reference for determining a situation.
  • the data learner 1310 may learn a reference for what data to use to determine a predetermined situation and how to determine the situation by using data.
  • the data learner 1310 may obtain data to be used for learning, and apply the obtained data to a data recognition model that will be described below, thereby learning the reference for determining the situation.
  • the data learner 1310 may learn a data recognition model by using voice input or a sentence to generate the data recognition model set to estimate an intention of a user.
  • the voice input or the sentence may include a voice uttered by the user of the device 10 or a sentence with which a user’s voice is recognized.
  • the voice or the sentence may also include a voice uttered by a third party or a sentence with which the third party's voice is recognized.
  • the data learner 1310 may learn the data recognition model by using a supervised learning method using a voice or a sentence and a learning entity as learning data.
  • the data recognition model may be a model set to estimate an intention of the user to send money.
  • the learning entity may include, without limitation, at least one of user information, recipient information, remittance amount, and remittance intention.
  • the user information may include, without limitation, identification information (e.g., a name or a nickname) of the user or identification information of an account of the user (e.g., an account bank, an account name, an account nickname or an account number).
  • the recipient information may include, without limitation, identification information (e.g., a name, a nickname or a phone number) or identification information of an account of a recipient (e.g., an account bank, an account name, an account nickname, or an account number).
  • the remittance intention may include whether the user will send money.
  • the remittance intention may include, without limitation, remittance proceedings, remittance reservations, reservation cancellation, remittance holding, or remittance confirmation.
  • At least one learning entity value may have a value of 'NULL'.
  • the value 'NULL' may indicate that there is no information about an entity value for the voice input or the sentence used as learning data.
  • the learning entity is {user information: A bank, recipient information: Samsung, remittance amount: 100 million won, remittance instruction: proceed with remittance}.
  • the learning entity may consist of {user information: NULL, recipient information: Samsung, remittance amount: 100 million won, remittance instruction: proceed with remittance}.
  • the learning entity may consist of {user information: NULL, recipient information: Samsung, remittance amount: 100 million won, remittance instruction: confirm remittance}.
  • the learning entity may consist of {user information: NULL, recipient information: Samsung, remittance amount: 100 million won, remittance instruction: cancel reservation}.
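  • In code, such (voice sentence, learning entity) training pairs might be represented as below; the sentences are invented illustrations, and None stands in for the 'NULL' entity value:

```python
# One (sentence, learning entity) pair per utterance; None models 'NULL',
# i.e., no information about that entity value in the sentence.
LEARNING_DATA = [
    ("Send 100 million won from my A bank account to Samsung",
     {"user_info": "A bank", "recipient_info": "Samsung",
      "amount": "100 million won", "instruction": "proceed with remittance"}),
    ("Send 100 million won to Samsung",
     {"user_info": None, "recipient_info": "Samsung",
      "amount": "100 million won", "instruction": "proceed with remittance"}),
    ("Did the 100 million won reach Samsung?",
     {"user_info": None, "recipient_info": "Samsung",
      "amount": "100 million won", "instruction": "confirm remittance"}),
    ("Cancel the reserved transfer of 100 million won to Samsung",
     {"user_info": None, "recipient_info": "Samsung",
      "amount": "100 million won", "instruction": "cancel reservation"}),
]
```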
  • the data recognition model may be a model set to estimate a payment intention of the user.
  • the learning entity may include, without limitation, at least one of a payment card, a payment item, a payment method, and the payment intention.
  • the payment method may include, for example, payment in full or the number of monthly installments.
  • the payment intention may include whether the user will pay.
  • the payment intention may include payment proceeding, payment cancellation, payment holding, a payment method change, or payment confirmation.
  • the learning entity may be composed of {payment means: Samsung card, payment item: NULL, payment method: payment in full, payment instruction: proceed with payment}.
  • the learning entity may be composed of {payment means: NULL, payment item: NULL, payment method: 10 monthly installments, payment instruction: proceed with payment}.
  • the learning entity may be composed of {payment means: NULL, payment item: NULL, payment method: NULL, payment instruction: cancel payment}.
  • the data recognition model set to determine the remittance intention of the user and the data recognition model set to determine the payment intention of the user may be the same recognition model or different recognition models.
  • each of the data recognition models may include a plurality of data recognition models.
  • the intention of the user may be determined by using the plurality of data recognition models customized for each environment considering a use environment (for example, a use time or a place of use) of the user.
  • the data recognizer 1320 may determine a situation based on data.
  • the data recognizer 1320 may recognize the situation from predetermined data by using the learned data recognition model.
  • the data recognizer 1320 may obtain data according to a reference predetermined by learning, and may determine a predetermined situation by using the data recognition model with the obtained data as an input value. Further, a resultant value output by the data recognition model for the obtained data may be used to update the data recognition model.
  • the data recognizer 1320 may estimate an intention of the user by applying the user’s voice input or a sentence with which the user's voice is recognized to the data recognition model. For example, the data recognizer 1320 may apply the user’s voice input or the sentence with which the user's voice is recognized to the data recognition model to obtain a recognition entity and provide the recognition entity to a processor of a device (e.g., the processor 11 of the device 10 of FIG. 2). The processor 11 may determine the intention of the user by using the obtained recognition entity.
  • the data recognition model may be a model set to estimate an intention of the user to send money.
  • the data recognizer 1320 may estimate the intention of the user to send money by applying the user’s voice input or the sentence with which the user's voice is recognized to the data recognition model.
  • the data recognizer 1320 may obtain a recognition entity from the user’s voice input or the sentence with which the user's voice is recognized.
  • the recognition entity may include, for example, at least one of user information, recipient information, a remittance amount, and a remittance instruction.
  • the data recognizer 1320 may provide the obtained recognition entity to the processor 11.
  • the processor 11 (or a dialog management module of the processor 11) may determine the intention of the user based on the recognition entity.
  • if it is determined based on the recognition entity that the intention of the user is not to send money, the processor 11 may not perform a process for sending money. On the other hand, if it is determined based on the recognition entity that the intention of the user is to send money, the processor 11 may perform the process for sending money.
  • the processor 11 may determine a value corresponding to the value 'NULL' by using history information of the user or preset information. For example, the processor 11 may determine a value corresponding to the value 'NULL' by referring to a recent remittance history. Alternatively, the processor 11 may determine a value corresponding to the value 'NULL' by referring to information (for example, an account number, an account bank, etc.) preset by the user in a preference setting.
  • the processor 11 may request a value corresponding to the value 'NULL' from the user. For example, the processor 11 may control display 13 to display a sentence indicating that there is no information about at least one of the user information, the recipient information, the remittance amount, or the remittance instruction.
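  • The fallback chain described above (recent history first, then preference settings, then asking the user) might be sketched like this, with hypothetical key names:

```python
def resolve_null(entity: dict, history: dict, preferences: dict) -> dict:
    """Fill None ('NULL') entity values, first from the user's recent
    remittance history, then from preset preferences; any value still
    missing afterwards must be requested from the user."""
    resolved = dict(entity)
    for key, value in resolved.items():
        if value is None:
            resolved[key] = history.get(key, preferences.get(key))
    return resolved

entity = {"user_info": None, "recipient_info": "Samsung",
          "amount": "100 million won", "instruction": "proceed with remittance"}
print(resolve_null(entity, history={}, preferences={"user_info": "bank A, 11-1111"}))
```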
  • the processor 11 may perform a process for sending money by using the recognition entity value obtained from the data recognizer 1320 and user input information.
  • the data recognition model may be a model set to estimate a payment intention of the user.
  • the data recognizer 1320 may estimate the payment intention of the user by applying the user’s voice input or the sentence with which the user's voice is recognized to the data recognition model.
  • the data recognizer 1320 may obtain a recognition entity from the user’s voice input or the sentence with which the user's voice is recognized.
  • the recognition entity may include, for example, at least one of payment means, a payment item, a payment method and a payment instruction.
  • the data recognizer 1320 may provide the obtained recognition entity to the processor 11.
  • the processor 11 (or a dialog management module of the processor 11) may determine the intention of the user based on the recognition entity.
  • if it is determined based on the recognition entity that the intention of the user is not to pay, the processor 11 may not perform a process for payment. On the other hand, if it is determined based on the recognition entity that the intention of the user is to pay, the processor 11 may perform the process for payment.
  • the processor 11 may determine a value corresponding to the value 'NULL' using history information of the user or preset information. Alternatively, the processor 11 may request the user to input a value corresponding to the value 'NULL'.
  • At least one of the data learner 1310 and the data recognizer 1320 may be manufactured as at least one hardware chip and mounted on an electronic device.
  • at least one of the data learner 1310 and the data recognizer 1320 may be manufactured as a dedicated hardware chip for artificial intelligence (AI) or may be manufactured as a part of a conventional general purpose processor (e.g. a CPU or an application processor) or a graphics processor (e.g., a GPU) and may be mounted on various electronic devices as described above.
  • the dedicated hardware chip for AI may be a dedicated processor specialized for probability calculation, and have a higher parallel processing performance than the conventional general purpose processor, thereby quickly processing arithmetic operations in an AI field such as machine learning.
  • the data learner 1310 and the data recognizer 1320 may be mounted on one electronic device or on separate electronic devices.
  • one of the data learner 1310 and the data recognizer 1320 may be included in the electronic device, and the other may be included in a server.
  • in this case, the data learner 1310 may provide model information that it has constructed to the data recognizer 1320 via wired or wireless communication. Data input to the data recognizer 1320 may be provided to the data learner 1310 as additional learning data.
  • At least one of the data learner 1310 and the data recognizer 1320 may be implemented as a software module.
  • the software module may be stored in non-transitory computer-readable media.
  • the at least one software module may be provided by an operating system (OS) or by a predetermined application.
  • alternatively, some of the at least one software module may be provided by the OS, and the others may be provided by the predetermined application.
  • FIG. 12 is a block diagram of the data learner 1310 according to some example embodiments.
  • the data learner 1310 may include a data obtainer 1310-1, a preprocessor 1310-2, a learning data selector 1310-3, a model learner 1310-4, and a model evaluator 1310-5.
  • the data learner 1310 necessarily includes the data obtainer 1310-1 and the model learner 1310-4, and may include some, all, or none of the preprocessor 1310-2, the learning data selector 1310-3, and the model evaluator 1310-5.
  • the data obtainer 1310-1 may obtain data necessary for learning for determining a situation.
  • the data obtainer 1310-1 may obtain voice data, image data, text data, biometric signal data, or the like. Specifically, the data obtainer 1310-1 may obtain a voice input or a sentence for sending money or payment. Alternatively, the data obtainer 1310-1 may obtain voice data or text data including the voice or the sentence for sending money or payment.
  • the data obtainer 1310-1 may receive data through an input device (e.g., a microphone, a camera, a sensor, keyboard, or the like) of an electronic device. Alternatively, the data obtainer 1310-1 may obtain data via an external device (e.g., a server) that communicates with a device.
  • the preprocessor 1310-2 may preprocess the obtained data so that the data obtained for learning for determining the situation may be used.
  • the preprocessor 1310-2 may process the obtained data in a predetermined format so that the model learner 1310-4, which will be described below, may use the obtained data for learning for determining the situation.
  • the preprocessor 1310-2 may extract learning entity values from the voice data according to the predetermined format. For example, if the predetermined format is composed of {user information, recipient information, remittance amount, remittance instruction}, the preprocessor 1310-2 may extract a learning entity value for each of these fields from the voice data.
  • if the voice data contains no information corresponding to a specific entity, the preprocessor 1310-2 may set that entity value to 'NULL'.
  • the learning data selector 1310-3 may select data required for learning from the preprocessed data.
  • the selected data may be provided to the model learner 1310-4.
  • the data obtained by the data obtainer 1310-1 or the data processed by the preprocessor 1310-2 may be provided to the model learner 1310-4 as learning data.
  • the learning data selector 1310-3 may select data required for learning from the preprocessed data according to a predetermined reference for determining the situation.
  • the predetermined reference may be determined, for example, considering at least one of attributes of the data, a generation time of the data, a creator of the data, reliability of the data, a target of the data, a generation region of the data, and size of the data.
  • the learning data selector 1310-3 may also select the data according to a reference predetermined by learning by the model learner 1310-4, which will be described below.
  • the model learner 1310-4 may learn a reference on how to determine the situation based on the learning data. Also, the model learner 1310-4 may learn a reference on which learning data should be used for determining the situation. For example, the model learner 1310-4 may learn a determination model according to a supervised learning method or an unsupervised learning method to generate a data recognition model for predicting, determining, or estimating.
  • the data recognition model may be, for example, a model set for estimating a remittance intention of a user or a model set for estimating a payment intention of the user.
  • the model learner 1310-4 may learn the data recognition model used for determining the situation by using the learning data.
  • the data recognition model may be a model pre-built by receiving basic learning data (e.g., sample data).
  • the data recognition model may be constructed considering an application field of the recognition model, a purpose of learning, or the computing performance of the device.
  • the data recognition model may be, for example, a model based on a neural network.
  • the data recognition model may be designed to simulate a human brain structure on a computer.
  • the data recognition model may include a plurality of network nodes having weights to simulate a neuron of a human neural network.
  • the plurality of network nodes may establish a connection relationship to simulate a synaptic activity of a neuron sending and receiving signals via synapse.
  • the data recognition model may include, for example, a neural network model or a deep learning model developed from the neural network model.
  • the plurality of network nodes may be located at different depths (or layers) and may exchange data according to a convolution connection relationship.
  • a model such as a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), or a Bidirectional Recurrent Deep Neural Network (BRDNN) may be used as a data recognition model, but the present disclosure is not limited thereto.
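  • The text leaves the architecture open (a DNN, RNN, or BRDNN may be used), so purely as an assumed illustration, a minimal recurrent intent model in PyTorch could look like this:

```python
import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    """Minimal recurrent model mapping a token-id sequence to an intent
    class (e.g., remittance vs. payment vs. other); sizes are arbitrary."""
    def __init__(self, vocab_size: int, embed_dim: int = 32,
                 hidden_dim: int = 64, num_intents: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_intents)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(token_ids)    # (batch, seq, embed_dim)
        _, hidden = self.rnn(embedded)      # hidden: (1, batch, hidden_dim)
        return self.out(hidden.squeeze(0))  # (batch, num_intents) logits

model = IntentClassifier(vocab_size=1000)
logits = model(torch.randint(0, 1000, (1, 6)))  # one 6-token utterance
print(logits.argmax(dim=-1))                    # predicted intent id
```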
  • the model learner 1310-4 may determine a data recognition model whose basic learning data is highly relevant to the input learning data as the data recognition model to learn.
  • the basic learning data may be pre-classified according to data types, and the data recognition model may be pre-built for each data type.
  • the basic learning data may be pre-classified by various references such as a region where learning data is generated, a time at which the learning data is generated, a size of the learning data, a genre of the learning data, a creator of the learning data, a genre of an object in the learning data, etc.
  • the model learner 1310-4 may learn the data recognition model by using, for example, a learning algorithm including an error back-propagation method or a gradient descent method.
  • the model learner 1310-4 may learn the data recognition model through supervised learning by, for example, using the learning data as an input value. Also, the model learner 1310-4 may learn the data recognition model through unsupervised learning to find a reference for determining the situation by, for example, learning a type of data necessary for determining the situation for itself. Also, the model learner 1310-4 may learn the data recognition model through reinforcement learning, for example by using feedback on whether a result of determination of the situation based on the learning is correct.
  • the learning data may include a user’s voice input or a third party’s voice input, a sentence via which the user’s voice or the third party’s voice is recognized, a sentence entered by the user or the third party, and the like. Also, the learning data may include a learning entity associated with the voice input or the sentence. Various examples of learning entities are described in detail with reference to FIG. 11, and thus redundant descriptions thereof are omitted.
  • the model learner 1310-4 may store the learned data recognition model.
  • the model learner 1310-4 may store the learned data recognition model in a memory of an electronic device (for example, memory 12 of the above-described device 10) including the data recognizer 1320.
  • the model learner 1310-4 may store the learned data recognition model in a memory of a server connected to the electronic device (for example, the above-described device 10) via a wired or wireless network.
  • the memory in which the learned data recognition model is stored may also store, for example, instructions or data associated with at least one other component of the electronic device.
  • the memory may also store software and/or a program.
  • the program may include, for example, a kernel, middleware, an application programming interface (API), and/or an application program (or "application").
  • the model evaluator 1310-5 may input evaluation data to the data recognition model, and if a recognition result output from the evaluation data does not satisfy a predetermined reference, the model evaluator 1310-5 may allow the model learner 1310-4 to learn again.
  • the evaluation data may be predetermined data for evaluating the data recognition model.
  • if the recognition results do not satisfy the predetermined reference, the model evaluator 1310-5 may evaluate the learned data recognition model as not satisfying the predetermined reference. For example, if the predetermined reference is defined as an error ratio of 2%, and the learned data recognition model outputs incorrect recognition results for more than 20 pieces of evaluation data out of a total of 1,000 pieces, the model evaluator 1310-5 may evaluate the learned data recognition model as not suitable.
  • When there are a plurality of learned data recognition models, the model evaluator 1310-5 may evaluate whether each of the learned data recognition models satisfies the predetermined reference and may determine a model satisfying the predetermined reference as a final data recognition model. In this case, when a plurality of models satisfy the predetermined reference, the model evaluator 1310-5 may determine, as the final data recognition model, any one model or a predetermined number of models preset in descending order of evaluation score.
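  • As a non-limiting illustration, the 2% reference above can be read as a simple error-ratio test; the following sketch assumes a callable model and labeled evaluation pairs, both of which are hypothetical names.

      # Illustrative sketch: checking whether a learned model satisfies a
      # predetermined reference, here an error ratio of at most 2%.
      def satisfies_reference(model, evaluation_data, max_error_ratio=0.02):
          errors = sum(1 for x, label in evaluation_data if model(x) != label)
          return errors / len(evaluation_data) <= max_error_ratio

      # With 1,000 evaluation pairs, more than 20 incorrect results fails the
      # reference; among several passing models, those with the highest
      # evaluation scores could be kept as the final data recognition model.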
  • At least one of the data obtainer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 included in the data learner 1310 may be manufactured in at least one hardware chip and mounted on an electronic device.
  • at least one of the data obtainer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be manufactured as a dedicated hardware chip for AI or may be manufactured as a part of a conventional general purpose processor (e.g. a CPU or an application processor) or a graphics processor (e.g., a GPU) and may be mounted on various electronic devices as described above.
  • the data obtainer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be mounted on one electronic device or may be mounted on separate electronic devices.
  • some of the data obtainer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be included in the electronic device, and the others may be included in a server.
  • At least one of the data obtainer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be implemented as a software module.
  • the software module may be stored in non-transitory computer-readable media.
  • the at least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the at least one software module may be provided by the OS, and the others may be provided by the predetermined application.
  • FIG. 13 is a block diagram of the data recognizer 1320 according to some example embodiments.
  • the data recognizer 1320 may include a data obtainer 1320-1, a preprocessor 1320-2, a recognition data selector 1320-3, a recognition result provider 1320-4 and a model updater 1320-5.
  • the data recognizer 1320 may indispensably include the data obtainer 1320-1 and the recognition result provider 1320-4, and may selectively include at least one of the preprocessor 1320-2, the recognition data selector 1320-3, and the model updater 1320-5.
  • the data obtainer 1320-1 may obtain data necessary for determining a situation. For example, the data obtainer 1320-1 may obtain a user’s voice input or a sentence with which the user's voice is recognized. Specifically, the data obtainer 1320-1 may obtain the user’s voice input or the sentence for performing remittance or payment. Alternatively, the data obtainer 1320-1 may obtain voice data or text data including the user’s voice or the sentence for performing remittance or payment.
  • the preprocessor 1320-2 may preprocess the obtained data so that the data obtained for determining the situation may be used.
  • the preprocessor 1320-2 may process the obtained data into a predetermined format so that the recognition result provider 1320-4, which will be described below, may use the obtained data for determining the situation.
  • the preprocessor 1320-2 may extract learning entity values from the voice data according to the predetermined format.
  • the preprocessor 1320-2 may extract the learning entity values according to the format of {user information, recipient information, remittance amount, and remittance instruction} or {payment means, payment item, payment method, and payment instruction}.
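  • As a non-limiting illustration, the following sketch maps a recognized sentence onto the {user information, recipient information, remittance amount, remittance instruction} format; the regular expression, the sentence pattern, and the dictionary keys are assumptions made for this sketch only.

      # Illustrative sketch: extracting entity values from a recognized
      # sentence. The pattern and keys are assumptions.
      import re

      def extract_remittance_entities(sentence, user):
          m = re.search(r"[Ss]end (?P<amount>[\d,]+ won) to (?P<name>\w+)", sentence)
          if m is None:
              return None
          return {
              "user_information": user,
              "recipient_information": m.group("name"),
              "remittance_amount": m.group("amount"),
              "remittance_instruction": "proceed with remittance",
          }

      print(extract_remittance_entities("Send 50,000 won to Alice", user="Bob"))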
  • the recognition data selector 1320-3 may select data necessary for the determination of the situation from the preprocessed data.
  • the selected data may be provided to the recognition result provider 1320-4.
  • the recognition data selector 1320-3 may select a part or a whole of the preprocessed data according to a preset reference for determining the situation.
  • the predetermined reference may be determined, for example, considering at least one of attributes of the data, a generation time of the data, a creator of the data, a reliability of the data, a target of the data, a generation region of the data and a size of the data.
  • the recognition data selector 1320-3 may select data according to a reference predetermined through learning by the model learner 1310-4.
  • the recognition result provider 1320-4 may apply the selected data to a data recognition model to determine the situation.
  • the recognition result provider 1320-4 may provide a recognition result according to a data recognition purpose.
  • the recognition result provider 1320-4 may apply the selected data to the data recognition model by using the data selected by the recognition data selector 1320-3 as an input value. Also, the recognition result may be determined by the data recognition model.
  • the recognition result provider 1320-4 may apply the user’s voice input or the sentence via which the user's voice input is recognized to the data recognition model to estimate (or infer or predict) the remittance intention of the user.
  • the recognition result provider 1320-4 may apply the user’s voice input or the sentence with which the user's voice input is recognized to the data recognition model to estimate (or infer or predict) the payment intention of the user.
  • the recognition result provider 1320-4 may obtain a recognition entity as a result of estimating the intention of the user.
  • the recognition result provider 1320-4 may provide the obtained recognition entity to a processor (e.g., the processor 11 of the device 10 of FIG. 2).
  • the processor may determine the intention of the user based on the recognition entity and proceed with a process for sending money or payment.
  • the model updater 1320-5 may update the data recognition model based on an evaluation of the recognition result provided by the recognition result provider 1320-4. For example, the model updater 1320-5 may provide the model learner 1310-4 with the recognition result provided by the recognition result provider 1320-4 so that the model learner 1310-4 may update the data recognition model.
  • the model updater 1320-5 may receive an evaluation (or feedback) on the recognition result from a processor (for example, the processor 11 of the device 10 of FIG. 2).
  • the device 10 may display remittance details according to the remittance intention of the user by applying a voice input by the user to the data recognition model.
  • the user may approve the remittance details or refuse to approve the remittance details. For example, if the user approves the remittance details, the user may enter voice, fingerprints, an iris scan, a vein image, a face image or a password. On the other hand, when the user refuses to approve the remittance details, the user may select a cancel button, enter a voice requesting cancellation, or not perform any input for a predetermined period of time.
  • user feedback according to an approval or a rejection of the user may be provided to the model updater 1320-5 as an evaluation of the recognition result. That is, the user feedback may include information indicating that a determination result of the data recognizer 1320 is false or information indicating that the determination result is true.
  • the model updater 1320-5 may update a determination model by using the obtained user feedback.
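  • As a non-limiting illustration, such a feedback-driven update might be sketched as follows; the model-learner interface and its update method are assumptions made for this sketch.

      # Illustrative sketch: buffering user approval/rejection feedback on
      # recognition results and handing it to the model learner in batches.
      class ModelUpdater:
          def __init__(self, model_learner, batch_size=100):
              self.model_learner = model_learner
              self.batch_size = batch_size
              self.feedback = []

          def on_user_feedback(self, voice_input, recognition_entity, approved):
              # approved=True marks the determination result as true, False as false.
              self.feedback.append((voice_input, recognition_entity, approved))
              if len(self.feedback) >= self.batch_size:
                  self.model_learner.update(self.feedback)  # hypothetical API
                  self.feedback.clear()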
  • At least one of the data obtainer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 in the data recognizer 1320 may be manufactured in at least one hardware chip and mounted on an electronic device.
  • at least one of the data obtainer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be manufactured as a dedicated hardware chip for AI or may be manufactured as a part of a conventional general purpose processor (e.g. a CPU or an application processor) or a graphics processor (e.g., a GPU) and may be mounted on various electronic devices as described above.
  • At least one of the data obtainer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be mounted on one electronic device or may be mounted on separate electronic devices.
  • some of the data obtainer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be included in the electronic device, and the others may be included in a server.
  • At least one of the data obtainer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be implemented as a software module.
  • the software module may be stored in non-transitory computer-readable media.
  • the at least one software module may be provided by an OS or by a predetermined application. Alternatively, some of the at least one software module may be provided by the OS, and the others may be provided by the predetermined application.
  • FIG. 14 is a diagram illustrating an example of learning and recognizing data by interaction between a device 1000 and a server 2000 according to some non-limiting embodiments.
  • the device 1000 may correspond to, for example, the device 10 of FIG. 2.
  • the data obtainer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4 and the model updater 1320-5 of the data recognizer 1320 of the device 1000 may respectively correspond to the data obtainer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 of the data recognizer 1320 of FIG. 13.
  • the data obtainer 2310, the preprocessor 2320, the learning data selector 2330, the model learner 2340, and the model evaluator 2350 of the data learner 2300 of the server 2000 respectively correspond to the data obtainer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5.
  • the device 1000 may interact with the server 2000 through short-range or long-distance communication.
  • Connecting the device 1000 and the server 2000 to each other means that the device 1000 and the server 2000 are directly connected to each other or connected through a third component (e.g., at least one of an access point (AP), a hub, a relay device, a base station, a router, and a gateway).
  • the server 2000 may learn a reference for determining a situation, and the device 1000 may determine the situation based on a learning result by the server 2000.
  • the model learner 2340 of the server 2000 may perform a function of the data learner 1310 shown in FIG. 12.
  • the model learner 2340 of the server 2000 may learn what data to use to determine a predetermined situation and how to determine the situation by using data.
  • the model learner 2340 may obtain data to be used for learning, and apply the obtained data to a data recognition model to learn the reference for determining the situation.
  • the model learner 2340 may learn the data recognition model by using a voice input or a sentence to generate a data recognition model set to estimate an intention of a user.
  • the generated data recognition model may be, for example, a model set for estimating at least one of a remittance intention of the user and a payment intention.
  • the recognition result provider 1320-4 of the device 1000 may determine the situation by applying the data selected by the recognition data selector 1320-3 to the data recognition model generated by the server 2000. For example, the recognition result provider 1320-4 may transmit the data selected by the recognition data selector 1320-3 to the server 2000 and request that the server 2000 determine the situation by applying the data to the data recognition model. Also, the recognition result provider 1320-4 may receive from the server 2000 information about the situation determined by the server 2000. For example, when the selected data includes a user’s voice input or a sentence with which the user's voice is recognized, the server 2000 may apply the selected data to the data recognition model set to estimate an intention of the user to obtain a recognition entity including the intention of the user. The server 2000 may provide the obtained entity to the recognition result provider 1320-4 as information about the determined situation.
  • the recognition result provider 1320-4 of the device 1000 may receive the recognition model generated by the server 2000 from the server 2000 and determine the situation by using the received recognition model. In this case, the recognition result provider 1320-4 of the device 1000 may apply the data selected by the recognition data selector 1320-3 to the data recognition model received from the server 2000 to determine the situation. For example, when the selected data includes the user’s voice input or the sentence with which the user's voice is recognized, the recognition result provider 1320-4 of the device 1000 may apply the selected data to a data recognition model set to estimate an intention of the user received from the server 2000 to obtain a recognition entity including the intention of the user. The device 1000 may then provide the obtained entity to a processor (e.g., the processor 11 of FIG. 2) as information about the determined situation.
  • the processor 11 may determine a remittance intention of the user or a payment intention based on the recognition entity, and may perform a process for sending money or payment.
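  • As a non-limiting illustration, the two divisions of labor described above (applying the model on the server, or on the device with a model received from the server) might be sketched as follows; the Device and Server interfaces and the stand-in model are assumptions made for this sketch.

      # Illustrative sketch: server-side vs. on-device application of the
      # data recognition model. All names are assumptions.
      class Server:
          def download_model(self):
              # Stand-in for a model set to estimate the user's intention.
              return lambda data: {"intention": "remittance", "input": data}

          def recognize(self, selected_data):
              return self.download_model()(selected_data)

      class Device:
          def __init__(self, server, use_local_model=False):
              self.server = server
              self.model = server.download_model() if use_local_model else None

          def determine_situation(self, selected_data):
              if self.model is not None:
                  return self.model(selected_data)         # model received from server
              return self.server.recognize(selected_data)  # determination by server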
  • the device 10 may send money to a recipient with only a voice input.
  • the device 10 may send money to the recipient by transmitting a name of the recipient, contact information, and an amount of money to the bank server 20, without having to transmit an account number of the recipient.
  • the device 10 may make a payment with only a voice input.
  • FIGS. 15 and 16 are flowcharts of a network system using a data recognition model according to some non-limiting example embodiments.
  • the network system may include first components 1501 and 1601 and second components 1502 and 1602.
  • For example, the first components 1501 and 1601 may be the device 1000, and the second components 1502 and 1602 may be the server 2000 that stores a data recognition model.
  • Alternatively, the first components 1501 and 1601 may be a general-purpose processor, and the second components 1502 and 1602 may be an AI-dedicated processor.
  • Alternatively, the first components 1501 and 1601 may be at least one application, and the second components 1502 and 1602 may be an OS.
  • In this case, the second components 1502 and 1602 may be components that are more integrated, more dedicated, less delayed, higher in performance, or richer in resources than the first components 1501 and 1601, and thus may process the many operations required to create, update, or apply a data recognition model more quickly and effectively than the first components 1501 and 1601.
  • an interface for transmitting/receiving data between the first components 1501 and 1601 and the second components 1502 and 1602 may be defined.
  • For example, an application programming interface (API) may be defined that takes, as an argument value (or an intermediate value or a transfer value), the learning data to be applied to the data recognition model.
  • the API may be defined as a set of subroutines or functions that one protocol (e.g., a protocol defined in the device 1000) may call for certain processing of another protocol (e.g., a protocol defined in the server 2000). That is, the API may provide an environment in which an operation of one protocol may be performed within another protocol.
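  • As a non-limiting illustration, such an API function might be sketched as follows, with the recognition data passed as the argument value and converted to an agreed format; the transport interface, the endpoint, and the JSON format are assumptions made for this sketch.

      # Illustrative sketch: an API function that takes the voice input or the
      # sentence as its argument value, serializes it to an agreed format, and
      # returns the recognition entity. The transport object is hypothetical.
      import json

      def request_recognition(transport, voice_or_sentence):
          payload = json.dumps({"recognition_data": voice_or_sentence})
          response = transport.send("/recognize", payload)  # hypothetical call
          return json.loads(response)  # recognition entity from the server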
  • the first component 1501 may analyze a remittance intention of a user by using a data recognition model.
  • the first component 1501 may receive a user's voice uttered with the remittance intention.
  • the first component 1501 may transmit the received voice input or a sentence used to recognize the received voice to the second component 1502.
  • the first component 1501 may supply the voice input or the sentence as an argument value of an API function provided for using the data recognition model.
  • the API function may transmit the voice input or the sentence to the second component 1502 as recognition data to be applied to the data recognition model.
  • the voice input or the sentence may be converted and transmitted according to an agreed communication format.
  • the second component 1502 may apply the received voice input or sentence to a data recognition model set to estimate the remittance intention of the user.
  • the second component 1502 may obtain a recognition entity.
  • the recognition entity may include at least one of user information, recipient information (e.g., a name of a recipient), a remittance amount, and a remittance instruction.
  • the second component 1502 may transmit the recognition entity to the first component 1501. At this time, the recognition entity may be converted and transmitted according to the agreed communication format.
  • the first component 1501 may determine that the user's voice input has the remittance intention based on the recognition entity. For example, the first component 1501 may determine that the user's voice has the remittance intention if 'proceed with remittance' is included as the remittance instruction value of the recognition entity, together with the name of the recipient and the remittance amount.
  • operations 1513 to 1521 may correspond to an embodiment of a process in which the device 10 analyzes the received voice to determine the remittance intention of the user in operation 720 of FIG. 7.
  • the first component 1501 may search a contact list for a contact corresponding to the name of the recipient included in the recognition entity in operation 1523.
  • the first component 1501 may approve details for sending money to an account number of the recipient based on the found contact of the recipient.
  • the corresponding process corresponds to operations 740 to 760 of FIG. 7, and a redundant description thereof will be omitted.
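  • As a non-limiting illustration, the determination and contact lookup above might be sketched as follows; the entity keys, the instruction string, and the contact list are assumptions made for this sketch.

      # Illustrative sketch: determining the remittance intention from the
      # recognition entity and searching a contact list for the recipient.
      def prepare_remittance(entity, contact_list):
          if entity.get("remittance_instruction") != "proceed with remittance":
              return None  # no remittance intention determined
          name = entity.get("recipient_information")
          contact = contact_list.get(name)
          if contact is None:
              return None  # recipient not found in the contact list
          return {"recipient": name, "contact": contact,
                  "amount": entity.get("remittance_amount")}

      print(prepare_remittance(
          {"remittance_instruction": "proceed with remittance",
           "recipient_information": "Alice", "remittance_amount": "50,000 won"},
          contact_list={"Alice": "010-1234-5678"}))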
  • the first component 1601 may analyze a payment intention of the user by using the data recognition model.
  • the first component 1601 may provide payment details.
  • the first component 1601 may display the payment details on a screen or output the payment details by voice.
  • the user may check the payment details displayed on the screen, and may express whether or not to pay by voice input.
  • the first component 1601 may receive a user's voice input.
  • the first component 1601 may transmit the received voice input or a sentence with which the received voice is recognized to the second component 1602.
  • the first component 1601 may supply the voice input or the sentence as an argument value of an API function provided for using the data recognition model.
  • the API function may transmit the voice input or the sentence to the second component 1602 as recognition data to be applied to the data recognition model.
  • the voice input or the sentence may be converted and transmitted according to an agreed communication format.
  • the second component 1602 may apply the received voice or the sentence to a data recognition model set to estimate the payment intention of the user.
  • the second component 1602 may obtain a recognition entity.
  • the recognition entity may include, without limitation, at least one of payment means, a payment item, a payment method and a payment instruction.
  • the second component 1602 may transmit the recognition entity to the first component 1601. At this time, the recognition entity may be converted and transmitted according to the agreed communication format.
  • the first component 1601 may determine that the user's voice has the payment intention based on the recognition entity. For example, if 'cancel payment’ is included as a payment instruction value of the recognition entity, the first component 1601 may determine that the user's voice has an intention not to proceed with payment. On the other hand, if 'proceed with payment’ is included as the payment instruction value of the recognition entity, the first component 1601 may determine that the user's voice has an intention to proceed with payment.
  • operations 1615 to 1623 may correspond to an embodiment of a process in which the device 10 analyzes the received voice and determines the payment intention of the user in operation 1030 of FIG. 10 described above.
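  • As a non-limiting illustration, the payment-intention decision above reduces to a mapping over the payment instruction value; the keys and strings in the following sketch are assumptions.

      # Illustrative sketch: mapping the payment instruction value of the
      # recognition entity to a proceed/cancel decision.
      def payment_intention(entity):
          instruction = entity.get("payment_instruction")
          if instruction == "proceed with payment":
              return True    # intention to proceed with payment
          if instruction == "cancel payment":
              return False   # intention not to proceed
          return None        # intention not determined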
  • the first component 1601 may transmit payment information to a card company if user authentication through voice is successful in operations 1625 and 1627.
  • the corresponding process corresponds to operations 1040 and 1050 of FIG. 10, and a redundant description thereof is omitted.
  • One or more example embodiments may be implemented using a recording medium including computer-executable instructions such as a program module executed by a computer system.
  • a non-transitory computer-readable recording medium may be an arbitrary available medium which may be accessed by a computer system and includes all types of volatile and non-volatile media and separated and non-separated media.
  • the non-transitory computer-readable recording medium may include all types of computer storage media and communication media.
  • the computer storage media include all types of volatile and non-volatile and separated and non-separated media implemented by an arbitrary method or technique for storing information such as computer-readable instructions, a data structure, a program module, or other data.
  • the communication media typically include computer-readable instructions, a data structure, a program module, or other data in a modulated signal or other transmission mechanism, and include arbitrary information delivery media.
  • the method according to the embodiments may be provided as a computer program product.
  • the computer program product may include a software program or a computer-readable storage medium storing the software program, and may be traded as a product between a seller and a purchaser.
  • the computer program product may include a product in the form of a software program (e.g., a downloadable app) distributed electronically via the device 10, by a manufacturer of the device 10, or through an electronic market (e.g., Google Play Store, App Store).
  • the storage medium may be a storage medium of a server of the manufacturer or of the electronic market, or of a relay server.
  • Here, a "unit" may be a hardware component such as a processor or a circuit, and/or a software component executed by a hardware component such as a processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Marketing (AREA)
  • Technology Law (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
EP17872539.6A 2016-11-21 2017-11-21 Method and device applying artificial intelligence to send money by using voice input Ceased EP3533015A4 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR20160154879 2016-11-21
KR1020170132758 KR102457811B1 (ko) 2016-11-21 2017-10-12 Method and apparatus for sending money using voice
PCT/KR2017/013226 WO2018093229A1 (en) 2016-11-21 2017-11-21 Method and device applying artificial intelligence to send money by using voice input

Publications (2)

Publication Number Publication Date
EP3533015A1 true EP3533015A1 (de) 2019-09-04
EP3533015A4 EP3533015A4 (de) 2019-11-27

Family

ID=62300225

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17872539.6A Ceased EP3533015A4 (de) 2016-11-21 2017-11-21 Verfahren und vorrichtung zur anwendung von künstlicher intelligenz zum senden von geld durch verwendung von spracheingabe

Country Status (3)

Country Link
EP (1) EP3533015A4 (de)
KR (1) KR102457811B1 (de)
CN (1) CN109983491B (de)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102623727B1 (ko) * 2018-10-29 2024-01-11 Samsung Electronics Co., Ltd. Electronic device and control method thereof
KR20200067673A (ko) * 2018-12-04 2020-06-12 EWBM Co., Ltd. Shared AI speaker
CN112201245B (zh) * 2020-09-30 2024-02-06 Bank of China Ltd. Information processing method, apparatus, device and storage medium
US11741517B1 (en) 2020-10-26 2023-08-29 Wells Fargo Bank, N.A. Smart table system for document management
US11572733B1 (en) 2020-10-26 2023-02-07 Wells Fargo Bank, N.A. Smart table with built-in lockers
US11429957B1 (en) 2020-10-26 2022-08-30 Wells Fargo Bank, N.A. Smart table assisted financial health
US11397956B1 (en) 2020-10-26 2022-07-26 Wells Fargo Bank, N.A. Two way screen mirroring using a smart table
US11457730B1 (en) 2020-10-26 2022-10-04 Wells Fargo Bank, N.A. Tactile input device for a touch screen
US11740853B1 (en) 2020-10-26 2023-08-29 Wells Fargo Bank, N.A. Smart table system utilizing extended reality
US11727483B1 (en) 2020-10-26 2023-08-15 Wells Fargo Bank, N.A. Smart table assisted financial health

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4240807B2 (ja) * 2000-12-25 2009-03-18 NEC Corporation Mobile communication terminal device, speech recognition method, and recording medium storing the program
KR20030026428A (ko) * 2001-09-25 2003-04-03 주식회사 엠보이스텔레소프트 Phone banking method using voice recognition
US20030229588A1 (en) * 2002-06-05 2003-12-11 Pitney Bowes Incorporated Voice enabled electronic bill presentment and payment system
KR20030012912A (ko) * 2003-01-09 2003-02-12 이호권 Money transfer service system using a mobile phone
JP2006119851A (ja) * 2004-10-20 2006-05-11 Nec Corp Registered transfer method and registered transfer system
CN101595521A (zh) * 2006-06-28 2009-12-02 Planet Payment Co. Telephone-based commerce system and method
GB2476054A (en) * 2009-12-08 2011-06-15 Voice Commerce Group Technologies Ltd Voice authentication of bill payment transactions
US8515751B2 (en) * 2011-09-28 2013-08-20 Google Inc. Selective feedback for text recognition systems
KR20130082645A (ko) 2011-12-13 2013-07-22 장형윤 Smartphone banking payment method using voice recognition
KR20140061047A (ko) 2012-11-13 2014-05-21 Electronics and Telecommunications Research Institute Terminal device for controlling a medical device based on voice recognition and method therefor
KR20140066467A (ko) 2012-11-23 2014-06-02 Woori Bank Co., Ltd. Account transfer processing method using voice recognition and device for executing the same
KR20140003840U (ko) 2012-12-13 2014-06-23 Korea Electric Power Corporation Error testing apparatus for a portable watt-hour meter
US10354237B2 (en) * 2012-12-17 2019-07-16 Capital One Services Llc Systems and methods for effecting personal payment transactions
KR20150011293A (ko) 2013-07-22 2015-01-30 김종규 Biometric authentication electronic signature service method using an instant messenger
US20150149354A1 (en) * 2013-11-27 2015-05-28 Bank Of America Corporation Real-Time Data Recognition and User Interface Field Updating During Voice Entry

Also Published As

Publication number Publication date
KR20180057507A (ko) 2018-05-30
EP3533015A4 (de) 2019-11-27
KR102457811B1 (ko) 2022-10-24
CN109983491B (zh) 2023-12-29
CN109983491A (zh) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2018093229A1 (en) Method and device applying artificial intelligence to send money by using voice input
EP3533015A1 (de) Verfahren und vorrichtung zur anwendung von künstlicher intelligenz zum senden von geld durch verwendung von spracheingabe
WO2018117704A1 (en) Electronic apparatus and operation method thereof
WO2019098573A1 (en) Electronic device and method for changing chatbot
WO2019194451A1 (ko) 인공지능을 이용한 음성 대화 분석 방법 및 장치
WO2018143630A1 (ko) 상품을 추천하는 디바이스 및 방법
WO2020067633A1 (en) Electronic device and method of obtaining emotion information
WO2020149454A1 (ko) 사용자에 대한 인증을 수행하는 전자 장치 및 그 동작 방법
WO2019039915A1 (en) METHOD FOR ACTIVATION OF VOICE RECOGNITION SERVICE AND ELECTRONIC DEVICE IMPLEMENTING THE SAME
WO2019031714A1 (ko) 객체를 인식하는 방법 및 장치
WO2019059505A1 (ko) 객체를 인식하는 방법 및 장치
WO2019146942A1 (ko) 전자 장치 및 그의 제어방법
EP3539056A1 (de) Elektronische vorrichtung und betriebsverfahren dafür
EP3523710A1 (de) Vorrichtung und verfahren zur bereitstellung eines satzes auf der basis einer benutzereingabe
WO2018101671A1 (en) Apparatus and method for providing sentence based on user input
WO2019132410A1 (en) Electronic device and control method thereof
EP3820369A1 (de) Elektronische vorrichtung und verfahren zum erhalt von gefühlsinformationen
WO2019164120A1 (ko) 전자 장치 및 그 제어 방법
EP3864607A1 (de) Verfahren und elektronische vorrichtung zur anzeige mindestens eines visuellen objektes
WO2018074895A1 (en) Device and method for providing recommended words for character input
WO2018097439A1 (ko) 발화의 문맥을 공유하여 번역을 수행하는 전자 장치 및 그 동작 방법
EP3785258A1 (de) Elektronische vorrichtung und verfahren zum bereitstellen oder erhalten von daten zum training derselben
WO2020050554A1 (ko) 전자 장치 및 그의 제어방법
WO2019107674A1 (en) Computing apparatus and information input method of the computing apparatus
WO2020180001A1 (ko) 전자 장치 및 이의 제어 방법

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20190527

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20191024

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 1/16 20060101ALI20191018BHEP

Ipc: G06Q 20/32 20120101ALI20191018BHEP

Ipc: G10L 17/26 20130101ALI20191018BHEP

Ipc: G10L 17/04 20130101ALI20191018BHEP

Ipc: G06Q 20/40 20120101ALI20191018BHEP

Ipc: G06Q 40/02 20120101ALI20191018BHEP

Ipc: G06Q 20/10 20120101AFI20191018BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20210120

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20220628