CN109983491A - Method and apparatus for remitting money by using voice input with artificial intelligence applied - Google Patents
Method and apparatus for remitting money by using voice input with artificial intelligence applied
- Publication number
- CN109983491A CN109983491A CN201780071950.5A CN201780071950A CN109983491A CN 109983491 A CN109983491 A CN 109983491A CN 201780071950 A CN201780071950 A CN 201780071950A CN 109983491 A CN109983491 A CN 109983491A
- Authority
- CN
- China
- Prior art keywords
- user
- data
- voice
- payment
- money
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/08—Payment architectures
- G06Q20/10—Payment architectures specially adapted for electronic funds transfer [EFT] systems; specially adapted for home banking systems
- G06Q20/108—Remote banking, e.g. home banking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/30—Payment architectures, schemes or protocols characterised by the use of specific devices or networks
- G06Q20/32—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
- G06Q20/322—Aspects of commerce using mobile devices [M-devices]
- G06Q20/3223—Realising banking transactions through M-devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/02—Banking, e.g. interest calculation or account maintenance
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/04—Training, enrolment or model building
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/30—Payment architectures, schemes or protocols characterised by the use of specific devices or networks
- G06Q20/34—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using cards, e.g. integrated circuit [IC] cards or magnetic cards
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Physics & Mathematics (AREA)
- Finance (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Human Computer Interaction (AREA)
- Computer Security & Cryptography (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Acoustics & Sound (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Marketing (AREA)
- Software Systems (AREA)
- Technology Law (AREA)
- Computational Linguistics (AREA)
- Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
Abstract
An exemplary device includes a memory configured to store at least one program; a microphone configured to receive voice input; and at least one processor configured to execute the at least one program to control the device to perform operations for remitting money to a payee. The operations include: determining a payment intention of a user by analyzing the received voice input; retrieving contact information from a stored contacts list based on a name of the payee; transmitting the name of the payee, the contact information, and an amount specified in the voice input to a bank server; receiving remittance details from the bank server; and approving the remittance details. The device can analyze the received voice input by using an artificial intelligence (AI) algorithm.
Description
Technical field
The present disclosure generally relates to a method and apparatus for remitting money by using voice input.
The present disclosure also relates to an artificial intelligence (AI) system that uses machine learning algorithms to simulate functions of the human brain, such as recognition and decision-making, and to applications of such a system.
Background Art
With the development of multimedia and network technologies, users can receive various services through their devices. In particular, with advances in speech recognition technology, a user can input his or her voice to a device, and the device can operate according to the user's voice (for example, according to a command spoken by the user).
A user can access financial services by using a device that executes an application provided by a bank. For example, the user can remit money to a payee's account by using the device: the user executes the application, inputs an account number, a password, and the like, and remits money to the payee's account.
Moreover, in recent years, artificial intelligence (AI) systems realizing human-level intelligence have been used in various fields. An AI system is a machine learning system that, unlike existing rule-based systems, learns by itself, makes decisions, and becomes "smarter". An AI system provides an improved recognition rate and understands user preferences more accurately the more it is used; accordingly, existing rule-based systems are increasingly being replaced by deep-learning-based AI systems.
AI technology includes machine learning (for example, deep learning) and element technologies that use machine learning.
Machine learning is an algorithmic technique in which a machine classifies and learns the characteristics of input data by itself. Element technology uses machine learning algorithms such as deep learning to simulate functions of the human brain, such as recognition and decision-making, and spans the technical fields of language understanding, visual analysis, reasoning/prediction, knowledge representation, motion control, and the like.
AI technology has been applied in many fields. Language understanding is a technology for recognizing and applying/processing human language and characters, and includes natural language processing, machine translation, dialogue systems, question answering, speech recognition/synthesis, and the like. Visual analysis is a technology for recognizing and processing objects in the manner of human vision, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image enhancement, and the like. Reasoning/prediction is a technology for logically reasoning about and predicting information, and includes knowledge/probability-based reasoning, optimization prediction, preference-based planning, recommendation, and the like. Knowledge representation is a technology for automating human experience information into knowledge data, and includes knowledge construction (data generation/classification), knowledge management (data utilization), and the like. Motion control is a technology for controlling the autonomous driving of vehicles and the movement of robots, and includes movement control (navigation, collision avoidance, and driving), operation control (behavior control), and the like.
Summary of the invention
Provided are a method and apparatus for remitting money to a payee's account by using voice.
Brief Description of the Drawings
These and/or other aspects, features, and attendant advantages of the present disclosure will become apparent and more readily appreciated from the following detailed description, taken in conjunction with the accompanying drawings, in which like reference numerals refer to like elements, and in which:
Fig. 1 is a diagram illustrating a method by which a user remits money by using his or her voice, according to an example embodiment;
Fig. 2 is a block diagram illustrating a device according to an example embodiment;
Fig. 3 is a diagram illustrating a device in a learning mode, according to an example embodiment;
Fig. 4 is a diagram illustrating a method of approving remittance details, according to an example embodiment;
Fig. 5 is a diagram illustrating a method of selecting one of multiple payees, according to an example embodiment;
Fig. 6 is a diagram illustrating a method of selecting any one of multiple banks, according to an example embodiment;
Fig. 7 is a flowchart illustrating a method of remitting money by using voice, according to an example embodiment;
Fig. 8 is a diagram illustrating a method of making a payment by using voice, according to another example embodiment;
Fig. 9 is a diagram illustrating a device learning a payment pattern, according to an example embodiment;
Fig. 10 is a flowchart illustrating a method of making a payment by using voice, according to an example embodiment;
Fig. 11 is a block diagram of a processor according to some example embodiments;
Fig. 12 is a block diagram of a data learner according to some example embodiments;
Fig. 13 is a block diagram of a data identifier according to some example embodiments;
Fig. 14 is a diagram illustrating an example of learning and recognizing data through interaction between a device and a server, according to some example embodiments; and
Figs. 15 and 16 are flowcharts of a network system using a data recognition model, according to some example embodiments.
Detailed Description
Best mode
Provided are a method and apparatus for remitting money to a payee's account by using voice.
Additional aspects will be set forth in part in the description that follows and, in part, will be apparent from the description, or may be learned by practice of the disclosed embodiments.
According to an aspect of an example embodiment, a device includes: a memory configured to store at least one program; a microphone configured to receive voice input; and at least one processor configured to execute the at least one program to perform operations for remitting money to a payee, wherein the operations include: determining a payment intention of a user by analyzing the received voice input; retrieving contact information from a stored contacts list based on a name of the payee; transmitting the name of the payee, the contact information, and an amount specified in the voice input to a bank server; receiving remittance details from the bank server; and approving the remittance details.
According to an aspect of another example embodiment, a payment method includes: receiving voice input of a user; determining a payment intention of the user by analyzing the received voice input; retrieving contact information from a stored contacts list based on a name of a payee specified in the voice input; transmitting the name of the payee, the contact information, and an amount specified in the voice input to a bank server; receiving remittance details from the bank server; and approving the remittance details.
Mode of the Invention
Reference will now be made in detail to various non-limiting embodiments, examples of which are illustrated in the accompanying drawings. In the drawings, parts irrelevant to the description are omitted so that the example embodiments can be clearly described, and like reference numerals denote like elements throughout the specification. In this regard, the example embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the example embodiments are described below, by referring to the figures, to explain aspects of the present disclosure. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Expressions such as "at least one of", when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
Throughout the disclosure, when a component is described as being "connected to" another component, it should be understood that the component may be "directly connected to" the other component or "electrically connected to" the other component via an intervening element. In addition, when a component "includes" an element, unless there is a description to the contrary, it should be understood that the component does not exclude other elements but may further include other elements.
Hereinafter, the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a diagram illustrating a method by which a user remits money by using his or her voice, according to an example embodiment. Referring to Fig. 1, the user can input his or her voice to a device 10 by speaking (for example, into a microphone) in order to remit money to a payee. In particular, the user can remit money to the payee by saying only the payee's name, without saying or inputting the payee's account number.
The device 10 can receive voice input from the user. The device 10 may include a microphone for receiving the user's voice. The device 10 can receive the user's voice input via the microphone by executing, for example, a voice assistant application (such as "S Voice") and controlling the executed application.
As shown at stage 1 of Fig. 1, the device 10 can recognize the user's voice. The device 10 can analyze the voice to determine the user's intention. For example, if the device receives voice input in which the user says "Remit 100 million won to Samsung", the device 10 can determine from the user's voice whether the user intends to remit money. In an example embodiment, the device 10 can store in memory the entire voice input made when the user remits money, and can use the stored information to learn the pattern of voice input used when remitting money. Through learning, the device 10 can determine the user's intention more accurately. When learning begins, the device 10 can confirm whether to remit money each time the user's voice is input. Through repeated learning, the device 10 can determine the user's remittance intention more accurately.
As an example, the device 10 can compare a stored speech pattern with the pattern of the input voice to determine the user's intention. The stored speech pattern may include patterns of voice input made when the user intends to remit money. If the stored speech pattern and the pattern of the input voice are similar or identical (for example, their similarity equals or exceeds a threshold similarity), the device 10 can determine that the user intends to remit money. The stored voice patterns can be updated or added to through learning.
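The pattern comparison described above can be sketched as follows. This is a minimal, hypothetical illustration, not the patent's actual AI model: the stored patterns, the contact names, the slot words, and the 0.8 threshold are all assumptions, and a simple string-similarity ratio stands in for whatever learned similarity measure the device would use.

```python
from difflib import SequenceMatcher

# Assumed patterns learned from the user's past remittance utterances.
STORED_PATTERNS = [
    "remit <amount> to <payee>",
    "send <amount> to <payee>",
]

# Assumed contact names (lowercased) and amount-related words.
CONTACT_NAMES = {"samsung"}
AMOUNT_WORDS = {"won", "thousand", "million", "billion"}

def normalize(utterance: str) -> str:
    """Map numbers/amount words to <amount> and known names to <payee>."""
    tokens = []
    for tok in utterance.lower().split():
        if tok.replace(",", "").isdigit() or tok in AMOUNT_WORDS:
            tokens.append("<amount>")
        elif tok in CONTACT_NAMES:
            tokens.append("<payee>")
        else:
            tokens.append(tok)
    # Collapse runs such as "100 million won" into a single <amount> slot.
    collapsed = [t for i, t in enumerate(tokens) if i == 0 or t != tokens[i - 1]]
    return " ".join(collapsed)

def is_remittance_intent(utterance: str, threshold: float = 0.8) -> bool:
    """True when the utterance matches a stored remittance pattern closely enough."""
    pattern = normalize(utterance)
    return any(
        SequenceMatcher(None, pattern, stored).ratio() >= threshold
        for stored in STORED_PATTERNS
    )
```

Adding newly confirmed utterances to `STORED_PATTERNS` would correspond to the "updated or added to through learning" behavior in the text.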
The device 10 can identify the name (or alias) of the payee and search for the identified name or alias in a contacts list. For example, if the user specifies the payee as "Samsung", the device 10 can search for "Samsung" in the contacts list and, for example, identify the telephone number of "Samsung" in the contacts list.
As shown at stage 2 of Fig. 1, the device 10 can transmit user information, payee information, and an amount to a bank server 20. The user information includes, but is not limited to, the user's name, account number, and the like. The payee information includes, but is not limited to, the payee's name, telephone number, and the like; the payee information need not include the payee's account number. The amount indicates the sum specified in the user's voice input, that is, the sum the user will remit to the payee.
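The request sent at stage 2 can be sketched as below. The field names and JSON encoding are assumptions for illustration; the point the text makes is that the payload carries the payee's name and telephone number looked up from the contacts list, but no payee account number.

```python
import json

def build_remittance_request(user: dict, payee_name: str,
                             contacts: dict, amount_won: int) -> str:
    """Assemble the (hypothetical) payload the device sends to the bank server."""
    phone = contacts[payee_name]  # retrieved from the stored contacts list
    request = {
        "user": {"name": user["name"], "account": user["account"]},
        "payee": {"name": payee_name, "phone": phone},  # no account number
        "amount_won": amount_won,
    }
    return json.dumps(request)
```

The bank server, not the device, is then responsible for resolving the payee's account from the name and telephone number.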
The device 10 can be, but is not limited to, a smartphone, a tablet PC, a PC, a smart TV, a mobile phone, a personal digital assistant (PDA), a laptop computer, a media player, a micro-server, a global positioning system (GPS) device, an e-book terminal, a digital broadcast terminal, a navigation system, a kiosk, an MP3 player, a digital camera, a consumer electronics device, or another mobile or non-mobile computing device. The device 10 can also be a wearable device having communication and data processing functions, such as, but not limited to, a watch, glasses, a hair band, or a ring. The device 10 may include any kind of device capable of receiving the user's voice input and providing a reply message to the user.
In addition, the device 10 can communicate with other devices (not shown) over a network in order to use various types of context information. The network may include a local area network (LAN), a wide area network (WAN), a value-added network (VAN), a mobile radio communication network, a satellite communication network, and/or combinations thereof; in a comprehensive sense, it can be a data communication network that allows the network elements to communicate smoothly with one another, and may include the wired Internet, the wireless Internet, and mobile wireless communication networks. Wireless communication may include, for example, Wi-Fi, Bluetooth, Bluetooth Low Energy, ZigBee, Wi-Fi Direct (WFD), ultra-wideband (UWB), Infrared Data Association (IrDA), near-field communication (NFC), and the like, but is not limited thereto.
As shown at stage 3 of Fig. 1, the bank server 20 can receive the user information and the payee information. The bank server 20 can search for an account matching the user information, for example by using the user's name and telephone number. In addition, the bank server 20 can search for an account assigned (or matched) to the unique identification information of the device 10; the device 10 may include unique identification information, and the bank server 20 can use the unique identification information of the device 10 to search an account database for the account of the user of the device 10. The bank server 20 can also search for an account matching the payee information, for example an account matching the payee's name and telephone number.
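The server-side lookup at stage 3 can be sketched as follows, under an assumed in-memory account table; the account fields, device IDs, and matching priority (device ID first, then name plus phone) are illustrative assumptions rather than the patent's specified data model.

```python
# Hypothetical account database for the bank server 20.
ACCOUNTS = [
    {"bank": "A", "account": "11-1111", "name": "AAA",
     "phone": "010-1111-1111", "device_id": "DEV-001"},
    {"bank": "B", "account": "22-2222", "name": "BBB",
     "phone": "010-2222-2222", "device_id": None},
]

def find_account(name=None, phone=None, device_id=None):
    """Return the first account matching the device ID, or the name and phone."""
    for acct in ACCOUNTS:
        if device_id is not None and acct["device_id"] == device_id:
            return acct
        if (name is not None and phone is not None
                and acct["name"] == name and acct["phone"] == phone):
            return acct
    return None
```

The sender's account would be resolved via `device_id` (or name/phone), and the payee's account via name and phone only, matching the text's description.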
As shown at stage 4 of Fig. 1, the bank server 20 can generate remittance details, including but not limited to the user's account, the payee's name, the payee's account, and the amount. For example, the bank server 20 can generate remittance details such as "Remit 10,000 won from bank A, account 11-1111, AAA (user's name) to bank B, account 22-2222, BBB (payee's name)".
The bank server 20 can transmit the remittance details to the device 10.
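Generating the example details sentence at stage 4 can be sketched as a simple formatting step; the exact wording and field names are assumptions modeled on the example in the text.

```python
def make_remittance_details(sender: dict, payee: dict, amount_won: int) -> str:
    """Format remittance details from resolved sender/payee account records."""
    return (
        f"Remit {amount_won:,} won from bank {sender['bank']}, "
        f"account {sender['account']}, {sender['name']} "
        f"to bank {payee['bank']}, account {payee['account']}, {payee['name']}"
    )
```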
The device 10 can display the remittance details so that the user can confirm whether the intention of the user's voice input and the remittance details are consistent with each other.
The user can approve the remittance details. If the user wants to remit money according to the remittance details, the user can input, for example, one or more of voice, a fingerprint, an iris scan, a vein image, a face image, and a password. The device 10 can perform verification by determining whether the input voice, fingerprint, iris scan, vein image, face image, and/or password matches the user's personal information. This verification of the remittance details is shown at stage 5 of Fig. 1.
As shown at stage 6 of Fig. 1, the device 10 can transmit the verification result to the bank server 20.
As shown at stage 7 of Fig. 1, the bank server 20 can receive the verification result and remit money to the payee according to the received verification result for the remittance details. If the user is verified as a legitimate user, the bank server 20 can remit money to the payee (and optionally send the device 10 a confirmation that the money has been remitted); otherwise, it can decline to remit money and send an error message to the device 10.
Fig. 2 is a block diagram illustrating the device 10 according to an example embodiment. Referring to Fig. 2, the device 10 may include a processor 11, a memory 12, a display 13, and a microphone 14.
The processor 11 (for example, processing circuitry including a CPU and/or dedicated hardware circuitry) can control the overall operation of the device 10, including the memory 12, the display 13, and the microphone 14. The processor 11 can control data to be stored in and/or read from the memory 12. The processor 11 can determine an image to be displayed on the display 13 and can control the display 13 to display the image. The processor 11 can control turning the microphone 14 on and off, and can analyze (for example, by executing a speech analysis application) voice input received through the microphone 14.
The memory 12 (for example, ROM, RAM, a memory card, non-volatile or volatile memory, a solid-state drive, a hard disk, or the like) can store the user's personal information, biometric information, and the like. For example, the memory 12 can store, but is not limited to, the user's voice, fingerprint, iris scan, vein image, face image, and/or password. The memory 12 can store samples of the user's voice and/or previous voice inputs for use in analyzing the pattern of the user's voice.
The display 13 (for example, an LCD, an OLED display, or the like) can display images and reproduce video content under the control of the processor 11.
The microphone 14 can receive voice input. The microphone 14 may include circuitry that converts sound generated near the device 10 (for example, voice input) into an electrical signal and outputs the electrical signal to the processor 11.
Fig. 3 is a diagram illustrating the device 10 in a learning mode, according to an example embodiment. Referring to Fig. 3, the device 10 may, for example, execute a speech analysis application to analyze various types of sentences and learn patterns from them.
A user may say various types of sentences to transfer money. For example, to transfer 100 million won from the user's bank account to Samsung (a person), the user may say sentences such as:
1. Transfer 100 million won from the "A" bank account to Samsung
2. Transfer 100 million won to Samsung
3. Transfer 100 million to Samsung
The device 10 may analyze and learn the patterns of the user's speech to identify sentences that contain the user's intent to transfer money.
When the user has a plurality of accounts, the device 10 may confirm with the user which of the accounts money is to be withdrawn from. Once an account has been designated, transfers initiated with the device 10 from that point on may withdraw money from the designated account, unless the user instructs otherwise.
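The sentence patterns above can be sketched as follows. This is a minimal, hypothetical illustration using regular expressions; the actual device would rely on a learned speech-analysis model rather than hand-written patterns, and the function and pattern names are assumptions.

```python
import re

# Hypothetical sketch: recognizing a transfer intent from sentence patterns
# like those listed above. Slots not mentioned in the sentence stay None.
TRANSFER_PATTERNS = [
    # "Transfer 100 million won from the A bank account to Samsung"
    re.compile(r"transfer (?P<amount>[\d,]+ ?(?:million )?won) from (?P<account>.+?) to (?P<payee>.+)"),
    # "Transfer 100 million won to Samsung" / "Transfer 100 million to Samsung"
    re.compile(r"transfer (?P<amount>[\d,]+ ?(?:million )?(?:won)?) to (?P<payee>.+)"),
]

def parse_transfer(utterance: str):
    """Return {'amount', 'payee', 'account'} if a transfer intent is found, else None."""
    text = utterance.lower().strip()
    for pattern in TRANSFER_PATTERNS:
        m = pattern.fullmatch(text)
        if m:
            slots = {"amount": None, "payee": None, "account": None}
            slots.update({k: v.strip() for k, v in m.groupdict().items()})
            return slots
    return None  # no transfer intent recognized
```

When the account slot is absent, as in sentence types 2 and 3, the device would fall back to the account previously designated by the user, as described above.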
Fig. 4 is a diagram illustrating a method of approving transfer details, according to an example embodiment. The user may approve the transfer details by using, but not limited to, a voice input, a fingerprint, a vein image, a face image, or an iris scan.
The device 10 may receive the transfer details from the bank server 20 and display them on its display 13. The transfer details may include, but are not limited to, the user's account, the payee's account, and the amount.
The user may approve the transfer details after visually confirming the displayed details. When approving the transfer details, the user may use a voice input, a fingerprint, a vein image, a face image, or an iris scan. If the input voice, fingerprint, or iris matches (for example, with a similarity equal to or greater than a predetermined similarity threshold) the user's voice input, fingerprint, vein image, face image, or iris scan reflected in the information stored in the memory 12 of the device, the device 10 may transmit a message indicating that the transfer details have been approved to the bank server 20.
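The threshold check described above can be illustrated as follows. This is a toy sketch: the similarity function (cosine similarity over feature vectors) and the threshold value are assumptions chosen for illustration, not the actual biometric matching algorithm.

```python
# Illustrative only: the device approves the transfer details when the
# similarity between the captured biometric sample and the one stored in
# memory 12 meets a preset threshold. Threshold value is an assumption.
SIMILARITY_THRESHOLD = 0.90

def feature_similarity(stored: list, captured: list) -> float:
    """Toy cosine similarity between two biometric feature vectors."""
    dot = sum(a * b for a, b in zip(stored, captured))
    norm = (sum(a * a for a in stored) ** 0.5) * (sum(b * b for b in captured) ** 0.5)
    return dot / norm if norm else 0.0

def approve_details(stored: list, captured: list) -> bool:
    """True means the device may send the approval message to the bank server 20."""
    return feature_similarity(stored, captured) >= SIMILARITY_THRESHOLD
```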
Fig. 5 is a diagram illustrating a method of selecting one of a plurality of payees, according to an example embodiment. The user may select any one of the plurality of payees, for example, by a voice input.
The device 10 may search a contacts list stored in the memory 12 (or some other external memory) for the name identified as the payee. If a plurality of payees whose entries include the identified name are found in the contacts list, the device 10 may display the names of the found payees on the display 13. The user may select any one of the displayed names by a voice input.
Take, as an example, a case in which the following two payees are found for the name Samsung:
1. Samsung
2. Samsung
The device 10 may display the two payees on the display 13. The user may select the first payee or the second payee by a voice input. For example, the user may select a payee by providing a voice input such as "Transfer the money to the first one" or "Transfer the money to Samsung". If the display 13 is configured as a touch screen, the user may also select a payee by a touch input.
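The disambiguation step above can be sketched as follows: given the candidates shown on the display, resolve a voice selection either by ordinal ("the first one") or by spoken name. The function and mapping are illustrative assumptions.

```python
# Hypothetical sketch of resolving a voice selection among displayed
# candidates (payees or accounts), by ordinal word or by name match.
ORDINALS = {"first": 0, "second": 1, "third": 2}

def resolve_selection(candidates, spoken: str):
    """Return the selected candidate, or None if the utterance is ambiguous."""
    text = spoken.lower()
    for word, index in ORDINALS.items():
        if word in text and index < len(candidates):
            return candidates[index]
    matches = [c for c in candidates if c.lower() in text]
    return matches[0] if len(matches) == 1 else None  # ambiguous -> ask again
```

When both candidates share the same name, as in the example above, only an ordinal (or touch input) resolves the choice, so an ambiguous name match returns None.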
Fig. 6 is a diagram illustrating a method of selecting any one of a plurality of banks, according to an example embodiment. The user may select any one bank (or account) from a plurality of banks (or accounts) by using a voice input.
When transmitting the transfer details to the device 10, the bank server 20 may transmit a plurality of banks (or accounts) registered under the payee's name. For example, if there are multiple accounts registered under the payee's name, the device 10 may display the accounts to the user on the display 13 so that the user can determine to which account the money is to be transferred. As described above, the user may select any one of the displayed accounts by a voice or touch input.
For example, the following two accounts may be found under the name Samsung:
1. Bank A (33-3333)
2. Bank B (55-5555)
The device 10 may display the two accounts on the display 13. The user may select the first account or the second account by a voice input. For example, the user may select a bank or account by providing a voice input such as "Transfer the money to bank A", "Transfer the money to the first one", or "Transfer the money to the 55-5555 account".
Fig. 7 is a flowchart illustrating a method of transferring money by using a voice input, according to an example embodiment. Referring to Fig. 7, the user may input the name of a payee and an amount by a voice input, and transfer money to the payee.
In operation 710, the device 10 may receive the user's voice input through the microphone 14.
In operation 720, the device 10 may analyze the received voice input to determine whether the user intends to transfer money. If, as a result of analyzing the received voice, it is determined that there is no intent to transfer money, the device 10 does not proceed with the transfer process. The voice input may include the name of the payee, the amount, and the like. For example, the device 10 may analyze the voice input and determine that the user intends to transfer money if the voice input contains an instruction, a name, an amount, and the like.
In operation 730, the device 10 may search the stored contacts list for a contact corresponding to the payee's name. If no contact corresponding to the payee's name is found, the device 10 may display, on the display 13, information indicating that the contact is missing or was not found. The user may then input the payee's contact information by a voice input, and the device 10 may store the payee's name and the corresponding contact in the contacts list based on the input voice.
In operation 740, the device 10 may transmit the payee's name and contact information, together with the amount included in the voice input, to the bank server 20. The contact information may be retrieved by searching for the payee's name, or entered via the user's voice input.
In operation 750, the device 10 may receive the transfer details from the bank server 20. The bank server 20 may search for the payee's account by using the payee's name and contact information, and transmit the transfer details, including but not limited to the payee's name, account, and amount, to the device 10.
In operation 760, the device 10 may approve (verify) the transfer details. The device 10 may approve the transfer details by using, but not limited to, the user's voice input, fingerprint, iris scan, vein image, face image, and/or password. The user may confirm the transfer details and input a voice to the device 10 to approve them, or allow the device 10 to recognize an iris, a fingerprint, or the like. In addition, when the user wears a wearable device such as a smartwatch, the user may be verified through the veins on the back of the user's hand by using the smartwatch. For example, the user may operate the smartwatch to recognize the veins in the back of the hand and be verified, thereby approving the transfer details.
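Operations 710 through 760 can be condensed into a single sketch. The device's subsystems (speech analysis, contacts, bank-server lookup, user verification) are passed in as callables; all names are hypothetical stand-ins for illustration, not the actual implementation.

```python
def remit_by_voice(analyze, contacts, lookup, verify, execute, voice_input):
    """Illustrative sketch of operations 710-760 of Fig. 7."""
    intent = analyze(voice_input)                    # 720: intent analysis
    if not intent or not intent.get("remit"):
        return None                                  # no transfer intent -> stop
    contact = contacts.get(intent["payee"])          # 730: contacts search
    if contact is None:
        return "ask_user_for_contact"                # device prompts the user by voice
    details = lookup(intent["payee"], contact, intent["amount"])  # 740-750: bank server
    if verify(details):                              # 760: voice/fingerprint/iris/vein approval
        execute(details)                             # approval message to bank server 20
        return details
    return None
```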
Fig. 8 is a diagram illustrating a method of making a payment by using a voice input, according to another example embodiment. Referring to Fig. 8, the user may make a payment by using a voice input.
The device 10 may display, on the display 13, a screen for paying for goods or services the user purchases on the Internet. For example, when the user purchases a Galaxy Note 7, the device 10 may display the message "Would you like to buy a Galaxy Note 7?"
After checking the payment details, the user may provide a voice input to make the payment. For example, as shown in part 1 of Fig. 8, when the user says "Pay with the Samsung card", the device 10 may recognize the user's voice. The user may also simply say "Pay", and the device 10 may proceed with the payment by using the card the user previously used for payments.
As shown in part 2 of Fig. 8, the device 10 may transmit the user's card information and the payment information to the card issuer server 30. The user's card information may include a card number, the card's expiration date, a password, and the like. The payment information may include the goods or services to be paid for, seller information, and the like.
As shown in part 3 of Fig. 8, the card issuer server 30 may confirm the card information and proceed with the payment. When the payment is completed, the card issuer server 30 may transmit a payment completion message to the device 10. The device 10 may display the payment completion message to notify the user that the payment has been completed normally.
As an example, if the user wears a smartwatch and pays for goods or services, the smartwatch may automatically perform biometric verification of the user. For example, the smartwatch may capture the veins of the user's wrist and perform vein verification based on the captured vein pattern. Accordingly, the user may pay automatically through the smartwatch without separately inputting a voice, a password, or the like. More specifically, when the user touches a payment button on the Internet, the device 10 may determine whether the user is wearing a smartwatch. If the user is wearing a smartwatch, the device 10 may transmit a signal to the smartwatch to perform vein verification. The smartwatch may capture the user's veins under the control of the device 10 and transmit the result of the vein verification to the device 10. Alternatively, the smartwatch may transmit the captured vein image to the device 10, and the device 10 may perform the vein verification. The vein verification may compare a registered vein image (or vein pattern) with the captured vein image (or vein pattern). When the user wears a wearable device, the device 10 may thus proceed with the payment without receiving a separate input from the user.
Fig. 9 is a diagram illustrating the device 10 learning payment patterns, according to an example embodiment. Referring to Fig. 9, the device 10 may analyze various types of sentences to learn payment patterns. Learning a payment pattern may mean identifying and recording the type of voice input the user utters when making a payment.
The user may say various types of payment sentences. For example, the user may say sentences such as:
1. Pay with the Samsung card
2. Please pay with the Samsung card
3. Pay with my card
4. Proceed with the payment
The device 10 may store, in the memory 12, the expressions the user says most frequently when paying, determine whether the user has said a sentence that is the same as or similar to a stored sentence, and proceed with the payment.
The device 10 may register the user's card information when learning starts, or request the card information from the user, in order to obtain the card information the user mainly uses. Once the user's card information has been registered, even if the user briefly says "Pay with my card", the device 10 can proceed with the payment by using the previously registered card information.
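The pattern learning described above can be sketched with a simple frequency count: the device records the phrases the user actually says when paying and treats stored phrases as known payment expressions. The class and method names are illustrative assumptions.

```python
from collections import Counter

# Illustrative sketch of learning payment phrases by frequency, standing in
# for the pattern learning stored in memory 12.
class PaymentPhraseLearner:
    def __init__(self):
        self.phrase_counts = Counter()

    def record(self, utterance: str):
        """Store a phrase said at payment time (learning mode)."""
        self.phrase_counts[utterance.lower().strip()] += 1

    def is_known_payment_phrase(self, utterance: str) -> bool:
        """True if this phrase matches a stored payment expression."""
        return utterance.lower().strip() in self.phrase_counts

    def most_common(self):
        """The expression the user says most frequently when paying."""
        return self.phrase_counts.most_common(1)[0][0] if self.phrase_counts else None
```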
Fig. 10 is a flowchart illustrating a method of making a payment by a voice input, according to an example embodiment. Referring to Fig. 10, the user may pay for goods or services by using a voice input.
In operation 1010, the device 10 may display the payment details on the display 13.
In operation 1020, the device 10 may receive the user's voice input via the microphone 14. The user may check the payment details and express whether to pay by providing a voice input. For example, the user may say "Pay" to make the payment, and "Do not pay" otherwise.
In operation 1030, the device 10 may analyze the received voice input to determine the user's intent. The device 10 may analyze the voice input and determine whether the user wishes to approve the displayed payment details.
In operation 1040, the device 10 may verify the user through the voice input. The device 10 may verify the user by determining whether the voice input matches the user's voice (for example, by comparing it with a registered voice sample). The device 10 may determine whether the registered voice sample matches the input voice and, if so, proceed with the payment. The device 10 may verify the user not only by voice but also by a fingerprint, an iris, a vein, a face, or a password.
In operation 1050, the device 10 may transmit the payment information to the card company. If the verification succeeds, the device 10 may transmit the payment information and the card information to the card company. The payment information may include the goods, seller information, the amount, and the like. The card information may include the user's card number, password, expiration date, and the like.
The device 10 may display a payment completion message when the payment is completed.
As described above, when the user purchases goods or services via the Internet, the user may complete the purchase by a voice input.
Fig. 11 is a block diagram of a processor 1300 according to some example embodiments.
Referring to Fig. 11, the processor 1300 according to some example embodiments may include a data learner 1310 and a data recognizer 1320.
The data learner 1310 may learn references for determining a given situation. The data learner 1310 may learn which data to use to determine a predetermined situation, and how to determine the situation by using that data. The data learner 1310 may obtain data to be used for learning and apply the obtained data to a data recognition model, described below, to learn the references for determining the situation.
The data learner 1310 may train a data recognition model by using voice inputs or sentences, to generate a set of data recognition models that estimate the user's intent. Here, the voice inputs or sentences may include a voice uttered by the user of the device 10 or a sentence for recognizing the user's voice. Alternatively, the voices or sentences may include a voice uttered by a third party or a sentence for recognizing the third party's voice.
The data learner 1310 may train the data recognition model by using a supervised learning method that uses the voices or sentences, together with learning objects, as learning data.
In an example embodiment, the data recognition model may be a set of models that estimate the user's intent to transfer money. In this case, the learning object may include, but is not limited to, at least one of user information, receiver information, a transfer amount, and a transfer instruction. The user information may include, but is not limited to, identification information of the user (for example, a name or a nickname) or identification information of the user's account (for example, the account's bank, the account holder's name, the account's nickname, or the account number). The receiver information may include, but is not limited to, identification information of the payee (for example, a name, a nickname, or a telephone number) or identification information of the payee's account (for example, the account's bank, the account holder's name, the account's nickname, or the account number). The transfer instruction may indicate whether the user intends to transfer money. For example, the transfer instruction may include, but is not limited to, proceeding with a transfer, reserving a transfer, canceling a reservation, holding a transfer, or confirming a transfer.
On the other hand, at least one learning object value may have the value "null". In this case, the value "null" may indicate that the voice input or sentence used as learning data does not contain information about that entity value.
Specifically, if the voice input or sentence for learning is "Transfer 100 million won from the A bank account to Samsung", the learning object is {user information: A bank, receiver information: Samsung, transfer amount: 100 million won, transfer instruction: proceed with transfer}. As another example, if the voice input or sentence for learning is "Transfer 100 million won to Samsung", the learning object may be composed of {user information: null, receiver information: Samsung, transfer amount: 100 million won, transfer instruction: proceed with transfer}. As another example, if the voice or sentence for learning is "Is it right that 100 million won was transferred to Samsung?", the learning object may be composed of {user information: null, receiver information: Samsung, transfer amount: 100 million won, transfer instruction: confirm transfer}. As another example, if the voice or sentence for learning is "Cancel the reservation to transfer 100 million won to Samsung", the learning object may be composed of {user information: null, receiver information: Samsung, transfer amount: 100 million won, transfer instruction: cancel reservation}.
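The learning objects above can be represented as slot dictionaries, with None playing the role of the "null" value for information a sentence does not mention. The field names follow the description; the parsing that would fill them is assumed, not implemented.

```python
# Illustrative representation of a learning object; None stands for "null".
def make_learning_object(user_info=None, receiver=None, amount=None, instruction=None):
    return {
        "user_info": user_info,                 # e.g. "A bank", or None ("null")
        "receiver_info": receiver,              # e.g. "Samsung"
        "transfer_amount": amount,              # e.g. "100 million won"
        "transfer_instruction": instruction,    # e.g. "proceed", "confirm", "cancel reservation"
    }

# "Transfer 100 million won to Samsung" -> user_info is unspecified ("null")
obj = make_learning_object(receiver="Samsung", amount="100 million won", instruction="proceed")
```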
In another example embodiment, the data recognition model may be a set of models that estimate the user's payment intent. In this case, the learning object may include, but is not limited to, at least one of a payment card, a payment item, a payment method, and a payment instruction. The payment method may include, for example, a lump-sum payment or a number of monthly installments. The payment instruction may indicate whether the user intends to pay. For example, the payment instruction may include proceeding with a payment, canceling a payment, holding a payment, changing the payment method, or confirming a payment.
Specifically, if the voice input or sentence for learning is "Pay in full with the Samsung card", the learning object may be composed of {payment means: Samsung card, payment item: null, payment method: lump-sum payment, payment instruction: proceed with payment}. As another example, if the voice input or sentence for learning is "Pay in 10 monthly installments", the learning object may be composed of {payment means: null, payment item: null, payment method: 10 monthly installments, payment instruction: proceed with payment}. As another example, if the voice input or sentence for learning is "Cancel the previous payment", the learning object may be composed of {payment means: null, payment item: null, payment method: null, payment instruction: cancel payment}.
The set of data recognition models that determine the user's transfer intent and the set of data recognition models that determine the user's payment intent may be the same recognition model or different recognition models. Moreover, each set may include a plurality of data recognition models. For example, taking the user's usage environment (for example, the time or place of use) into account, the user's intent may be determined by using a plurality of data recognition models customized for each environment.
The data recognizer 1320 may determine a situation based on data. The data recognizer 1320 may recognize a situation from predetermined data by using the trained data recognition model. The data recognizer 1320 may obtain predetermined data according to a reference set by learning, and determine a predetermined situation based on that data by applying the data recognition model with the obtained data as an input value. In addition, the resulting value output by the data recognition model for the input data may be used to update the data recognition model.
The data recognizer 1320 may estimate the user's intent by applying the user's voice input, or a sentence for recognizing the user's voice, to the data recognition model. For example, the data recognizer 1320 may apply the user's voice input or the sentence for recognizing the user's voice to the data recognition model to obtain recognition entities, and supply the recognition entities to the processor of the device (for example, the processor 11 of the device 10 of Fig. 2). The processor 11 may determine the user's intent by using the obtained recognition entities.
In an example embodiment, the data recognition model may be a set of models that estimate the user's intent to transfer money. In this case, the data recognizer 1320 may estimate the user's intent to transfer money by applying the user's voice input, or a sentence for recognizing the user's voice, to the data recognition model. For example, the data recognizer 1320 may obtain recognition entities from the user's voice input or the sentence for recognizing the user's voice. The recognition entities may include, for example, at least one of user information, receiver information, a transfer amount, and a transfer instruction. The data recognizer 1320 may supply the obtained recognition entities to the processor 11. The processor 11 (or a dialogue management module of the processor 11) may determine the user's intent based on the recognition entities.
If it is determined, based on the recognition entities, that the user's intent does not encompass transferring money, the processor 11 may forgo the transfer process. On the other hand, if it is determined that the user's intent is to transfer money, the processor 11 may proceed with the transfer process.
Here, if at least one of the recognition entity values is "null", the processor 11 may determine a value corresponding to the "null" value by using the user's history information or preset information. For example, the processor 11 may determine a value corresponding to the "null" value by referring to the user's recent transfer history. Alternatively, the processor 11 may determine a value corresponding to the "null" value by referring to information the user has preset in the preference settings (for example, an account number, an account bank, etc.).
Alternatively, if at least one of the recognition entity values is "null", the processor 11 may request the value corresponding to "null" from the user. For example, the processor 11 may control the display 13 to display a sentence indicating that information about at least one of the user information, the receiver information, the transfer amount, or the transfer instruction is missing. When the user inputs the missing information by voice or another input (for example, through a virtual keyboard displayed on the display 13), the processor 11 may carry out the transfer process by using the recognition entity values obtained from the data recognizer 1320 together with the information input by the user.
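The "null"-filling policy above can be sketched as follows: for each missing slot, fall back first to the user's recent history, then to preset preferences, and finally collect the slots that still need to be requested from the user. All names are illustrative assumptions.

```python
# Illustrative sketch of filling "null" recognition entity values from
# history or preset preferences, as described above.
def fill_nulls(entity: dict, history: dict, preferences: dict):
    """Return (completed_entity, slots_still_missing_to_ask_the_user)."""
    completed, missing = dict(entity), []
    for slot, value in entity.items():
        if value is None:                            # "null" value
            fallback = history.get(slot, preferences.get(slot))
            if fallback is not None:
                completed[slot] = fallback           # filled from history/preferences
            else:
                missing.append(slot)                 # device prompts the user for this
    return completed, missing
```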
In another example embodiment, the data recognition model may be a set of models that estimate the user's payment intent. In this case, the data recognizer 1320 may estimate the user's payment intent by applying the user's voice input, or a sentence for recognizing the user's voice, to the data recognition model. For example, the data recognizer 1320 may obtain recognition entities from the user's voice input or the sentence for recognizing the user's voice. The recognition entities may include, for example, at least one of a payment means, a payment item, a payment method, and a payment instruction. The data recognizer 1320 may supply the obtained recognition entities to the processor 11. The processor 11 (or a dialogue management module of the processor 11) may determine the user's intent based on the recognition entities.
If it is determined, based on the recognition entities, that the user's intent is not to pay, the processor 11 may forgo the payment process. On the other hand, if it is determined that the user's intent is to pay, the processor 11 may proceed with the payment process.
Here, if at least one of the recognition entity values is "null", the processor 11 may determine a value corresponding to the "null" value by using the user's history information or preset information. Alternatively, the processor 11 may request the user to input the value corresponding to "null".
At least one of the data learner 1310 and the data recognizer 1320 may be manufactured as at least one hardware chip and mounted on an electronic device. For example, at least one of the data learner 1310 and the data recognizer 1320 may be manufactured as a dedicated hardware chip for artificial intelligence (AI), or as part of a conventional general-purpose processor (such as a CPU or an application processor) or a graphics processor (for example, a GPU), and mounted on the various electronic devices described above. In this case, the dedicated AI hardware chip may be a special-purpose processor dedicated to probability computation, with higher parallel-processing performance than a conventional general-purpose processor, so that it can quickly handle arithmetic operations in the AI field such as machine learning.
The data learner 1310 and the data recognizer 1320 may be mounted on one electronic device, or on separate electronic devices. For example, one of the data learner 1310 and the data recognizer 1320 may be included in an electronic device, and the other may be included in a server. The data learner 1310 and the data recognizer 1320 may also communicate via a wired or wireless connection, so that model information built by the data learner 1310 is supplied to the data recognizer 1320, and data input to the data recognizer 1320 is supplied to the data learner 1310 as additional learning data.
Meanwhile at least one of data learner 1310 and data identifier 1320 can be implemented as software module.When
At least one of data learner 1310 and data identifier 1320 are implemented as software module (or the program mould comprising instruction
Block) when, software module can be stored in non-transitory computer-readable medium.In addition, in this case, at least one is soft
Part module can be provided by operating system (OS) or scheduled application.Alternatively, some at least one software module can be by
OS is provided, and others can be provided by scheduled application.
Fig. 12 is a block diagram of the data learner 1310 according to some example embodiments.
Referring to Fig. 12, the data learner 1310 according to some example embodiments may include a data obtainer 1310-1, a preprocessor 1310-2, a learning data selector 1310-3, a model learner 1310-4, and a model evaluator 1310-5. In some example embodiments, the data learner 1310 may essentially include the data obtainer 1310-1 and the model learner 1310-4, and may selectively include at least one of the preprocessor 1310-2, the learning data selector 1310-3, and the model evaluator 1310-5, or may include none of them.
The data obtainer 1310-1 may obtain the data needed for learning to determine a situation.
For example, the data obtainer 1310-1 may obtain voice data, image data, text data, biometric signal data, and the like. Specifically, the data obtainer 1310-1 may obtain voice inputs or sentences for transferring money or making payments. Alternatively, the data obtainer 1310-1 may obtain voice data or text data containing such voices or sentences.
The data obtainer 1310-1 may receive the data through an input unit of the electronic device (for example, a microphone, a camera, a sensor, a keyboard, etc.). Alternatively, the data obtainer 1310-1 may obtain the data via an external device (for example, a server) that communicates with the device.
The preprocessor 1310-2 may preprocess the obtained data so that the data can be used for learning to determine the situation. The preprocessor 1310-2 may process the obtained data into a predetermined format so that the model learner 1310-4, described below, can use the obtained data for learning. For example, the preprocessor 1310-2 may extract learning object values from the voice data according to the predetermined format. For example, when the predetermined format is composed of {user information, receiver information, transfer amount, transfer instruction}, or of {payment means, payment item, payment method, payment instruction}, the preprocessor 1310-2 may extract the learning object values from the voice data according to that format. If a learning object value cannot be extracted, the preprocessor 1310-2 may set the corresponding entity value to "null".
The learning data selector 1310-3 may select the data needed for learning from the preprocessed data. The selected data may be supplied to the model learner 1310-4; in this case, the data obtained by the data obtainer 1310-1 or processed by the preprocessor 1310-2 may be supplied to the model learner 1310-4 as learning data. The learning data selector 1310-3 may select the data needed for learning from the preprocessed data according to a predetermined reference for determining the situation. The predetermined reference may be determined in consideration of, for example, at least one of the attributes of the data, the generation time of the data, the creator of the data, the reliability of the data, the target of the data, the region where the data was generated, and the size of the data. Alternatively, the learning data selector 1310-3 may select the data according to a reference predetermined by the learning of the model learner 1310-4, described below.
The model learner 1310-4 may learn references on how to determine a situation based on the learning data. In addition, the model learner 1310-4 may learn references on which learning data to use for determining the situation. For example, the model learner 1310-4 may learn a determination model according to a supervised or unsupervised learning method, to generate a data recognition model for predicting, determining, or estimating. The data recognition model may be, for example, a set of models for estimating the user's transfer intent, or a set of models for estimating the user's payment intent.
Moreover, the model learner 1310-4 may train the data recognition model for determining the situation by using the learning data. The data recognition model may be a pre-built model; for example, it may be a model pre-built by receiving basic learning data (for example, sample data).
The data recognition model may be built in consideration of the application field of the recognition model, the purpose of learning, or the computing performance of the device. The data recognition model may be, for example, a neural-network-based model designed to simulate the structure of a human brain on a computer. The data recognition model may include a plurality of network nodes with weights that simulate the neurons of a human neural network, and the network nodes may form connections to simulate the synaptic activity of neurons sending and receiving signals via synapses. The data recognition model may include, for example, a neural network model or a deep learning model developed from a neural network model. In a deep learning model, the network nodes may be located at different depths (or layers) and may exchange data according to convolutional connection relationships. For example, models such as a deep neural network (DNN), a recurrent neural network (RNN), or a bidirectional recurrent deep neural network (BRDNN) may be used as the data recognition model, but the present disclosure is not limited thereto.
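A minimal, purely illustrative stand-in for such a trained recognition model is sketched below: a single-layer (logistic regression) classifier over bag-of-words features, trained by gradient descent as in the error back-propagation and gradient descent methods the text mentions. A real DNN, RNN, or BRDNN would replace this toy model; every name here is an assumption for illustration.

```python
import math

# Toy intent classifier: label 1 = transfer intent, label 0 = other.
def featurize(sentence, vocab):
    words = sentence.lower().split()
    return [1.0 if w in words else 0.0 for w in vocab]

def train(samples, vocab, epochs=200, lr=0.5):
    """samples: list of (sentence, label); trains weights by gradient descent."""
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for sentence, label in samples:
            x = featurize(sentence, vocab)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid output
            grad = p - label                     # gradient of the log loss
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(sentence, vocab, w, b):
    x = featurize(sentence, vocab)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5      # True = transfer intent
```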
According to various example embodiments, when there are a plurality of pre-built data recognition models, the model learner 1310-4 may determine a data recognition model whose basic learning data is highly relevant to the input learning data as the data recognition model to be trained. In this case, the basic learning data may be pre-classified by data type, and a data recognition model may be pre-built for each data type. For example, the basic learning data may be pre-classified by various references, such as the region where the learning data was generated, the time the learning data was generated, the size of the learning data, the genre of the learning data, the creator of the learning data, the type of object in the learning data, and so on.
In addition, the model learner 1310-4 may train the data recognition model by using a learning algorithm including, for example, error back-propagation or gradient descent.
In addition, model learning device 1310-4 can pass through supervised learning by using learning data as input value
Carry out learning data identification model.In addition, model learning device 1310-4 can by unsupervised learning come learning data identification model,
To find the reference for certain situation for example, by learning a type of data needed for determining own situation.In addition,
Model learning device 1310-4 can by intensified learning (such as by using based on study the case where definitive result whether just
True feedback), carry out learning data identification model.
The learning data may include a voice input of the user or of a third party, a sentence by which the voice of the user or of the third party is recognized, a sentence typed by the user or by the third party, and the like. In addition, the learning data may include a learning object associated with the voice input or the sentence. Various examples of the learning object have been described in detail with reference to FIG. 11, and thus a redundant description thereof is omitted.
In addition, when the data identification model is trained, the model learner 1310-4 may store the trained data identification model. In this case, the model learner 1310-4 may store the trained data identification model in a memory (for example, the memory 12 of the device 10 described above) of an electronic device including the data identifier 1320. Alternatively, the model learner 1310-4 may store the trained data identification model in a memory of a server connected, via a wired or wireless network, to the electronic device (for example, the device 10 described above).
In this case, the memory storing the trained data identification model may also store, for example, instructions or data associated with at least one other component of the electronic device. The memory may also store software and/or a program. The program may include, for example, a kernel, middleware, an application programming interface (API), and/or an application program (or "application").
The model evaluator 1310-5 may input evaluation data to the data identification model, and if a recognition result output from the evaluation data does not satisfy a predetermined reference, the model evaluator 1310-5 may cause the model learner 1310-4 to learn again. In this case, the evaluation data may be predetermined data for evaluating the data identification model.
For example, the model evaluator 1310-5 may evaluate the trained data identification model as not satisfying the predetermined reference when, among the recognition results of the trained data identification model for the evaluation data, the number or ratio of evaluation data with incorrect recognition results exceeds a preset threshold. For example, when the predetermined reference is defined as a ratio of 2%, and the trained data identification model outputs incorrect recognition results for more than 20 pieces of evaluation data out of a total of 1000, the model evaluator 1310-5 may evaluate the trained data identification model as unsuitable.
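The 2%-of-1000 evaluation rule above is simple enough to state directly. The following is an illustrative, non-limiting sketch under assumed names, where each entry of `results` records whether one recognition result was correct:

```python
def model_fails_evaluation(results, max_error_ratio=0.02):
    """Return True when the share of incorrect recognition results
    exceeds the predetermined reference ratio (2% in the example)."""
    errors = sum(1 for correct in results if not correct)
    return errors / len(results) > max_error_ratio
```

With 1000 evaluation results, 21 errors (2.1%) fails the reference, while exactly 20 errors (2.0%) does not exceed it, matching the "more than 20" wording above.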
On the other hand, when there are a plurality of trained data identification models, the model evaluator 1310-5 may evaluate whether each of the trained data identification models satisfies the predetermined reference, and may determine a model satisfying the predetermined reference as the final data identification model. In this case, when there are a plurality of models satisfying the predetermined reference, the model evaluator 1310-5 may determine, as the final data identification model, any one model or a predetermined number of models preset in descending order of evaluation score.
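The final-model selection described above — filter by the predetermined reference, then take the top models by descending evaluation score — can be sketched as follows (an illustrative assumption, not the claimed implementation; models are represented as `(name, score)` pairs):

```python
def select_final_model(scored_models, reference, top_n=1):
    """Keep models whose evaluation score meets the predetermined
    reference, then return the best top_n in descending score order."""
    passing = [m for m in scored_models if m[1] >= reference]
    passing.sort(key=lambda m: m[1], reverse=True)
    return passing[:top_n]

models = [("model_a", 0.91), ("model_b", 0.97), ("model_c", 0.88)]
best = select_final_model(models, reference=0.90)
```

Here `best` holds only `("model_b", 0.97)`; raising `top_n` to 2 would also admit `model_a`, mirroring the "predetermined number of models" option.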
Meanwhile, at least one of the data acquirer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 in the data learner 1310 may be fabricated in at least one hardware chip and mounted on an electronic device. For example, at least one of the data acquirer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be manufactured as a dedicated hardware chip for AI, or may be manufactured as part of a conventional general-purpose processor (such as a CPU or an application processor) or a graphics processor (such as a GPU), and may be mounted on the various electronic devices described above.
In addition, the data acquirer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be mounted on a single electronic device or may be mounted on separate electronic devices. For example, some of the data acquirer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be included in an electronic device, and the others may be included in a server.
In addition, at least one of the data acquirer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be implemented as a software module. When at least one of the data acquirer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium. In addition, in this case, the at least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the at least one software module may be provided by the OS, and the others may be provided by the predetermined application.
FIG. 13 is a block diagram of the data identifier 1320 according to some example embodiments.
Referring to FIG. 13, the data identifier 1320 according to some example embodiments may include a data acquirer 1320-1, a preprocessor 1320-2, a recognition data selector 1320-3, a recognition result provider 1320-4, and a model updater 1320-5. In some example embodiments, the data identifier 1320 may essentially include the data acquirer 1320-1 and the recognition result provider 1320-4, and may selectively include at least one of the preprocessor 1320-2, the recognition data selector 1320-3, and the model updater 1320-5.
The data acquirer 1320-1 may acquire data needed to determine a situation. For example, the data acquirer 1320-1 may acquire a voice input of the user, or a sentence by which the voice of the user is recognized. Specifically, the data acquirer 1320-1 may acquire the voice input or sentence of the user for remitting money or making a payment. Alternatively, the data acquirer 1320-1 may acquire voice data or text data including the voice or sentence of the user for remitting money or making a payment.
The preprocessor 1320-2 may preprocess the acquired data so that the acquired data can be used to determine a situation. The preprocessor 1320-2 may process the acquired data into a predetermined format so that the recognition result provider 1320-4, described below, can use the acquired data to determine the situation. For example, the preprocessor 1320-2 may extract learning object values from the voice data according to a predetermined format. For example, the preprocessor 1320-2 may extract learning entity values according to the format of {user information, payee information, remittance amount, remittance instruction} or {payment means, payment item, payment method, payment instruction}.
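To make the extraction format concrete, here is a non-limiting sketch of a preprocessor mapping a recognized sentence onto the {user information, payee information, remittance amount, remittance instruction} format. The sentence pattern, field names, and values are illustrative assumptions only; a real preprocessor would operate on recognized voice data rather than a fixed phrasing.

```python
import re

def extract_remittance_entities(sentence, user):
    """Map a recognized sentence onto the remittance entity format.
    Returns None when the sentence does not match the assumed pattern."""
    match = re.search(r"send (\d+) dollars to (\w+)", sentence)
    if not match:
        return None
    return {
        "user_information": user,
        "payee_information": match.group(2),       # payee name
        "remittance_amount": int(match.group(1)),  # amount in dollars
        "remittance_instruction": "proceed",
    }
```

For example, the sentence "send 50 dollars to Alice" spoken by user "Bob" yields payee information "Alice" and a remittance amount of 50.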
The recognition data selector 1320-3 may select, from the preprocessed data, the data needed to determine a situation. The selected data may be provided to the recognition result provider 1320-4. The recognition data selector 1320-3 may select part or all of the preprocessed data according to a reference preset for determining the situation. For example, the predetermined reference may be determined in consideration of at least one of the attributes of the data, the generation time of the data, the creator of the data, the reliability of the data, the target of the data, the size of the data, and the generation region of the data. Alternatively, the recognition data selector 1320-3 may select data according to a reference predetermined through the learning of the model learner 1310-4.
The recognition result provider 1320-4 may determine a situation by applying the selected data to the data identification model. The recognition result provider 1320-4 may provide a recognition result according to the purpose of the data recognition. The recognition result provider 1320-4 may apply the selected data to the data identification model by using the data selected by the recognition data selector 1320-3 as an input value. In addition, the recognition result may be determined by the data identification model.
For example, when the data identification model is a set of models for estimating the remittance intention of the user, the recognition result provider 1320-4 may estimate, infer, or predict the remittance intention of the user by applying the voice input of the user, or a sentence by which the voice input of the user is recognized, to the data identification model. Alternatively, when the data identification model is a set of models for estimating the payment intention of the user, the recognition result provider 1320-4 may apply the voice input of the user, or a sentence by which the voice input of the user is recognized, to the data identification model to estimate (or infer or predict) the payment intention of the user.
The recognition result provider 1320-4 may obtain an identification entity as a result of estimating the intention of the user. The recognition result provider 1320-4 may provide the obtained identification entity to a processor (for example, the processor 11 of the device 10 of FIG. 2). The processor may determine the intention of the user based on the identification entity and proceed with the process for remitting money or making a payment.
The model updater 1320-5 may update the data identification model based on an evaluation of the recognition result provided by the recognition result provider 1320-4. For example, the model updater 1320-5 may provide the recognition result from the recognition result provider 1320-4 to the model learner 1310-4 so that the model learner 1310-4 can update the data identification model.
Alternatively, the model updater 1320-5 may receive an evaluation (or feedback) about the recognition result from the processor (for example, the processor 11 of the device 10 of FIG. 2). For example, the device 10 may display remittance details according to the remittance intention of the user by applying the voice input of the user to the data identification model.
The user may approve the remittance details or refuse to approve them. For example, if the user approves the remittance details, the user may enter a voice, a fingerprint, an iris scan, a vein image, a face image, or a password. On the other hand, when the user refuses to approve the remittance details, the user may select a cancel button, enter a voice requesting cancellation, or provide no input for a predetermined period of time.
In this case, the user feedback according to the user's approval or refusal may be provided to the model updater 1320-5 as the evaluation of the recognition result. In other words, the user feedback may include information indicating that the determination result of the data identifier 1320 is false, or information indicating that the determination result is true. The model updater 1320-5 may update the determination model by using the obtained user feedback.
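The feedback signal above is binary: approval actions mark the recognition result as true, and rejection actions mark it as false. The following is an illustrative, non-limiting sketch under assumed action names (the listed actions come from the preceding paragraphs; the string labels themselves are assumptions):

```python
def feedback_to_evaluation(user_action):
    """Map the user's response to the displayed remittance details onto
    the evaluation signal: approval inputs mark the recognition result
    correct; cancellation or a timeout marks it incorrect."""
    approvals = {"voice", "fingerprint", "iris_scan", "vein_image",
                 "face_image", "password"}
    rejections = {"cancel_button", "cancel_voice", "timeout"}
    if user_action in approvals:
        return True   # determination result indicated as true
    if user_action in rejections:
        return False  # determination result indicated as false
    raise ValueError("unknown user action: " + user_action)
```

The boolean result is what the model updater 1320-5 would forward to the model learner 1310-4 as the evaluation of the recognition result.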
Meanwhile the data acquisition device 1320-1 in data identifier 1320, preprocessor 1320-2, identification data selection
At least one of device 1320-3, recognition result provider 1320-4 and model modification device 1320-5 can be fabricated at least one
In a hardware chip and install on the electronic device.For example, data acquisition device 1320-1, preprocessor 1320-2, identification data
It selector 1320-3, is that can be manufactured to use by least one of result provider 1320-4 and model modification device 1320-5
In the proprietary hardware chip of AI, or one of traditional common processor (such as CPU or application processor) can be manufactured to
Point or graphics processor (such as GPU), and may be mounted on various electronic devices as described above.
In addition, the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be mounted on a single electronic device or may be mounted on separate electronic devices. For example, some of the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be included in an electronic device, and the others may be included in a server.
In addition, at least one of the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be implemented as a software module. When at least one of the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium. In addition, in this case, the at least one software module may be provided by an OS or by a predetermined application. Alternatively, some of the at least one software module may be provided by the OS, and the others may be provided by the predetermined application.
FIG. 14 is a diagram showing an example of learning and recognizing data through interaction between a device 1000 and a server 2000, according to some non-limiting embodiments.
The device 1000 may correspond to, for example, the device 10 of FIG. 2. The data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 in the data identifier 1320 of the device 1000 may respectively correspond to the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 in the data identifier 1320 of FIG. 13. In addition, the data acquirer 2310, the preprocessor 2320, the learning data selector 2330, the model learner 2340, and the model evaluator 2350 in the data learner 2300 of the server 2000 respectively correspond to the data acquirer 1310-1, the preprocessor 1310-2, the learning data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5.
The device 1000 may interact with the server 2000 through short-range or long-range communication. That the device 1000 and the server 2000 are connected to each other means that the device 1000 and the server 2000 are connected to each other directly, or are connected through another component (for example, at least one of an access point (AP), a hub, a relay, a base station, a router, and a gateway, as a third component).
Referring to FIG. 14, the server 2000 may learn a reference for determining a situation, and the device 1000 may determine the situation based on the learning result of the server 2000.
In this case, the model learner 2340 of the server 2000 may perform the functions of the data learner 1310 shown in FIG. 12. The model learner 2340 of the server 2000 may learn what data to use to determine a predetermined situation and how to determine the situation by using the data. The model learner 2340 may acquire data to be used for learning and may apply the acquired data to the data identification model to learn a reference for determining the situation. For example, the model learner 2340 may train the data identification model by using a voice input or sentence, to generate a set of data identification models for estimating the intention of the user. The generated data identification model may be, for example, a set of models for estimating at least one of the remittance intention and the payment intention of the user.
The recognition result provider 1320-4 of the device 1000 may determine a situation by applying the data selected by the recognition data selector 1320-3 to the data identification model generated by the server 2000. For example, the recognition result provider 1320-4 may send the data selected by the recognition data selector 1320-3 to the server 2000, and may request that the server 2000 determine the situation by applying the selected data to the data identification model. In addition, the recognition result provider 1320-4 may receive, from the server 2000, information about the situation determined by the server 2000. For example, when the selected data includes the voice input of the user or a sentence by which the voice of the user is recognized, the server 2000 may apply the selected data to the set of data identification models for estimating the intention of the user, to obtain an identification entity containing the intention of the user. The server 2000 may provide the obtained entity to the recognition result provider 1320-4 as the information about the determined situation.
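The device/server split just described can be sketched as two small functions: the device's recognition result provider forwards the selected data, and the server applies its model set and returns the identification entity. This is an illustrative assumption only; the server is faked in-process here, whereas the embodiment above performs a network exchange, and every name and the keyword-based "model" are placeholders.

```python
def server_apply_model(selected_data):
    """Stand-in for server 2000: 'apply' the intention-estimation
    model set to the selected data and return an identification entity."""
    if "send" in selected_data:
        return {"intent": "remittance"}
    return {"intent": "unknown"}

def device_recognize(selected_data, send_to_server=server_apply_model):
    """Stand-in for recognition result provider 1320-4 of device 1000:
    forward the selected data and receive the identification entity."""
    entity = send_to_server(selected_data)  # network request in practice
    return entity
```

A voice sentence such as "send 50 dollars to Alice" would thus come back tagged with a remittance intent, which the device's processor then acts on.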
As another example, the recognition result provider 1320-4 of the device 1000 may receive, from the server 2000, the identification model generated by the server 2000, and may determine the situation by using the received identification model. In this case, the recognition result provider 1320-4 of the device 1000 may determine the situation by applying the data selected by the recognition data selector 1320-3 to the data identification model received from the server 2000. For example, when the selected data includes the voice input of the user or a sentence by which the voice of the user is recognized, the recognition result provider 1320-4 of the device 1000 may apply the selected data to the set of data identification models, received from the server, for estimating the intention of the user, to obtain an identification entity containing the intention of the user. Then, the device 1000 may provide the obtained entity to a processor (for example, the processor 11 of FIG. 2) as the information about the determined situation.
The processor 11 may determine the remittance intention or the payment intention of the user based on the identification entity, and may proceed with the process for remitting money or making a payment.
The device 10 according to an example embodiment can remit money to a payee by voice input alone. The device 10 according to an example embodiment can remit money to a payee, without needing to be given the payee's account number, by sending the name, contact information, and amount for the payee to the bank server 20. The device 10 according to an example embodiment can likewise make a payment by voice input alone.
FIGS. 15 and 16 are flowcharts of a network system using a data identification model, according to some non-limiting example embodiments.
In FIGS. 15 and 16, the network system may include a first component 1501 or 1601 and a second component 1502 or 1602. Here, the first component 1501 or 1601 may be the device 1000, and the second component 1502 or 1602 may be the server 2000 storing the data analysis model. Alternatively, the first component 1501 or 1601 may be a general-purpose processor, and the second component 1502 or 1602 may be an AI-dedicated processor. Alternatively, the first component 1501 or 1601 may be at least one application, and the second component 1502 or 1602 may be an OS. In other words, the second component 1502 or 1602 may be more integrated, more dedicated, and lower in delay than the first component 1501 or 1601, may have better performance and more resources, and may be able to process the many operations needed to create, update, or apply the data identification model more quickly and effectively than the first component 1501 or 1601.

In this case, an interface for transmitting/receiving data between the first component 1501 or 1601 and the second component 1502 or 1602 may be defined.
For example, an application programming interface (API) may be defined that has, as factor values (or intermediate values or transfer values), the learning data to be applied to the data identification model. The API may be defined as a set of subroutines or functions that can be called, for any processing, from one protocol (for example, a protocol defined in the device 1000) to another protocol (for example, a protocol defined in the server 2000). In other words, an environment may be provided in which an operation of another protocol can be performed in any one protocol through the API.
In FIG. 15, the first component 1501 may analyze the remittance intention of the user by using the data identification model.
In operation 1511, the first component 1501 may receive the voice of the user uttered with a remittance intention.
In operation 1513, the first component 1501 may send the received voice input, or a sentence by which the received voice is recognized, to the second component 1502. For example, the first component 1501 may use the voice input or sentence as a factor value of an API function provided for using the data identification model. In this case, the API function may send the voice input or sentence to the second component 1502 as recognition data to be applied to the data identification model. Here, the voice input or sentence may be changed and sent according to an agreed communication format.
In operation 1515, the second component 1502 may apply the received voice input or sentence to the set of data identification models for estimating the remittance intention of the user.
As a result of the application, in operation 1517, the second component 1502 may obtain an identification entity. For example, the identification entity may include at least one of user information, payee information (for example, the name of the payee), a remittance amount, and a remittance instruction.
In operation 1519, the second component 1502 may send the identification entity to the first component 1501. Here, the identification entity may be changed and sent according to the agreed communication format.
In operation 1521, the first component 1501 may determine, based on the identification entity, that the voice input of the user has a remittance intention. For example, if a remittance indication value of "proceed with remittance", a payee name, and a remittance amount are included as the identification entity, the first component 1501 may determine that the voice of the user has a remittance intention.
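The check in operation 1521 amounts to inspecting the identification entity for the remittance indication and its required fields. A non-limiting sketch under assumed key names (the field names below are illustrative, not the claimed format):

```python
def has_remittance_intent(entity):
    """Treat the voice as carrying a remittance intention when the
    identification entity holds a 'proceed with remittance' indication
    together with a payee name and a remittance amount."""
    return (
        entity.get("remittance_instruction") == "proceed with remittance"
        and "payee_name" in entity
        and "remittance_amount" in entity
    )
```

A complete entity passes the check; an entity carrying only a cancellation indication, or missing the payee or amount, does not.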
Here, operations 1513 to 1521 may correspond to an embodiment of the process in operation 720 of FIG. 7, in which the device 10 analyzes the received voice to determine the remittance intention of the user.
If it is determined in operation 1521 that the voice of the user has a remittance intention, then in operation 1523 the first component 1501 may search the contact list for a contact corresponding to the name of the payee included in the identification entity. In operations 1525, 1527, and 1529, the first component 1501 may remit money to the account of the payee based on the found contact information of the payee and the approval of the remittance details. The corresponding process corresponds to operations 740 to 760 of FIG. 7, and a redundant description thereof is omitted.
In FIG. 16, the first component 1601 may analyze the payment intention of the user by using the data identification model.
In operation 1611, the first component 1601 may provide payment details. For example, the first component 1601 may display the payment details on a screen or output the payment details by voice. The user can check the payment details shown on the screen and can express a payment intention by voice input.
In operation 1613, the first component 1601 may receive the voice input of the user.
In operation 1615, the first component 1601 may send the received voice input, or a sentence by which the received voice is recognized, to the second component 1602. For example, the first component 1601 may use the voice input or sentence as a factor value of an API function provided for using the data identification model. In this case, the API function may send the voice input or sentence to the second component 1602 as recognition data to be applied to the data identification model. Here, the voice input or sentence may be changed and sent according to an agreed communication format.
In operation 1617, the second component 1602 may apply the received voice or sentence to the set of data identification models for estimating the payment intention of the user.
As a result of the application, in operation 1619, the second component 1602 may obtain an identification entity. For example, the identification entity may include, but is not limited to, at least one of a payment means, a payment item, a payment method, and a payment instruction.
In operation 1621, the second component 1602 may send the identification entity to the first component 1601. Here, the identification entity may be changed and sent according to the agreed communication format.
In operation 1623, the first component 1601 may determine, based on the identification entity, that the voice of the user has a payment intention. For example, if "cancel payment" is included as the payment instruction value of the identification entity, the first component 1601 may determine that the voice of the user has an intention not to proceed with the payment. On the other hand, if "proceed with payment" is included as the payment instruction value of the identification entity, the first component 1601 may determine that the voice of the user has an intention to continue the payment.
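Operation 1623 reduces to reading the payment instruction value out of the identification entity. An illustrative, non-limiting sketch (key and value strings are assumptions mirroring the example above):

```python
def payment_intent(entity):
    """Return True to continue the payment, False to stop it, or None
    when the instruction value is absent or unrecognized."""
    instruction = entity.get("payment_instruction")
    if instruction == "proceed with payment":
        return True
    if instruction == "cancel payment":
        return False
    return None  # intention could not be determined from the entity
```

The `None` branch reflects that a real device would fall back to asking the user again rather than guessing.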
Here, operations 1615 to 1623 may correspond to an embodiment of the process in operation 1030 of FIG. 10 described above, in which the device 10 analyzes the received voice to determine the payment intention of the user.
If it is determined that the voice input of the user has a payment intention, then in operations 1625 and 1627, if user verification by voice succeeds, the first component 1601 may send payment information to a card company. The corresponding process corresponds to operations 1040 and 1050 of FIG. 10, and a redundant description thereof is omitted.
One or more example embodiments may be implemented using a recording medium including computer-executable instructions, such as a program module executed by a computer system. A non-transitory computer-readable recording medium may be any available medium that can be accessed by a computer system, and includes all types of volatile and non-volatile media and separable and non-separable media. In addition, the non-transitory computer-readable recording medium may include all types of computer storage media and communication media. Computer storage media include all types of volatile and non-volatile and separable and non-separable media implemented by any method or technology for storing information, such as computer-readable instructions, data structures, program modules, or other data. Communication media typically include computer-readable instructions, data structures, program modules, other data in a modulated signal, other transmission mechanisms, and any information delivery media.
In addition, the method according to an embodiment may be provided as a computer program product. The computer program product may include a software program, a computer-readable storage medium storing the software program, or a product traded between a seller and a buyer. For example, the computer program product may include a product in the form of a software program distributed electronically (for example, a downloadable application) via the manufacturer of the device 10 or an electronic market (for example, Google Play Store, App Store). For electronic distribution, at least part of the software program may be stored in a storage medium or may be temporarily created. In this case, the storage medium may be a storage medium of the manufacturer, a server of the electronic market, or a relay server.
In addition, in this description, a "unit" may be a hardware component, such as a processor or a circuit, and/or a software component executed by a hardware component such as a processor.
The example embodiments described above are merely illustrative, and it will be appreciated by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the technical spirit of the disclosure. Therefore, the example embodiments should be considered in a descriptive sense only and not for purposes of limitation. For example, each component described as a single type may be implemented in a distributed manner, and similarly, components described as distributed may also be implemented in a combined form.

It should be understood that the example embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each example embodiment should typically be considered as available for other similar features or aspects in other example embodiments.

While one or more example embodiments have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined by the following claims.
Claims (15)
1. A device comprising:
a memory configured to store at least one program;
a microphone configured to receive a voice input; and
at least one processor configured to execute the at least one program to control the device to perform operations for remitting money to a payee, the operations including:
determining a payment intention of a user based on an analysis of the received voice input;
retrieving contact information from a stored contact list based on a name of the payee;
sending the name and contact information of the payee, together with an amount specified in the voice input, to a bank server;
receiving remittance details from the bank server; and
approving the remittance details.
2. The device of claim 1, wherein the determining of the payment intention of the user includes learning a pattern through the voice input of the user when remitting money.
3. The device of claim 1, wherein the at least one processor is further configured to control the device to perform operations including:
verifying that the voice input to the microphone is a voice of the user of the device,
wherein the payment intention of the user is determined based on the voice input to the microphone being verified as the voice of the user of the device.
4. The device of claim 1, wherein the at least one processor is further configured to control the device to perform operations including:
displaying the remittance details including an account of the payee.
5. The device of claim 1, wherein the approving of the remittance details includes approving the remittance details based on at least one of a fingerprint, an iris scan, a face image, and the voice of the user.
6. The device of claim 1, wherein the approving of the remittance details includes approving the remittance details based on a vein pattern image received from a wearable device worn by the user.
7. The device of claim 1, wherein the determining of the payment intention of the user includes applying the received voice input to a set of data identification models to estimate the payment intention of the user.
8. The apparatus according to claim 7, wherein the data recognition model is a model trained based on an artificial intelligence (AI) algorithm using voice input or text as learning data and a learning object, and
wherein the learning object comprises at least one of user information, recipient information, a remittance amount, and a money order.
9. The apparatus according to claim 7, wherein the payment intention of the user is determined based on a recognized entity, the recognized entity being obtained as a result of applying the received voice input to the data recognition model,
wherein the recognized entity comprises at least one of user information, recipient information, a remittance amount, and a money order.
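Claims 7 to 9 describe applying the voice input to a learned data recognition model that outputs recognized entities. The patent's model is trained with an AI algorithm; the rule-based extractor below is only an assumed stand-in used to illustrate the recognized-entity output shape.

```python
import re

# Toy stand-in for the data recognition model of claims 7-9.
ENTITY_PATTERN = re.compile(
    r"(?:send|transfer|remit)\s+(?P<amount>\d+)\s+to\s+(?P<recipient>\w+)", re.I
)

def recognize_entities(text: str) -> dict:
    """Return recognized entities (recipient, amount) from transcribed speech."""
    m = ENTITY_PATTERN.search(text)
    if not m:
        return {}
    return {"recipient": m.group("recipient"), "amount": int(m.group("amount"))}
```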
10. A payment method, comprising:
receiving a voice input of a user;
determining a payment intention of the user based on an analysis of the received voice input;
retrieving contact information from a stored contact list based on a name of a payee included in the voice input;
transmitting the name of the payee, the contact information, and an amount specified in the voice input together to a bank server;
receiving remittance details from the bank server; and
approving the remittance details.
11. The payment method according to claim 10, wherein determining the payment intention of the user comprises learning a pattern from the user's voice input at a time of remitting money.
12. The payment method according to claim 10, further comprising:
verifying that the voice is a voice of a user of a device,
wherein the payment intention of the user is determined based on the voice input to a microphone being verified as the voice of the user of the device.
13. The payment method according to claim 10, further comprising:
displaying the remittance details, including an account of the payee.
14. The payment method according to claim 10, wherein determining the payment intention of the user comprises applying the received voice input to a data recognition model to estimate the payment intention of the user.
15. A computer program product comprising instructions which, when executed, cause a device to:
determine a payment intention of a user by analyzing a voice input of the user;
retrieve contact information from a stored contact list based on a name of a payee specified in the voice input;
transmit the name of the payee, the contact information, and an amount specified in the voice input together to a bank server;
receive remittance details from the bank server; and
approve the remittance details.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20160154879 | 2016-11-21 | ||
KR10-2016-0154879 | 2016-11-21 | ||
KR10-2017-0132758 | 2017-10-12 | ||
KR1020170132758A KR102457811B1 (en) | 2016-11-21 | 2017-10-12 | Device and method for sending money using voice |
PCT/KR2017/013226 WO2018093229A1 (en) | 2016-11-21 | 2017-11-21 | Method and device applying artificial intelligence to send money by using voice input |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109983491A true CN109983491A (en) | 2019-07-05 |
CN109983491B CN109983491B (en) | 2023-12-29 |
Family
ID=62300225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780071950.5A Active CN109983491B (en) | 2016-11-21 | 2017-11-21 | Method and apparatus for applying artificial intelligence to send money by using voice input
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3533015A4 (en) |
KR (1) | KR102457811B1 (en) |
CN (1) | CN109983491B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102623727B1 (en) * | 2018-10-29 | 2024-01-11 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling the electronic device thereof
KR20200067673A (en) * | 2018-12-04 | 2020-06-12 | EWBM Co., Ltd. | Shared AI loudspeaker
CN112201245B (en) * | 2020-09-30 | 2024-02-06 | 中国银行股份有限公司 | Information processing method, device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030026428A (en) * | 2001-09-25 | 2003-04-03 | M-Voice Telesoft Co., Ltd. | Phone banking method using speech recognition
JP2006119851A (en) * | 2004-10-20 | 2006-05-11 | Nec Corp | Registration transfer method, and its system |
US20130080164A1 (en) * | 2011-09-28 | 2013-03-28 | Google Inc. | Selective Feedback For Text Recognition Systems |
KR20140066467A (en) * | 2012-11-23 | 2014-06-02 | Woori Bank Co., Ltd. | Method of processing credit transfer using speech recognition and apparatus performing the same
US20140172694A1 (en) * | 2012-12-17 | 2014-06-19 | Capital One Financial Corporation | Systems and methods for effecting personal payment transactions |
KR20140003840U (en) * | 2012-12-13 | 2014-06-23 | Korea Electric Power Corporation | Portable metering error test device
US20150149354A1 (en) * | 2013-11-27 | 2015-05-28 | Bank Of America Corporation | Real-Time Data Recognition and User Interface Field Updating During Voice Entry |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4240807B2 (en) * | 2000-12-25 | 2009-03-18 | 日本電気株式会社 | Mobile communication terminal device, voice recognition method, and recording medium recording the program |
US20030229588A1 (en) * | 2002-06-05 | 2003-12-11 | Pitney Bowes Incorporated | Voice enabled electronic bill presentment and payment system |
KR20030012912A (en) * | 2003-01-09 | 2003-02-12 | 이호권 | Remittance service system by mobile phone |
WO2008013657A2 (en) * | 2006-06-28 | 2008-01-31 | Planet Payment, Inc. | Telephone-based commerce system and method |
GB2476054A (en) * | 2009-12-08 | 2011-06-15 | Voice Commerce Group Technologies Ltd | Voice authentication of bill payment transactions |
KR20130082645A (en) * | 2011-12-13 | 2013-07-22 | 장형윤 | Voice recognition of smart phone banking |
KR20140061047A (en) * | 2012-11-13 | 2014-05-21 | 한국전자통신연구원 | Terminal apparatus for controlling medical equipment based on voice recognition and method for the same |
KR20150011293A (en) * | 2013-07-22 | 2015-01-30 | 김종규 | Biometric authentication Electronic Signature Service methods Using an instant messenger |
2017
- 2017-10-12 KR KR1020170132758A patent/KR102457811B1/en active IP Right Grant
- 2017-11-21 CN CN201780071950.5A patent/CN109983491B/en active Active
- 2017-11-21 EP EP17872539.6A patent/EP3533015A4/en not_active Ceased
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11397956B1 (en) | 2020-10-26 | 2022-07-26 | Wells Fargo Bank, N.A. | Two way screen mirroring using a smart table |
US11429957B1 (en) | 2020-10-26 | 2022-08-30 | Wells Fargo Bank, N.A. | Smart table assisted financial health |
US11457730B1 (en) | 2020-10-26 | 2022-10-04 | Wells Fargo Bank, N.A. | Tactile input device for a touch screen |
US11572733B1 (en) | 2020-10-26 | 2023-02-07 | Wells Fargo Bank, N.A. | Smart table with built-in lockers |
US11687951B1 (en) | 2020-10-26 | 2023-06-27 | Wells Fargo Bank, N.A. | Two way screen mirroring using a smart table |
US11727483B1 (en) | 2020-10-26 | 2023-08-15 | Wells Fargo Bank, N.A. | Smart table assisted financial health |
US11740853B1 (en) | 2020-10-26 | 2023-08-29 | Wells Fargo Bank, N.A. | Smart table system utilizing extended reality |
US11741517B1 (en) | 2020-10-26 | 2023-08-29 | Wells Fargo Bank, N.A. | Smart table system for document management |
US11969084B1 (en) | 2020-10-26 | 2024-04-30 | Wells Fargo Bank, N.A. | Tactile input device for a touch screen |
Also Published As
Publication number | Publication date |
---|---|
KR102457811B1 (en) | 2022-10-24 |
CN109983491B (en) | 2023-12-29 |
EP3533015A4 (en) | 2019-11-27 |
EP3533015A1 (en) | 2019-09-04 |
KR20180057507A (en) | 2018-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109983491A (en) | Method and apparatus for applying artificial intelligence to send money by using voice input | |
US11605081B2 (en) | Method and device applying artificial intelligence to send money by using voice input | |
US11501161B2 (en) | Method to explain factors influencing AI predictions with deep neural networks | |
US11392481B1 (en) | AI for evaluation and development of new products and features | |
US20200258141A1 (en) | Dynamic checkout page optimization using machine-learned model | |
US20200394362A1 (en) | Apparatus and method for providing sentence based on user input | |
EP3680823A1 (en) | System, method, and computer program product for incorporating knowledge from more complex models in simpler models | |
US11087396B1 (en) | Context aware predictive activity evaluation | |
US11847572B2 (en) | Method, system, and computer program product for detecting fraudulent interactions | |
CN109154945A (en) | New connection based on data attribute is recommended | |
US20230290343A1 (en) | Electronic device and control method therefor | |
US20220215393A1 (en) | Real-time updating of a security model | |
US11756020B1 (en) | Gesture and context interpretation for secure interactions | |
US11941594B2 (en) | User interaction artificial intelligence chat engine for integration of automated machine generated responses | |
WO2023069244A1 (en) | System, method, and computer program product for denoising sequential machine learning models | |
US20180330240A1 (en) | From Alien Streams | |
KR102453673B1 (en) | System for sharing or selling machine learning model and operating method thereof | |
CN109784733A (en) | User credit prediction technique, device, electronic equipment and storage medium | |
US20240152584A1 (en) | Authentication data aggregation | |
US20230401417A1 (en) | Leveraging multiple disparate machine learning model data outputs to generate recommendations for the next best action | |
US20230401416A1 (en) | Leveraging multiple disparate machine learning model data outputs to generate recommendations for the next best action | |
US20230153774A1 (en) | Universal payment intent | |
US20230353524A1 (en) | Engaging unknowns in response to interactions with knowns | |
US20240160480A1 (en) | Systems and methods providing multi-channel cognitive virtual assistance for resource transfer requests | |
US20230351416A1 (en) | Using machine learning to leverage interactions to generate hyperpersonalized actions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||