CN109906466A - Information processing equipment and information processing method

Information processing equipment and information processing method

Info

Publication number
CN109906466A
CN109906466A (Application No. CN201780067884.4A)
Authority
CN
China
Prior art keywords
user
voice
application service
voice messaging
information processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201780067884.4A
Other languages
Chinese (zh)
Other versions
CN109906466B (English)
Inventor
井原圭吾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp
Publication of CN109906466A
Application granted
Publication of CN109906466B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q 30/00 - Commerce
                    • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
                    • G06Q 30/06 - Buying, selling or leasing transactions
        • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
            • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
                • G10L 13/00 - Speech synthesis; Text to speech systems
                • G10L 15/00 - Speech recognition
                    • G10L 15/08 - Speech classification or search
                        • G10L 15/10 - Speech classification or search using distance or distortion measures between unknown speech and reference templates
                    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
                • G10L 17/00 - Speaker identification or verification techniques
                    • G10L 17/06 - Decision making techniques; Pattern matching strategies
                        • G10L 17/08 - Use of distortion metrics or a particular distance between probe pattern and reference templates
                    • G10L 17/22 - Interactive procedures; Man-machine interfaces
    • H - ELECTRICITY
        • H04 - ELECTRIC COMMUNICATION TECHNIQUE
            • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
                • H04R 1/00 - Details of transducers, loudspeakers or microphones
                    • H04R 1/20 - Arrangements for obtaining desired frequency or directional characteristics
                        • H04R 1/32 - Arrangements for obtaining a desired directional characteristic only
                            • H04R 1/40 - Arrangements for obtaining a desired directional characteristic only by combining a number of identical transducers
                                • H04R 1/406 - Arrangements for obtaining a desired directional characteristic only by combining a number of identical transducers (microphones)
                • H04R 3/00 - Circuits for transducers, loudspeakers or microphones
                    • H04R 3/005 - Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Otolaryngology (AREA)
  • Game Theory and Decision Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Telephonic Communication Services (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

[Problem] To provide information processing equipment and an information processing method capable of collecting speech uttered by users and identifying a specific user on the basis of the number of times the user has spoken within a predetermined period. [Solution] Information processing equipment including: a communication unit capable of receiving voice information regarding voices collected by a plurality of discretely arranged microphones; and a control unit that determines a user identified on the basis of the voice information to be a specific user who has spoken a predetermined number of times or more within at least a certain period, the voice information relating to a voice collected by a particular microphone among the plurality of microphones and being received via the communication unit, wherein the control unit performs control such that voice information to be delivered to the specific user is transmitted via the communication unit to a loudspeaker corresponding to the particular microphone.

Description

Information processing equipment and information processing method
Technical field
The present disclosure relates to information processing equipment and an information processing method.
Background art
Conventionally, there are services that use an application running on a smartphone or the like to measure a user's visit frequency by means of positioning technology and to award the user visit points, benefit information, and the like.
With regard to customer identification technology, for example, Patent Document 1 below discloses an online karaoke system that identifies a customer by extracting personal information registered in advance in a customer database on the basis of voice feature data generated by analyzing the customer's karaoke singing, and outputs a message addressed to that customer.
In addition, Patent Document 2 below discloses an online karaoke system that reads a customer ID from an ID card and outputs a message to the customer on the basis of a result of analyzing the content recorded for that customer.
Furthermore, Patent Document 3 below discloses a customer management device that reads customer information from a customer recording medium such as a point card, calculates the customer's visit rate from the number of times the customer has visited an amusement hall and the number of business days of the amusement hall, accurately determines whether or not the customer is a regular, and uses the determination result for the management strategy of the amusement hall.
Citation list
Patent documents
Patent Document 1: JP 2011-43715A
Patent Document 2: JP 2004-46233A
Patent Document 3: JP 2001-300099A
Summary of the invention
Technical problem
However, all of the above technologies require customer information to be registered in advance, and users are reluctant to register personal information. Moreover, in order to receive the service, the user has to present an ID card or a point card when visiting the shop, which is troublesome.
In addition, systems that require a smartphone and an application have the problem that elderly people who are not comfortable operating such devices cannot use them.
In view of the above, the present disclosure proposes information processing equipment and an information processing method capable of collecting speech uttered by a user and identifying a specific user on the basis of the number of times the user has spoken within a predetermined period.
Solution to the problem
According to the present disclosure, there is proposed information processing equipment including: a communication unit capable of receiving voice information regarding voices collected by a plurality of discretely arranged microphones; and a control unit configured to determine a user identified on the basis of the voice information to be a specific user who has spoken a predetermined number of times or more within at least a certain period, in which the voice information regarding a voice collected by a particular microphone among the plurality of microphones is received via the communication unit, and the control unit performs control such that voice information to be delivered to the specific user is transmitted via the communication unit to a loudspeaker corresponding to the particular microphone.
According to the present disclosure, there is proposed an information processing method including: determining, by a processor, a user identified on the basis of voice information to be a specific user who has spoken a predetermined number of times or more within at least a certain period, the voice information regarding a voice collected by a particular microphone among a plurality of discretely arranged microphones and being received via a communication unit capable of receiving voice information regarding voices collected by the plurality of microphones; and performing control such that voice information to be delivered to the specific user is transmitted via the communication unit to a loudspeaker corresponding to the particular microphone.
Advantageous effects of the invention
As described above, according to the present disclosure, it is possible to collect speech uttered by a user and to identify a specific user on the basis of the number of times the user has spoken within a predetermined period.
Note that the above effects are not necessarily limitative. Together with or instead of the above effects, any of the effects described in this specification, or other effects that can be understood from this specification, may be achieved.
Brief description of drawings
[Fig. 1] is a diagram for describing an overview of an information processing system according to an embodiment of the present disclosure.
[Fig. 2] is a diagram illustrating an example of the overall configuration of the information processing system according to the embodiment.
[Fig. 3] is a diagram illustrating an example of the configuration of a terminal device according to the embodiment.
[Fig. 4] is a diagram illustrating an example of the configuration of a server according to the embodiment.
[Fig. 5] is a diagram illustrating an example of an application service management table according to the embodiment.
[Fig. 6] is a diagram illustrating an example of an application service keyword list according to the embodiment.
[Fig. 7] is a diagram illustrating an example of an application service terminal list according to the embodiment.
[Fig. 8] is a diagram illustrating an example of a user management table according to the embodiment.
[Fig. 9] is a diagram illustrating an example of a user keyword history according to the embodiment.
[Fig. 10] is a diagram illustrating an example of a user recognition history according to the embodiment.
[Fig. 11] is a sequence diagram illustrating a registration process for an application service according to the embodiment.
[Fig. 12] is a sequence diagram illustrating a response process of the information processing system according to the embodiment.
[Fig. 13] is a sequence diagram illustrating a response process of the information processing system according to the embodiment.
[Fig. 14] is a diagram for describing an overview of a first embodiment.
[Fig. 15] is a flowchart illustrating a high-quality user determination process according to the first embodiment.
[Fig. 16] is a flowchart illustrating a response voice data generation process according to the first embodiment.
[Fig. 17] is a diagram for describing an overview of a second embodiment.
[Fig. 18] is a flowchart illustrating a high-quality user determination process according to the second embodiment.
[Fig. 19] is a flowchart illustrating a response voice data generation process according to the second embodiment.
[Fig. 20] is a diagram for describing an application example of the second embodiment.
[Fig. 21] is a diagram for describing an overview of a third embodiment.
[Fig. 22] is a flowchart illustrating a high-quality user determination process according to the third embodiment.
[Fig. 23] is a flowchart illustrating a response voice data generation process according to the third embodiment.
[Fig. 24] is a diagram for describing an overview of Application Example 1 of the third embodiment.
[Fig. 25] is a flowchart illustrating a high-quality user determination process according to Application Example 1 of the third embodiment.
[Fig. 26] is a flowchart illustrating a response voice data generation process according to Application Example 1 of the third embodiment.
[Fig. 27] is a sequence diagram illustrating a management process for an amusement history according to Application Example 1 of the third embodiment.
[Fig. 28] is a diagram for describing an overview of Application Example 2 of the third embodiment.
[Fig. 29] is a flowchart illustrating a high-quality user determination process according to Application Example 2 of the third embodiment.
[Fig. 30] is a flowchart illustrating a response voice data generation process according to Application Example 2 of the third embodiment.
Description of embodiments
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the accompanying drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted.
In addition, the description will be given in the following order.
1. Overview of an information processing system according to an embodiment of the present disclosure
2. Configuration
2-1. Configuration of the terminal device 1
2-2. Configuration of the server 2
3. Operation processing
3-1. Registration process
3-2. Response process
4. Embodiments
4-1. First embodiment
4-2. Second embodiment
4-3. Third embodiment
(4-3-1. Application Example 1)
(4-3-2. Application Example 2)
5. Conclusion
<<1. Overview of an information processing system according to an embodiment of the present disclosure>>
Fig. 1 is a diagram for describing an overview of an information processing system according to an embodiment of the present disclosure. As shown in Fig. 1, the information processing system according to the present embodiment collects a user's uttered voice through a terminal device 1 having voice input and output functions, and outputs a predetermined response voice to a user determined to be a specific user who satisfies a predetermined condition.
Specifically, for example, when a user murmurs in a shop, "I wonder if there's anything I'd like here," the voice is collected by the voice input unit 12 (microphone) of the terminal device 1 installed in the shop, the user is identified by analyzing the voiceprint of the user's voice information, and it is determined whether the user is a regular customer (for example, whether speech of the user identified by the voiceprint analysis has been recognized in the shop on a predetermined number of days within a predetermined period). Then, in a case where the user satisfies the determination condition, the user is determined to be a regular customer, and a response specific to regular customers (for example, "A special sale on steak, exclusively for our valued customers") is output as voice from the voice output unit 13 (loudspeaker) of the terminal device 1, as shown in Fig. 1.
In this way, in the present embodiment, since the user is reliably identified by analyzing the uttered voice, there is no need to register personal information such as the user's name and address in advance, and a regular customer can be determined without the user presenting an ID card, a point card, or the like. In addition, an ID card or point card is usually issued at the time of a purchase in a shop, but with voice-based identification as in the present embodiment, a previous purchase is not necessarily required. Furthermore, the user's uttered voice can be collected when the user talks with a shop clerk or another customer in the shop, greets someone, talks to himself or herself, murmurs, or converses with a voice agent. Moreover, compared with facial recognition in which the face is imaged by a camera, customers (users) tend to feel less psychological resistance to voice-based personal identification.
Next, the overall configuration of such an information processing system according to the present embodiment will be described with reference to Fig. 2. Fig. 2 is a diagram illustrating an example of the overall configuration of the information processing system according to the present embodiment.
As shown in Fig. 2, the information processing system according to the present embodiment includes a server 2 and a plurality of terminal devices 1 arranged at various locations (here, three terminal devices 1a to 1c are shown as an example). A number of terminal devices 1 are assumed to be installed at various places in town, such as shopping streets, department stores, restaurants, clothing shops, and amusement arcades. The form of the terminal device 1 is not particularly limited; for example, the terminal device 1 may be provided in a stuffed toy, a game machine, a robot in a shop, a local mascot costume, or the like.
The server 2 is connected to the plurality of terminal devices 1 via a network 3, and transmits and receives data to and from them. In addition, the server 2 performs voiceprint analysis on voice information received from the plurality of terminal devices 1, identifies the user (personal identification), and further determines whether the user is a high-quality user who satisfies a predetermined condition. In a case where the user is determined to be a high-quality user, the server 2 acquires response voice data for the high-quality user and transmits the response voice data to the terminal device 1. The response voice data for the high-quality user is acquired from the corresponding predetermined application service server 4 (4a to 4c). The application service server 4 is a server corresponding to an application service applied to each terminal device 1, and holds information associating the IDs (terminal IDs) of the terminal devices 1 to which the application service is to be applied with the service, as well as the high-quality user determination condition. In addition, in response to a request from the server 2, the application service server 4 generates response voice data for the high-quality user (important-customer information or the like) and transmits the generated response voice data to the server 2.
The information processing system according to the embodiment of the present disclosure has been described above. Next, the specific configuration of each device included in the information processing system according to the present embodiment will be described with reference to the drawings.
<<2. Configuration>>
<2-1. Configuration of the terminal device 1>
Fig. 3 is a block diagram illustrating an example of the configuration of the terminal device 1 according to the present embodiment. As shown in Fig. 3, the terminal device 1 includes a control unit 10, a communication unit 11, a voice input unit 12, a voice output unit 13, and a storage unit 14.
The control unit 10 functions as an arithmetic processing device and a control device, and controls the overall operation in the terminal device 1 according to various programs. The control unit 10 is realized by, for example, an electronic circuit such as a central processing unit (CPU) or a microprocessor. In addition, the control unit 10 may include a read-only memory (ROM) that stores programs to be used, calculation parameters, and the like, and a random access memory (RAM) that temporarily stores parameters that change as appropriate.
The control unit 10 according to the present embodiment performs control such that voice information collected by the voice input unit 12 (specifically, voice information of the user's utterances) is sequentially transmitted from the communication unit 11 to the server 2. Thereby, for example, a voice uttered by the user in a shop is automatically transmitted to the server 2, and a determination is made as to whether the user is a high-quality user (such as a regular or important customer). The voice information transmitted to the server 2 may be raw voice data, or may be processed voice data that has undergone processing such as encoding or feature extraction. In addition, the control unit 10 performs control such that voice information received from the server 2 (specifically, response voice data for the high-quality user) is reproduced from the voice output unit 13. Information can thereby be presented to the high-quality user.
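As a rough illustration of this capture-and-forward behavior, the following sketch shows one way the control unit 10 side of a terminal device could stream a captured utterance to the server and play back any response it receives. The class and function names, the message format, and the `send_to_server` transport are assumptions made for illustration only and are not defined in the patent.

```python
import base64
import json

def send_to_server(payload: dict) -> dict:
    """Stand-in for communication with the server 2 over the network 3 (assumption)."""
    print("-> server 2:", json.dumps(payload)[:60], "...")
    # Pretend the server decided this speaker is a high-quality user.
    return {"response_voice": "Thank you for coming again!"}

class TerminalDevice:
    """Minimal sketch of the terminal device 1 (control unit 10 behaviour)."""

    def __init__(self, terminal_id: str):
        self.terminal_id = terminal_id

    def on_utterance_captured(self, audio: bytes) -> None:
        # Sequentially forward collected voice information together with the terminal ID.
        payload = {
            "terminal_id": self.terminal_id,
            "audio": base64.b64encode(audio).decode("ascii"),
        }
        reply = send_to_server(payload)
        # Reproduce any response voice data from the voice output unit 13.
        if reply.get("response_voice"):
            self.play(reply["response_voice"])

    def play(self, response: str) -> None:
        print(f"[{self.terminal_id} loudspeaker] {response}")

if __name__ == "__main__":
    TerminalDevice("dev0001").on_utterance_captured(b"\x00\x01fake-pcm")
```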
In addition, the control unit 10 may have a voice agent function that automatically responds to the user's utterances. Response patterns to user utterances may be stored in the storage unit 14 or may be acquired from the server 2.
The voice input unit 12 is realized by a microphone, a microphone amplifier unit that amplifies the voice signal obtained by the microphone, and an A/D converter that converts the voice signal into a digital signal, and the voice input unit 12 outputs the voice signal to the control unit 10.
The voice output unit 13 includes a loudspeaker that reproduces voice and an amplifier circuit for the loudspeaker.
The communication unit 11 is connected to the network 3 in a wired or wireless manner, and transmits and receives data to and from the server 2 via the network. For example, the communication unit 11 establishes a communication connection with the network 3 through a wired/wireless local area network (LAN), Wi-Fi (registered trademark), a cellular communication network (long term evolution (LTE), third-generation mobile communication (3G)), or the like.
The storage unit 14 is realized by a read-only memory (ROM) that stores programs, calculation parameters, and the like used in the processing of the control unit 10, and a random access memory (RAM) that temporarily stores parameters that change as appropriate.
The configuration of the terminal device 1 according to the present embodiment has been described above in detail. Note that the configuration of the terminal device 1 is not limited to the example shown in Fig. 3. For example, at least one of the voice input unit 12 and the voice output unit 13 may be provided separately from the terminal device 1.
<2-2. Configuration of the server 2>
Fig. 4 is a block diagram illustrating an example of the configuration of the server 2 according to the present embodiment. As shown in Fig. 4, the server 2 (information processing equipment) includes a control unit 20, a network communication unit 21, an application service server interface (I/F) 22, and a storage unit 23.
(Control unit 20)
The control unit 20 functions as an arithmetic processing device and a control device, and controls the overall operation in the server 2 according to various programs. The control unit 20 is realized by, for example, an electronic circuit such as a central processing unit (CPU) or a microprocessor. In addition, the control unit 20 may include a read-only memory (ROM) that stores programs to be used, calculation parameters, and the like, and a random access memory (RAM) that temporarily stores parameters that change as appropriate.
As shown in Fig. 4, the control unit 20 according to the present embodiment functions as an application service management unit 20a, a user information management unit 20b, a voiceprint analysis unit 20c, a voice recognition unit 20d, a user identification unit 20e, a high-quality user determination unit 20f, and a response voice data acquisition unit 20g.
The application service management unit 20a manages information about application services (for example, reading and writing of data) using an application service management table, an application service keyword list, and an application service terminal list stored in the storage unit 23. The information about each application service is acquired from the corresponding application service server 4 via the application service server I/F 22.
Here, Fig. 5 shows an example of the application service management table according to the present embodiment. As shown in Fig. 5, the application service management table stores an application name and a high-quality user determination condition in association with an application service ID. The application service ID is identification information of the application service. The application name is the name of the application service. The high-quality user determination condition is the condition for determining the high-quality users targeted by the application service; examples include the number of visits within a predetermined period (the number of days on which the user has spoken) and the number of times the user has said a predetermined keyword. A plurality of high-quality user determination conditions may also be set. For example, in the example shown in Fig. 5, in application service ID: app0001, "ABC shopping street campaign for important customers", a person who has ordered dried beef ten or more times in one month may be determined to be a high-quality user targeted by a special sale on dried beef, and a person who has ordered steak five or more times in one month may be determined to be a high-quality user targeted by a special sale on steak.
In addition, Fig. 6 shows an example of the application service keyword list. As shown in Fig. 6, the application service keyword list is a list of keywords associated with application service IDs; specifically, it is a list of the keywords used in the high-quality user determination. Note that the number of keywords associated with each application service ID is not limited to one, and a plurality of keywords may be associated with each application service ID. Furthermore, keywords that are narrower concepts of a particular keyword may also be associated.
In addition, Fig. 7 shows an example of the application service terminal list. The application service terminal list is a list of terminal devices associated with application service IDs. For example, in the example shown in Fig. 7, the IDs dev0001, dev0002, and so on of the terminal devices to which the application service with application service ID: app0001 is to be applied are registered.
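For concreteness, the sketch below models the three application-service tables of Figs. 5 to 7 as plain data structures. The field names and the sample values mirror the examples given for app0001 and app0002, but the exact schema is an assumption made for illustration, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DeterminationCondition:
    """One high-quality user determination condition (assumed representation)."""
    keyword: Optional[str]   # None means any utterance counts (visit-based condition)
    period_days: int         # length of the evaluation window
    min_count: int           # required number of keyword utterances / speaking days

@dataclass
class ApplicationService:
    service_id: str
    name: str
    conditions: List[DeterminationCondition] = field(default_factory=list)
    keywords: List[str] = field(default_factory=list)      # application service keyword list (Fig. 6)
    terminal_ids: List[str] = field(default_factory=list)  # application service terminal list (Fig. 7)

# Example contents corresponding to the application service management table (Fig. 5).
services = {
    "app0001": ApplicationService(
        service_id="app0001",
        name="ABC shopping street campaign for important customers",
        conditions=[DeterminationCondition("dried beef", period_days=30, min_count=10),
                    DeterminationCondition("steak", period_days=30, min_count=5)],
        keywords=["dried beef", "steak"],
        terminal_ids=["dev0001", "dev0002"],
    ),
    "app0002": ApplicationService(
        service_id="app0002",
        name="DD shopping mall visit-appreciation campaign",
        conditions=[DeterminationCondition(None, period_days=7, min_count=5)],
        terminal_ids=["dev0101"],
    ),
}
```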
The user information management unit 20b manages information about users using a user management table stored in the storage unit 23. The information about a user includes a user ID assigned to each user by the system side, a voiceprint analysis result (voiceprint data), a history of keywords the user has said, and a user recognition history. Hereinafter, specific examples of the user information will be described with reference to Figs. 8 to 10.
Fig. 8 is a diagram illustrating an example of the user management table according to the present embodiment. As shown in Fig. 8, the user management table includes the user ID assigned to each user, the user's voiceprint data, and an "application service data" area that each application service can use freely. By storing application service data in association with each user ID, cooperation with functions uniquely provided by each application can be realized.
Fig. 9 is a diagram illustrating an example of the user keyword history. As shown in Fig. 9, in the user keyword history, the time and date at which the user said a predetermined keyword (time and date of registration) and the corresponding application service ID are accumulated in association with the user ID. It can thus be determined whether the user has said a predetermined keyword within a predetermined period.
Fig. 10 is a diagram illustrating an example of the user recognition history. As shown in Fig. 10, in the user recognition history, the ID of the terminal device that recognized the user and the time and date of recognition (time and date of registration) are accumulated in association with the user ID. From this, it can be determined, for example, how many times the user has visited a shopping street or a shopping center.
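A minimal sketch of the per-user records of Figs. 8 to 10 follows, again with assumed field names: the user management table keyed by user ID, plus the keyword history and recognition history that drive the later determination. The voiceprint is shown as a feature vector purely for illustration; the patent leaves its form open.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class KeywordEvent:          # one row of the user keyword history (Fig. 9)
    keyword: str
    service_id: str
    registered_at: datetime

@dataclass
class RecognitionEvent:      # one row of the user recognition history (Fig. 10)
    terminal_id: str
    registered_at: datetime

@dataclass
class UserRecord:            # one row of the user management table (Fig. 8)
    user_id: str
    voiceprint: List[float]                                    # voiceprint data (assumed form)
    app_data: Dict[str, dict] = field(default_factory=dict)    # free "application service data" area
    keyword_history: List[KeywordEvent] = field(default_factory=list)
    recognition_history: List[RecognitionEvent] = field(default_factory=list)

# Example: register a new user and record one recognition at terminal dev0001.
users: Dict[str, UserRecord] = {}
users["user0001"] = UserRecord(user_id="user0001", voiceprint=[0.12, -0.53, 0.88])
users["user0001"].recognition_history.append(
    RecognitionEvent(terminal_id="dev0001", registered_at=datetime(2017, 10, 2, 18, 30)))
```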
The voiceprint analysis unit 20c performs voiceprint analysis on the voice information of the user's utterance received from the terminal device 1 by the network communication unit 21, and obtains voiceprint data (that is, a voiceprint analysis result). In the present embodiment, the algorithm used for the voiceprint analysis is not particularly limited. Voiceprints differ from person to person and therefore allow personal identification.
The voice recognition unit 20d converts the voice information of the user's utterance into text, performs morphological analysis and the like, and performs keyword extraction, semantic understanding, attribute estimation, and the like. The attribute estimation is estimation of the speaker's gender, age, and so on.
The user identification unit 20e identifies the user on the basis of the voiceprint analysis result obtained by the voiceprint analysis unit 20c. Specifically, using the user management table stored in the storage unit 23, the user identification unit 20e compares the voiceprint data associated with each user ID with the voiceprint analysis result, and identifies the user who produced the voice.
Referring to the high-quality user determination conditions stored in the storage unit 23, the high-quality user determination unit 20f determines whether the user identified by the user identification unit 20e is a high-quality user. For example, referring to the user information stored in the storage unit 23 (the user keyword history and the user recognition history), the high-quality user determination unit 20f determines a user who has spoken a predetermined number of times or more within at least a certain period to be a high-quality user (an example of the specific user). In addition, as described with reference to Fig. 5, a high-quality user determination condition is set for each application service. Therefore, the high-quality user determination unit 20f performs the high-quality user determination using the determination condition of the application service applied to the terminal device 1 that collected the user's uttered voice.
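The following sketch combines the roles of the user identification unit 20e and the high-quality user determination unit 20f under stated assumptions: the speaker is matched against stored voiceprints with a simple distance measure (the patent deliberately leaves the voiceprint algorithm open), and the matched user's histories are then checked against one determination condition. It reuses the `UserRecord` and `DeterminationCondition` sketches above.

```python
import math
from datetime import datetime, timedelta
from typing import Dict, Optional

def identify_user(observed: list, users: Dict[str, "UserRecord"],
                  threshold: float = 1.0) -> Optional["UserRecord"]:
    """Return the registered user whose stored voiceprint is closest to the
    observed one, or None if nothing is close enough (i.e. a new user)."""
    best, best_dist = None, threshold
    for record in users.values():
        dist = math.dist(observed, record.voiceprint)
        if dist < best_dist:
            best, best_dist = record, dist
    return best

def is_high_quality(user: "UserRecord", cond: "DeterminationCondition",
                    terminal_ids: list, now: datetime) -> bool:
    """Check one high-quality user determination condition against the user's histories."""
    since = now - timedelta(days=cond.period_days)
    if cond.keyword is None:
        # Visit-based condition: count the distinct days on which the user spoke
        # near any terminal of this application service (Fig. 10 history).
        days = {e.registered_at.date() for e in user.recognition_history
                if e.registered_at >= since and e.terminal_id in terminal_ids}
        return len(days) >= cond.min_count
    # Keyword-based condition: count utterances of the keyword (Fig. 9 history).
    hits = [e for e in user.keyword_history
            if e.registered_at >= since and e.keyword == cond.keyword]
    return len(hits) >= cond.min_count
```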
The response voice data acquisition unit 20g acquires response voice data for the user's utterance. Specifically, for example, the response voice data acquisition unit 20g transmits, to the application service server 4, the determination condition that the user satisfies, the ID of the terminal device that collected the user's uttered voice, and the like, and requests and acquires response voice data for the high-quality user. The response voice data acquired by the response voice data acquisition unit 20g is transmitted via the network communication unit 21 to the terminal device 1 (including the device having the loudspeaker corresponding to the microphone that collected the high-quality user's uttered voice).
(Network communication unit 21)
The network communication unit 21 is connected to the network 3 in a wired or wireless manner, and transmits and receives data to and from each terminal device 1 via the network 3. For example, the network communication unit 21 establishes a communication connection with the network 3 through a wired/wireless local area network (LAN), Wireless Fidelity (Wi-Fi, registered trademark), or the like.
(Application service server I/F 22)
The application service server I/F 22 transmits and receives data to and from the application service servers 4. Communication through the application service server I/F 22 may be performed via a dedicated line, or may be performed via the network 3.
(Storage unit 23)
The storage unit 23 is realized by a ROM that stores programs, calculation parameters, and the like used in the processing of the control unit 20, and a RAM that temporarily stores parameters that change as appropriate. For example, the storage unit 23 according to the present embodiment stores the application service management table, the application service keyword list, the application service terminal list, the user management table, the user keyword history, and the user recognition history described above.
The configuration of the server 2 according to the present embodiment has been described above in detail.
<<3. Operation processing>>
Next, the operation processing of the information processing system according to the present embodiment will be specifically described with reference to the drawings.
<3-1. Registration process>
First, the registration process for an application service will be described with reference to Fig. 11. Fig. 11 is a sequence diagram illustrating the registration process for an application service according to the present embodiment.
As shown in Fig. 11, first, the application service server 4 transmits the application service ID assigned to itself and its name information to the server 2 (step S103).
Next, the application service management unit 20a of the server 2 registers the received application service ID and name information in the application service management table stored in the storage unit 23 (step S106).
Then, the application service server 4 transmits the IDs of the terminal devices to which its own application service is to be applied to the server 2 (step S109).
Next, the application service management unit 20a of the server 2 registers the received terminal device IDs in association with the application service ID in the application service terminal list stored in the storage unit 23 (step S112).
Then, the application service server 4 transmits the list of keywords to be recognized by voice recognition to the server 2 (step S115).
Next, the application service management unit 20a of the server 2 assigns a unique ID to each keyword included in the received keyword list, and registers the IDs in association with the keywords and the application service ID in the application service keyword list stored in the storage unit 23 (step S118).
Then, the application service server 4 transmits the high-quality user determination condition to the server 2 (step S121).
Next, the application service management unit 20a of the server 2 registers the received determination condition in association with the application service ID in the application service management table stored in the storage unit 23 (step S124).
The above registration process for an application service is performed as appropriate between each application service server and the server 2.
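Read as an interface, the sequence of Fig. 11 amounts to four registration calls from an application service server to the server 2. The sketch below expresses that handshake with assumed function names and an in-memory stand-in for the server side; the payloads simply mirror steps S103 to S124.

```python
# Hypothetical registration client run by an application service server 4.
# The call names and payload keys are assumptions, not part of the patent.

def register_application_service(server, service_id, name, terminal_ids,
                                 keywords, determination_conditions):
    server.register_service(service_id, name)                          # steps S103 / S106
    server.register_terminals(service_id, terminal_ids)                # steps S109 / S112
    server.register_keywords(service_id, keywords)                     # steps S115 / S118
    server.register_conditions(service_id, determination_conditions)   # steps S121 / S124

class InMemoryServer:
    """Stand-in for the server 2 side of the registration process."""
    def __init__(self):
        self.services, self.terminals, self.keywords, self.conditions = {}, {}, {}, {}
    def register_service(self, sid, name): self.services[sid] = name
    def register_terminals(self, sid, ids): self.terminals[sid] = list(ids)
    def register_keywords(self, sid, kws):
        # Assign a unique keyword ID to each keyword, as in step S118.
        self.keywords[sid] = {f"kw{i:04d}": kw for i, kw in enumerate(kws, start=1)}
    def register_conditions(self, sid, conds): self.conditions[sid] = list(conds)

if __name__ == "__main__":
    srv = InMemoryServer()
    register_application_service(
        srv, "app0001", "ABC shopping street campaign for important customers",
        ["dev0001", "dev0002"], ["dried beef", "steak"],
        [{"keyword": "dried beef", "period_days": 30, "min_count": 10}])
    print(srv.keywords["app0001"])   # {'kw0001': 'dried beef', 'kw0002': 'steak'}
```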
<3-2. Response process>
Next, the operation processing for determining a high-quality user and executing a response based on a predetermined application service will be described with reference to Figs. 12 and 13. Figs. 12 and 13 are sequence diagrams illustrating the response process of the information processing system according to the present embodiment.
As shown in Fig. 12, first, the terminal device 1 collects a user's utterance using the voice input unit 12 (step S203). The terminal device 1 is installed, for example, in a shop or at the entrance of a shop, and constantly collects users' conversations and murmurs.
Next, the terminal device 1 transmits its own ID and the collected voice information to the server 2 (step S206).
Then, the server 2 performs voiceprint analysis on the received voice information using the voiceprint analysis unit 20c (step S209).
Next, the server 2 checks the result of the voiceprint analysis against the user management table using the user identification unit 20e (step S212). Because the user management table stores voiceprint data associated with user IDs, as described with reference to Fig. 8, the user can be identified by comparing the voiceprint analysis result with the voiceprint data.
Then, in a case where the corresponding user is not stored in the user management table (step S215/No), the user information management unit 20b registers the voiceprint analysis result in the user management table as the voiceprint data of a new user (step S218).
Next, the user information management unit 20b records the time and date at which the user was recognized and the received terminal device ID in the user recognition history (step S221). The case where "the user has been recognized" corresponds to the case where the corresponding user exists in step S215 described above, or the case where a new user is registered in step S218 described above.
Then, the server 2 performs voice recognition on the voice information of the user's utterance received from the terminal device 1 using the voice recognition unit 20d, and extracts keywords from the utterance content (step S224). Specifically, the voice recognition unit 20d performs text conversion and morphological analysis of the voice information, and extracts keywords (here, words are extracted broadly from the utterance text).
Next, as shown in Fig. 13, the server 2 checks the received ID of the terminal device 1 against the application service terminal list (see Fig. 7) using the application service management unit 20a, and extracts the application service IDs to be applied to the terminal device 1 (step S227).
Then, the voice recognition unit 20d acquires the keyword list registered for the extracted application service IDs (see Fig. 6) (step S230).
Next, the voice recognition unit 20d determines whether a keyword extracted from the voice information of the user's utterance by voice recognition is included in the above keyword list (step S233).
Then, in a case where the keyword is included in the keyword list (step S233/Yes), the user information management unit 20b records the keyword in the user keyword history (see Fig. 9) (step S236).
Next, the high-quality user determination unit 20f performs the high-quality user determination process (step S239). Specifically, the high-quality user determination unit 20f determines whether the user who spoke is a high-quality user according to the high-quality user determination condition set in the application service management table. Note that the high-quality user determination condition differs depending on the application service to be applied. Specific examples of the high-quality user determination condition will be described later using a plurality of embodiments.
Then, in a case where the high-quality user determination unit 20f determines that the user is not a high-quality user (step S242/No), the server 2 notifies the terminal device 1 that the user is not a high-quality user (step S245).
Then, in a case where the server 2 determines that the user is not a high-quality user, the terminal device 1 does not execute a response to the user (step S248). Note that even when the terminal device 1 does not execute a response for high-quality users based on the application service, the terminal device 1 may still execute an automatic response for general users, such as "Welcome", by means of the voice agent.
On the other hand, in a case where the user is determined to be a high-quality user (step S242/Yes), the server 2 transmits the corresponding determination condition and the like (for example, the satisfied determination condition, the user ID of the high-quality user, and the terminal device ID) to the application service server 4 indicated by the application service ID to be applied (step S251).
Next, the application service server 4 generates response voice data for the high-quality user according to the information received from the server 2 (step S254). The response voice data for the high-quality user may be stored in advance in the application service server 4, or may be generated by a predetermined algorithm. Note that specific examples of the response voice data to be generated will be described later using a plurality of embodiments.
Then, the application service server 4 transmits the response voice data generated for the high-quality user to the server 2 (step S257).
Next, the server 2 transmits the response voice data received from the application service server 4 to the terminal device 1 (step S260).
Then, the terminal device 1 outputs the response voice data received from the server 2 as voice from the voice output unit 13 (step S263). In the present embodiment, a response specific to the corresponding application service (for example, providing campaign information for regular customers) can thereby be executed for the user determined to be a high-quality user.
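Putting the sequence of Figs. 12 and 13 together, the server-side handling of one utterance can be summarized as the skeleton below. Everything here is illustrative: `analyze_voiceprint` and `recognize_keywords` are placeholders for components the patent leaves unspecified, and the data classes and helpers (`UserRecord`, `RecognitionEvent`, `KeywordEvent`, `identify_user`, `is_high_quality`) are the assumed sketches introduced earlier.

```python
from datetime import datetime

def analyze_voiceprint(audio):
    """Placeholder for the voiceprint analysis unit 20c (step S209)."""
    return [float(b) for b in audio[:3]]

def recognize_keywords(audio):
    """Placeholder for the voice recognition unit 20d (step S224)."""
    return ["dried beef"]

def handle_utterance(terminal_id, audio, users, services, app_servers, now=None):
    """Skeleton of steps S209 to S263 on the server 2 side (illustrative only)."""
    now = now or datetime.now()
    voiceprint = analyze_voiceprint(audio)                      # S209
    user = identify_user(voiceprint, users)                     # S212 / S215
    if user is None:                                            # S218: register a new user
        user = UserRecord(user_id=f"user{len(users) + 1:04d}", voiceprint=voiceprint)
        users[user.user_id] = user
    user.recognition_history.append(                            # S221
        RecognitionEvent(terminal_id=terminal_id, registered_at=now))
    words = recognize_keywords(audio)                           # S224
    for service in services.values():                           # S227: services for this terminal
        if terminal_id not in service.terminal_ids:
            continue
        for word in words:                                      # S230 to S236
            if word in service.keywords:
                user.keyword_history.append(KeywordEvent(
                    keyword=word, service_id=service.service_id, registered_at=now))
        for cond in service.conditions:                         # S239 / S242
            if is_high_quality(user, cond, service.terminal_ids, now):
                # S251 to S260: the application service server supplies response voice data.
                return app_servers[service.service_id](cond, user.user_id, terminal_id)
    return None                                                 # S245 / S248: no special response
```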
The response process according to the present embodiment has been described above.
Next, specific examples of the high-quality user determination process and the response voice data generation process described above will be described using a plurality of embodiments.
<<4. Embodiments>>
<4-1. First embodiment>
In the first embodiment, in a case where the user identified by the voiceprint analysis has frequently spoken in the shop in the past (that is, has visited the shop with high frequency), the user is determined to be a high-quality user. Note that the application service used in the present embodiment corresponds to "application service ID: app0002, application name: DD shopping mall visit-appreciation campaign, determination condition: the user has visited the shop (has spoken) on five or more days within one week" registered in the application service management table shown in Fig. 5.
Fig. 14 is a diagram for describing an overview of the present embodiment. In the present embodiment, as shown in the upper part of Fig. 14, for example, in a case where a visiting user has spoken near the plurality of terminal devices 1-1a to 1-1c (which may be a single device) installed in the "DD shopping mall" on a predetermined number of days (for example, five days) or more within a predetermined period (for example, one week), it can be estimated that the user visits the DD shopping mall with high frequency. Accordingly, the user is determined to be a high-quality user.
Then, as shown in the lower part of Fig. 14, in appreciation of the frequent visits, a specific response voice such as "Thank you for coming again" is output from the terminal device 1-1d. Campaign information may also be provided together. The user can thus feel appreciated and welcomed by the shop. Note that all of the terminal devices 1-1a to 1-1d are a group of terminal devices installed in the DD shopping mall, and user identification is performed when a voice is recognized at any of the terminal devices 1-1. However, the present embodiment is not limited thereto, and the number of terminal devices 1-1 installed in the DD shopping mall may be one.
(Operation processing)
Next, the operation processing according to the first embodiment will be described. Because the basic operation processing according to the present embodiment is largely similar to the operation processing described with reference to Figs. 12 and 13, the processing specific to the present embodiment, namely the high-quality user determination process (step S239 shown in Fig. 13) and the response voice data generation process (step S254 shown in Fig. 13), will be described here in order with reference to Figs. 15 and 16.
High-quality user determination process
Fig. 15 is a flowchart illustrating the high-quality user determination process according to the first embodiment. As shown in Fig. 15, first, the high-quality user determination unit 20f of the server 2 refers to the user recognition history (see Fig. 10) and the application service terminal list stored in the storage unit 23, and obtains the frequency at which the user's voice occurred at terminal devices belonging to the application service ID to be applied (step S303). Specifically, for example, the high-quality user determination unit 20f extracts, from the application service terminal list, the terminal devices 1-1a to 1-1c that belong to the same application service ID as the application service ID to be applied to the terminal device 1-1d. Then, the high-quality user determination unit 20f checks the user recognition history and obtains the times and dates at which the user was recognized by the terminal devices 1-1a to 1-1d (that is, at which the user's voice was identified by the voiceprint analysis) as the voice occurrence frequency.
Next, according to the high-quality user determination condition of application service ID: app0002 registered in the application service management table, the high-quality user determination unit 20f determines whether the user has spoken on five or more days within the past one week (step S306).
Then, in a case where the user satisfies the above determination condition (step S306/Yes), the high-quality user determination unit 20f determines that the user is a high-quality user (step S309).
On the other hand, in a case where the user does not satisfy the above determination condition (step S306/No), the high-quality user determination unit 20f determines that the user is not a high-quality user (step S312).
The high-quality user determination process according to the first embodiment has been described above in detail. In the present embodiment, a user who has spoken on a predetermined number of days or more within a predetermined period (that is, a user estimated to have visited the shop on a predetermined number of days or more within that period) is determined to be a high-quality user, without identifying any particular keyword.
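As a worked sketch of this visit-frequency condition (speaking on five or more distinct days within the past seven days), under the assumption that the recognition history is available as a list of (terminal ID, timestamp) pairs:

```python
from datetime import datetime, timedelta

def spoke_on_enough_days(recognition_history, service_terminal_ids,
                         now, min_days=5, window_days=7):
    """True if the user spoke near any terminal of the service on at least
    min_days distinct days within the last window_days days (cf. Fig. 15)."""
    since = now - timedelta(days=window_days)
    days = {ts.date() for term, ts in recognition_history
            if term in service_terminal_ids and ts >= since}
    return len(days) >= min_days

# Example: utterances recognized on five different days at DD shopping mall terminals.
history = [("dev0101", datetime(2017, 10, d, 18, 0)) for d in (2, 3, 4, 6, 7)]
print(spoke_on_enough_days(history, {"dev0101", "dev0102"},
                           now=datetime(2017, 10, 8, 12, 0)))   # True
```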
Response voice data generation process
Fig. 16 is a flowchart illustrating the response voice data generation process according to the first embodiment. As shown in Fig. 16, the application service server 4 generates predetermined response voice data such as "Thank you for coming again", for example, on the basis of the corresponding determination condition received from the server 2 (step S320). The application service server 4 may, for example, hold response voice data or a response voice data generation algorithm corresponding to each determination condition, and generate the response voice data on the basis of the "corresponding determination condition" received from the server 2.
The response voice data generation process according to the first embodiment has been described above in detail. In steps S257 to S263 shown in Fig. 13, the response voice data generated by the application service server 4 is transmitted from the application service server 4 to the terminal device 1-1 via the server 2, and is output as voice from the terminal device 1-1.
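A minimal sketch of this application-service-server side (steps S320 and S420): responses held as templates keyed by the satisfied determination condition. The condition keys and response texts are illustrative assumptions; a real service would further synthesize the selected text into response voice data.

```python
# Hypothetical response generation on an application service server 4.
RESPONSES = {
    ("visit_days>=5/7", "app0002"): "Thank you for coming again!",
    ("keyword:dried beef>=10/30", "app0001"): "Dried beef is now on special sale!",
}

def generate_response_voice(condition_key: str, service_id: str) -> str:
    """Return the pre-registered response text for the satisfied condition."""
    return RESPONSES.get((condition_key, service_id), "Welcome!")

print(generate_response_voice("visit_days>=5/7", "app0002"))
```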
<4-2. Second embodiment>
In the second embodiment, in a case where the user identified by the voiceprint analysis has frequently said a predetermined keyword in the shop in the past, the user is determined to be a high-quality user. Note that the application service used in the present embodiment corresponds to "application service ID: app0001, application name: ABC shopping street important-customer campaign, determination condition: the user has ordered 'dried beef' (has said 'dried beef') ten times within one month" registered in the application service management table shown in Fig. 5.
Fig. 17 is a diagram for describing an overview of the present embodiment. In the present embodiment, as shown in the upper part of Fig. 17, for example, in a case where a visiting user has said the predetermined keyword "dried beef" near the plurality of terminal devices 1-2a to 1-2c (which may be a single device) installed in the "ABC shopping street" a predetermined number of times (for example, ten times) or more within a predetermined period (for example, one month), it can be estimated that the user purchases dried beef with high frequency. Accordingly, the user is determined to be a high-quality user.
Then, as shown in the lower part of Fig. 17, special-offer information such as "Dried beef is now on special sale!" is output from the terminal device 1-2d to the user, who is a regular customer frequently purchasing "dried beef". Note that all of the terminal devices 1-2a to 1-2d are a group of terminal devices installed in the ABC shopping street, and user identification is performed when a voice is recognized at any of the terminal devices 1-2. However, the present embodiment is not limited thereto, and the number of terminal devices 1-2 may be one.
(Operation processing)
Next, the operation processing according to the second embodiment will be described. Because the basic operation processing according to the present embodiment is largely similar to the operation processing described with reference to Figs. 12 and 13, the processing specific to the present embodiment, namely the high-quality user determination process (step S239 shown in Fig. 13) and the response voice data generation process (step S254 shown in Fig. 13), will be described here in order with reference to Figs. 18 and 19.
High-quality user determination process
Fig. 18 is a flowchart illustrating the high-quality user determination process according to the second embodiment. As shown in Fig. 18, first, the high-quality user determination unit 20f of the server 2 refers to the user keyword history stored in the storage unit 23 (see Fig. 9), and obtains the frequency at which the user has said the predetermined keyword "dried beef" (step S403). Specifically, for example, the high-quality user determination unit 20f extracts, from the application service keyword list, the predetermined keyword used in the determination condition for the high-quality users targeted by the application service applied to the terminal device 1-2d. Then, the high-quality user determination unit 20f checks the user keyword history and obtains the times and dates at which the user said the predetermined keyword.
Next, according to the high-quality user determination condition of application service ID: app0001 registered in the application service management table, the high-quality user determination unit 20f determines whether the user has said "dried beef" ten or more times within the past one month (step S406).
Then, in a case where the user satisfies the above determination condition (step S406/Yes), the high-quality user determination unit 20f determines that the user is a high-quality user (step S409).
On the other hand, in a case where the user does not satisfy the above determination condition (step S406/No), the high-quality user determination unit 20f determines that the user is not a high-quality user (step S412).
The high-quality user determination process according to the second embodiment has been described above in detail. In the present embodiment, a user who has said a keyword a predetermined number of times or more within a predetermined period (for example, in a case where the terminal device 1-2 is installed near a cash register, a user estimated to have ordered the product indicated by the predetermined keyword a predetermined number of times or more within that period) is determined to be a high-quality user.
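The keyword-based condition of Fig. 18 reduces to counting occurrences of one keyword within the evaluation window. A minimal sketch, assuming the user keyword history is available as a list of (keyword, timestamp) pairs:

```python
from datetime import datetime, timedelta

def said_keyword_enough(keyword_history, keyword, now,
                        min_count=10, window_days=30):
    """True if keyword appears at least min_count times in the last
    window_days days of the user keyword history (cf. Fig. 18)."""
    since = now - timedelta(days=window_days)
    hits = sum(1 for kw, ts in keyword_history if kw == keyword and ts >= since)
    return hits >= min_count

# Example: the user said "dried beef" twelve times over the past month.
now = datetime(2017, 10, 31)
history = [("dried beef", now - timedelta(days=2 * i)) for i in range(12)]
print(said_keyword_enough(history, "dried beef", now))   # True
```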
Response voice data generation process
Fig. 19 is a flowchart illustrating the response voice data generation process according to the second embodiment. As shown in Fig. 19, the application service server 4 generates predetermined response voice data such as "Dried beef is now on special sale!", for example, on the basis of the corresponding determination condition received from the server 2 (step S420). The application service server 4 may, for example, hold response voice data or a response voice data generation algorithm corresponding to each determination condition, and generate the response voice data on the basis of the "corresponding determination condition" received from the server 2.
The response voice data generation process according to the second embodiment has been described above in detail. In steps S257 to S263 shown in Fig. 13, the response voice data generated by the application service server 4 is transmitted from the application service server 4 to the terminal device 1-2 via the server 2, and is output as voice from the terminal device 1-2.
Note that the place where the terminal device 1-2 is installed is not limited to a shopping street or a shopping mall, and may be, for example, a street stall in town (such as a fortune-telling stand). Fig. 20 is a diagram for describing such an application example of the present embodiment.
As shown in Fig. 20, for example, in a case where the terminal device 1-2 is installed at a fortune-telling stand in town, when a user who often passes in front of the fortune-teller frequently says words expressing worry, such as "gloomy", "worried", and "uneasy", in conversations with companions or the like (for example, in a case where the user says words expressing worry five or more times within one week), response voice data recommending a fortune-telling session can be output from the terminal device 1-2 to the user.
<4-3. Third embodiment>
In the third embodiment, in a case where a user identified by voiceprint analysis has spoken in a shop with a high frequency in the past, has a specific user attribute, and has uttered a predetermined keyword, the user is determined to be a high-quality user. The user attribute is the gender, age, or the like of the user estimated by voice recognition performed on the uttered voice data of the user. Note that the application service used in the present embodiment corresponds to "application service ID: app0003, application name: EE shopping center male-customer privilege, high-quality user determination condition: the user has spoken in the shop on five or more days in one week, the user is an adult male, and the user has uttered the specific keyword 'very hot'" registered in the application service management table shown in Figure 5.
Figure 21 is a diagram describing an overview of the present embodiment. In the present embodiment, in a case where a user visiting the shop "EE shopping center" has spoken near a plurality of terminal devices 1-3a to 1-3c (the number may be one) installed there on a predetermined number of days (for example, five days) or more within a predetermined period (for example, one week), the attribute of the user is "adult male" as shown in the upper part of Figure 21, and the user has uttered the predetermined keyword "very hot" as shown in the lower part of Figure 21, the user is determined to be a high-quality user.
Then, as privilege information for the high-quality user, a response voice such as "HappyTime starts at 4 o'clock! Beer is served at half price!" is output from the terminal device 1-3d. In this way, specific information (such as an event) can be presented to a user who is a regular customer, who has the particular attribute, and who has uttered the specific keyword. Note that all of the terminal devices 1-3a to 1-3d are a terminal device group installed in the EE shopping center, and a speech occurrence is recognized and user identification is performed at at least any one of the terminal devices 1-3. However, the present embodiment is not limited thereto, and the number of terminal devices 1-3 may be one.
(operation processing)
Next, the operation processing according to the third embodiment will be described. Because the basic operation processing according to the present embodiment is largely similar to the operation processing described with reference to Figure 12 and Figure 13, the processing specific to the present embodiment, namely the high-quality user determination processing (step S239 shown in Figure 13) and the response voice data generation processing (step S254 shown in Figure 13), will be described here in order with reference to Figure 22 and Figure 23.
High-quality user determination processing
Figure 22 is a flowchart showing the high-quality user determination processing according to the third embodiment. As shown in Figure 22, first, the server 2 estimates the attributes (gender, age group, and the like) of the user by voice recognition on the uttered voice information (step S503). Specifically, for example, the control unit 20 of the server 2 estimates the gender and the age group (age) as the user attributes from the user's manner of speaking, tone of voice, way of pronouncing word endings, voice quality, pitch of the voice, voice features, voiceprint, and the like.
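Purely as a toy stand-in for the attribute estimation in step S503 (a real system would use a trained classifier over many voice features), the following sketch guesses gender from average pitch; the 165 Hz threshold is a common rule of thumb and not a value from the patent.

```python
# Toy stand-in only: a crude placeholder for whatever model the control
# unit 20 actually uses in step S503. The 165 Hz threshold is a rough rule
# of thumb for adult voices, not a value from the patent.
def estimate_attributes(mean_pitch_hz):
    """Crudely guess the speaker's gender from average fundamental frequency;
    age-group estimation would need a trained classifier and is left out."""
    gender = "male" if mean_pitch_hz < 165.0 else "female"
    return {"gender": gender, "age_group": None}
```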
Next, the high-quality user determination unit 20f of the server 2 refers to the user identification history stored in the storage unit 23 (see Figure 10) and the application service terminal list, and obtains the speech occurrence frequency of the user at the terminal devices belonging to the application service ID to be applied (step S506). Specifically, for example, the high-quality user determination unit 20f extracts, from the application service terminal list, the terminal devices 1-3a to 1-3c that belong to the same application service ID as the application service ID to be applied to the terminal device 1-3d. Then, the high-quality user determination unit 20f checks the user identification history and obtains the times and dates (speech occurrence frequency) at which the terminal devices 1-3a to 1-3d recognized the user (that is, identified the user's speech by voiceprint analysis).
Next, according to the high-quality user determination condition of the application service ID: app0003 registered in the application service management table, the high-quality user determination unit 20f determines whether the user has spoken on five or more days within the past one week (step S509), whether the attribute is "adult male" (step S512), and whether the user has uttered the predetermined keyword "very hot" (step S515).
Then, in a case where the user satisfies all of the above conditions (step S509/Yes, step S512/Yes, step S515/Yes), the high-quality user determination unit 20f determines that the user is a high-quality user (step S518).
On the other hand, in a case where the user does not satisfy at least any one of the above conditions (step S509/No, step S512/No, or step S515/No), the high-quality user determination unit 20f determines that the user is not a high-quality user (step S519).
In the above, the high-quality user determination processing according to the third embodiment has been described in detail. In the present embodiment, in a case where a user has spoken, regardless of any specific keyword, on a predetermined number of days or more within a predetermined period (that is, the user is estimated to have visited the shop on a predetermined number of days or more within the predetermined period), has a predetermined attribute, and has also uttered a specific keyword serving as a trigger, the user is determined to be a high-quality user.
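The combined condition described above (visit frequency, attribute, and trigger keyword) can be sketched as follows; this is a minimal sketch whose data shapes, field names, and defaults are assumptions chosen for illustration.

```python
from datetime import datetime, timedelta

# Minimal sketch; record shapes and field names are assumptions.
# identification_history: [{"user_id": "U001", "terminal_id": "1-3a",
#                           "recognized_at": datetime(...)}]
# utterances: [{"user_id": "U001", "text": "it is very hot in here"}]
# user_attributes: {"U001": "adult male"}
def is_high_quality_user_3rd(identification_history, utterances, user_attributes,
                             user_id, service_terminal_ids, keyword="very hot",
                             required_attribute="adult male",
                             min_days=5, period=timedelta(days=7), now=None):
    """Check the third-embodiment condition: spoken near the service's terminals
    on min_days or more distinct days in the period, has the required attribute,
    and has uttered the trigger keyword."""
    now = now or datetime.now()
    since = now - period
    days_spoken = {
        e["recognized_at"].date()
        for e in identification_history
        if e["user_id"] == user_id
        and e["terminal_id"] in service_terminal_ids
        and e["recognized_at"] >= since
    }
    said_keyword = any(
        u["user_id"] == user_id and keyword in u["text"] for u in utterances
    )
    return (len(days_spoken) >= min_days
            and user_attributes.get(user_id) == required_attribute
            and said_keyword)
```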
Response voice data generation processing
Figure 23 is a flowchart showing the response voice data generation processing according to the third embodiment. As shown in Figure 23, the application service server 4 generates predetermined response voice data such as "HappyTime starts at 4 o'clock! Beer is served at half price!" on the basis of the corresponding determination condition received from the server 2 (step S520). The application service server 4 can, for example, hold response voice data corresponding to determination conditions or a generation algorithm for response voice data, and generate the response voice data on the basis of the "corresponding determination condition" received from the server 2.
In the above, the response voice data generation processing according to the third embodiment has been described in detail. As shown in steps S257 to S263 in Figure 13, the response voice data generated by the application service server 4 is transmitted from the application service server 4 to the terminal device 1-3 via the server 2, and is output as voice from the terminal device 1-3.
(4-3-1. Application example 1)
Note that the position where the terminal device 1-3 is installed is not limited to a shop, and may be an amusement facility such as a game arcade. Here, a case where the terminal device 1-3 is installed in a game arcade will be described with reference to Figure 24 to Figure 27.
Figure 24 is a diagram describing an overview of application example 1 of the present embodiment. As shown in the upper part of Figure 24, for example, the terminal device 1-3a is installed around a game machine 5, and in a case where an adult male user who frequently (for example, on ten or more days in one month) passes in front of the game machine 5 has uttered, in conversation with a companion, in talking to himself, or the like, a predetermined keyword requesting stress release, such as "I feel depressed", "I feel irritated", or "I want to hit someone", it can be estimated that the user is a regular customer who visits the game arcade with a high frequency and is now in an optimal state for recommending a game, and the user is determined to be a high-quality user.
Then, in a case where the user is determined to be a high-quality user, response voice data recommending the game machine 5, which relieves stress by punching a sandbag, is output from the terminal device 1-3a, as shown in the lower part of Figure 24.
Furthermore, with reference to the play history (scores) on the game machine 5, in a case where the average value of the scores that the user has obtained so far (an example of user information) exceeds the highest score of all the players who have played the game on that day, a response that recommends the game and includes a message such as "If you play as usual, you can become today's champion." can also be made.
(operation processing)
Next, the operation processing according to this application example will be described. Because the basic operation processing according to the present embodiment is largely similar to the operation processing described with reference to Figure 12 and Figure 13, the processing specific to the present embodiment, namely the high-quality user determination processing (step S239 shown in Figure 13) and the response voice data generation processing (step S254 shown in Figure 13), will be described here in order with reference to Figure 25 and Figure 26.
High-quality user determination processing
Figure 25 is a flowchart showing the high-quality user determination processing according to application example 1. As shown in Figure 25, first, the server 2 estimates the attributes (gender, age group, and the like) of the user by voice recognition on the uttered voice information (step S603). Specifically, for example, the control unit 20 of the server 2 estimates the gender and the age group (age) as the user attributes from the user's manner of speaking, tone of voice, voice quality, pitch of the voice, voice features, voiceprint, and the like.
Next, the high-quality user determination unit 20f of the server 2 refers to the user identification history stored in the storage unit 23 (see Figure 10) and the application service terminal list (see Figure 7), and obtains the speech occurrence frequency of the user at the terminal devices belonging to the application service ID to be applied (step S606). Specifically, for example, the high-quality user determination unit 20f extracts, from the application service terminal list, the terminal devices (for example, a plurality of terminal devices installed around the game machine or in the game arcade) that belong to the same application service ID as the application service ID to be applied to the terminal device 1-3a. Then, the high-quality user determination unit 20f checks the user identification history and obtains the times and dates (speech occurrence frequency) at which the terminal device 1-3a or the terminal devices belonging to the same application service ID recognized the user (that is, identified the user's speech by voiceprint analysis).
Next, according to the high-quality user determination condition of the application service registered in the application service management table, the high-quality user determination unit 20f performs the determination of the high-quality user (steps S609 to S615). Specifically, for example, the high-quality user determination unit 20f determines whether the user has spoken on ten or more days within the past one month (step S609), whether the attribute is "adult male" (step S612), and whether the user has uttered vocabulary requesting stress release, such as the predetermined keyword "I feel depressed" or "I want to hit someone" (step S615). Note that all of these determination conditions are examples, and this application example is not limited thereto.
Then, in a case where the user satisfies all of the above conditions (step S609/Yes, step S612/Yes, step S615/Yes), the high-quality user determination unit 20f determines that the user is a high-quality user (step S618).
On the other hand, in a case where the user does not satisfy at least any one of the above conditions (step S609/No, step S612/No, or step S615/No), the high-quality user determination unit 20f determines that the user is not a high-quality user (step S619).
In the above, the high-quality user determination processing according to application example 1 has been described in detail. In this application example, in a case where a user has spoken, regardless of any specific keyword, on a predetermined number of days or more within a predetermined period (that is, the user is estimated to have visited the facility on a predetermined number of days or more within the predetermined period), has a predetermined attribute, and has also uttered a specific keyword serving as a trigger, the user is determined to be a high-quality user.
Response voice data generation processing
Figure 26 is a flowchart showing the response voice data generation processing according to this application example. In this application example, the response voice data is generated also in consideration of the play history. For example, the play history of each user is accumulated in the "application service data" of the user management table (see Figure 8) stored in the storage unit 23 of the server 2.
As shown in Figure 26, first, the application service server 4 transmits the user ID of the user determined to be a high-quality user and the application service ID to the server 2 (step S620).
Next, the application service server 4 receives the application data of the user from the server 2 (step S623), and obtains the play history of the user on the game machine 5, which is recorded in the application data in association with the application service (step S626).
Then, the highest score of all the players who have played the game on that day is obtained from the play history managed by the application service server 4 (step S629).
Next, the application service server 4 determines whether the average score of the user exceeds the highest score of the day (step S632).
Then, in a case where it is determined that the average score exceeds the highest score (step S632/Yes), the application service server 4 generates, on the basis of the corresponding determination condition received from the server 2, predetermined response voice data such as "Why not throw a punch? If you play as usual, you can become today's champion!" (step S635).
On the other hand, in a case where it is determined that the average score does not exceed the highest score (step S632/No), the application service server 4 generates, on the basis of the corresponding determination condition received from the server 2, predetermined response voice data such as "Why not throw a punch? Challenge today's highest score!" (step S638).
In the above, the response voice data generation processing according to application example 1 has been described in detail. As shown in steps S257 to S263 in Figure 13, the response voice data generated by the application service server 4 is transmitted from the application service server 4 to the terminal device 1-3a via the server 2, and is output as voice from the terminal device 1-3a.
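A minimal sketch of the score-comparison branch (steps S629 to S638) follows; the data shapes and message wording are assumptions for illustration only.

```python
# Minimal sketch of the score-comparison branch (steps S629 to S638);
# data shapes and message wording are assumptions for illustration.
def build_game_recommendation(user_scores, todays_scores):
    """Compare the user's average score so far with today's highest score and
    pick a response message accordingly."""
    if not user_scores or not todays_scores:
        return "Why not throw a punch?"
    user_average = sum(user_scores) / len(user_scores)
    todays_highest = max(todays_scores)
    if user_average > todays_highest:
        return ("Why not throw a punch? "
                "If you play as usual, you can become today's champion!")
    return "Why not throw a punch? Challenge today's highest score!"
```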
Play history management processing
Next, the management processing of the game results of the game machine 5 will be described with reference to Figure 27. Figure 27 is a sequence diagram showing the management processing of the game results of the game machine 5 according to application example 1.
As shown in Figure 27, first, when play of a game is started (step S643), the game machine 5 notifies the application service server 4 of the game start via the network (step S646).
Next, in a case where a game is played within a specific period of time after the server 2 determines the high-quality user, the application service server 4 determines, in response to the game start notification from the game machine 5 (step S649/Yes), that the high-quality user is playing the game (step S652). Because the user ID of the high-quality user determined by the server 2 in step S251 of Figure 13 is also passed to the application service server 4, the application service server 4 can recognize that the high-quality user has been determined at the server 2 and can recognize the user ID of that high-quality user.
Then, when the game result of the play is transmitted from the game machine 5 (step S655), the application service server 4 transmits the received game result to the server 2 together with the user ID of the high-quality user and its own application service ID (step S658).
Then, the server 2 updates the application data associated with the corresponding user and application service in the user management table (see Figure 8) (step S661). In other words, the server 2 registers the game result of the high-quality user's play in the user management table as application data.
Note that the application service server 4 can also receive, from the game machine 5, the game results of ordinary users who have not been determined to be high-quality users, and record the game results as the play history. Specifically, in a case where a game result is transmitted from the game machine 5 at a timing at which no high-quality user determination has been made, the application service server 4 records the game result as the game result of an unspecified user (player). In addition, the application service server 4 transmits the game results of unspecified users to the server 2 together with its own application service ID. The server 2 registers, for example, the received game results of unspecified users in the application service management table (see Figure 5) in association with the corresponding application service. This makes it possible, for example, to identify the scores of the players who have played the game machine 5 on that day.
As described above, in this application example, the response voice data can be generated in cooperation with the game machine 5 with reference to the play history (such as scores) on the game machine 5.
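From the application service server's point of view, the sequence in Figure 27 might be sketched as below; the class, field names, and the send_to_server callback are assumptions, not an actual interface from the patent.

```python
import time

# Minimal sketch of the Figure 27 sequence from the application service
# server's side; the class, field names, and send_to_server callback are
# assumptions, not an interface defined by the patent.
class GameResultManager:
    def __init__(self, application_service_id, send_to_server,
                 high_quality_window_sec=600):
        self.application_service_id = application_service_id
        self.send_to_server = send_to_server  # forwards a payload to server 2
        self.window = high_quality_window_sec
        self.current_high_quality = None      # (user_id, determined_at)

    def on_high_quality_user_determined(self, user_id):
        # user ID passed from server 2 when the determination is made
        self.current_high_quality = (user_id, time.time())

    def on_game_result(self, score):
        payload = {"application_service_id": self.application_service_id,
                   "score": score}
        if (self.current_high_quality
                and time.time() - self.current_high_quality[1] <= self.window):
            # played shortly after the determination: attribute it to that user
            payload["user_id"] = self.current_high_quality[0]
        else:
            payload["user_id"] = None         # unspecified player
        self.send_to_server(payload)
```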
(4-3-2. Application example 2)
Furthermore, the position where the terminal device 1-3 is installed is not limited to the above examples, and may be, for example, a batting cage. Figure 28 is a diagram describing an overview of application example 2.
As shown in the upper part of Figure 28, for example, in a case where the terminal device 1-3b is installed in a batting cage, when an adult male user who frequently (for example, on three or more days in one week) visits the batting cage has uttered a keyword about a professional baseball club, the user is determined to be a high-quality user. Then, in a case where the user is determined to be a high-quality user, response voice data recommending use of the batting cage is output from the terminal device 1-3b, as shown in the lower part of Figure 28.
At this time, in this application example, the baseball club that the user supports (an example of user information (preference information)) is determined from the keywords uttered by the user (a regular customer) who frequently visits the batting cage, and response voice data directed at that regular customer as a fan of the particular baseball club can be generated with reference to the latest win/loss results, the content of recent games, and the like.
(operation processing)
Next, the operation processing according to this application example will be described. Because the basic operation processing according to the present embodiment is largely similar to the operation processing described with reference to Figure 12 and Figure 13, the processing specific to the present embodiment, namely the high-quality user determination processing (step S239 shown in Figure 13) and the response voice data generation processing (step S254 shown in Figure 13), will be described here in order with reference to Figure 29 and Figure 30.
High-quality user determination processing
Figure 29 is a flowchart showing the high-quality user determination processing according to application example 2. As shown in Figure 29, first, the server 2 estimates the attributes (gender, age group, and the like) of the user by voice recognition on the uttered voice information (step S703). Specifically, for example, the control unit 20 of the server 2 estimates the gender and the age group (age) as the user attributes from the user's manner of speaking, tone of voice, voice quality, pitch of the voice, voice features, voiceprint, and the like.
Next, the high-quality user determination unit 20f of the server 2 refers to the user identification history stored in the storage unit 23 (see Figure 10) and the application service terminal list (see Figure 7), and obtains the speech occurrence frequency of the user at the terminal devices belonging to the application service ID to be applied (step S706). Specifically, for example, the high-quality user determination unit 20f extracts, from the application service terminal list, the terminal devices (for example, a plurality of terminal devices installed in the batting cage) that belong to the same application service ID as the application service ID to be applied to the terminal device 1-3b. Then, the high-quality user determination unit 20f checks the user identification history and obtains the times and dates (speech occurrence frequency) at which the terminal device 1-3b or the terminal devices belonging to the same application service ID recognized the user (that is, identified the user's speech by voiceprint analysis).
Next, according to the high-quality user determination condition of the application service registered in the application service management table, the high-quality user determination unit 20f performs the determination of the high-quality user (steps S709 to S715). Specifically, for example, the high-quality user determination unit 20f determines whether the user has spoken on three or more days within the past one week (step S709), whether the attribute is "adult male" (step S712), and whether the user has uttered a predetermined keyword about a particular baseball club (such as a team name or a player name) (step S715).
Then, in a case where the user satisfies all of the above conditions (step S709/Yes, step S712/Yes, step S715/Yes), the high-quality user determination unit 20f determines that the user is a fan of the particular baseball club, adds this determination to the user attributes (step S718), and further determines that the user is a high-quality user (step S721). Note that, for example, the user attributes are accumulated in the user management table (see Figure 8) stored in the storage unit 23 of the server 2.
On the other hand, in a case where the user does not satisfy at least any one of the above conditions (step S709/No, step S712/No, or step S715/No), the high-quality user determination unit 20f determines that the user is not a high-quality user (step S724).
In the above, the high-quality user determination processing according to application example 2 has been described in detail. In this application example, in a case where a user has spoken, regardless of any specific keyword, on a predetermined number of days or more within a predetermined period (that is, the user is estimated to have visited the shop on a predetermined number of days or more within the predetermined period), has a predetermined attribute, and has also uttered a specific keyword serving as a trigger, the user is determined to be a high-quality user.
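Steps S715 and S718, inferring the supported club from uttered keywords and storing it as a user attribute, can be sketched as follows; the keyword lists are placeholders, while "G team", "H team", "player YY", and "player ZZ" reuse the examples that appear in the response generation processing below.

```python
# Minimal sketch of steps S715 and S718: inferring the supported club from
# uttered keywords and storing it as a user attribute. The keyword lists are
# placeholders; "G team", "H team", "player YY", and "player ZZ" reuse the
# examples given in the response generation processing below.
CLUB_KEYWORDS = {
    "G team": ["G team", "player YY"],
    "H team": ["H team", "player ZZ"],
}

def detect_supported_club(utterance_texts):
    """Return the first club whose keywords appear in the user's utterances,
    or None if no club-related keyword was uttered."""
    for club, keywords in CLUB_KEYWORDS.items():
        if any(kw in text for text in utterance_texts for kw in keywords):
            return club
    return None

def add_club_attribute(user_attributes, user_id, utterance_texts):
    """Store the detected club in the user attribute store (a plain dict here)."""
    club = detect_supported_club(utterance_texts)
    if club is not None:
        user_attributes.setdefault(user_id, {})["supported_club"] = club
    return club
```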
Response voice data generation processing
Figure 30 is a flowchart showing the response voice data generation processing according to this application example. In this application example, the response voice data is generated in consideration of the baseball club (baseball team) that the user supports. For example, as an example of attribute information, the baseball club that each user supports is accumulated in the user management table (see Figure 8) stored in the storage unit 23 of the server 2.
As shown in Figure 30, first, the application service server 4 obtains the baseball club that the user supports on the basis of the attributes of the high-quality user (step S730). Specifically, for example, the application service server 4 transmits the user ID of the high-quality user to the server 2, requests the attribute information of the user, and obtains information indicating the baseball club that the user supports.
Next, the application service server 4 obtains the professional baseball win/loss data of the previous day from a predetermined server (not shown) via the network (step S733).
Then, in a case where the user is a fan of G team (an example of a baseball club) (step S736/Yes), the application service server 4 refers to the professional baseball win/loss data of the previous day and checks whether G team won or lost (step S739).
Then, in a case where G team won (step S739/Yes), the application service server 4 generates, for example, response voice data such as "G team won last night! Why not hit a home run like player YY?" or "G team fans! Today, only our valued customers can bat at half price!" (step S742).
On the other hand, in a case where G team lost (step S739/No), the application service server 4 generates, for example, response voice data such as "You must have been disappointed last night. Why not bat to avenge the loss?" (step S745).
Furthermore, in a case where the user is a fan of H team (another example of a baseball club) (step S748/Yes), the application service server 4 refers to the professional baseball win/loss data of the previous day and checks whether H team won or lost (step S751).
Then, in a case where H team won (step S751/Yes), the application service server 4 generates, for example, response voice data such as "H team won last night! Let's bat like player ZZ!" or "H team fans! Today, only our valued customers can bat at half price!" (step S754).
On the other hand, in a case where H team lost (step S751/No), the application service server 4 generates, for example, response voice data such as "You must have been disappointed last night. Let's bat with hope for H team's comeback!" (step S757).
Furthermore, in a case where the user is a fan of another baseball club other than G team and H team (step S748/No), response voice data is similarly generated according to whether the particular baseball club that the user supports won or lost (step S760).
In the above, the response voice data generation processing according to application example 2 has been described in detail. As shown in steps S257 to S263 in Figure 13, the response voice data generated by the application service server 4 is transmitted from the application service server 4 to the terminal device 1-3b via the server 2, and is output as voice from the terminal device 1-3b.
In this way, in this application example, in a case where the high-quality user is a fan of a particular baseball club, response voice data recommending batting can be output from the terminal device 1-3b installed in the batting cage with reference to the game content and the like of that baseball club.
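The win/loss branching in Figure 30 (steps S736 to S760) reduces to a small selection function like the sketch below; the source of the previous day's results and the message wording are assumptions for illustration.

```python
# Minimal sketch of the branching in Figure 30 (steps S736 to S760); the
# source of the previous day's results and the message wording are assumptions.
def build_batting_message(supported_club, yesterdays_results):
    """Pick a response message based on whether the user's club won yesterday.
    yesterdays_results maps a club name to True (won) or False (lost)."""
    if supported_club is None or supported_club not in yesterdays_results:
        return "Why not enjoy some batting today?"
    if yesterdays_results[supported_club]:
        return (f"{supported_club} won last night! "
                "Today, valued customers can bat at half price!")
    return ("You must have been disappointed last night. "
            "Why not bat to avenge the loss?")
```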
<<5. Conclusion>>
As described above, in the information processing system according to the embodiment of the present disclosure, the uttered voices of users are collected, and a specific user can be determined on the basis of the number of times the user has spoken within a predetermined period.
Furthermore, in the present embodiment, because the user is identified by voice processing, there is an advantage that the processing load is relatively small. In addition, compared with imaging performed by a camera, collecting voice is less intrusive for the user, and because a microphone has weaker directionality than a camera, there is an effect that surrounding speech information can be obtained easily. However, the present embodiment is not limited to voice processing alone, and the user determination may be performed while additionally using a camera in combination, for example in a case where it is difficult to determine the user only by voice processing such as voiceprint analysis.
The preferred embodiments of the present disclosure have been described above with reference to the accompanying drawings, while the present disclosure is not limited to the above examples. Those skilled in the art may find various alterations and modifications within the scope of the appended claims, and it should be understood that they will naturally come under the technical scope of the present disclosure.
For example, it is also possible to create a computer program for causing the hardware, such as the CPU, ROM, and RAM, included in the terminal device 1 or the server 2 described above to exhibit the functions of the terminal device 1 or the server 2. Furthermore, a computer-readable storage medium storing the computer program is also provided.
Furthermore, in the above embodiments, the determination of favorable users (such as regular customers) is performed on the basis of uttered voice, but the present embodiment is not limited thereto, and the determination of negative users (such as suspicious persons or unwelcome customers) may also be performed using the system based on uttered voice. In a case where a user is determined to be a negative user, the server 2 can generate response voice data for security measures, and the generated response voice data can be output from the terminal device 1.
Furthermore, the effects described in this specification are merely illustrative or exemplary effects, and are not limitative. That is, with or in place of the above effects, the technology according to the present disclosure may achieve other effects that are clear to those skilled in the art from the description of this specification.
Additionally, the present technology may also be configured as below.
(1)
An information processing device including:
a communication unit capable of receiving voice information regarding voices collected by a plurality of discretely arranged microphones; and
a control unit configured to:
determine a user identified on the basis of voice information as a specific user who has spoken a predetermined number of times or more at least in a certain period, the voice information being related to a voice collected by a particular microphone among the plurality of microphones and being received via the communication unit, and
perform control such that voice information to be delivered to the specific user is transmitted via the communication unit to a speaker corresponding to the particular microphone.
(2)
The information processing device according to (1), in which, when the control unit determines the user identified on the basis of the voice information regarding the voice collected by the particular microphone as the specific user defined for each application service,
the control unit controls the speaker corresponding to the particular microphone such that the voice information to be delivered to the specific user is transmitted via the communication unit.
(3)
The information processing device according to (2), in which the control unit determines the user identified on the basis of the voice information as the specific user defined for each application service in accordance with a keyword extracted as a result of recognizing the voice information regarding the voice collected by the particular microphone and a keyword defined for each application service.
(4)
The information processing device according to (2), in which the control unit determines the user identified on the basis of the voice information as the specific user defined for each application service in accordance with an attribute of the user identified on the basis of the voice information regarding the voice collected by the particular microphone.
(5)
The information processing device according to any one of (2) to (4), in which the control unit controls the speaker corresponding to the particular microphone such that voice information corresponding to the user is transmitted via the communication unit in accordance with user information of the user identified on the basis of the voice information regarding the voice collected by the particular microphone.
(6)
The information processing device according to (5), in which the user information is a user attribute, a play history of a linked game machine, or preference information of the user.
(7)
The information processing device according to any one of (1) to (6), in which the control unit performs the identification of the user by analyzing a voiceprint of the collected voice.
(8)
The information processing device according to any one of (2) to (7), in which the control unit performs the determination of the specific user using a determination condition defined in an application service to be applied to the speaker corresponding to the particular microphone that has collected the voice.
(9)
An information processing method including:
by a processor,
determining a user identified on the basis of voice information as a specific user who has spoken a predetermined number of times or more at least in a certain period, the voice information being related to a voice collected by a particular microphone among a plurality of discretely arranged microphones and being received via a communication unit capable of receiving voice information regarding voices collected by the plurality of microphones; and
performing control such that voice information to be delivered to the specific user is transmitted via the communication unit to a speaker corresponding to the particular microphone.
Reference signs list
1 Terminal device
2 Server
3 Network
4 Application service server
5 Game machine
10 Control unit
11 Communication unit
12 Voice input unit
13 Voice output unit
14 Storage unit
20 Control unit
20a Application service management unit
20b User information management unit
20c Voiceprint analysis unit
20d Voice recognition unit
20e User identification unit
20f High-quality user determination unit
20g Response voice data acquisition unit
21 Network communication unit
22 Application service server I/F
23 Storage unit

Claims (9)

1. An information processing device comprising:
a communication unit capable of receiving voice information regarding voices collected by a plurality of discretely arranged microphones; and
a control unit configured to:
determine a user identified on the basis of voice information as a specific user who has spoken a predetermined number of times or more at least in a certain period, the voice information being related to a voice collected by a particular microphone among the plurality of microphones and being received via the communication unit, and
perform control such that voice information to be delivered to the specific user is transmitted via the communication unit to a speaker corresponding to the particular microphone.
2. The information processing device according to claim 1, wherein, when the control unit determines the user identified on the basis of the voice information regarding the voice collected by the particular microphone as the specific user defined for each application service,
the control unit controls the speaker corresponding to the particular microphone such that the voice information to be delivered to the specific user is transmitted via the communication unit.
3. The information processing device according to claim 2, wherein the control unit determines the user identified on the basis of the voice information as the specific user defined for each application service in accordance with a keyword extracted as a result of recognizing the voice information regarding the voice collected by the particular microphone and a keyword defined for each application service.
4. The information processing device according to claim 2, wherein the control unit determines the user identified on the basis of the voice information as the specific user defined for each application service in accordance with an attribute of the user identified on the basis of the voice information regarding the voice collected by the particular microphone.
5. The information processing device according to claim 2, wherein the control unit controls the speaker corresponding to the particular microphone such that voice information corresponding to the user is transmitted via the communication unit in accordance with user information of the user identified on the basis of the voice information regarding the voice collected by the particular microphone.
6. The information processing device according to claim 5, wherein the user information is a user attribute, a play history of a linked game machine, or preference information of the user.
7. The information processing device according to claim 1, wherein the control unit performs the identification of the user by analyzing a voiceprint of the collected voice.
8. The information processing device according to claim 2, wherein the control unit performs the determination of the specific user using a determination condition defined in an application service to be applied to the speaker corresponding to the particular microphone that has collected the voice.
9. An information processing method comprising:
by a processor,
determining a user identified on the basis of voice information as a specific user who has spoken a predetermined number of times or more at least in a certain period, the voice information being related to a voice collected by a particular microphone among a plurality of discretely arranged microphones and being received via a communication unit capable of receiving voice information regarding voices collected by the plurality of microphones; and
performing control such that voice information to be delivered to the specific user is transmitted via the communication unit to a speaker corresponding to the particular microphone.
CN201780067884.4A 2016-11-08 2017-08-04 Information processing apparatus and information processing method Active CN109906466B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2016218130 2016-11-08
JP2016-218130 2016-11-08
PCT/JP2017/028471 WO2018087967A1 (en) 2016-11-08 2017-08-04 Information processing device and information processing method

Publications (2)

Publication Number Publication Date
CN109906466A true CN109906466A (en) 2019-06-18
CN109906466B CN109906466B (en) 2023-05-05

Family

ID=62109758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780067884.4A Active CN109906466B (en) 2016-11-08 2017-08-04 Information processing apparatus and information processing method

Country Status (6)

Country Link
US (1) US11289099B2 (en)
EP (1) EP3540677A4 (en)
JP (1) JP7092035B2 (en)
KR (1) KR20190084033A (en)
CN (1) CN109906466B (en)
WO (1) WO2018087967A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111796A (en) * 2019-06-24 2019-08-09 秒针信息技术有限公司 Identify the method and device of identity

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10777203B1 (en) * 2018-03-23 2020-09-15 Amazon Technologies, Inc. Speech interface device with caching component
US11152006B2 (en) * 2018-05-07 2021-10-19 Microsoft Technology Licensing, Llc Voice identification enrollment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010156825A (en) * 2008-12-26 2010-07-15 Fujitsu Ten Ltd Voice output device
US20130197912A1 (en) * 2012-01-31 2013-08-01 Fujitsu Limited Specific call detecting device and specific call detecting method
JP2013164642A (en) * 2012-02-09 2013-08-22 Nikon Corp Retrieval means control device, retrieval result output device, and program
CN105448292A (en) * 2014-08-19 2016-03-30 北京羽扇智信息科技有限公司 Scene-based real-time voice recognition system and method
CN105868360A (en) * 2016-03-29 2016-08-17 乐视控股(北京)有限公司 Content recommendation method and device based on voice recognition

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050043994A1 (en) * 1996-09-04 2005-02-24 Walker Jay S. Method for allowing a customer to obtain a discounted price for a transaction and terminal for performing the method
US6058364A (en) * 1997-11-20 2000-05-02 At&T Corp. Speech recognition of customer identifiers using adjusted probabilities based on customer attribute parameters
JP2001300099A (en) 2000-04-26 2001-10-30 Ace Denken:Kk Customer management device
US6785647B2 (en) * 2001-04-20 2004-08-31 William R. Hutchison Speech recognition system with network accessible speech processing resources
JP2004046233A (en) 2003-09-04 2004-02-12 Daiichikosho Co Ltd Communication karaoke reproducing terminal
US20070156470A1 (en) * 2005-06-24 2007-07-05 Granucci Nicole J Automatically Calculating A Discount Using A Reservation System
WO2008111190A1 (en) * 2007-03-14 2008-09-18 Pioneer Corporation Accoustic model registration device, speaker recognition device, accoustic model registration method, and accoustic model registration processing program
US20090157472A1 (en) * 2007-12-14 2009-06-18 Kimberly-Clark Worldwide, Inc. Personalized Retail Information Delivery Systems and Methods
JP2009145755A (en) * 2007-12-17 2009-07-02 Toyota Motor Corp Voice recognizer
JP2011043715A (en) 2009-08-21 2011-03-03 Daiichikosho Co Ltd Communication karaoke system for output of message based on personal information of customer by specifying customer based on feature of singing voice
US8412604B1 (en) * 2009-09-03 2013-04-02 Visa International Service Association Financial account segmentation system
US20120072290A1 (en) * 2010-09-20 2012-03-22 International Business Machines Corporation Machine generated dynamic promotion system
US20130006633A1 (en) * 2011-07-01 2013-01-03 Qualcomm Incorporated Learning speech models for mobile device users
JP2014013494A (en) * 2012-07-04 2014-01-23 Nikon Corp Display control device, display system, display device, terminal device, display control method and program
EP2947658A4 (en) * 2013-01-15 2016-09-14 Sony Corp Memory control device, playback control device, and recording medium
US20150324881A1 (en) 2014-05-09 2015-11-12 Myworld, Inc. Commerce System and Method of Providing Intelligent Personal Agents for Identifying Intent to Buy
US9691379B1 (en) * 2014-06-26 2017-06-27 Amazon Technologies, Inc. Selecting from multiple content sources
US9653075B1 (en) * 2015-11-06 2017-05-16 Google Inc. Voice commands across devices
US9898250B1 (en) * 2016-02-12 2018-02-20 Amazon Technologies, Inc. Controlling distributed audio outputs to enable voice output
JP6559079B2 (en) 2016-02-12 2019-08-14 シャープ株式会社 Interactive home appliance system and method performed by a computer to output a message based on interaction with a speaker

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010156825A (en) * 2008-12-26 2010-07-15 Fujitsu Ten Ltd Voice output device
US20130197912A1 (en) * 2012-01-31 2013-08-01 Fujitsu Limited Specific call detecting device and specific call detecting method
JP2013164642A (en) * 2012-02-09 2013-08-22 Nikon Corp Retrieval means control device, retrieval result output device, and program
CN105448292A (en) * 2014-08-19 2016-03-30 北京羽扇智信息科技有限公司 Scene-based real-time voice recognition system and method
CN105868360A (en) * 2016-03-29 2016-08-17 乐视控股(北京)有限公司 Content recommendation method and device based on voice recognition

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111796A (en) * 2019-06-24 2019-08-09 秒针信息技术有限公司 Identify the method and device of identity
CN110111796B (en) * 2019-06-24 2021-09-17 秒针信息技术有限公司 Identity recognition method and device

Also Published As

Publication number Publication date
US20190214023A1 (en) 2019-07-11
KR20190084033A (en) 2019-07-15
EP3540677A4 (en) 2019-10-16
US11289099B2 (en) 2022-03-29
CN109906466B (en) 2023-05-05
EP3540677A1 (en) 2019-09-18
JPWO2018087967A1 (en) 2019-09-26
JP7092035B2 (en) 2022-06-28
WO2018087967A1 (en) 2018-05-17

Similar Documents

Publication Publication Date Title
US8308562B2 (en) Biofeedback for a gaming device, such as an electronic gaming machine (EGM)
US10777199B2 (en) Information processing system, and information processing method
CN104240113B (en) Reward voucher dispensing apparatus and system
CN109906466A (en) Information processing equipment and information processing method
JP2002078939A (en) Game tendency analysis system
JP7032678B1 (en) Information processing equipment, information processing methods, and programs
JP2012192085A (en) Game parlor system
WO2021240923A1 (en) Information processing device, information processing method, and program
KR20000053974A (en) Real time sports betting system through wireline / wireless internet
JP7158771B1 (en) Systems, methods and programs for supporting competitive betting
JP5994540B2 (en) Display system and display control method
US12002461B2 (en) Voice based wagering
JP7393716B2 (en) Information processing device, information processing method and program
JP2004049660A (en) Information providing device for game player
JP2012090751A (en) Game system
JP2024032245A (en) Information processing device, information processing method, and program
JP2023008117A (en) Information processing device, information processing method and program
JP2021146175A (en) Sorting apparatus, information processing method, and program
KR100438503B1 (en) Information supply system, program and information storage medium
JP2014097169A (en) Game machine, game system including the same, control method used for the game machine, and computer program
KR20180043117A (en) System for solving stress
JP2012192084A (en) Game parlor system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant