US20220189479A1 - Communication system and communication control method - Google Patents
- Publication number
- US20220189479A1 (application US17/682,106)
- Authority
- US
- United States
- Prior art keywords
- user
- feedback
- question
- agent
- conversation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G06F16/3329—Natural language query formulation or dialogue systems
- G06F16/337—Profile generation, learning or modification
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G06Q30/0203—Market surveys; Market polls
- G10L13/00—Speech synthesis; Text to speech systems
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- G10L25/63—Speech or voice analysis techniques specially adapted for estimating an emotional state
- G10L2015/223—Execution procedure of a spoken command
- G10L2015/227—Procedures used during a speech recognition process using non-speech characteristics of the speaker; Human-factor methodology
- H04L67/01—Protocols
- H04L67/04—Protocols specially adapted for terminals or networks with limited capabilities; specially adapted for terminal portability
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/22
- H04L67/32
- H04L67/42
- H04L67/535—Tracking the activity of the user
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Definitions
- the present disclosure relates to a communication system and a communication control method.
- Patent Literature 1 discloses a device that realizes natural and smooth dialogue by generating a response for a user in consideration of a feeling of a user.
- Patent Literature 2 discloses a system that is capable of displaying a character with a predetermined shape to have a conversation with a user and displays advertisement information in a form of introduction by the character.
- Patent Literature 3 discloses a dialogue processing device that estimates a feeling of a user in accordance with voice prosodic information, conceptual information of a phrase subjected to voice recognition, a facial image, a pulse rate, and the like and generates an output sentence to be output to the user on the basis of an estimation result.
- Patent Literature 1
- Patent Literature 2
- Patent Literature 3
- The present disclosure proposes a communication system and a communication control method capable of obtaining reliable feedback from a user more naturally through a conversation with an agent, without imposing a burden on the user.
- a communication system including: a communication unit configured to receive request information for requesting feedback on a specific experience of a user; an accumulation unit configured to accumulate the feedback received from a client terminal of the user via the communication unit; and a control unit configured to perform control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to the client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received.
- A communication control method including: by a processor, receiving request information for requesting feedback on a specific experience of a user via a communication unit; performing control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to a client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received; and accumulating the feedback received from the client terminal of the user via the communication unit in an accumulation unit.
- FIG. 1 is an explanatory diagram illustrating an overview of a communication control system according to an embodiment of the present disclosure.
- FIG. 2 is a diagram illustrating an overall configuration of the communication control system according to the embodiment.
- FIG. 3 is a block diagram illustrating an example of a configuration of a voice agent server according to the embodiment.
- FIG. 4 is a diagram illustrating an example of a configuration of a dialogue processing unit according to the embodiment.
- FIG. 5 is a flowchart illustrating a conversation DB generation process according to the embodiment.
- FIG. 6 is a flowchart illustrating a phoneme DB generation process according to the embodiment.
- FIG. 7 is a flowchart illustrating a dialogue control process according to the embodiment.
- FIG. 8 is an explanatory diagram illustrating a data configuration example of the conversation DB according to the embodiment.
- FIG. 9 is a flowchart illustrating a process of updating the conversation DB according to the embodiment.
- FIG. 10 is a flowchart illustrating a conversation data transition process from an individualized layer to a common layer according to the embodiment.
- FIG. 11 is an explanatory diagram illustrating transition of conversation data to a basic dialogue conversation DB according to the embodiment.
- FIG. 12 is a flowchart illustrating a conversation data transition process to a basic dialogue DB according to the embodiment.
- FIG. 13 is a diagram illustrating an example of advertisement information registered in an advertisement DB according to the embodiment.
- FIG. 14 is a flowchart illustrating an advertisement content insertion process according to the embodiment.
- FIG. 15 is a diagram illustrating a configuration example of a feedback acquisition processing unit according to the embodiment.
- FIG. 16 is a diagram illustrating an example of a mission list registered in a mission list DB according to the embodiment.
- FIG. 17 is a diagram illustrating an example of an experience list registered in an experience list DB according to the embodiment.
- FIG. 18 is a flowchart illustrating a feedback acquisition process according to the embodiment.
- FIG. 19 is a flowchart illustrating a mission list generation process according to the embodiment.
- FIG. 20 is a flowchart illustrating an experience list generation process according to the embodiment.
- FIG. 21 is a flowchart illustrating a timing determination process according to the embodiment.
- FIG. 22 is a diagram illustrating an example of a timing index according to the embodiment.
- FIG. 23 is a flowchart illustrating a question sentence data generation process in which reliability is considered according to the embodiment.
- FIGS. 24A, 24B, 24C, and 24D are diagrams illustrating an example of question sentence data adjusted in accordance with the reliability according to the embodiment.
- FIG. 25 is a flowchart illustrating a question sentence data generation process in which the personality traits of a user are considered according to the embodiment.
- FIG. 26 is a diagram illustrating an example of a sales point list of a mission according to the embodiment.
- FIG. 27 is a flowchart illustrating a result generation process according to the embodiment.
- FIG. 28 is a diagram illustrating an example of a result generated through the result generation process according to the embodiment.
- A communication control system according to the embodiment is capable of obtaining reliable feedback on a specific experience from a user who has had the specific experience more naturally through a conversation with an agent, without imposing a burden on the user.
- An overview of the communication control system according to the embodiment will be described with reference to FIG. 1 .
- FIG. 1 is an explanatory diagram illustrating the overview of the communication control system according to an embodiment of the present disclosure.
- a dialogue with an agent can be performed via, for example, a client terminal 1 such as a smartphone owned by a user.
- the client terminal 1 includes a microphone and a speaker, and thus is capable of performing a dialogue with the user by voice.
- A system that realizes a dialogue between a user and an agent is used so that the agent can get feedback from the user naturally and discreetly.
- When the user is relaxed, or at a timing at which the user is free, the agent 10 asks the user a question for getting feedback on an object (a sample or the like), content, or a service (for example, the agent system) experienced by the user.
- This question is reproduced through voice of the agent 10 from the speaker of the client terminal 1 .
- an image of the agent 10 may be displayed on a display of the client terminal 1 .
- The agent 10 asks, for example, “Did you like 00 chocolate that you just ate?” and asks the user to give feedback on “00 chocolate” (which is an example of goods).
- the client terminal 1 can collect speech of the user with the microphone and obtain the feedback of the user.
- the feedback on the goods can be acquired naturally from the user in a manner in which the agent 10 speaks to the user in a dialogue. Since the user can speak to the agent 10 at an unexpected timing, there is a high possibility of the user giving his or her true opinion or impression. In addition, when a character of the agent 10 is suitable for preference of the user or the user is accustomed to the character of the agent 10 , an increase in the possibility of the user giving his or her true feeling is expected. Further, since the user merely speaks his or her feeling or opinion in response to a question of the agent 10 , the effort of accessing a specific web site or inputting a comment is reduced.
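The timing-sensitive questioning described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: `UserContext`, `should_ask_feedback`, and `build_feedback_question` are hypothetical names, and how a "relaxed" or "free" state is actually detected (biometric data, schedule, device activity) is left abstract here.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """Hypothetical snapshot of the user's state (field names are illustrative)."""
    is_relaxed: bool  # e.g. estimated from biometric or activity data
    is_free: bool     # e.g. no scheduled events, low device activity

def should_ask_feedback(ctx: UserContext) -> bool:
    """Ask for feedback only when the user is relaxed or free."""
    return ctx.is_relaxed or ctx.is_free

def build_feedback_question(item: str) -> str:
    """Compose the agent's feedback question for an item the user experienced."""
    return f"Did you like {item} that you just tried?"

# The agent speaks the question only when the user's context allows it.
if should_ask_feedback(UserContext(is_relaxed=True, is_free=False)):
    question = build_feedback_question("the chocolate sample")
```

In a fuller system the context check would run continuously and the question would be vocalized through the agent's voice, as described later for the voice agent I/F 20.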
- The communication control system (agent system) according to the embodiment is not limited to a voice agent that responds by voice; a text-based agent that responds on a text basis may be used in the client terminal 1 .
- FIG. 2 is a diagram illustrating an overall configuration of the communication control system according to the embodiment.
- the communication control system includes the client terminal 1 and an agent server 2 .
- The agent server 2 is connected to the client terminal 1 via a network 3 and transmits and receives data. Specifically, the agent server 2 generates response voice for spoken voice collected by and transmitted from the client terminal 1 and transmits the response voice to the client terminal 1 .
- the agent server 2 includes a phoneme database (DB) corresponding to one or more agents and can generate response voice through the voice of a specific agent.
- the agent may be a character of a cartoon, an animation, a game, a drama, or a movie, an entertainer, a celebrity, a historical person, or the like or may be, for example, an average person of each generation without specifying an individual.
- the agent may be an animal or a personified character.
- the agent may be a person in whom the personality of the user is reflected or a person in whom the personality of a friend, a family member, or an acquaintance of the user is reflected.
- agent server 2 can generate response content in which the personality of each agent is reflected.
- the agent server 2 can supply various services such as management of a schedule of the user, transmission and reception of messages, and supply of information through dialogue with the user via the agent.
- the client terminal 1 is not limited to the smartphone illustrated in FIG. 2 .
- A mobile phone terminal, a tablet terminal, a personal computer (PC), a game device, or a wearable terminal (smart eyeglasses, a smart band, a smart watch, or a smart necklace) may also be used.
- the client terminal 1 may also be a robot.
- Agent Server 2
- FIG. 3 is a block diagram illustrating an example of the configuration of the agent server 2 according to the embodiment.
- the agent server 2 includes a voice agent interface (I/F) 20 , a dialogue processing unit 30 , a phoneme storage unit 40 , a conversation DB generation unit 50 , a phoneme DB generation unit 60 , an advertisement insertion processing unit 70 , an advertisement DB 72 , and a feedback acquisition processing unit 80 .
- the voice agent I/F 20 functions as an input and output unit, a voice recognition unit, and a voice generation unit for voice data.
- As the input and output unit, a communication unit that transmits and receives data to and from the client terminal 1 via the network 3 is assumed.
- the voice agent I/F 20 can receive the spoken voice of the user from the client terminal 1 , process the voice, and convert the spoken voice into text through voice recognition.
- the voice agent I/F 20 processes answer sentence data (text) of the agent output from the dialogue processing unit 30 to vocalize answer voice using phoneme data corresponding to the agent and transmits the generated answer voice of the agent to the client terminal 1 .
- the dialogue processing unit 30 functions as an arithmetic processing device and a control device and controls overall operations in the agent server 2 in accordance with various programs.
- the dialogue processing unit 30 is realized by, for example, an electronic circuit such as a central processing unit (CPU) or a microprocessor.
- the dialogue processing unit 30 according to the embodiment functions as a basic dialogue processing unit 31 , a character A dialogue processing unit 32 , a person B dialogue processing unit 33 , and a person C dialogue processing unit 34 .
- the character A dialogue processing unit 32 , the person B dialogue processing unit 33 , and the person C dialogue processing unit 34 realize dialogue specialized for each agent.
- examples of the agent include a “character A,” a “person B,” and a “person C” and the embodiment is, of course, not limited thereto.
- Dialogue processing units realizing dialogue specialized for other agents may be further included.
- the basic dialogue processing unit 31 realizes general-purpose dialogue not specialized for each agent.
- FIG. 4 is a diagram illustrating an example of a configuration of the dialogue processing unit 300 according to the embodiment.
- the dialogue processing unit 300 includes a question sentence retrieval unit 310 , an answer sentence generation unit 320 , a phoneme data acquisition unit 340 , and a conversation DB 330 .
- The conversation DB 330 stores conversation data in which question sentence data and answer sentence data are paired.
- conversation data specialized for the agent is stored in the conversation DB 330 .
- General-purpose data, that is, basic conversation data not specific to the agent, is stored in the conversation DB 330 .
- the question sentence retrieval unit 310 recognizes question voice (which is an example of spoken voice) of the user output from the voice agent I/F 20 and retrieves question sentence data matching the question sentence converted into text from the conversation DB 330 .
- the answer sentence generation unit 320 extracts the answer sentence data stored in association with the question sentence data retrieved by the question sentence retrieval unit 310 from the conversation DB 330 and generates the answer sentence data.
- the phoneme data acquisition unit 340 acquires phoneme data for vocalizing an answer sentence generated by the answer sentence generation unit 320 from the phoneme storage unit 40 of the corresponding agent. For example, in the case of the character A dialogue processing unit 32 , phoneme data for reproducing answer sentence data through the voice of the character A is acquired from the character A phoneme DB 42 . Then, the dialogue processing unit 300 outputs the generated answer sentence data and the acquired phoneme data to the voice agent I/F 20 .
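The flow above (question sentence retrieval, answer sentence generation, phoneme data acquisition) can be sketched roughly as follows. The dictionary-backed stores and helper names are illustrative stand-ins for the conversation DB 330 and the phoneme storage unit 40, not the patent's actual data structures.

```python
from typing import Optional

# Stand-in for the conversation DB 330: question sentence data
# paired with answer sentence data.
conversation_db = {
    "good morning": "Good morning! How are you feeling today?",
    "what is your name": "I am character A.",
}

# Stand-in for an agent's phoneme DB (e.g. the character A phoneme DB 42):
# answer sentence -> phoneme data used to vocalize it.
phoneme_db = {
    "I am character A.": b"<phoneme segments + prosodic model>",
}

def retrieve_answer(question_text: str) -> Optional[str]:
    """Retrieve the answer sentence paired with the matching question sentence."""
    return conversation_db.get(question_text.lower().strip("?!. "))

def acquire_phonemes(answer_text: str) -> Optional[bytes]:
    """Acquire phoneme data for vocalizing the generated answer sentence."""
    return phoneme_db.get(answer_text)
```

Both lookups are exact-match here for brevity; the text only requires that a "matching" question sentence be retrieved, so a real system could use fuzzier matching.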
- the phoneme storage unit 40 stores a phoneme database for generating voice of each agent.
- the phoneme storage unit 40 can be realized by a read-only memory (ROM) and a random access memory (RAM).
- A basic phoneme DB 41 , a character A phoneme DB 42 , a person B phoneme DB 43 , and a person C phoneme DB 44 are stored.
- In each phoneme DB, a phoneme segment and a prosodic model, which is control information for the phoneme segment, are stored as phoneme data.
- the conversation DB generation unit 50 has a function of generating the conversation DB 330 of the dialogue processing unit 300 .
- the conversation DB generation unit 50 collects assumed question sentence data, collects answer sentence data corresponding to each question, and subsequently pairs and stores the question sentence data and the answer sentence data. Then, when a predetermined number of pieces of conversation data (pairs of question sentence data and answer sentence data: for example, 100 pairs) are collected, the conversation DB generation unit 50 registers the conversation data as a set of conversation data of the agent in the conversation DB 330 .
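The collect-then-register behavior can be sketched as below. The class name and in-memory structures are hypothetical; the 100-pair threshold is the example figure given in the text.

```python
PAIR_THRESHOLD = 100  # example threshold from the text (100 pairs)

class ConversationDBGenerator:
    """Collects question/answer pairs and registers them as a set of
    conversation data for an agent once the predetermined number is reached."""

    def __init__(self) -> None:
        self.pending = []     # collected (question sentence, answer sentence) pairs
        self.registered = {}  # agent name -> registered set of conversation data

    def add_pair(self, question: str, answer: str) -> None:
        """Pair and store one assumed question sentence with its answer."""
        self.pending.append((question, answer))

    def try_register(self, agent: str) -> bool:
        """Register the pending pairs for the agent if enough are collected."""
        if len(self.pending) < PAIR_THRESHOLD:
            return False
        self.registered[agent] = list(self.pending)
        self.pending.clear()
        return True
```

The threshold check mirrors step S 109 in FIG. 5; registration corresponds to step S 112.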
- the phoneme DB generation unit 60 has a function of generating the phoneme DB stored in the phoneme storage unit 40 .
- the phoneme DB generation unit 60 analyzes voice information of predetermined read text, decomposes the voice information into the phoneme segment and the prosodic model which is control information, and performs a process of registering a predetermined number or more of pieces of voice information as phoneme data in the phoneme DB when the predetermined number or more of pieces of voice information are collected.
- the advertisement insertion processing unit 70 has a function of inserting advertisement information into dialogue of the agent.
- the advertisement information to be inserted can be extracted from the advertisement DB 72 .
- The advertisement information (for example, information such as advertisement content of text, an image, voice, or the like, an advertiser, an advertisement period, and an advertisement target person) is registered in the advertisement DB 72 by a supply side such as a company (a vendor or a supplier).
- the feedback acquisition processing unit 80 has a function of inserting a question for acquiring feedback into dialogue of the agent and obtaining the feedback from the user.
- each configuration of the agent server 2 may be achieved by another server on a network.
- FIG. 5 is a flowchart illustrating the process of generating the conversation DB 330 according to the embodiment. As illustrated in FIG. 5 , the conversation DB generation unit 50 first stores assumed question sentences (step S 103 ).
- the conversation DB generation unit 50 stores answer sentences corresponding to (paired with) the question sentences (step S 106 ).
- the conversation DB generation unit 50 determines whether a predetermined number of pairs of question sentences and answer sentences (also referred to as conversation data) are collected (step S 109 ).
- When the predetermined number of pairs are collected, the conversation DB generation unit 50 registers the data sets formed of the pairs of question sentences and answer sentences in the conversation DB 330 (step S 112 ).
- As the pairs of question sentences and answer sentences, for example, the following pairs are assumed.
- the pairs can be registered as conversation data in the conversation DB 330 .
- FIG. 6 is a flowchart illustrating a phoneme DB generation process according to the embodiment.
- the phoneme DB generation unit 60 first displays an example sentence (step S 113 ).
- an example sentence necessary to generate phoneme data is displayed on a display of an information processing terminal (not illustrated).
- the phoneme DB generation unit 60 records voice reading the example sentence (step S 116 ) and analyzes the recorded voice (step S 119 ). For example, voice information read by a person who takes charge of the voice of an agent is collected by the microphone of the information processing terminal. Then, the phoneme DB generation unit 60 receives and stores the voice information and further performs voice analysis.
- the phoneme DB generation unit 60 generates a prosodic model on the basis of the voice information (step S 122 ).
- The prosodic model is obtained by extracting prosodic parameters indicating prosodic features of the voice (for example, a tone of the voice, strength of the voice, and a speech speed) and differs for each person.
- the phoneme DB generation unit 60 generates a phoneme segment (phoneme data) on the basis of the voice information (step S 125 ).
- the phoneme DB generation unit 60 stores the prosodic model and the phoneme segment (step S 128 ).
- the phoneme DB generation unit 60 determines whether a predetermined number of the prosodic models and the phoneme segments are collected (step S 131 ).
- the phoneme DB generation unit 60 registers the prosodic models and the phoneme segments as a phoneme database for a predetermined agent in the phoneme storage unit 40 (step S 134 ).
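The loop in steps S 113 to S 134 can be sketched as follows. The analysis itself is replaced by a placeholder, since the actual extraction of prosodic parameters and phoneme segments is signal processing beyond this illustration, and the sample count is an arbitrary stand-in for the "predetermined number."

```python
REQUIRED_SAMPLES = 3  # stand-in for the "predetermined number" in step S 131

def analyze_recording(recording: str):
    """Placeholder analysis returning (prosodic model, phoneme segment).
    A real implementation would extract tone, strength, and speech speed
    from the recorded voice."""
    prosodic_model = {"tone": "neutral", "strength": 1.0, "speed": 1.0}
    phoneme_segment = f"segments({recording})"
    return prosodic_model, phoneme_segment

# Record and analyze readings of example sentences (steps S 116 - S 128).
collected = [analyze_recording(take) for take in ["take-1", "take-2", "take-3"]]

# Register as the agent's phoneme DB once enough data is collected (S 131 - S 134).
phoneme_storage = {}
if len(collected) >= REQUIRED_SAMPLES:
    phoneme_storage["character A"] = collected
```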
- FIG. 7 is a flowchart illustrating a dialogue control process according to the embodiment.
- the voice agent I/F 20 first confirms whether question voice and an agent ID of a user are acquired (step S 143 ).
- the agent ID is identification information indicating a specific agent such as the character A, the person B, or the person C.
- The user can purchase the phoneme data of each agent; for example, the ID of an agent purchased in a purchase process is stored in the client terminal 1 .
- the voice agent I/F 20 converts the question voice into text through voice recognition (step S 149 ).
- The voice agent I/F 20 outputs the question sentence converted into text to the dialogue processing unit of the specific agent designated with the agent ID. For example, in the case of “agent ID: agent A,” the voice agent I/F 20 outputs the question sentence converted into text to the character A dialogue processing unit 32 .
- the dialogue processing unit 30 retrieves a question sentence matching the question sentence converted into text from the conversation DB of the specific agent designated with the agent ID (step S 152 ).
- When a matching question sentence is found (step S 155 ), the character A dialogue processing unit 32 acquires the answer sentence data stored in association with (paired with) the question from the conversation DB of the specific agent (step S 158 ).
- When no matching question sentence is found (step S 155 ), a question sentence matching the question sentence converted into text is retrieved from the conversation DB of the basic dialogue processing unit 31 (step S 161 ).
- When a match is found, the basic dialogue processing unit 31 acquires the answer sentence data stored in association with (paired with) the question from its conversation DB (step S 167 ).
- In a case in which there is no matching question sentence, the basic dialogue processing unit 31 acquires answer sentence data for that case (for example, the answer sentence “I don't understand the question”) (step S 170 ).
- the character A dialogue processing unit 32 acquires phoneme data of the character A for generating voice of the answer sentence data with reference to the phoneme DB (herein, the character A phoneme DB 42 ) of the specific agent designated with the agent ID (step S 173 ).
- the acquired phoneme data and answer sentence data are output to the voice agent I/F 20 (step S 176 ).
- the voice agent I/F 20 vocalizes the answer sentence data (text) (voice synthesis) using the phoneme data and transmits the answer sentence data to the client terminal 1 (step S 179 ).
- the client terminal 1 reproduces the answer sentence through the voice of the character A.
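- The retrieval flow above (steps S 152 to S 170 ) can be sketched as follows. This is an illustrative assumption, not the patented implementation: the function and database names (`answer_for`, `AGENT_DBS`, `BASIC_DB`) are invented for the example, which shows only the lookup order: the specific agent's conversation DB first, then the basic dialogue DB, then a fallback answer.

```python
# Hypothetical sketch of the dialogue control flow of FIG. 7. All names are
# illustrative assumptions; only the lookup order mirrors the text.

AGENT_DBS = {
    "agent_A": {"good morning": "Good morning! How are you today?"},
}
BASIC_DB = {"hello": "Hello."}
FALLBACK = "I don't understand the question."

def answer_for(agent_id, question):
    """Return the answer sentence for a recognized question (steps S152-S170)."""
    agent_db = AGENT_DBS.get(agent_id, {})
    key = question.strip().lower()
    if key in agent_db:        # match in the specific agent's DB (S155/S158)
        return agent_db[key]
    if key in BASIC_DB:        # match in the basic dialogue DB (S161-S167)
        return BASIC_DB[key]
    return FALLBACK            # no matching question sentence (S170)
```

The returned answer sentence would then be vocalized with the agent's phoneme data before transmission to the client terminal.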
- FIG. 8 is an explanatory diagram illustrating a data configuration example of the conversation DB 330 according to the embodiment.
- each conversation DB 330 includes two layers, an individualized layer 331 and a common layer 332 .
- conversation data in which personality or a feature of the character A is reflected is retained in the common layer 332 A.
- In the individualized layer 331 A, conversation data customized only for a user through conversations with the user is retained.
- the character A phoneme DB 42 and the character A dialogue processing unit 32 are supplied (sold) as a set to users. Then, certain users X and Y perform dialogues with the same character A at first (conversation data retained in the common layer 332 A is used). However, as the dialogues continue, conversation data customized only for each user is accumulated in the individualized layer 331 A for each user. In this way, it is possible to supply the users X and Y with dialogues with the character A in accordance with preferences of the users X and Y. In addition, even in a case in which the agent “person B” is an average person of each generation who has no specific personality such as the character A, the conversation data can be customized only for the user.
- the “person B” is a “person in his or her twenties”
- average conversation data of his or her twenties is retained in the common layer 332 B and dialogue with the user is continued so that the customized conversation data is retained in the individualized layer 331 B of each user.
- customized conversation data is retained in the individualized layer 331 B for each user.
- the user can also select favorite phoneme data such as “male,” “female,” “high-tone voice,” or “low-tone voice” as the voice of the person B from the person B phoneme DB 43 and can purchase the favorite phoneme data.
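- The two-layer lookup described above can be sketched as follows. This is a minimal illustration under assumed names (`ConversationDB`, `customize`, `lookup`): each agent DB holds a common layer shared by all users and an individualized layer per user, and a pair customized for a user takes precedence over the shared pair.

```python
# Minimal sketch of the two-layer conversation DB of FIG. 8. Class and method
# names are illustrative assumptions.

class ConversationDB:
    def __init__(self, common):
        self.common = common    # common layer 332: shared question/answer pairs
        self.individual = {}    # individualized layer 331, keyed by user ID

    def customize(self, user_id, question, answer):
        # retain a pair customized only for this user (individualized layer)
        self.individual.setdefault(user_id, {})[question] = answer

    def lookup(self, user_id, question):
        layer = self.individual.get(user_id, {})
        if question in layer:   # the user-specific pair wins
            return layer[question]
        return self.common.get(question)  # fall back to the common layer
```

With this shape, users X and Y initially receive the same answers from the common layer, and diverge only once their individualized layers accumulate customized pairs.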
- FIG. 9 is a flowchart illustrating a process of updating the conversation DB 330 according to the embodiment.
- the voice agent I/F 20 first acquires (receives) question voice of the user from the client terminal 1 and converts the question voice into text through voice recognition (step S 183 ).
- the data (question sentence data) converted into text is output to the dialogue processing unit (herein, for example, the character A dialogue processing unit 32 ) of the specific agent designated by the agent ID.
- the character A dialogue processing unit 32 determines whether the question sentence data is a predetermined command (step S 186 ).
- the character A dialogue processing unit 32 registers answer sentence data designated by the user as a pair with the question sentence data in the individualized layer 331 A of the conversation DB 330 A (step S 189 ).
- the predetermined command may be, for example, a word “NG” or “Setting.”
- the conversation DB of the character A can be customized in accordance with a flow of the following conversation.
- “NG” is the predetermined command.
- the character A dialogue processing unit 32 registers answer sentence data "Fine, do your best" designated by the user as a pair with the question sentence data "Good morning" in the individualized layer 331 A of the conversation DB 330 A.
- the character A dialogue processing unit 32 retrieves the answer sentence data retained as the pair with the question sentence data from the character A conversation DB 330 A.
- in a case in which answer sentence data paired with the question sentence data is not retained in the character A conversation DB 330 A, that is, the question of the user is a question with no answer sentence (Yes in step S 192 ),
- the character A dialogue processing unit 32 registers the answer sentence data designated by the user as a pair with the question sentence in the individualized layer 331 A (step S 195 ). For example, in a flow of the following conversation, the conversation DB of the character A can be customized.
- Character A “I can't understand the question” (answer data example in case in which there is no corresponding answer)
- the character A dialogue processing unit 32 acquires the answer sentence data and outputs the answer sentence data along with the corresponding phoneme data of the character A to the voice agent I/F 20 . Then, the answer sentence is reproduced through the voice of the character A in the client terminal 1 (step S 198 ).
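- The update flow of FIG. 9 , in which the predetermined command "NG" causes the next user utterance to be registered as the corrected answer, can be sketched as a small state machine. The turn-handling here is an assumption for illustration; the class and method names are invented.

```python
# Hedged sketch of the conversation DB update flow of FIG. 9: after the
# predetermined command "NG", the following utterance is registered in the
# individualized layer as the answer paired with the previous question.

NG_COMMAND = "NG"

class Customizer:
    def __init__(self, individualized):
        self.layer = individualized      # individualized layer 331A of one user
        self.last_question = None
        self.awaiting_correction = False

    def handle(self, utterance, default_answer):
        if self.awaiting_correction:
            # register the user-designated answer as a pair with the question (S189)
            self.layer[self.last_question] = utterance
            self.awaiting_correction = False
            return None
        if utterance == NG_COMMAND:      # predetermined command detected (S186)
            self.awaiting_correction = True
            return None
        self.last_question = utterance
        return self.layer.get(utterance, default_answer)
```

After the correction, the same question returns the user-designated answer instead of the default one.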
- FIG. 10 is a flowchart illustrating conversation data transition process from an individualized layer to a common layer according to the embodiment.
- the conversation data transition process from the individualized layer 331 A to the common layer 332 A of the character A dialogue processing unit 32 will be described.
- the character A dialogue processing unit 32 first searches the individualized layer 331 A for each user periodically (step S 203 ) and extracts conversation pairs with substantially the same content (the pair of question sentence data and answer sentence data) (step S 206 ).
- As the conversation pairs with substantially the same content, for example, a pair of the question sentence "Fine?" and the answer sentence "Fine today!" and a pair of the question sentence "Are you fine?" and the answer sentence "Fine today!" can be determined to be conversation pairs with substantially the same content because the question sentences differ only in whether a polite expression is used.
- In a case in which a predetermined number of conversation pairs has been extracted (step S 209 ), the character A dialogue processing unit 32 registers the conversation pairs in the common layer 332 A (for each user) (step S 212 ).
- the common layer 332 can be extended (the conversation pairs can be expanded).
- the conversation data can transition from the conversation DB (specifically, the common layer) of the specific agent to the basic dialogue conversation DB, and thus the basic dialogue conversation DB can also be extended.
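- The promotion of matching pairs from the individualized layers to the common layer can be sketched as follows. The normalization function below is a toy stand-in for the "polite expression" matching in the text, and the threshold value and names are assumptions.

```python
# Illustrative sketch of the transition process of FIG. 10: conversation pairs
# appearing in substantially the same form in a predetermined number of users'
# individualized layers are registered in the common layer.

from collections import Counter

def normalize(question):
    # toy canonicalization: "Are you fine?" and "Fine?" map to the same key
    q = question.lower().rstrip("?")
    return q.replace("are you ", "")

def promote(individual_layers, common_layer, threshold=2):
    counts = Counter()
    pairs = {}
    for layer in individual_layers.values():
        for q, a in layer.items():
            key = (normalize(q), a)
            counts[key] += 1
            pairs[key] = (q, a)
    for key, n in counts.items():
        if n >= threshold:          # predetermined number reached (S209)
            q, a = pairs[key]
            common_layer[q] = a     # register in the common layer (S212)
```

The same counting step, applied across the common layers of several agent DBs, would extend the basic dialogue conversation DB as described for FIG. 12 .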
- FIG. 11 is an explanatory diagram illustrating transition of conversation data to the basic dialogue conversation DB 330 F according to the embodiment. For example, in a case in which the users X and Y each select (purchase) the agent "character A" and a user Z selects (purchases) the agent "person B," as illustrated in FIG. 11 ,
- a character A conversation DB 330 A-X of the user X, a character A conversation DB 330 A-Y of the user Y, and a person B conversation DB 330 -Z of the user Z can be included in the dialogue processing unit 30 .
- unique (customized) conversation pairs are gradually registered in accordance with dialogues with the users X, Y, and Z (see FIG. 9 ).
- When substantially the same conversation pairs in the individualized layers 331 A-X and 331 A-Y reach a predetermined number, the substantially same conversation pairs are registered in the common layers 332 A-X and 332 A-Y for the respective users (see FIG. 10 ).
- the dialogue processing unit 30 causes the conversation pairs to transition to a high-order basic dialogue conversation DB 330 F.
- the basic dialogue conversation DB 330 F is a conversation DB included in the basic dialogue processing unit 31 .
- FIG. 12 is a flowchart illustrating the conversation data transition process to the basic dialogue DB 330 F according to the embodiment.
- the dialogue processing unit 30 first searches the plurality of common layers 332 of the conversation DBs 330 periodically (step S 223 ) and extracts substantially the same conversation pairs (step S 226 ).
- the dialogue processing unit 30 registers the conversation pairs in the basic dialogue conversation DB 330 F (step S 232 ).
- the advertisement insertion processing unit 70 can insert advertisement information stored in the advertisement DB 72 into speech of an agent.
- the advertisement information can be registered in advance in the advertisement DB 72 .
- FIG. 13 is a diagram illustrating an example of advertisement information registered in the advertisement DB 72 according to the embodiment.
- advertisement information 621 includes, for example, an agent ID, a question sentence, advertisement content, a condition, and a probability.
- the agent ID designates an agent speaking advertisement content
- the question sentence designates a question sentence of a user which serves as a trigger and into which advertisement content is inserted
- the advertisement content is an advertisement sentence inserted into dialogue of an agent.
- the condition is a condition on which advertisement content is inserted, and the probability indicates a probability at which advertisement content is inserted. For example, in the example illustrated in the first row of FIG. 13 , a probability at which the advertisement is inserted may be set.
- the probability may be decided in accordance with advertisement charges. For example, the probability is set to be higher as the advertisement charges are higher.
- FIG. 14 is a flowchart illustrating the advertisement content insertion process according to the embodiment.
- the advertisement insertion processing unit 70 first monitors dialogue (specifically, a dialogue process by the dialogue processing unit 30 ) between the user and the agent (step S 243 ).
- the advertisement insertion processing unit 70 determines whether a question sentence with the same content as a question sentence registered in the advertisement DB 72 appears in the dialogue between the user and the agent (step S 246 ).
- the advertisement insertion processing unit 70 confirms the condition and the probability of the advertisement insertion associated with the corresponding question sentence (step S 249 ).
- the advertisement insertion processing unit 70 determines whether a current state is an advertising state on the basis of the condition and the probability (step S 252 ).
- the advertisement insertion processing unit 70 temporarily interrupts the dialogue process by the dialogue processing unit 30 (step S 255 ) and inserts the advertisement content into the dialogue (step S 258 ). Specifically, for example, the advertisement content is inserted into an answer sentence of the agent for the question sentence of the user.
- the dialogue (conversation sentence data) including the advertisement content is output from the dialogue processing unit 30 to the voice agent I/F 20 , is transmitted from the voice agent I/F 20 to the client terminal 1 , and is reproduced through voice of the agent (step S 261 ).
- the advertisement content can be presented as speech of the character A to the user, for example, in the following conversation.
- the advertisement insertion processing unit 70 performs the advertisement insertion process and outputs the answer sentence with the advertisement content “I heard that grilled meat at CC store is delicious” through the voice of the character A.
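- The insertion decision of FIG. 14 can be sketched as follows. The advertisement DB record shape, the trigger sentence, the age condition, and the probability value are all assumptions for illustration, mirroring the fields listed for FIG. 13 (agent ID, question sentence, advertisement content, condition, and probability).

```python
# Sketch of the advertisement insertion of FIGS. 13-14: when a registered
# trigger question appears and the condition holds, the advertisement content
# is inserted into the agent's answer with the registered probability.

import random

AD_DB = [
    {
        "agent_id": "character_A",
        "trigger": "what should we eat",
        "content": "I heard that grilled meat at CC store is delicious",
        "condition": lambda user: user.get("age", 0) <= 30,  # assumed condition
        "probability": 0.7,
    },
]

def maybe_insert_ad(agent_id, question, answer, user, rng=random.random):
    for ad in AD_DB:
        if (ad["agent_id"] == agent_id
                and ad["trigger"] in question.lower()
                and ad["condition"](user)        # confirm the condition (S249)
                and rng() < ad["probability"]):  # probabilistic insertion (S252)
            return f"{answer} {ad['content']}."  # insert into the answer (S258)
    return answer
```

Passing a fixed `rng` makes the probabilistic branch deterministic for testing; in operation the draw would simply use `random.random`.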
- the feedback acquisition processing unit 80 can obtain reliable feedback on a specific experience more naturally through a dialogue with an agent from a user who has had the specific experience, without imposing a burden on the user.
- the feedback acquisition processing unit 80 will be described specifically with reference to FIGS. 15, 16, 17, 18, 19, 20, 21, 22, 23, 24A, 24B, 24C, 24D, 25, 26, 27, and 28 .
- FIG. 15 is a diagram illustrating a configuration example of the feedback acquisition processing unit 80 according to the embodiment.
- the feedback acquisition processing unit 80 includes a list confirmation unit 801 , a timing determination unit 802 , an acquisition control unit 803 , a result generation unit 804 , a mission list DB 810 , an experience list DB 811 , a user situation DB 812 , a user feeling DB 813 , an individual characteristic DB 814 , and a question sentence DB 815 .
- the list confirmation unit 801 confirms a mission registered in the mission list DB 810 and estimates whether the user has a specific experience which is a mission target.
- In the mission list DB 810 , a feedback mission requested by a company that provides an experience (specifically, a company that provides an object or content) or by a questionnaire agent company that receives a request from a company and conducts questionnaires on an experience is registered.
- the feedback mission is transmitted via the network 3 from, for example, an information processing device of a company or the like, is received by the communication unit included in the voice agent I/F 20 , and is output to the feedback acquisition processing unit 80 .
- FIG. 16 illustrates an example of a mission list registered in the mission list DB 810 according to the embodiment.
- the mission list includes mission details (specifically, which feedback on which experiences is obtained) and a time limit in which a mission is to be executed. For example, a mission for obtaining feedback (an opinion, an impression, or the like) on a chocolate sample of BB company or a mission for obtaining feedback on a ramen sample of DD company is registered.
- the list confirmation unit 801 estimates whether the user has a specific experience (behavior) of “eating the chocolate sample of BB company” or a specific experience of “eating a ramen sample of DD company.”
- the experience may be estimated, for example, by requesting the dialogue processing unit 30 to output a question sentence ("Have you eaten chocolate of BB company?") for directly confirming with the user whether the user has the specific experience, so that the question sentence is output through the voice of the agent.
- the list confirmation unit 801 can also estimate an experience with reference to the user situation DB 812 in which situations of the user are accumulated.
- In the user situation DB 812 , situations of the user based on information acquired from an external server that performs a schedule management service or the like, the context of dialogues acquired from the dialogue processing unit 30 , and the like are stored.
- behavior information of an individual user may be acquired from a wearable terminal (a transmissive or non-transmissive head-mounted display (HMD), a smart band, a smart watch, smart eyeglasses, or the like) worn on the body of the user, and the behavior information of the individual user may be accumulated as user situations in the user situation DB 812 .
- Examples of the behavior information of the individual user acquired from the wearable terminal include acceleration sensor information, various kinds of biological information, positional information, and a captured image captured in the periphery of the user (including an angle of view of the user) by a camera installed in the wearable terminal.
- the list confirmation unit 801 registers the experience information in the experience list DB 811 when it is confirmed that the user has an experience of a mission target.
- FIG. 17 is a diagram illustrating an example of an experience list registered in the experience list DB 811 according to the embodiment. For example, in a case in which it could be confirmed that the user has, for example, the specific experience of “eating the chocolate sample of BB company,” as illustrated in FIG. 17 , the mission targeting the experience “obtaining feedback on the chocolate sample of BB company” can be registered in conjunction with experience date information “Jan. 2, 20XX.”
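- The list confirmation step can be sketched as follows. The data shapes (`MISSION_LIST` records, `(date, behavior)` situation tuples) are assumptions for illustration: each mission in the mission list is checked against accumulated user situations, and a confirmed experience is registered with its experience date, as in FIG. 17 .

```python
# Toy sketch of the list confirmation of FIGS. 16-17: confirmed experiences
# of mission targets are registered in the experience list with their dates.

MISSION_LIST = [
    {"mission": "obtain feedback on the chocolate sample of BB company",
     "experience": "ate the chocolate sample of BB company",
     "time_limit": "Feb. 1, 20XX"},
]

def build_experience_list(user_situations):
    """user_situations: list of (date, behavior) records from the user situation DB."""
    experience_list = []
    for mission in MISSION_LIST:
        for date, behavior in user_situations:
            if behavior == mission["experience"]:  # user had the target experience
                experience_list.append(
                    {"mission": mission["mission"], "experience_date": date})
                break
    return experience_list
```

In practice the behavior match would come from wearable-terminal sensing or a confirming question through the agent rather than string equality.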
- the timing determination unit 802 has a function of determining a timing to execute a mission registered in the experience list DB 811 in accordance with context of the user.
- the context of the user is a current situation or feeling of the user and can be determined with reference to, for example, the user situation DB 812 , the user feeling DB 813 , or the individual characteristic DB 814 .
- the user feeling DB 813 is a storage unit that stores a history of user feelings.
- the user feelings stored in the user feeling DB 813 can be estimated on the basis of biological information (a pulse rate, a heart rate, a heart sound, a blood pressure, respiration, a body temperature, a perspiration amount, an electroencephalogram, myoelectricity, or the like), voice information (intonation of a voice), or a captured image (a facial image, an eye image, or the like of the user) acquired from a wearable terminal worn by the user.
- the user feelings may also be estimated from context of a conversation between the user and the agent performed through the dialogue processing unit 30 or a result of voice analysis.
- the individual characteristic DB 814 is a storage unit that stores personality traits, habits, or the like of an individual. While the user situation DB 812 or the user feeling DB 813 stores the situations (a history of the situations) of the user for a relatively short time, the individual characteristic DB 814 stores the personality traits or habits of the individual user over a relatively long time such as half a year or one year.
- the timing determination unit 802 acquires a current situation of the user from the user situation DB 812 and determines an appropriate timing to execute a mission, that is, to ask the user a question for obtaining feedback on a specific experience. More specifically, the timing determination unit 802 may determine a period of time in which there is no schedule on the basis of schedule information of the user as an appropriate timing. In addition, the timing determination unit 802 may acquire a current feeling of the user from the user feeling DB 813 and determine the appropriate timing to ask the user a question for obtaining feedback on the specific experience. More specifically, the timing determination unit 802 may determine the appropriate timing so that a time at which the user is experiencing an intense emotion, is in an excited state, or is in a busy and nervous state is avoided. The details of a timing determination process will be described below.
- the acquisition control unit 803 performs control such that question sentence data for obtaining the feedback on the specific experience is generated, the question is output as speech of the agent at the timing determined by the timing determination unit 802 from the client terminal 1 , and an answer of the user to the question is acquired as feedback.
- the question sentence data is output from the client terminal 1 via the dialogue processing unit 30 and the voice agent I/F 20 .
- the question sentence data is generated with reference to the question sentence DB 815 . The details of a process of generating the question sentence data and a process of acquiring the feedback in the acquisition control unit 803 will be described below.
- the result generation unit 804 generates a result on the basis of the feedback acquired from the user.
- the result generation unit 804 may generate the result in consideration of a user state at the time of the answer in addition to a voice recognition result (text) of answer voice of the user to the question.
- the result of the feedback can be matched (associated) with the mission list of the mission list DB 810 to be stored in the mission list DB 810 .
- the generated result can be provided as an answer to, for example, a company or the like that has registered the mission.
- the generated result is matched with the mission list to be stored in the mission list DB 810 , but the embodiment is not limited thereto.
- the generated result may be matched with the mission list to be stored in another DB (storage unit).
- FIG. 18 is a flowchart illustrating a feedback acquisition process according to the embodiment.
- the feedback acquisition processing unit 80 first acquires a feedback request, a request time limit, and the like from, for example, a company that provides an experience (provides an object or content) or an information processing device of a questionnaire agent company side that receives a request from a company and conducts questionnaires (step S 270 ).
- the feedback acquisition processing unit 80 generates a mission list by registering the acquired mission information in the mission list DB 810 (step S 273 ).
- the details of the mission list generation process are illustrated in FIG. 19 .
- the feedback acquisition processing unit 80 checks the feedback request from a company or the like (step S 303 ).
- the feedback request can be registered as a mission list in the mission list DB 810 (step S 309 ).
- An example of the mission list registered in the mission list DB 810 has been described above with reference to FIG. 16 .
- the feedback acquisition processing unit 80 causes the list confirmation unit 801 to confirm whether the user has an experience of a mission target and performs generating of the experience list (step S 276 ).
- the details of an experience list generation process are illustrated in FIG. 20 .
- the list confirmation unit 801 confirms whether the user has the experience which is the mission target (step S 315 ).
- experience information is registered as a list of the experience which the user already has in the experience list DB 811 (step S 321 ).
- An example of the experience list registered in the experience list DB 811 has been described above with reference to FIG. 17 .
- FIG. 21 is a flowchart illustrating the timing determination process according to the embodiment. As illustrated in FIG. 21 , the timing determination unit 802 first confirms whether the list of the experience which the user already has is registered in the experience list DB 811 (step S 333 ).
- the timing determination unit 802 calculates an index indicating appropriateness of a timing on the basis of a situation of the user (step S 339 ).
- the situation of the user is schedule information, a behavior state, or the like of the user and is acquired from the user situation DB 812 .
- the situations of the user are periodically accumulated so that a change in a user situation for a relatively short time can be ascertained.
- the user situation is associated with the index indicating appropriateness of a timing for obtaining feedback.
- the timing determination unit 802 calculates the index indicating appropriateness of a timing on the basis of a feeling of the user (step S 342 ).
- a feeling of the user is acquired from the user feeling DB 813 .
- feelings of the user are periodically accumulated so that a change in a user feeling for a relatively short time can be ascertained.
- the user feeling is associated with an index indicating appropriateness of a timing for obtaining feedback.
- the timing determination unit 802 calculates a sum value (or an average value) of the indexes on the basis of a timing index based on the user situation and a timing index based on the user feeling and determines whether the calculated index exceeds a predetermined threshold (step S 345 ).
- the timing determination unit 802 determines that the timing is appropriate for obtaining the feedback (step S 348 ).
- an appropriate timing is determined on the basis of the two components, the user situation and the user feeling.
- examples of a timing index are illustrated in FIG. 22 .
- As the indexes, for example, numerical values of −5 to +5 are used.
- the timing determination unit 802 calculates an average value of an index of "+3" corresponding to the user situation and an index of "+4" corresponding to the user feeling as a timing index by the following Expression 1: (+3 + (+4)) / 2 = +3.5.
- the timing determination unit 802 can determine that a present time is a timing appropriate for obtaining feedback since the calculated index of “+3.5” exceeds the threshold.
- the appropriate timing is determined on the basis of the two components, the user situation and the user feeling.
- the embodiment is not limited thereto.
- the timing may be determined using at least one of the user situation and the user feeling.
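- The timing determination of FIG. 21 can be sketched as a worked example. The index tables and the threshold value below are assumptions; only the averaging of a situation index and a feeling index (Expression 1) and the comparison against a threshold follow the text.

```python
# Worked sketch of the timing determination of FIG. 21: indexes from -5 to +5
# are attached to the user situation and user feeling, averaged, and compared
# against a threshold. Index tables and threshold are assumed values.

SITUATION_INDEX = {"no schedule": +3, "in a meeting": -4}
FEELING_INDEX = {"calm": +4, "excited": -3, "nervous": -5}

def is_appropriate_timing(situation, feeling, threshold=3.0):
    # Expression 1: average of the situation index and the feeling index
    index = (SITUATION_INDEX[situation] + FEELING_INDEX[feeling]) / 2
    return index > threshold, index

# e.g. a situation index of +3 and a feeling index of +4 give (3 + 4) / 2 = +3.5,
# which exceeds the threshold, so the present time is appropriate.
```

A variant using only one of the two components, as the text allows, would simply compare that single index against the threshold.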
- the acquisition control unit 803 of the feedback acquisition processing unit 80 performs a question sentence data generation process (step S 282 ).
- the question sentence data generation process will be described in detail with reference to FIGS. 23, 24A, 24B, 24C, 24D, 25, and 26 .
- the acquisition control unit 803 can adjust question sentence data for obtaining feedback in consideration of two components: the reliability that the user has ranked, and the personality traits and habits of the user. The reliability and the personality traits and habits are all components that change less than the user situation or the user feeling.
- the reliability that the user has ranked is reliability of the system that the user has ranked, and the acquisition control unit 803 adjusts, for example, a formality degree (specifically, an expression or a way of speaking) of a question sentence in accordance with a level of the reliability.
- the acquisition control unit 803 can adjust the number of questions allowed by the user on the basis of the personality traits or habits (which are an example of an attribute) of the user. In the feedback acquisition process, it is desirable to obtain as much feedback as possible from the user. However, when too many questions are asked, some users may feel unpleasant in some cases.
- the number of questions may be adjusted, for example, using “factors for being happy” proposed in the field of happiness study in recent years.
- "Mechanism of Happiness" (Kodansha's new library of knowledge)
- the tolerance of the number of questions is considered to depend on a value of the factor “Let's have a try!” and the feedback acquisition processing unit 80 adjusts the number of questions in accordance with the magnitude of the value of the factor “Let's have a try!” of the user estimated on the basis of the personality traits or habits of the user.
- the feedback acquisition processing unit 80 may adjust the number of questions in accordance with, for example, the positive degree (positiveness) of the personality traits of the user estimated on the basis of the personality traits or habits of the user, without being limited to the factor "Let's have a try!"
- FIG. 23 is a flowchart illustrating a question sentence data generation process in which reliability is considered according to the embodiment.
- the acquisition control unit 803 first acquires reliability of the system (the agent) that the user has ranked (step S 353 ).
- the reliability of the agent that the user has ranked may be estimated on the basis of user information acquired from the user feeling DB 813 or may be acquired by directly asking the user a question. For example, the agent asks the user “How much do you trust me?” and reliability of the system is acquired from the user.
- the acquisition control unit 803 adjusts an expression, a way of speaking, and a request degree of feedback content of question sentence data stored in the question sentence DB 815 and corresponding to a mission in accordance with the level (high, intermediate, or low) of the reliability and generates question sentence data. That is, in a case in which the reliability is "low" ("low" in step S 356 ), the acquisition control unit 803 generates the question sentence data corresponding to "low" reliability (step S 359 ). In a case in which the reliability is "intermediate" ("intermediate" in step S 356 ), the acquisition control unit 803 generates the question sentence data corresponding to "intermediate" reliability (step S 362 ).
- In a case in which the reliability is "high" ("high" in step S 356 ), the acquisition control unit 803 generates the question sentence data corresponding to "high" reliability (step S 365 ). Specifically, in a case in which the reliability that the user has ranked is high, the acquisition control unit 803 adjusts the expression to a casual expression. In a case in which the reliability that the user has ranked is low, the acquisition control unit 803 adjusts the expression to a more formal expression.
- An example of the question sentence data adjusted in accordance with the reliability is illustrated in FIGS. 24A, 24B, 24C, and 24D .
- question sentence data corresponding to a mission “obtaining feedback on the chocolate sample of BB company” is adjusted to a question sentence of a very formal expression “Could you please give me your feedback on the chocolate?” in a case in which the reliability is “low.”
- the question sentence data is adjusted to a question sentence of a formal expression “Can you give me your feedback on the chocolate?” in a case in which the reliability is “intermediate.”
- the question sentence data is adjusted to a question sentence of a casual expression “How was the chocolate?” in a case in which the reliability is “high.”
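- The reliability-based adjustment can be sketched as follows. The example question sentences are taken from FIGS. 24A to 24C above; the numeric score scale and the band boundaries mapping a score to "low," "intermediate," or "high" are assumptions for illustration.

```python
# Sketch of the reliability-based adjustment of FIGS. 23 and 24A-24C: the
# formality of the question sentence is selected from the level of the
# reliability that the user has ranked. Band boundaries are assumed.

QUESTION_BY_RELIABILITY = {
    "low": "Could you please give me your feedback on the chocolate?",
    "intermediate": "Can you give me your feedback on the chocolate?",
    "high": "How was the chocolate?",
}

def question_for(reliability_score):
    """Map a numeric reliability (0.0-1.0, assumed scale) to a question sentence."""
    if reliability_score < 0.3:      # assumed band boundaries
        level = "low"
    elif reliability_score < 0.7:
        level = "intermediate"
    else:
        level = "high"
    return QUESTION_BY_RELIABILITY[level]
```

The same level could equally drive the question frequency, as described for the case in which the reliability is low.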
- the acquisition control unit 803 may generate question sentence data for asking a specific question “What do you like about it?” in response to an answer of the user, for example, “It is good.”
- the acquisition control unit 803 may output information regarding the reliability of the agent that the user has ranked to the dialogue processing unit 30 so that the dialogue processing unit 30 may generate question sentence data in accordance with the reliability.
- the acquisition control unit 803 may change a frequency at which the question for obtaining the feedback is performed in accordance with the level of the reliability. For example, in a case in which the reliability is low, the acquisition control unit 803 may reduce the frequency at which the question for obtaining the feedback is performed. As the reliability increases, the acquisition control unit 803 may increase the frequency at which the question for obtaining the feedback is performed.
- the acquisition control unit 803 outputs the generated question sentence data to the dialogue processing unit 30 (step S 368 ).
- the factor “Let's have a try” included in “factors for being happy” introduced in “Mechanism of Happiness” (Kodansha's new library of knowledge) by a professor, Takashi Maeno, in a graduate school of Keio University is used.
- the factor “Let's have a try” is the factor of self-fulfillment and growth and a value of the factor has positive correlation with a level of happiness.
- the factor “Let's have a try” of the user is quantified between - 1 to + 1 on the basis of the personality traits and habits of the user and is recorded in advance in the individual characteristic DB 814 .
- the tolerance of the number of questions for obtaining feedback is considered to depend on the level of happiness of the user, and further on the value of the factor "Let's have a try," and the acquisition control unit 803 therefore adjusts the number of questions in accordance with the value of the factor "Let's have a try" of the user stored in the individual characteristic DB 814 .
- the acquisition control unit 803 can generate question sentence data with reference to a sales point list corresponding to a mission stored in the question sentence DB 815 .
- FIG. 25 is a flowchart illustrating a question sentence data generation process in which personality traits of a user are considered according to the embodiment.
- the acquisition control unit 803 first acquires the value of the factor “Let's have a try” of the user from the individual characteristic DB 814 (step S 373 ).
- the acquisition control unit 803 determines whether the value of the factor exceeds a predetermined threshold (step S 376 ).
- the acquisition control unit 803 generates question sentence data regarding a predetermined number n of sales points set in advance (step S 379 ).
- Conversely, in a case in which the value of the factor exceeds the predetermined threshold (Yes in step S 376 ), the acquisition control unit 803 generates question sentence data regarding a predetermined number m of sales points set in advance (step S 382 ).
- the integers n and m have the relation m>n. That is, in a case in which the value of the factor “Let's have a try” exceeds the predetermined threshold, the acquisition control unit 803 adjusts the number of questions so that it is greater than in the case in which the value is less than the threshold, since there is a high possibility that the user will answer many questions without feeling stress because of his or her personality traits.
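The branch of the FIG. 25 flow described above (steps S 376 to S 382) can be sketched in a few lines. This is an illustrative sketch only: the threshold and the concrete values of n and m are assumptions, not values given in the disclosure.

```python
# Sketch of the FIG. 25 decision: choose how many sales-point questions to
# generate from the user's "Let's have a try" factor (range -1..+1).
# FACTOR_THRESHOLD, N_QUESTIONS, and M_QUESTIONS are hypothetical values.

FACTOR_THRESHOLD = 0.3  # assumed threshold for step S376
N_QUESTIONS = 1         # count when the factor does not exceed the threshold
M_QUESTIONS = 3         # larger count when it does (m > n)

def select_question_count(try_factor: float) -> int:
    """Return the number of sales-point questions to ask (steps S376-S382)."""
    if try_factor > FACTOR_THRESHOLD:
        return M_QUESTIONS  # outgoing user: more questions tolerated
    return N_QUESTIONS      # cautious user: keep the burden low

print(select_question_count(0.8))   # -> 3
print(select_question_count(-0.2))  # -> 1
```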
- an example of a sales point list of a mission stored in the question sentence DB 815 is illustrated in FIG. 26 .
- the sales point list for each mission illustrated in FIG. 26 can be transmitted along with a request for feedback from an information processing device of a company side in advance and can be stored.
- sales points “(1) smooth melt-in-the-mouth feeling,” “(2) polyphenol content of OO%, good for health,” “(3) low in calories” are registered.
- a question sentence “I heard that the chocolate provides good melt-in-the-mouth feeling. How was it?” is generated.
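The generation of question sentences from the sales point list can be illustrated as follows. The template wording and the product name here are hypothetical; the disclosure does not specify the actual generation rules of the acquisition control unit 803.

```python
# Illustrative sketch: turn a mission's sales-point list (cf. FIG. 26) into
# question sentences. The template string is an assumption for illustration.

SALES_POINTS = [
    "smooth melt-in-the-mouth feeling",
    "polyphenol content, good for health",
    "low in calories",
]

def generate_questions(product: str, points: list[str], count: int) -> list[str]:
    """Build one question per sales point, up to `count` questions."""
    template = "I heard that {product} is known for its {point}. How was it?"
    return [template.format(product=product, point=p) for p in points[:count]]

for q in generate_questions("the chocolate", SALES_POINTS, 2):
    print(q)
```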
- the acquisition control unit 803 outputs the generated question sentence data to the dialogue processing unit 30 (step S 385 ).
- the feedback acquisition processing unit 80 outputs the generated question sentence data to the dialogue processing unit 30 (step S 285 ).
- the dialogue processing unit 30 performs a process of supplying the user with a dialogue of the agent into which the question sentence data output from the feedback acquisition processing unit 80 is inserted (step S 288 ) and acquires the feedback (answer sentence data) of the user in response to the question (step S 291 ).
- the presentation of the question sentence data is realized as follows: the dialogue processing unit 30 outputs the question sentence data to the voice agent I/F 20 along with the phoneme data in accordance with the agent ID designated by the user, the voice agent I/F 20 vocalizes the question sentence data, and the vocalized question sentence data is transmitted to the client terminal 1 .
- the user gives feedback on the specific experience by answering the questions from the agent.
- the client terminal 1 collects answer voice of the user with the microphone and transmits the answer voice to the agent server 2 . At this time, the client terminal 1 also transmits various kinds of sensor information such as biological information and acceleration information detected from the user at the time of the feedback.
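The transmission step above might be packaged as below. This is a sketch under assumptions: the payload fields, the JSON encoding, and the sensor summaries are invented for illustration; the disclosure does not specify a wire format.

```python
# Hypothetical sketch of the payload the client terminal 1 sends back with
# an answer: the recorded voice plus sensor readings captured at feedback
# time. Field names and transport encoding are assumptions.

import json

def build_feedback_payload(answer_audio: bytes, heart_rate: int,
                           acceleration: tuple[float, float, float]) -> bytes:
    """Bundle answer voice and sensor information for the agent server 2."""
    payload = {
        "answer_audio_hex": answer_audio.hex(),   # collected by the microphone
        "biological": {"heart_rate": heart_rate}, # biological information
        "acceleration": list(acceleration),       # acceleration information
    }
    return json.dumps(payload).encode("utf-8")

blob = build_feedback_payload(b"\x01\x02", heart_rate=72, acceleration=(0.0, 0.1, 9.8))
print(json.loads(blob)["biological"]["heart_rate"])  # -> 72
```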
- the dialogue processing unit 30 of the agent server 2 can acquire not only an answer (verbal information) of the user but also non-verbal information such as a situation of voice (a situation in which voice is loud, a speaking amount abruptly increases, a tone of voice, or the like), a situation of an activity amount (an amount of motion of a hand or a body or the like), or a body reaction (a heart rate, a respiration rate, a blood pressure, perspiration, or the like) as the feedback of the user.
- the dialogue processing unit 30 outputs the acquired feedback to the feedback acquisition processing unit 80 (step S 294 ).
- the result generation unit 804 of the feedback acquisition processing unit 80 generates a result (report data) obtained by associating the acquired feedback with the mission (step S 297 ) and outputs (transmits) the generated result to a company or the like of a request source (step S 300 ).
- FIG. 27 is a flowchart illustrating the result generation process according to the embodiment.
- the result generation unit 804 first acquires feedback (answer sentence data) acquired by a dialogue with the user from the dialogue processing unit 30 (step S 393 ).
- the result generation unit 804 acquires activity information (for example, a motion of the body) of the user at the time of feedback, body reaction information (for example, biological information), and feeling information (analyzed from the biological information or an expression of the face) from the user situation DB 812 or the user feeling DB 813 and estimates a user state (step S 396 ).
- the feedback from the user includes not only the answer sentence data (verbal information) acquired from a conversation between the agent and the user performed through the dialogue processing unit 30 but also non-verbal information other than the answer sentence data.
- the non-verbal information is biological information detected by a biological sensor of a wearable terminal worn by the user, acceleration information detected by an acceleration sensor, a facial image of the user captured by a camera, feeling information, a context extracted from the conversation between the agent and the user, a voice analysis result of the conversation, or the like and is stored in the user situation DB 812 or the user feeling DB 813 .
- the result generation unit 804 estimates a user state (busy, irritated, depressed, or the like) at the time of feedback on the basis of the information stored in the user situation DB 812 or the user feeling DB 813 .
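The user-state estimation of step S 396 could be sketched as follows. The specific rules, thresholds, and sensor summaries are invented for illustration; the disclosure only names the state labels, not the mapping.

```python
# Illustrative sketch of step S396: map stored non-verbal signals to a
# coarse user-state label. All thresholds below are assumptions.

def estimate_user_state(heart_rate: int, speech_rate: float, motion: float) -> str:
    """Return a rough state label ("busy", "irritated", "depressed", "calm")."""
    if motion > 0.7 and speech_rate > 1.2:
        return "busy"        # high body motion and fast speech
    if heart_rate > 100 and speech_rate > 1.2:
        return "irritated"   # elevated heart rate with fast speech
    if speech_rate < 0.8 and motion < 0.3:
        return "depressed"   # subdued speech and little motion
    return "calm"

print(estimate_user_state(heart_rate=72, speech_rate=1.0, motion=0.2))  # -> calm
```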
- the result generation unit 804 calculates a positive determination value of the feedback on the basis of the verbal information and the non-verbal information of the feedback (step S 399 ).
- a positive determination value of the user may be calculated on the basis of the non-verbal information other than the oral answer so that the positive determination value can be referred to along with the feedback result.
- each determination result is normalized to a value from 0 to 1 such that a more positive result is nearer 1, and the average value is calculated as the positive determination value.
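The averaging step described above can be expressed compactly. This sketch assumes each signal has already been normalized to the 0..1 range (1 = most positive) by an upstream analyzer; the signal names are hypothetical.

```python
# Sketch of step S399: combine verbal and non-verbal positivity signals
# into a single positive determination value by averaging. The individual
# signal names below are illustrative assumptions.

def positive_determination(signals: dict[str, float]) -> float:
    """Average already-normalized positivity scores (each in 0..1)."""
    values = list(signals.values())
    return sum(values) / len(values) if values else 0.0

score = positive_determination({
    "answer_sentiment": 0.9,    # from the verbal answer
    "voice_tone": 0.7,          # from prosody analysis
    "heart_rate_calmness": 0.8  # from biological sensors
})
print(round(score, 2))  # -> 0.8
```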
- in a case in which predetermined feedback (for example, the feedback regarding the sales points illustrated in FIG. 26 , feedback on an experience of a mission target, or a predetermined number of feedbacks) has been acquired, the result generation unit 804 matches the feedback result with the mission list and generates a result (step S 405 ).
- an example of the generated result is illustrated in FIG. 28 .
- the feedback result according to the embodiment is associated with the mission, the sales point, the question sentence data, the feedback (the verbal language), the user state (the non-verbal information), and the positive determination value (calculated on the basis of the user state).
- the company side can understand not only the feedback (the answer sentence data) regarding each sales point but also the aspect of the user at that time from the user state or the positive determination value and can predict whether the user gives the feedback with his or her real intention.
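The association described above (cf. FIG. 28) can be modeled as a simple record in which each feedback entry carries its mission, sales point, question, verbal answer, user state, and positive determination value. The field names and sample values below are assumptions for illustration.

```python
# Sketch of one FIG. 28 result row. The schema is a plausible reading of
# the association described in the text, not the patent's actual format.

from dataclasses import dataclass, asdict

@dataclass
class FeedbackResult:
    mission: str
    sales_point: str
    question: str
    feedback: str          # verbal answer sentence data
    user_state: str        # non-verbal estimate (e.g. "relaxed", "irritated")
    positive_value: float  # 0..1, calculated on the basis of the user state

row = FeedbackResult(
    mission="chocolate sample campaign",
    sales_point="smooth melt-in-the-mouth feeling",
    question="I heard that the chocolate provides good melt-in-the-mouth feeling. How was it?",
    feedback="It was pretty good.",
    user_state="relaxed",
    positive_value=0.8,
)
print(asdict(row)["positive_value"])  # -> 0.8
```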
- the feedback result is output to the advertisement insertion processing unit 70 so that the feedback result can also be used in the advertisement insertion process of the advertisement insertion processing unit 70 .
- the advertisement insertion processing unit 70 compares the content of the mission list with the advertisement DB 72 , extracts terms (goods names, content names, company names, characteristics of goods/content (sales points), and the like) registered as words of interest in the advertisement DB 72 , and refers to the feedback result including the words of interest.
- the advertisement insertion processing unit 70 confirms the words of interest by which the user takes a positive attitude on the basis of the positive determination value of the feedback including the words of interest, and performs control such that advertisement information including the words of interest is inserted into a dialogue.
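The words-of-interest confirmation could be sketched as below. The positive-attitude threshold and the data shapes are assumptions; a real implementation would read the registered words from the advertisement DB 72 and the positive determination values from the stored feedback results.

```python
# Illustrative sketch of the advertisement insertion check: keep only the
# words of interest whose associated feedback was positive. The threshold
# is a hypothetical cut-off.

POSITIVE_THRESHOLD = 0.6  # assumed cut-off for a "positive attitude"

def confirm_words_of_interest(feedback_results, words_of_interest):
    """Return words of interest for which the user's feedback was positive."""
    confirmed = set()
    for text, positive_value in feedback_results:  # (feedback_text, 0..1 score)
        for word in words_of_interest:
            if word in text and positive_value >= POSITIVE_THRESHOLD:
                confirmed.add(word)
    return confirmed

results = [("The chocolate melted smoothly.", 0.8),
           ("The drink was too sweet.", 0.3)]
print(confirm_words_of_interest(results, {"chocolate", "drink"}))  # -> {'chocolate'}
```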
- it is also possible to create a computer program causing hardware such as the CPU, the ROM, and the RAM contained in the client terminal 1 or the agent server 2 described above to realize the functions of the client terminal 1 or the agent server 2 .
- a computer-readable storage medium that stores the computer program is also provided.
- the configuration in which various functions are realized by the agent server 2 on the Internet has been described, but the embodiment is not limited thereto.
- At least a part of the configuration of the agent server 2 illustrated in FIG. 3 may be realized in the client terminal 1 (a smartphone, a wearable terminal, or the like) of the user.
- the whole configuration of the agent server 2 illustrated in FIG. 3 may be installed in the client terminal 1 so that the client terminal 1 can perform all the processes.
- the effects described in this specification are merely illustrative or exemplified effects, and are not limitative. That is, with or in the place of the above effects, the technology according to the present disclosure may achieve other effects that are clear to those skilled in the art from the description of this specification. Additionally, the present technology may also be configured as below.
- a communication system including: a communication unit configured to receive request information for requesting feedback on a specific experience of a user; an accumulation unit configured to accumulate the feedback received from a client terminal of the user via the communication unit; and a control unit configured to perform control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to the client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received.
- the control unit performs control such that the question for requesting the feedback on the specific experience of the user is transmitted to the client terminal of the user.
- the control unit estimates that the user has the specific experience by acquiring a response of the user to a question regarding whether the user has the specific experience, via the communication unit.
- the control unit estimates that the user has the experience by acquiring an analysis result of sensor data of the client terminal via the communication unit.
- the control unit performs control such that the question for requesting the feedback is transmitted to the client terminal at a timing according to at least one of a schedule of the user, a conversation of the user acquired via the communication unit, and feeling information of the user, which are the context of the user.
- the control unit generates the question for requesting the feedback in consideration of a relation between the user and the agent.
- the control unit generates the question by using, as the relation, reliability of the agent that the user has ranked.
- the control unit adjusts an expression of the question for requesting the feedback in accordance with the reliability.
- the control unit generates the question for requesting the feedback in consideration of an attribute of the user.
- the control unit generates a predetermined number of questions for requesting the feedback, in accordance with the attribute of the user.
- the control unit calculates a positive determination value of the specific experience on a basis of the feedback and a feeling of the user at the time of acquisition of the feedback, and accumulates the positive determination value of the specific experience in the accumulation unit.
- the control unit performs control such that the question for requesting the feedback is output as speech of the agent from the client terminal by using voice corresponding to a specific agent.
- the communication system further including: a database configured to store voice data corresponding to each agent, in which the control unit performs control such that the question for requesting the feedback is generated in consideration of a personality trait of an agent purchased by the user, and the generated question is output from the client terminal by using voice corresponding to the agent.
- the communication system according to any one of (1) to (14), in which the accumulation unit stores the request information in association with feedback transmitted from the client terminal via the communication unit.
- a communication control method including: by a processor, receiving request information for requesting feedback on a specific experience of a user via a communication unit; performing control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to a client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received; and accumulating the feedback received from the client terminal of the user via the communication unit, in the accumulation unit.
Abstract
[Object] To provide a communication system and a communication control method capable of obtaining reliable feedback from a user more naturally through a conversation with an agent without imposing a burden on the user. [Solution] The communication system includes: a communication unit configured to receive request information for requesting feedback on a specific experience of a user; an accumulation unit configured to accumulate the feedback received from a client terminal of the user via the communication unit; and a control unit configured to perform control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to the client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received.
Description
- This application is a continuation application of U.S. patent application Ser. No. 16/069,005, filed on Jul. 10, 2018, which is a U.S. National Phase of International Patent Application No. PCT/JP2016/081954 filed on Oct. 27, 2016, which claims priority benefit of Japanese Patent Application No. JP 2016-011664 filed in the Japan Patent Office on Jan. 25, 2016. Each of the above-referenced applications is hereby incorporated herein by reference in its entirety.
- The present disclosure relates to a communication system and a communication control method.
- In recent years, with the development of communication technologies, messages have frequently been exchanged via networks. Users can use information processing terminals such as smartphones, mobile phone terminals, and tablet terminals to confirm messages transmitted from other terminals and transmit messages.
- In addition, with information processing terminals, agent systems that perform automatic responses to messages of users have been proposed. With regard to such systems, for example, the following
Patent Literature 1 discloses a device that realizes natural and smooth dialogue by generating a response for a user in consideration of a feeling of a user. - In addition, the following
Patent Literature 2 discloses a system that is capable of displaying a character with a predetermined shape to have a conversation with a user and displays advertisement information in a form of introduction by the character. - In addition, the following
Patent Literature 3 discloses a dialogue processing device that estimates a feeling of a user in accordance with voice prosodic information, conceptual information of a phrase subjected to voice recognition, a facial image, a pulse rate, and the like and generates an output sentence to be output to the user on the basis of an estimation result.
- Patent Literature 1:
- Patent Literature 2:
- Patent Literature 3:
- Herein, it is very important to determine the impressions or opinions of the users and to connect the impressions and opinions to subsequent development of goods or improvement of services after users experience objects (goods and samples), content, agents, services, and the like.
- However, it has been difficult to obtain feedback such as the impressions, opinions, and the like from the users and to arouse true feelings of the users naturally without imposing a burden on the users. For example, in the
foregoing Patent Literature 2 , advertisement information is displayed, but obtaining feedback on goods after actually purchasing the goods or the like is not considered. - Accordingly, the present disclosure proposes a communication system and a communication control method capable of obtaining reliable feedback from a user more naturally through a conversation with an agent without imposing a burden on the user.
- According to the present disclosure, there is provided a communication system including: a communication unit configured to receive request information for requesting feedback on a specific experience of a user; an accumulation unit configured to accumulate the feedback received from a client terminal of the user via the communication unit; and a control unit configured to perform control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to the client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received.
- According to the present disclosure, there is provided a communication control method including: by a processor, receiving request information for requesting feedback on a specific experience of a user via a communication unit; performing control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to a client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received; and accumulating the feedback received from the client terminal of the user via the communication unit, in the accumulation unit.
- According to the present disclosure, as described above, it is possible to obtain reliable feedback from a user more naturally through a conversation with an agent without imposing a burden on the user.
- Note that the effects described above are not necessarily limitative. With or in the place of the above effects, there may be achieved any one of the effects described in this specification or other effects that may be grasped from this specification.
-
FIG. 1 is an explanatory diagram illustrating an overview of a communication control system according to an embodiment of the present disclosure. -
FIG. 2 is a diagram illustrating an overall configuration of the communication control system according to the embodiment. -
FIG. 3 is a block diagram illustrating an example of a configuration of a voice agent server according to the embodiment. -
FIG. 4 is a diagram illustrating an example of a configuration of a dialogue processing unit according to the embodiment. -
FIG. 5 is a flowchart illustrating a conversation DB generation process according to the embodiment. -
FIG. 6 is a flowchart illustrating a phoneme DB generation process according to the embodiment. -
FIG. 7 is a flowchart illustrating a dialogue control process according to the embodiment. -
FIG. 8 is an explanatory diagram illustrating a data configuration example of the conversation DB according to the embodiment. -
FIG. 9 is a flowchart illustrating a process of updating the conversation DB according to the embodiment. -
FIG. 10 is a flowchart illustrating a conversation data transition process from an individualized layer to a common layer according to the embodiment. -
FIG. 11 is an explanatory diagram illustrating transition of conversation data to a basic dialogue conversation DB according to the embodiment. -
FIG. 12 is a flowchart illustrating a conversation data transition process to a basic dialogue DB according to the embodiment. -
FIG. 13 is a diagram illustrating an example of advertisement information registered in an advertisement DB according to the embodiment. -
FIG. 14 is a flowchart illustrating an advertisement content insertion process according to the embodiment. -
FIG. 15 is a diagram illustrating a configuration example of a feedback acquisition processing unit according to the embodiment. -
FIG. 16 is a diagram illustrating an example of a mission list registered in a mission list DB according to the embodiment. -
FIG. 17 is a diagram illustrating an example of an experience list registered in an experience list DB according to the embodiment. -
FIG. 18 is a flowchart illustrating a feedback acquisition process according to the embodiment. -
FIG. 19 is a flowchart illustrating a mission list generation process according to the embodiment. -
FIG. 20 is a flowchart illustrating an experience list generation process according to the embodiment. -
FIG. 21 is a flowchart illustrating a timing determination process according to the embodiment. -
FIG. 22 is a diagram illustrating an example of a timing index according to the embodiment. -
FIG. 23 is a flowchart illustrating a question sentence data generation process in which reliability is considered according to the embodiment. -
FIGS. 24A, 24B, 24C, and 24D are diagrams illustrating an example of question sentence data adjusted in accordance with the reliability according to the embodiment. -
FIG. 25 is a flowchart illustrating a question sentence data generation process in which the personality traits of a user are considered according to the embodiment. -
FIG. 26 is a diagram illustrating an example of a sales point list of a mission according to the embodiment. -
FIG. 27 is a flowchart illustrating a result generation process according to the embodiment. -
FIG. 28 is a diagram illustrating an example of a result generated through the result generation process according to the embodiment. - Hereinafter, (a) preferred embodiment(s) of the present disclosure will be described in detail with reference to the appended drawings. Note that, in this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted.
- In addition, the description will be made in the following order.
- 1. Overview of communication control system according to embodiment of the present disclosure
- 2-1. System configuration
2-2. Server configuration
3. System operation process
3-1. Conversation data registration process
3-2. Phoneme DB generation process
3-3. Dialogue control process
3-4. Conversation DB updating process
3-5. Advertisement insertion process
4. Feedback acquisition process - 4-2. Operation process
- A communication control system according to an embodiment of the present disclosure is capable of obtaining reliable feedback on a specific experience, more naturally and without imposing a burden on the user, from a user who has had the specific experience, through a conversation with an agent. Hereinafter, an overview of the communication control system according to the embodiment will be described with reference to
FIG. 1 . -
FIG. 1 is an explanatory diagram illustrating the overview of the communication control system according to an embodiment of the present disclosure. A dialogue with an agent can be performed via, for example, a client terminal 1 such as a smartphone owned by a user. The client terminal 1 includes a microphone and a speaker, and thus is capable of performing a dialogue with the user by voice. - Herein, as described above, after users experience objects (goods and samples), content, agents, services, and the like, it is very important to determine the impressions or opinions of the users and to connect the impressions and opinions to subsequent development of goods or improvement of services. However, it has been difficult to elicit the users' true feelings naturally without imposing a burden on the users.
- For example, in a method of supplying samples of goods to users and obtaining feedback on the samples, feedback can be obtained more reliably from the users with motivation for obtaining the samples of the goods cheaply. However, reliability of the feedback is low. That is, at the stage of actually inputting opinions or comments, the users may feel that the input manipulations are troublesome in many cases. Thus, there is a concern that vague or unsubstantial comments will be input. In addition, it is also difficult for the companies that provide the samples to determine whether the users respond with their true opinions or merely give uninformative comments, and reliability is lacking.
- Accordingly, in the embodiment, a system (agent system) that realizes a dialogue between a user and an agent is used so that the agent can get feedback from the user naturally and discreetly. Thus, it is possible to obtain reliable feedback more naturally without imposing a burden such as that posed by an input manipulation on the user.
- For example, as illustrated in
FIG. 1 , when a user is relaxed or at a timing at which the user is free, the user is asked a question by an agent 10 for getting feedback on an object (a sample or the like), content, or a service (for example, the agent system) experienced by the user. This question is reproduced through voice of the agent 10 from the speaker of the client terminal 1 . At this time, an image of the agent 10 may be displayed on a display of the client terminal 1 . - The
agent 10 asks, for example, “Did you like 00 chocolate that you just ate?” and asks the user to give feedback on “00 chocolate” (which is an example of goods). When the user answers the question from the agent 10 with “It was pretty good,” “I didn't really like it,” “It was good, but it's a little expensive,” or the like, the client terminal 1 can collect speech of the user with the microphone and obtain the feedback of the user. - In this way, the feedback on the goods can be acquired naturally from the user in a manner in which the
agent 10 speaks to the user in a dialogue. Since the user can speak to the agent 10 at an unexpected timing, there is a high possibility of the user giving his or her true opinion or impression. In addition, when the character of the agent 10 suits the preference of the user or the user is accustomed to the character of the agent 10 , an increase in the possibility of the user giving his or her true feeling is expected. Further, since the user merely speaks his or her feeling or opinion in response to a question of the agent 10 , the effort of accessing a specific web site or inputting a comment is reduced. In addition, the communication control system (agent system) according to the embodiment is not limited to a voice agent that performs a response by voice, and a text-based agent that performs responses on a text basis may be used in the client terminal 1 . - Next, an overall configuration of the above-described communication control system according to the embodiment will be described with reference to
FIG. 2 . FIG. 2 is a diagram illustrating an overall configuration of the communication control system according to the embodiment. - As illustrated in
FIG. 2 , the communication control system according to the embodiment includes the client terminal 1 and an agent server 2 . - The
agent server 2 is connected to the client terminal 1 via a network 3 and transmits and receives data. Specifically, the agent server 2 generates response voice to spoken voice collected and transmitted by the client terminal 1 and transmits the response voice to the client terminal 1 . The agent server 2 includes a phoneme database (DB) corresponding to one or more agents and can generate response voice through the voice of a specific agent. Herein, the agent may be a character of a cartoon, an animation, a game, a drama, or a movie, an entertainer, a celebrity, a historical person, or the like or may be, for example, an average person of each generation without specifying an individual. In addition, the agent may be an animal or a personified character. In addition, the agent may be a person in whom the personality of the user is reflected or a person in whom the personality of a friend, a family member, or an acquaintance of the user is reflected. - In addition, the
agent server 2 can generate response content in which the personality of each agent is reflected. The agent server 2 can supply various services such as management of a schedule of the user, transmission and reception of messages, and supply of information through dialogue with the user via the agent. - The
client terminal 1 is not limited to the smartphone illustrated in FIG. 2 . For example, a mobile phone terminal, a tablet terminal, a personal computer (PC), a game device, a wearable terminal (smart eyeglasses, a smart band, a smart watch, or a smart necklace) may also be used. In addition, the client terminal 1 may also be a robot.
agent server 2 of the communication control system according to the embodiment will be described specifically with reference toFIG. 3 . -
FIG. 3 is a block diagram illustrating an example of the configuration of the agent server 2 according to the embodiment. As illustrated in FIG. 3 , the agent server 2 includes a voice agent interface (I/F) 20, a dialogue processing unit 30, a phoneme storage unit 40, a conversation DB generation unit 50, a phoneme DB generation unit 60, an advertisement insertion processing unit 70, an advertisement DB 72, and a feedback acquisition processing unit 80. - The voice agent I/
F 20 functions as an input and output unit, a voice recognition unit, and a voice generation unit for voice data. As the input and output unit, a communication unit that transmits and receives data to and from the client terminal 1 via the network 3 is assumed. The voice agent I/F 20 can receive the spoken voice of the user from the client terminal 1, process the voice, and convert the spoken voice into text through voice recognition. In addition, the voice agent I/F 20 processes answer sentence data (text) of the agent output from the dialogue processing unit 30 to vocalize answer voice using phoneme data corresponding to the agent and transmits the generated answer voice of the agent to the client terminal 1. - The
dialogue processing unit 30 functions as an arithmetic processing device and a control device and controls overall operations in the agent server 2 in accordance with various programs. The dialogue processing unit 30 is realized by, for example, an electronic circuit such as a central processing unit (CPU) or a microprocessor. In addition, the dialogue processing unit 30 according to the embodiment functions as a basic dialogue processing unit 31, a character A dialogue processing unit 32, a person B dialogue processing unit 33, and a person C dialogue processing unit 34. - The character A
dialogue processing unit 32, the person B dialogue processing unit 33, and the person C dialogue processing unit 34 realize dialogue specialized for each agent. Herein, examples of the agent include a “character A,” a “person B,” and a “person C,” and the embodiment is, of course, not limited thereto. Dialogue processing units realizing dialogue specialized for many more agents may be further included. The basic dialogue processing unit 31 realizes general-purpose dialogue not specialized for any particular agent. - Herein, a basic configuration common to the basic dialogue processing unit 31, the character A
dialogue processing unit 32, the person B dialogue processing unit 33, and the person C dialogue processing unit 34 will be described with reference to FIG. 4. -
FIG. 4 is a diagram illustrating an example of a configuration of the dialogue processing unit 300 according to the embodiment. As illustrated in FIG. 4, the dialogue processing unit 300 includes a question sentence retrieval unit 310, an answer sentence generation unit 320, a phoneme data acquisition unit 340, and a conversation DB 330. The conversation DB 330 stores conversation data in which question sentence data and answer sentence data are paired. In the dialogue processing unit specialized for the agent, conversation data specialized for the agent is stored in the conversation DB 330. In a general-purpose dialogue processing unit, general-purpose data (that is, basic conversation data) not specific to the agent is stored in the conversation DB 330. - The question
sentence retrieval unit 310 recognizes question voice (which is an example of spoken voice) of the user output from the voice agent I/F 20 and retrieves question sentence data matching the question sentence converted into text from the conversation DB 330. The answer sentence generation unit 320 extracts the answer sentence data stored in association with the question sentence data retrieved by the question sentence retrieval unit 310 from the conversation DB 330 and generates the answer sentence data. The phoneme data acquisition unit 340 acquires phoneme data for vocalizing an answer sentence generated by the answer sentence generation unit 320 from the phoneme storage unit 40 of the corresponding agent. For example, in the case of the character A dialogue processing unit 32, phoneme data for reproducing answer sentence data through the voice of the character A is acquired from the character A phoneme DB 42. Then, the dialogue processing unit 300 outputs the generated answer sentence data and the acquired phoneme data to the voice agent I/F 20. - The
phoneme storage unit 40 stores a phoneme database for generating voice of each agent. The phoneme storage unit 40 can be realized by a read-only memory (ROM) and a random access memory (RAM). In the example illustrated in FIG. 3, a basic phoneme DB 41, a character A phoneme DB 42, a person B phoneme DB 43, and a person C phoneme DB 44 are stored. In each phoneme DB, for example, a phoneme segment and a prosodic model, which is control information for the phoneme segment, are stored as phoneme data. - The conversation DB generation unit 50 has a function of generating the
conversation DB 330 of the dialogue processing unit 300. For example, the conversation DB generation unit 50 collects assumed question sentence data, collects answer sentence data corresponding to each question, and subsequently pairs and stores the question sentence data and the answer sentence data. Then, when a predetermined number of pieces of conversation data (pairs of question sentence data and answer sentence data: for example, 100 pairs) are collected, the conversation DB generation unit 50 registers the conversation data as a set of conversation data of the agent in the conversation DB 330. - The phoneme DB generation unit 60 has a function of generating the phoneme DB stored in the
phoneme storage unit 40. For example, the phoneme DB generation unit 60 analyzes voice information of predetermined read text, decomposes the voice information into the phoneme segment and the prosodic model which is control information, and performs a process of registering a predetermined number or more of pieces of voice information as phoneme data in the phoneme DB when the predetermined number or more of pieces of voice information are collected. - The advertisement
insertion processing unit 70 has a function of inserting advertisement information into dialogue of the agent. The advertisement information to be inserted can be extracted from the advertisement DB 72. In the advertisement DB 72, advertisement information (for example, information such as advertisement content of text, an image, voice, or the like, an advertiser, an advertisement period, and an advertisement target person) requested by a supply side such as a company (a vendor or a supplier) is registered. - The feedback
acquisition processing unit 80 has a function of inserting a question for acquiring feedback into dialogue of the agent and obtaining the feedback from the user. - The configuration of the
agent server 2 according to the embodiment has been described specifically above. Note that the configuration of the agent server 2 according to the embodiment is not limited to the example illustrated in FIG. 3. For example, each configuration of the agent server 2 may be achieved by another server on a network. - Next, a basic operation process of the communication control system according to the embodiment will be described with reference to
FIGS. 5 to 14 . -
FIG. 5 is a flowchart illustrating a generation process of the conversation DB 330 according to the embodiment. As illustrated in FIG. 5, the conversation DB generation unit 50 first stores assumed question sentences (step S103). - Subsequently, the conversation DB generation unit 50 stores answer sentences corresponding to (paired with) the question sentences (step S106).
- Subsequently, the conversation DB generation unit 50 determines whether a predetermined number of pairs of question sentences and answer sentences (also referred to as conversation data) are collected (step S109).
- Then, in a case in which the predetermined number of pairs of question sentences and answer sentences are collected (Yes in step S109), the conversation DB generation unit 50 registers the data set formed of the pairs of question sentences and answer sentences in the conversation DB 330 (step S112). As examples of the pairs of question sentences and answer sentences, for example, the following pairs are assumed.
- Question sentence: Good morning.
Answer sentence: How are you doing today? - Question sentence: How's the weather today?
Answer sentence: Today's weather is OO. - The pairs can be registered as conversation data in the
conversation DB 330. -
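The pairing-and-registration behavior of the conversation DB generation unit 50 can be sketched as follows; the class name, the buffering strategy, and the batch structure are illustrative assumptions, not terms from the specification:

```python
# Hedged sketch of the conversation DB generation unit 50: question/answer
# pairs are buffered, and once a predetermined number is collected the whole
# batch is registered as one conversation data set. All names are assumptions.

class ConversationDBGenerator:
    def __init__(self, required_pairs=100):
        self.required_pairs = required_pairs
        self.pending = []          # collected (question, answer) pairs
        self.registered_sets = []  # data sets registered in the conversation DB

    def add_pair(self, question, answer):
        # steps S103/S106: store an assumed question and its paired answer
        self.pending.append((question, answer))
        # steps S109/S112: register once the predetermined number is reached
        if len(self.pending) >= self.required_pairs:
            self.registered_sets.append(list(self.pending))
            self.pending.clear()

gen = ConversationDBGenerator(required_pairs=2)
gen.add_pair("Good morning.", "How are you doing today?")
gen.add_pair("How's the weather today?", "Today's weather is OO.")
print(len(gen.registered_sets))  # 1: the batch was registered as one set
```

The threshold of 100 pairs mentioned in the text would simply replace the small `required_pairs` value used here for illustration.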
FIG. 6 is a flowchart illustrating a phoneme DB generation process according to the embodiment. As illustrated in FIG. 6, the phoneme DB generation unit 60 first displays an example sentence (step S113). In the display of the example sentence, for example, an example sentence necessary to generate phoneme data is displayed on a display of an information processing terminal (not illustrated). - Subsequently, the phoneme DB generation unit 60 records voice reading the example sentence (step S116) and analyzes the recorded voice (step S119). For example, voice information read by a person who takes charge of the voice of an agent is collected by the microphone of the information processing terminal. Then, the phoneme DB generation unit 60 receives and stores the voice information and further performs voice analysis.
- Subsequently, the phoneme DB generation unit 60 generates a prosodic model on the basis of the voice information (step S122). The prosodic model is generated by extracting prosodic parameters indicating prosodic features of the voice (for example, a tone of the voice, strength of the voice, and a speech speed) and differs for each person.
- Subsequently, the phoneme DB generation unit 60 generates a phoneme segment (phoneme data) on the basis of the voice information (step S125).
- Subsequently, the phoneme DB generation unit 60 stores the prosodic model and the phoneme segment (step S128).
- Subsequently, the phoneme DB generation unit 60 determines whether a predetermined number of the prosodic models and the phoneme segments are collected (step S131).
- Then, in a case in which the predetermined number of prosodic models and phoneme segments are collected (Yes in step S131), the phoneme DB generation unit 60 registers the prosodic models and the phoneme segments as a phoneme database for a predetermined agent in the phoneme storage unit 40 (step S134).
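The phoneme DB generation flow above follows the same collect-then-register pattern. A hedged sketch in which the actual voice analysis of steps S122 and S125 is replaced by placeholder field lookups, with all names as illustrative assumptions:

```python
# Hedged sketch of the phoneme DB generation unit 60 (FIG. 6): recorded voice
# is analyzed into a prosodic model and a phoneme segment, and the results
# are registered as a phoneme database once enough of them are collected.
# The "analysis" is a placeholder; the real process decomposes voice data.

class PhonemeDBGenerator:
    def __init__(self, required_count=3):
        self.required_count = required_count
        self.collected = []      # (prosodic_model, phoneme_segment) entries
        self.phoneme_db = None   # registered DB for a predetermined agent

    def analyze(self, voice_info):
        # steps S122/S125: derive a prosodic model and a phoneme segment
        prosodic_model = {"tone": voice_info["tone"],
                          "strength": voice_info["strength"],
                          "speed": voice_info["speed"]}
        phoneme_segment = voice_info["samples"]
        # step S128: store both
        self.collected.append((prosodic_model, phoneme_segment))
        # steps S131/S134: register once the predetermined number is collected
        if len(self.collected) >= self.required_count:
            self.phoneme_db = list(self.collected)

gen = PhonemeDBGenerator(required_count=1)
gen.analyze({"tone": "high", "strength": 0.8, "speed": 1.2, "samples": [0.1, 0.2]})
print(gen.phoneme_db is not None)  # True
```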
-
FIG. 7 is a flowchart illustrating a dialogue control process according to the embodiment. As illustrated in FIG. 7, the voice agent I/F 20 first confirms whether question voice and an agent ID of a user are acquired (step S143). The agent ID is identification information indicating a specific agent such as the character A, the person B, or the person C. The user can purchase phoneme data of each agent. For example, an ID of the agent purchased in a purchase process is stored in the client terminal 1. - Subsequently, when the question voice and the agent ID of the user are acquired (Yes in step S146), the voice agent I/
F 20 converts the question voice into text through voice recognition (step S149). The voice agent I/F 20 outputs the question sentence converted into text to the dialogue processing unit of the specific agent designated with the agent ID. For example, in the case of “agent ID: agent A,” the voice agent I/F 20 outputs the question sentence converted into text to the character A dialogue processing unit 32. - Subsequently, the
dialogue processing unit 30 retrieves a question sentence matching the question sentence converted into text from the conversation DB of the specific agent designated with the agent ID (step S152). - Subsequently, in a case in which there is a matching question (Yes in step S155), the character A
dialogue processing unit 32 acquires answer sentence data corresponding to (paired with and stored) the question from the conversation DB of the specific agent (step S158). - Conversely, in a case in which there is no matching question (No in step S155), a question sentence matching the question sentence converted into text is retrieved from the conversation DB of the basic dialogue processing unit 31 (step S161).
- In a case in which there is a matching question sentence (Yes in step S164), the basic dialogue processing unit 31 acquires the answer sentence data corresponding to (paired with and stored) the question from the conversation DB of the basic dialogue processing unit 31 (step S167).
- Conversely, in a case in which there is no matching question (No in step S164), the basic dialogue processing unit 31 acquires answer sentence data (for example, an answer sentence “I don't understand the question”) in a case in which there is no matching question sentence (step S170).
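The branches above (steps S155 through S170) amount to a two-level lookup with a fixed fallback answer. A minimal sketch, assuming dict-backed conversation DBs (an implementation choice not specified in the text):

```python
# Hedged sketch of the dialogue control fallback in FIG. 7: the specific
# agent's conversation DB is tried first, then the basic dialogue DB, then a
# fixed fallback answer. Dict-based DBs and names are assumptions.

FALLBACK_ANSWER = "I don't understand the question"

def answer_question(question, agent_db, basic_db):
    if question in agent_db:       # matched in the specific agent's DB (step S158)
        return agent_db[question]
    if question in basic_db:       # matched in the basic dialogue DB (step S167)
        return basic_db[question]
    return FALLBACK_ANSWER         # no match anywhere (step S170)

character_a_db = {"Good morning.": "How are you doing today?"}
basic_db = {"How's the weather today?": "Today's weather is OO."}
print(answer_question("Good morning.", character_a_db, basic_db))
```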
- Subsequently, the character A
dialogue processing unit 32 acquires phoneme data of the character A for generating voice of the answer sentence data with reference to the phoneme DB (herein, the character A phoneme DB 42) of the specific agent designated with the agent ID (step S173). - Subsequently, the acquired phoneme data and answer sentence data are output to the voice agent I/F 20 (step S176).
- Then, the voice agent I/
F 20 vocalizes the answer sentence data (text) (voice synthesis) using the phoneme data and transmits the answer sentence data to the client terminal 1 (step S179). The client terminal 1 reproduces the answer sentence through the voice of the character A. - Next, a process of updating the
conversation DB 330 of each dialogue processing unit 300 will be described. In the embodiment, it is possible to extend the conversation DB 330 by a conversation with a user. - First, a data configuration example of the
conversation DB 330 will be described supplementarily with reference to FIG. 8. FIG. 8 is an explanatory diagram illustrating a data configuration example of the conversation DB 330 according to the embodiment. As illustrated in FIG. 8, each conversation DB 330 includes two layers, an individualized layer 331 and a common layer 332. For example, in the case of a character A conversation DB 330A, conversation data in which the personality or features of the character A are reflected is retained in the common layer 332A. On the other hand, in an individualized layer 331A, conversation data customized for a single user through conversations with that user is retained. That is, the character A phoneme DB 42 and the character A dialogue processing unit 32 are supplied (sold) as a set to users. Then, certain users X and Y perform dialogues with the same character A at first (the conversation data retained in the common layer 332A is used). However, as the dialogues continue, conversation data customized for each user is accumulated in the individualized layer 331A for that user. In this way, it is possible to supply the users X and Y with dialogues with the character A in accordance with the preferences of the users X and Y. In addition, even in a case in which the agent “person B” is an average person of a generation who has no specific personality such as the character A, the conversation data can be customized for each user. That is, for example, in a case in which the “person B” is a “person in his or her twenties,” average conversation data of people in their twenties is retained in the common layer 332B, and as dialogues with a user continue, customized conversation data is retained in the individualized layer 331B for that user.
In addition, the user can also select favorite phoneme data such as “male,” “female,” “high-tone voice,” or “low-tone voice” as the voice of the person B from the person B phoneme DB 43 and can purchase the favorite phoneme data. - A specific process at the time of the customization of the
conversation DB 330 will be described with reference to FIG. 9. FIG. 9 is a flowchart illustrating a process of updating the conversation DB 330 according to the embodiment. -
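The updating process that FIG. 9 illustrates can be reduced to a small data structure: a per-user individualized layer consulted before the common layer, into which user-designated answers are registered. A minimal sketch, with all class and method names as illustrative assumptions rather than terms from the specification:

```python
# Hedged sketch of the conversation DB updating process (FIG. 9): a user can
# register a custom answer, which is stored in the individualized layer and
# thereafter takes precedence over the common layer. Names are assumptions.

class CustomizableConversationDB:
    def __init__(self, common_layer):
        self.common = dict(common_layer)  # common layer 332
        self.individual = {}              # individualized layer 331

    def answer(self, question):
        # the individualized layer is consulted before the common layer
        if question in self.individual:
            return self.individual[question]
        return self.common.get(question, "I can't understand the question")

    def register(self, question, answer):
        # the pair designated by the user goes into the individualized layer
        # only, leaving the common layer shared by all users untouched
        self.individual[question] = answer

db = CustomizableConversationDB({"Good morning": "Good morning"})
db.register("Good morning", "Fine, do your best")
print(db.answer("Good morning"))  # the customized answer for this user
```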
FIG. 9 , the voice agent I/F 20 first acquires (receives) question voice of the user from theclient terminal 1 and converts the question voice into text through voice recognition (step S183). The data (question sentence data) converted into text is output to the dialogue processing unit (herein, for example, the character A dialogue processing unit 32) of the specific agent designated by the agent ID. - Subsequently, the character A
dialogue processing unit 32 determines whether the question sentence data is a predetermined command (step S186). - Subsequently, in a case in which the question sentence data is the predetermined command (Yes in step S186), the character A
dialogue processing unit 32 registers answer sentence data designated by the user as a pair with the question sentence data in the individualized layer 331A of the conversation DB 330A (step S189). The predetermined command may be, for example, a word “NG” or “Setting.” For example, the conversation DB of the character A can be customized in accordance with a flow of the following conversation. - User: “Good morning”
Character A: “Good morning”
User: “NG. Answer with ‘Fine, do your best’”
Character A: “Fine, do your best” - In the flow of the foregoing conversation, “NG” is the predetermined command. After “NG” is spoken by the user, the character A
dialogue processing unit 32 registers the answer sentence data “Fine, do your best” designated by the user as a pair with the question sentence data “Good morning” in the individualized layer 331A of the conversation DB 330A. - Conversely, in a case in which the question sentence data is not the predetermined command (No in step S186), the character A
dialogue processing unit 32 retrieves the answer sentence data retained as a pair with the question sentence data from the character A conversation DB 330A. In a case in which the answer sentence data retained as a pair with the question sentence data is not in the character A conversation DB 330A, that is, the question of the user is a question with no answer sentence (Yes in step S192), the character A dialogue processing unit 32 registers the answer sentence data designated by the user as a pair with the question sentence in the individualized layer 331A (step S195). For example, in a flow of the following conversation, the conversation DB of the character A can be customized. - User: “Fine?”
- Character A: “I can't understand the question” (answer data example in case in which there is no corresponding answer)
User: “When I ask ‘Fine?,’ answer with ‘Fine today’”
Character A: “Fine today” - In the flow of the foregoing conversation, since there is no answer sentence data maintained to be paired with “Fine?,” “I can't understand the question,” which is an example of the answer data in the case in which there is no corresponding answer, is acquired by the character A
dialogue processing unit 32, is output along with the corresponding phoneme data of the character A to the voice agent I/F 20, and is reproduced in the client terminal 1. Subsequently, when the answer sentence “Fine today” designated by the user is input, the character A dialogue processing unit 32 registers “Fine today” as a pair with the question sentence data “Fine?” in the individualized layer 331A. - Conversely, in a case in which the question of the user is a question for which there is an answer sentence (No in step S192), the character A
dialogue processing unit 32 acquires the answer sentence data and outputs the answer sentence data along with the corresponding phoneme data of the character A to the voice agent I/F 20. Then, the answer sentence is reproduced through the voice of the character A in the client terminal 1 (step S198). - Next, conversation data transition from an individualized layer to a common layer will be described with reference to
FIG. 10. FIG. 10 is a flowchart illustrating a conversation data transition process from an individualized layer to a common layer according to the embodiment. Herein, for example, the conversation data transition process from the individualized layer 331A to the common layer 332A of the character A dialogue processing unit 32 will be described. - As illustrated in
FIG. 10, the character A dialogue processing unit 32 first searches the individualized layer 331A for each user periodically (step S203) and extracts conversation pairs with substantially the same content (a pair of question sentence data and answer sentence data) (step S206). As conversation pairs with substantially the same content, for example, a pair of the question sentence “Fine?” and the answer sentence “Fine today!” and a pair of the question sentence “Are you fine?” and the answer sentence “Fine today!” can be determined to have substantially the same content because the question sentences differ only in whether a polite expression is used. - Subsequently, when a predetermined number or more of conversation pairs are extracted from the
individualized layer 331A for each user (Yes in step S209), the character A dialogue processing unit 32 registers the conversation pairs in the common layer 332A (for each user) (step S212).
- In addition, in the embodiment, the conversation data can transition from the conversation DB (specifically, the common layer) of the specific agent to the basic dialogue conversation DB, and thus the basic dialogue conversation DB can also be extended.
FIG. 11 is an explanatory diagram illustrating transition of conversation data to the basic dialogue conversation DB 330F according to the embodiment. For example, in a case in which the users X and Y each select (purchase) the agent “character A” and a user Z selects (purchases) the agent “person B,” as illustrated in FIG. 11, a character A conversation DB 330A-X of the user X, a character A conversation DB 330A-Y of the user Y, and a person B conversation DB 330B-Z of the user Z can be in the dialogue processing unit 30. In this case, in individualized layers 331A-X, 331A-Y, and 331B-Z, unique (customized) conversation pairs are gradually registered in accordance with dialogues with the users X, Y, and Z (see FIG. 9). Subsequently, when substantially the same conversation pairs in the individualized layers 331A-X and 331A-Y reach a predetermined number, they are registered in the common layers 332A-X and 332A-Y for the respective users (see FIG. 10). - Then, in a case in which a predetermined number or more of substantially the same conversation pairs are extracted from the
common layers 332A-X, 332A-Y, and 332B-Z of the plurality of agents (which may include different agents), the dialogue processing unit 30 causes the conversation pairs to transition to a high-order basic dialogue conversation DB 330F. The basic dialogue conversation DB 330F is a conversation DB included in the basic dialogue processing unit 31. Thus, it is possible to extend the basic dialogue conversation DB 330F (expand the conversation pairs). The data transition process will be described specifically with reference to FIG. 12. FIG. 12 is a flowchart illustrating the conversation data transition process to the basic dialogue conversation DB 330F according to the embodiment. - As illustrated in
FIG. 12, the dialogue processing unit 30 first searches the plurality of common layers 332 of the conversation DBs 330 periodically (step S223) and extracts substantially the same conversation pairs (step S226). - Subsequently, when the predetermined number or more of substantially the same conversation pairs are extracted from the plurality of common layers 332 (Yes in step S229), the
dialogue processing unit 30 registers the conversation pairs in the basic dialogue conversation DB 330F (step S232). - In this way, by causing the conversation pairs with substantially the same content in the common layers 332 of the
conversation DBs 330 in the plurality of agents to transition to the basic dialogue conversation DB 330F, it is possible to extend the basic dialogue conversation DB 330F (expand the conversation pairs). - Next, an advertisement information insertion process by the advertisement
insertion processing unit 70 will be described with reference to FIGS. 13 and 14. In the embodiment, the advertisement insertion processing unit 70 can insert advertisement information stored in the advertisement DB 72 into speech of an agent. The advertisement information can be registered in advance in the advertisement DB 72. FIG. 13 is a diagram illustrating an example of advertisement information registered in the advertisement DB 72 according to the embodiment. - As illustrated in
FIG. 13, advertisement information 621 includes, for example, an agent ID, a question sentence, advertisement content, a condition, and a probability. The agent ID designates the agent speaking the advertisement content, the question sentence designates a question sentence of a user which serves as a trigger for inserting the advertisement content, and the advertisement content is an advertisement sentence inserted into dialogue of the agent. In addition, the condition is a condition on which the advertisement content is inserted, and the probability indicates a probability at which the advertisement content is inserted. For example, in the example illustrated in the first row of FIG. 13, in a case in which the word “chocolate” is included in a question sentence from a user who is 30 years old or less in dialogue with the agent “character A,” the advertisement content “The chocolate newly released by BB company is delicious because it contains much milk” is inserted into the dialogue. In addition, if the advertisement content were inserted every time the trigger question sentence is spoken, the user would find it bothersome. Therefore, in the embodiment, a probability at which the advertisement is inserted may be set. The probability may be decided in accordance with advertisement charges. For example, the probability is set to be higher as the advertisement charges are higher. - The advertisement content insertion process will be described specifically with reference to
FIG. 14. FIG. 14 is a flowchart illustrating the advertisement content insertion process according to the embodiment. - As illustrated in
FIG. 14, the advertisement insertion processing unit 70 first monitors dialogue (specifically, a dialogue process by the dialogue processing unit 30) between the user and the agent (step S243). - Subsequently, the advertisement
insertion processing unit 70 determines whether a question sentence with the same content as a question sentence registered in the advertisement DB 72 appears in the dialogue between the user and the agent (step S246). - Subsequently, in a case in which the question sentence with the same content appears (Yes in step S246), the advertisement
insertion processing unit 70 confirms the condition and the probability of the advertisement insertion associated with the corresponding question sentence (step S249). - Subsequently, the advertisement
insertion processing unit 70 determines whether a current state is an advertising state on the basis of the condition and the probability (step S252). - Subsequently, in a case in which the current state is the advertising state (Yes in step S252), the advertisement
insertion processing unit 70 temporarily interrupts the dialogue process by the dialogue processing unit 30 (step S255) and inserts the advertisement content into the dialogue (step S258). Specifically, for example, the advertisement content is inserted into an answer sentence of the agent for the question sentence of the user. - Then, the dialogue (conversation sentence data) including the advertisement content is output from the
dialogue processing unit 30 to the voice agent I/F 20, is transmitted from the voice agent I/F 20 to the client terminal 1, and is reproduced through voice of the agent (step S261). Specifically, for example, the advertisement content can be presented as a speech of the character A to the user in the following conversation. - User: “Good morning”
Character A: “Good morning! How are you doing today?”
User: “Fine. I want to eat some delicious food”
Character A: “I heard that grilled meat at CC store is delicious” - In the conversation, the corresponding answer sentence “Good morning! How are you doing today?” retrieved from the conversation DB of the character A is first output as voice in response to the question sentence “Good Morning” of the user. Subsequently, since the question sentence “I want to eat some delicious food” serving as the trigger of the advertisement insertion is included in the question sentence “Fine. I want to eat some delicious food” of the user (see second row of
FIG. 13), the advertisement insertion processing unit 70 performs the advertisement insertion process and outputs the answer sentence with the advertisement content “I heard that grilled meat at CC store is delicious” through the voice of the character A. - The conversation data registration process, the phoneme DB generation process, the dialogue control process, the conversation DB updating process, and the advertisement insertion process have been described above as the basic operation processes of the communication control system according to the embodiment. In addition, in the communication control system according to the embodiment, the feedback
acquisition processing unit 80 can obtain reliable feedback on a specific experience more naturally through a dialogue with an agent from a user who has had the specific experience, without imposing a burden on the user. Hereinafter, the feedback acquisition processing unit 80 will be described specifically with reference to FIGS. 15, 16, 17, 18, 19, 20, 21, 22, 23, 24A, 24B, 24C, 24D, 25, 26, 27, and 28. -
FIG. 15 is a diagram illustrating a configuration example of the feedback acquisition processing unit 80 according to the embodiment. As illustrated in FIG. 15, the feedback acquisition processing unit 80 includes a list confirmation unit 801, a timing determination unit 802, an acquisition control unit 803, a result generation unit 804, a mission list DB 810, an experience list DB 811, a user situation DB 812, a user feeling DB 813, an individual characteristic DB 814, and a question sentence DB 815. - The
list confirmation unit 801 confirms a mission registered in the mission list DB 810 and estimates whether the user has a specific experience which is a mission target. In the mission list DB 810, a feedback mission requested by a company that provides an experience (specifically, a company that provides an object or content) or a questionnaire agent company that receives a request from a company and conducts questionnaires on an experience is registered. The feedback mission is transmitted via the network 3 from, for example, an information processing device of a company or the like, is received by the communication unit included in the voice agent I/F 20, and is output to the feedback acquisition processing unit 80. Herein, FIG. 16 illustrates an example of a mission list registered in the mission list DB 810 according to the embodiment. - As illustrated in
FIG. 16, the mission list includes mission details (specifically, which feedback is to be obtained on which experience) and a time limit within which a mission is to be executed. For example, a mission for obtaining feedback (an opinion, an impression, or the like) on a chocolate sample of BB company or a mission for obtaining feedback on a ramen sample of DD company is registered. In this case, the list confirmation unit 801 estimates whether the user has the specific experience (behavior) of “eating the chocolate sample of BB company” or the specific experience of “eating a ramen sample of DD company.” The experience may be estimated, for example, by requesting the dialogue processing unit 30 to output, through the voice of the agent, a question sentence (“Have you eaten the chocolate of BB company?”) for directly confirming with the user whether the user has the specific experience. - In addition, the
list confirmation unit 801 can also estimate an experience with reference to the user situation DB 812 in which situations of the user are accumulated. In the user situation DB 812, the situations of the user based on information acquired from an external server that performs a schedule management service or the like, context of dialogues acquired from the dialogue processing unit 30, or the like are stored. Further, behavior information of an individual user may be acquired from a wearable terminal (a transmissive or non-transmissive head-mounted display (HMD), a smart band, a smart watch, smart eyeglasses, or the like) worn on the body of the user, and the behavior information of the individual user may be accumulated as user situations in the user situation DB 812. Examples of the behavior information of the individual user acquired from the wearable terminal include acceleration sensor information, various kinds of biological information, positional information, and a captured image captured in the periphery of the user (including the angle of view of the user) by a camera installed in the wearable terminal. - The
list confirmation unit 801 registers the experience information in the experience list DB 811 when it is confirmed that the user has an experience of a mission target. FIG. 17 is a diagram illustrating an example of an experience list registered in the experience list DB 811 according to the embodiment. For example, in a case in which it could be confirmed that the user has, for example, the specific experience of “eating the chocolate sample of BB company,” as illustrated in FIG. 17, the mission targeting the experience, “obtaining feedback on the chocolate sample of BB company,” can be registered in conjunction with the experience date information “Jan. 2, 20XX.” - The
timing determination unit 802 has a function of determining a timing to execute a mission registered in the experience list DB 811 in accordance with context of the user. The context of the user is a current situation or feeling of the user and can be determined with reference to, for example, the user situation DB 812, the user feeling DB 813, or the individual characteristic DB 814. - Herein, the
user feeling DB 813 is a storage unit that stores a history of user feelings. The user feelings stored in the user feeling DB 813 can be estimated on the basis of biological information (a pulse rate, a heart rate, a heart sound, a blood pressure, respiration, a body temperature, a perspiration amount, an electroencephalogram, myoelectricity, or the like), voice information (intonation of a voice), or a captured image (a facial image, an eye image, or the like of the user) acquired from a wearable terminal worn by the user. In addition, the user feelings may also be estimated from context of a conversation between the user and the agent performed through the dialogue processing unit 30 or a result of voice analysis. Examples of the feeling information of the user include busy, irritated, depressed, and enjoyable feelings, a relaxed state, a focused state, and a tense state. In addition, the individual characteristic DB 814 is a storage unit that stores personality traits, habits, or the like of an individual. While the user situation DB 812 or the user feeling DB 813 stores the situations (a history of the situations) of the user for a relatively short time, the individual characteristic DB 814 stores the personality traits or the habits of the individual user over a relatively long time such as half a year or one year. - For example, the
timing determination unit 802 acquires a current situation of the user from the user situation DB 812 and determines an appropriate timing to execute a mission, that is, to ask the user a question for obtaining feedback on a specific experience. More specifically, the timing determination unit 802 may determine a period of time in which there is no schedule on the basis of schedule information of the user as an appropriate timing. In addition, the timing determination unit 802 may acquire a current feeling of the user from the user feeling DB 813 and determine the appropriate timing to ask the user a question for obtaining feedback on the specific experience. More specifically, the timing determination unit 802 may determine the appropriate timing so that a time at which the user is experiencing an intense emotion, is in an excited state, or is in a busy and nervous state is avoided. The details of a timing determination process will be described below. - The
acquisition control unit 803 performs control such that question sentence data for obtaining the feedback on the specific experience is generated, the question is output as speech of the agent from the client terminal 1 at the timing determined by the timing determination unit 802, and an answer of the user to the question is acquired as feedback. Specifically, the question sentence data is output from the client terminal 1 via the dialogue processing unit 30 and the voice agent I/F 20. The question sentence data is generated with reference to the question sentence DB 815. The details of a process of generating the question sentence data and a process of acquiring the feedback in the acquisition control unit 803 will be described below. - The
result generation unit 804 generates a result on the basis of the feedback acquired from the user. The result generation unit 804 may generate the result in consideration of a user state at the time of the answer in addition to a voice recognition result (text) of answer voice of the user to the question. The result of the feedback can be matched (associated) with the mission list of the mission list DB 810 to be stored in the mission list DB 810. In addition, the generated result can be provided as an answer to, for example, a company or the like that has registered the mission. Herein, the generated result is matched with the mission list to be stored in the mission list DB 810, but the embodiment is not limited thereto. The generated result may be matched with the mission list to be stored in another DB (storage unit). - The configuration of the feedback
acquisition processing unit 80 according to the embodiment has been described above specifically. Next, an operation process according to the embodiment will be described specifically with reference to FIGS. 18, 19, 20, 21, 22, 23, 24A, 24B, 24C, 24D, 25, 26, 27, and 28. -
FIG. 18 is a flowchart illustrating a feedback acquisition process according to the embodiment. As illustrated in FIG. 18, the feedback acquisition processing unit 80 first acquires a feedback request, a request time limit, and the like from, for example, a company that provides an experience (provides an object or content) or an information processing device of a questionnaire agent company side that receives a request from a company and conducts questionnaires (step S270). - Subsequently, the feedback
acquisition processing unit 80 generates a mission list by registering the acquired mission information in the mission list DB 810 (step S273). Here, the details of the mission list generation process are illustrated in FIG. 19. As illustrated in FIG. 19, the feedback acquisition processing unit 80 checks the feedback request from a company or the like (step S303). In a case in which a new request is transmitted from an information processing device of a company or the like and is added (Yes in step S306), the feedback request can be registered as a mission list in the mission list DB 810 (step S309). An example of the mission list registered in the mission list DB 810 has been described above with reference to FIG. 16. - Subsequently, the feedback
acquisition processing unit 80 causes the list confirmation unit 801 to confirm whether the user has an experience of a mission target and to generate the experience list (step S276). Herein, the details of an experience list generation process are illustrated in FIG. 20. As illustrated in FIG. 20, the list confirmation unit 801 confirms whether the user has the experience which is the mission target (step S315). In a case in which it could be estimated that the user has the experience (Yes in step S318), experience information is registered as a list of the experience which the user already has in the experience list DB 811 (step S321). An example of the experience list registered in the experience list DB 811 has been described above with reference to FIG. 17. - Subsequently, the
timing determination unit 802 of the feedback acquisition processing unit 80 determines an appropriate timing at which the user is asked to answer a question for obtaining feedback (step S279). Herein, the details of the timing determination process will be described with reference to FIGS. 21 and 22. FIG. 21 is a flowchart illustrating the timing determination process according to the embodiment. As illustrated in FIG. 21, the timing determination unit 802 first confirms whether the list of the experience which the user already has is registered in the experience list DB 811 (step S333). - Subsequently, in a case in which the list of the experience which the user already has is registered (Yes in step S336), the
timing determination unit 802 calculates an index indicating appropriateness of a timing on the basis of a situation of the user (step S339). The situation of the user is schedule information, a behavior state, or the like of the user and is acquired from the user situation DB 812. In the user situation DB 812, the situations of the user are periodically accumulated so that a change in a user situation over a relatively short time can be ascertained. In addition, in the embodiment, the user situation is associated with the index indicating appropriateness of a timing for obtaining feedback. - In addition, the
timing determination unit 802 calculates the index indicating appropriateness of a timing on the basis of a feeling of the user (step S342). The feeling of the user is acquired from the user feeling DB 813. In the user feeling DB 813, feelings of the user are periodically accumulated so that a change in a user feeling over a relatively short time can be ascertained. In addition, in the embodiment, the user feeling is associated with an index indicating appropriateness of a timing for obtaining feedback. - Subsequently, the
timing determination unit 802 calculates a sum value (or an average value) of the indexes on the basis of a timing index based on the user situation and a timing index based on the user feeling and determines whether the calculated index exceeds a predetermined threshold (step S345). - Then, in a case in which the index exceeds the predetermined threshold (Yes in step S345), the
timing determination unit 802 determines that the timing is appropriate for obtaining the feedback (step S348). - In this way, in the embodiment, an appropriate timing is determined on the basis of the two components, the user situation and the user feeling. Here, examples of a timing index are illustrated in
FIG. 22. As illustrated in FIG. 22, indexes (for example, numerical values of -5 to +5) indicating the appropriateness of a timing are associated with situations or feelings of the user. For example, in a situation in which a schedule of the user is empty (that is, a period of time in which there is no schedule) and further in a case in which a feeling of the user is in a calm state, the timing determination unit 802 calculates an average value of an index of "+3" corresponding to the user situation and an index of "+4" corresponding to the user feeling as a timing index by the following Expression 1. -
Index = {(+3) + (+4)} ÷ 2 = +3.5   (Expression 1) - Then, for example, in a case in which the threshold is "0," the
timing determination unit 802 can determine that a present time is a timing appropriate for obtaining feedback since the calculated index of “+3.5” exceeds the threshold. - In the above-described example, the appropriate timing is determined on the basis of the two components, the user situation and the user feeling. However, the embodiment is not limited thereto. For example, the timing may be determined using at least one of the user situation and the user feeling.
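- The two-component timing determination above can be sketched as follows. This is a minimal sketch: the index tables and category names are illustrative assumptions in the spirit of FIG. 22, not values defined by the embodiment, and only the averaging of Expression 1 and the threshold comparison follow the text.

```python
# Sketch of the timing determination process (Expression 1).
# SITUATION_INDEX and FEELING_INDEX are hypothetical mappings in the
# spirit of FIG. 22: indexes from -5 to +5 indicate timing appropriateness.
SITUATION_INDEX = {"schedule_empty": +3, "commuting": +1, "in_meeting": -5}
FEELING_INDEX = {"calm": +4, "focused": -2, "irritated": -4}

def timing_index(situation: str, feeling: str) -> float:
    """Average the situation-based and feeling-based indexes (Expression 1)."""
    return (SITUATION_INDEX[situation] + FEELING_INDEX[feeling]) / 2

def is_appropriate_timing(situation: str, feeling: str, threshold: float = 0.0) -> bool:
    """A timing is appropriate when the combined index exceeds the threshold."""
    return timing_index(situation, feeling) > threshold

# The worked example from the text: empty schedule (+3) and calm feeling (+4).
print(timing_index("schedule_empty", "calm"))           # 3.5
print(is_appropriate_timing("schedule_empty", "calm"))  # True
```

As in the text, either component could be used alone by dropping the other term from the average.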
- Subsequently, referring back to
FIG. 18, the acquisition control unit 803 of the feedback acquisition processing unit 80 performs a question sentence data generation process (step S282). Herein, the question sentence data generation process will be described in detail with reference to FIGS. 23, 24A, 24B, 24C, 24D, 25, and 26. The acquisition control unit 803 according to the embodiment can adjust question sentence data for obtaining feedback in consideration of two components: the reliability that the user has ranked, and the personality traits and habits of the user. The reliability, the personality traits, and the habits are all components that change less than the user situation or the user feeling. - The reliability that the user has ranked is the reliability of the system as ranked by the user, and the
acquisition control unit 803 adjusts, for example, a formality degree (specifically, an expression or a way of speaking) of a question sentence in accordance with the level of the reliability. In addition, the acquisition control unit 803 can adjust the number of questions allowed by the user on the basis of the personality traits or habits (an example of an attribute) of the user. In the feedback acquisition process, it is desirable to obtain as much feedback as possible from the user. However, when too many questions are asked, some users may feel unpleasant. Since the tolerance for the number of questions is considered to depend on the personality traits or habits of the user, the number of questions may be adjusted, for example, using the "factors for being happy" proposed in the field of happiness study in recent years. In "Mechanism of Happiness" (Kodansha's new library of knowledge) by Professor Takashi Maeno of the Graduate School of Keio University, the following four factors are exemplified as "factors for being happy." -
- a factor “Let's have a try!” (a factor of self-fulfillment and growth)
- a factor “Thank you!” (a factor of connection and thanks)
- a factor “Going to be all right!” (a factor of positive stand and optimism)
- a factor “Be yourself!” (a factor of independency and my pace)
- Of these factors, the tolerance for the number of questions is considered to depend on the value of the factor "Let's have a try!", and the feedback
acquisition processing unit 80 adjusts the number of questions in accordance with the magnitude of the value of the factor "Let's have a try!" of the user estimated on the basis of the personality traits or habits of the user. Note that the feedback acquisition processing unit 80 may adjust the number of questions in accordance with, for example, the positive degree (positiveness) of the personality traits of the user estimated on the basis of the personality traits or habits of the user, without being limited to the factor "Let's have a try!" -
FIG. 23 is a flowchart illustrating a question sentence data generation process in which reliability is considered according to the embodiment. - As illustrated in
FIG. 23, the acquisition control unit 803 first acquires the reliability of the system (the agent) that the user has ranked (step S353). The reliability of the agent that the user has ranked may be estimated on the basis of user information acquired from the user feeling DB 813 or may be acquired by directly asking the user a question. For example, the agent asks the user "How much do you trust me?" and the reliability of the system is acquired from the user. - Subsequently, the
acquisition control unit 803 adjusts an expression, a way of speaking, and a request degree of feedback content of question sentence data stored in the question sentence DB 815 and corresponding to a mission in accordance with the level (high, intermediate, or low) of the reliability and generates question sentence data. That is, in a case in which the reliability is "low" ("low" in step S356), the acquisition control unit 803 generates the question sentence data corresponding to "low" reliability (step S359). In a case in which the reliability is "intermediate" ("intermediate" in step S356), the acquisition control unit 803 generates the question sentence data corresponding to "intermediate" reliability (step S362). In a case in which the reliability is "high" ("high" in step S356), the acquisition control unit 803 generates the question sentence data corresponding to "high" reliability (step S365). Specifically, in a case in which the reliability that the user has ranked is high, the acquisition control unit 803 adjusts the expression to a casual expression. In a case in which the reliability that the user has ranked is low, the acquisition control unit 803 adjusts the expression to a more formal expression. Herein, an example of the question sentence data adjusted in accordance with the reliability is illustrated in FIGS. 24A, 24B, 24C, and 24D. As illustrated in FIGS. 24A, 24B, 24C, and 24D, for example, question sentence data corresponding to a mission "obtaining feedback on the chocolate sample of BB company" is adjusted to a question sentence of a very formal expression "Could you please give me your feedback on the chocolate?" in a case in which the reliability is "low." In addition, the question sentence data is adjusted to a question sentence of a formal expression "Can you give me your feedback on the chocolate?" in a case in which the reliability is "intermediate." In addition, the question sentence data is adjusted to a question sentence of a casual expression "How was the chocolate?" in a case in which the reliability is "high." Further, when the reliability is "high," the request degree is high, and the acquisition control unit 803 may generate question sentence data for asking a specific question "What do you like about it?" in response to an answer of the user, for example, "It is good." - Note that the example in which a specific exemplary sentence is generated with reference to the
question sentence DB 815 has been described herein, but the embodiment is not limited thereto. The acquisition control unit 803 may output information regarding the reliability of the agent that the user has ranked to the dialogue processing unit 30 so that the dialogue processing unit 30 may generate question sentence data in accordance with the reliability. In addition, the acquisition control unit 803 may change the frequency at which the question for obtaining the feedback is performed in accordance with the level of the reliability. For example, in a case in which the reliability is low, the acquisition control unit 803 may reduce the frequency at which the question for obtaining the feedback is performed. As the reliability increases, the acquisition control unit 803 may increase the frequency at which the question for obtaining the feedback is performed. - Then, the
acquisition control unit 803 outputs the generated question sentence data to the dialogue processing unit 30 (step S368). - Next, a case in which the number of questions allowed by the user is adjusted on the basis of the personality traits and habits of the user will be described. There are many ways to indicate the personality traits of the user. Herein, for example, the factor "Let's have a try" included in the "factors for being happy" introduced in "Mechanism of Happiness" (Kodansha's new library of knowledge) by Professor Takashi Maeno of the Graduate School of Keio University is used. The factor "Let's have a try" is the factor of self-fulfillment and growth, and the value of the factor has a positive correlation with the level of happiness. The factor "Let's have a try" of the user is quantified between -1 and +1 on the basis of the personality traits and habits of the user and is recorded in advance in the individual characteristic DB 814. The tolerance for the number of questions for obtaining feedback is considered to depend on the level of happiness of the user and, further, on the value of the factor "Let's have a try," and the acquisition control unit 803 adjusts the number of questions in accordance with the value of the factor "Let's have a try" of the user stored in the individual characteristic DB 814. In a case in which the number of questions is increased, the acquisition control unit 803 can generate question sentence data with reference to a sales point list corresponding to a mission stored in the question sentence DB 815. - A process of adjusting the number of questions is illustrated in
FIG. 25. FIG. 25 is a flowchart illustrating a question sentence data generation process in which the personality traits of a user are considered according to the embodiment. As illustrated in FIG. 25, the acquisition control unit 803 first acquires the value of the factor "Let's have a try" of the user from the individual characteristic DB 814 (step S373). - Subsequently, the
acquisition control unit 803 determines whether the value of the factor exceeds a predetermined threshold (step S376). - Subsequently, in a case in which the value of the factor does not exceed the predetermined threshold (No in step S376), the
acquisition control unit 803 generates question sentence data regarding a predetermined number n of sales points set in advance (step S379). - Conversely, in a case in which the value of the factor exceeds the predetermined threshold (Yes in step S376), the
acquisition control unit 803 generates question sentence data regarding a predetermined number m of sales points set in advance (step S382). Herein, the integers n and m have a relation of m>n. That is, in a case in which the value of the factor "Let's have a try" exceeds the predetermined threshold, the acquisition control unit 803 adjusts the number of questions so that the number of questions is greater than in the case in which the value of the factor is less than the threshold, since there is a high possibility of the user answering many questions because of his or her personality traits without feeling stress. Herein, an example of a sales point list of a mission stored in the question sentence DB 815 is illustrated in FIG. 26. The sales point list for each mission illustrated in FIG. 26 can be transmitted in advance along with a request for feedback from an information processing device of a company side and can be stored. For example, in the mission of "obtaining feedback on the chocolate sample of BB company," the sales points "(1) smooth melt-in-the-mouth feeling," "(2) polyphenol content of OO%, good for health," and "(3) low in calories" are registered. In a case in which the user is asked about the sales point (1), for example, a question sentence "I heard that the chocolate provides good melt-in-the-mouth feeling. How was it?" is generated. In addition, in a case in which the user is asked about the sales point (2), a question sentence "I heard that the chocolate is good for health because it contains OO% of polyphenol" is generated. In addition, in a case in which the user is asked about the sales point (3), a question sentence "I heard that the chocolate is low in calories" is generated. - Then, the
acquisition control unit 803 outputs the generated question sentence data to the dialogue processing unit 30 (step S385). - The question sentence generation process has been described specifically above.
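- The question sentence generation process above, combining the reliability-dependent phrasing of FIGS. 24A to 24D with the factor-dependent question count, can be sketched as follows. This is a minimal sketch: the threshold, the counts n and m, the factor value, and the follow-up wording are illustrative assumptions, while the opening phrasings follow the examples in the text.

```python
# Sketch of question sentence generation for the mission
# "obtaining feedback on the chocolate sample of BB company".
# Opening questions per reliability level, as in FIGS. 24A-24D.
QUESTION_BY_RELIABILITY = {
    "low": "Could you please give me your feedback on the chocolate?",
    "intermediate": "Can you give me your feedback on the chocolate?",
    "high": "How was the chocolate?",
}

# Sales point list for the mission, in the spirit of FIG. 26.
SALES_POINTS = [
    "smooth melt-in-the-mouth feeling",
    "polyphenol content, good for health",
    "low in calories",
]

def generate_questions(reliability: str, factor_value: float,
                       threshold: float = 0.0, n: int = 1, m: int = 3) -> list:
    """Pick the opening question by reliability, then add sales point
    questions: m of them if the "Let's have a try!" factor exceeds the
    threshold, otherwise n (with m > n)."""
    questions = [QUESTION_BY_RELIABILITY[reliability]]
    count = m if factor_value > threshold else n
    for point in SALES_POINTS[:count]:
        questions.append(f"I heard that the chocolate has {point}. How was it?")
    return questions

print(generate_questions("high", factor_value=0.6))   # casual opener + 3 follow-ups
print(generate_questions("low", factor_value=-0.4))   # formal opener + 1 follow-up
```

A user with a high factor value thus receives the casual opener plus questions on all three sales points, while a user with a low value receives only the formal opener and a single follow-up.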
- Subsequently, referring back to
FIG. 18, the feedback acquisition processing unit 80 outputs the generated question sentence data to the dialogue processing unit 30 (step S285). - Subsequently, the
dialogue processing unit 30 performs a process of supplying the user with a dialogue of the agent into which the question sentence data output from the feedback acquisition processing unit 80 is inserted (step S288) and acquires the feedback (answer sentence data) of the user to the question (step S291). The presentation of the question sentence data is realized when the dialogue processing unit 30 outputs the question sentence data to the voice agent I/F 20 along with the phoneme data in accordance with the agent ID designated by the user, the question sentence data is vocalized by the voice agent I/F 20, and the vocalized question sentence data is transmitted to the client terminal 1. The user gives feedback on the specific experience in a format in which questions from the agent are answered. The client terminal 1 collects answer voice of the user with the microphone and transmits the answer voice to the agent server 2. At this time, the client terminal 1 also transmits various kinds of sensor information such as biological information and acceleration information detected from the user at the time of the feedback. Thus, the dialogue processing unit 30 of the agent server 2 can acquire not only an answer (verbal information) of the user but also non-verbal information such as a situation of voice (a situation in which voice is loud, a speaking amount abruptly increases, a tone of voice, or the like), a situation of an activity amount (an amount of motion of a hand or a body or the like), or a body reaction (a heart rate, a respiration rate, a blood pressure, perspiration, or the like) as the feedback of the user. - Subsequently, the
dialogue processing unit 30 outputs the acquired feedback to the feedback acquisition processing unit 80 (step S294). - Subsequently, the
result generation unit 804 of the feedback acquisition processing unit 80 generates a result (report data) obtained by associating the acquired feedback with the mission (step S297) and outputs (transmits) the generated result to a company or the like of a request source (step S300). - Herein, the details of the result generation process described in the foregoing step S297 will be described with reference to
FIGS. 27 and 28. FIG. 27 is a flowchart illustrating the result generation process according to the embodiment. - As illustrated in
FIG. 27, the result generation unit 804 first acquires the feedback (answer sentence data) acquired through a dialogue with the user from the dialogue processing unit 30 (step S393). - In addition, the
result generation unit 804 acquires activity information (for example, a motion of the body) of the user at the time of feedback, body reaction information (for example, biological information), and feeling information (analyzed from the biological information or a facial expression) from the user situation DB 812 or the user feeling DB 813 and estimates a user state (step S396). As described above, the feedback from the user includes not only the answer sentence data (verbal information) acquired from a conversation between the agent and the user performed through the dialogue processing unit 30 but also non-verbal information other than the answer sentence data. The non-verbal information is biological information detected by a biological sensor of a wearable terminal worn by the user, acceleration information detected by an acceleration sensor, a facial image of the user captured by a camera, feeling information, context extracted from the conversation between the agent and the user, a voice analysis result of the conversation, or the like, and is stored in the user situation DB 812 or the user feeling DB 813. The result generation unit 804 estimates a user state (busy, irritated, depressed, or the like) at the time of feedback on the basis of the information stored in the user situation DB 812 or the user feeling DB 813. - Subsequently, the
result generation unit 804 calculates a positive determination value of the feedback on the basis of the verbal information and the non-verbal information of the feedback (step S399). Even when a good evaluation is obtained verbally, the user's real intention appears in his or her attitude or tone of voice in some cases. Therefore, in the embodiment, a positive determination value of the user may be calculated on the basis of the non-verbal information other than the verbal answer so that the positive determination value can be referred to along with the feedback result. For example, each item of the non-verbal information is normalized to a value of 0 to 1 so that an attitude that can be considered positive yields a value near 1, and the average value is calculated as the positive determination value. - Subsequently, in a case in which a regular feedback (for example, the feedback regarding the sales points illustrated in
FIG. 26, feedback on an experience of a mission target, or a predetermined number of feedbacks) is obtained (Yes in step S402), the result generation unit 804 matches the feedback result with the mission list and generates a result (step S405). Herein, an example of the generated result is illustrated in FIG. 28. As illustrated in FIG. 28, the feedback result according to the embodiment is associated with the mission, the sales point, the question sentence data, the feedback (the verbal information), the user state (the non-verbal information), and the positive determination value (calculated on the basis of the user state). Thus, the company side can understand not only the feedback (the answer sentence data) regarding each sales point but also the aspect of the user at that time from the user state or the positive determination value, and can predict whether the user gives the feedback with his or her real intention. - Note that the feedback result is output to the advertisement
insertion processing unit 70 so that the feedback result can be used also at the time of the advertisement insertion process in the advertisement insertion processing unit 70. That is, the advertisement insertion processing unit 70 according to the embodiment compares the content of the mission list with the advertisement DB 72, extracts terms (goods names, content names, company names, characteristics of goods/content (sales points), and the like) registered as words of interest in the advertisement DB 72, and refers to the feedback result including the words of interest. Specifically, the advertisement insertion processing unit 70 confirms the words of interest toward which the user takes a positive attitude on the basis of the positive determination value of the feedback including the words of interest, and performs control such that advertisement information including the words of interest is inserted into a dialogue. Thus, it is possible to present advertisement information to which the user positively reacts. - As described above, in the communication control system according to the embodiment of the present disclosure, it is possible to obtain reliable feedback from a user more naturally through a conversation with an agent without imposing a burden on the user.
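- The positive determination value and the result row of FIG. 28 described above can be sketched as follows. This is a minimal sketch: the non-verbal item names, their normalized scores, and the helper names are illustrative assumptions; only the normalize-to-0..1-and-average rule and the FIG. 28 fields follow the text.

```python
# Sketch of the result generation process: each non-verbal item is
# normalized to 0..1 (1 = positive attitude) and the values are averaged
# to obtain the positive determination value.
def positive_determination_value(nonverbal_scores: dict) -> float:
    """Average the normalized non-verbal items (voice tone, activity
    amount, body reaction, ...), each assumed to be in the range 0..1."""
    return sum(nonverbal_scores.values()) / len(nonverbal_scores)

def make_result(mission, sales_point, question, feedback, user_state, scores):
    """Assemble one feedback result row in the shape of FIG. 28."""
    return {
        "mission": mission,
        "sales_point": sales_point,
        "question": question,
        "feedback": feedback,        # verbal answer sentence data
        "user_state": user_state,    # non-verbal information
        "positive_value": positive_determination_value(scores),
    }

row = make_result(
    "obtaining feedback on the chocolate sample of BB company",
    "smooth melt-in-the-mouth feeling",
    "I heard that the chocolate provides good melt-in-the-mouth feeling. How was it?",
    "It is good.",
    "relaxed",
    {"voice_tone": 0.9, "activity": 0.7, "body_reaction": 0.8},
)
print(round(row["positive_value"], 2))  # 0.8
```

A row like this, with a verbal answer of "It is good." but a positive determination value well below 1, would suggest to the company side that the stated evaluation may not reflect the user's real intention.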
- The preferred embodiment(s) of the present disclosure has/have been described above with reference to the accompanying drawings, whilst the present disclosure is not limited to the above examples. A person skilled in the art may find various alterations and modifications within the scope of the appended claims, and it should be understood that they will naturally come under the technical scope of the present disclosure.
- For example, it is possible to also generate a computer program causing hardware such as the CPU, the ROM, and the RAM contained in the
client terminal 1 or the agent server 2 described above to realize the function of the client terminal 1 or the agent server 2. In addition, a computer-readable storage medium that stores the computer program is also provided. - In addition, in the above-described embodiment, the configuration in which various functions are realized by the
agent server 2 on the Internet has been described, but the embodiment is not limited thereto. At least a part of the configuration of the agent server 2 illustrated in FIG. 3 may be realized in the client terminal 1 (a smartphone, a wearable terminal, or the like) of the user. In addition, the whole configuration of the agent server 2 illustrated in FIG. 3 may be installed in the client terminal 1 so that the client terminal 1 can perform all the processes. - Further, the effects described in this specification are merely illustrative or exemplified effects, and are not limitative. That is, with or in the place of the above effects, the technology according to the present disclosure may achieve other effects that are clear to those skilled in the art from the description of this specification. Additionally, the present technology may also be configured as below.
- (1)
- A communication system including:
a communication unit configured to receive request information for requesting feedback on a specific experience of a user;
an accumulation unit configured to accumulate the feedback received from a client terminal of the user via the communication unit; and
a control unit configured to perform control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to the client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received. - (2)
- The communication system according to (1),
in which after it is estimated that the user has the specific experience, the control unit performs control such that the question for requesting the feedback on the specific experience of the user is transmitted to the client terminal of the user. - (3)
- The communication system according to (2),
in which the control unit estimates that the user has the specific experience by acquiring a response of the user to a question regarding whether the user has the specific experience, via the communication unit. - (4)
- The communication system according to (2),
in which the control unit estimates that the user has the experience by acquiring an analysis result of sensor data of the client terminal via the communication unit. - (5)
- The communication system according to any one of (1) to (4),
in which the control unit performs control such that the question for requesting the feedback is transmitted to the client terminal at a timing according to at least one of a schedule of the user, a conversation of the user acquired via the communication unit, and feeling information of the user which are the context of the user. - (6)
- The communication system according to any one of (1) to (5),
in which the control unit generates the question for requesting the feedback in consideration of a relation between the user and the agent. - (7)
- The communication system according to (6),
in which the control unit generates the question by using, as the relation, reliability of the agent that the user has ranked. - (8)
- The communication system according to (7),
in which the control unit adjusts an expression of the question for requesting the feedback in accordance with the reliability. - (9)
- The communication system according to any one of (1) to (8),
in which the control unit generates the question for requesting the feedback in consideration of an attribute of the user. - (10)
- The communication system according to (9),
in which the attribute of the user is a personality trait or habit of the user. - (11)
- The communication system according to (10),
in which the control unit generates a predetermined number of questions for requesting the feedback, in accordance with the attribute of the user. - (12)
- The communication system according to any one of (1) to (11),
in which the control unit calculates a positive determination value of the specific experience on a basis of the feedback and a feeling of the user at the time of acquisition of the feedback, and accumulates the positive determination value of the specific experience in the accumulation unit. - (13)
- The communication system according to any one of (1) to (12),
in which the control unit performs control such that the question for requesting the feedback is output as speech of the agent from the client terminal by using voice corresponding to a specific agent. - (14)
- The communication system according to (13), further including:
a database configured to store voice data corresponding to each agent,
in which the control unit performs control such that the question for requesting the feedback is generated in consideration of a personality trait of an agent purchased by the user, and the generated question is output from the client terminal by using voice corresponding to the agent. - (15)
- The communication system according to any one of (1) to (14),
in which the accumulation unit stores the request information in association with feedback transmitted from the client terminal via the communication unit. - (16)
- A communication control method including: by a processor,
receiving request information for requesting feedback on a specific experience of a user via a communication unit;
performing control such that a question for requesting the feedback on the specific experience of the user based on the request information is transmitted to a client terminal of the user at a timing according to context of the user, and feedback input by the user in response to the question output as speech of an agent via the client terminal is received; and
accumulating the feedback received from the client terminal of the user via the communication unit, in an accumulation unit. -
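The flow of embodiment (16) above can be sketched in code. This is a minimal, hypothetical illustration only: the class and method names, the context fields checked by the timing test, and the `send`/`receive` callbacks are assumptions for the sketch, not taken from the disclosure.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackRequest:
    """Request information for feedback on a specific experience of a user."""
    user_id: str
    experience: str   # the specific experience to ask about
    question: str     # question to be output as speech of the agent


@dataclass
class CommunicationSystem:
    store: list = field(default_factory=list)  # accumulation unit

    def is_good_timing(self, context: dict) -> bool:
        # "Timing according to context of the user": here, a free schedule,
        # no ongoing conversation, and a neutral-or-better feeling
        # (these particular criteria are illustrative assumptions).
        return (context.get("schedule_free", False)
                and not context.get("in_conversation", False)
                and context.get("feeling", 0) >= 0)

    def handle_request(self, req: FeedbackRequest, context: dict,
                       send, receive) -> bool:
        # Transmit the question only at a timing according to the context,
        # then accumulate the feedback received from the client terminal.
        if not self.is_good_timing(context):
            return False
        send(req.user_id, req.question)   # output as agent speech on the client
        feedback = receive(req.user_id)   # feedback input by the user
        self.store.append((req.experience, feedback))
        return True
```

In use, `send` and `receive` would be bound to the communication unit; here they can be simple callbacks for testing.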
- 1 client terminal
- 2 agent server
- 30 dialogue processing unit
- 300 dialogue processing unit
- 310 question sentence retrieval unit
- 320 answer sentence generation unit
- 330 conversation DB
- 340 phoneme data acquisition unit
- 31 basic dialogue processing unit
- 32 character A dialogue processing unit
- 33 person B dialogue processing unit
- 34 person C dialogue processing unit
- 40 phoneme storage unit
- 41 basic phoneme DB
- 42 character A phoneme DB
- 43 person B phoneme DB
- 44 person C phoneme DB
- 50 conversation DB generation unit
- 60 phoneme DB generation unit
- 70 advertisement insertion processing unit
- 72 advertisement DB
- 80 feedback acquisition processing unit
- 801 list confirmation unit
- 802 timing determination unit
- 803 acquisition control unit
- 804 result generation unit
- 810 mission list DB
- 811 experience list DB
- 812 user situation DB
- 813 user feeling DB
- 814 individual characteristic DB
- 815 question sentence DB
- 3 network
- 10 agent
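Reference numerals 801 to 804 above suggest a four-stage pipeline inside the feedback acquisition processing unit (80). The sketch below is a hypothetical composition of those stages; the dictionary shapes and the `ask` callback are assumptions made for illustration.

```python
def feedback_acquisition(mission_list, user_situation, ask):
    """Hypothetical pipeline over the units 801-804 listed above."""
    # 801 list confirmation unit: pick pending missions from the mission list DB
    pending = [m for m in mission_list if not m.get("done")]

    results = []
    for mission in pending:
        # 802 timing determination unit: consult the user situation DB
        if user_situation.get("busy"):
            continue
        # 803 acquisition control unit: put the question to the user via the agent
        answer = ask(mission["question"])
        # 804 result generation unit: pair the answer with its mission
        results.append({"mission": mission["id"], "answer": answer})
        mission["done"] = True
    return results
```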
Claims (20)
1. A communication system, comprising:
processing circuitry configured to:
receive request information to request a feedback on a specific experience of a user;
estimate that the user has the specific experience based on an acquisition of a response of the user to a first question, wherein the first question is associated with the specific experience of the user;
control, based on the request information and the estimation, transmission of a second question to a client terminal of the user at a timing, wherein
the timing is associated with a context of the user, and
the context of the user is based on at least one of a change in a feeling of the user in a specific period or a change in a situation of the user;
control reception of the feedback of the user from the client terminal based on the transmission of the second question, wherein the second question is output as speech of an agent in the client terminal; and
accumulate the feedback received from the client terminal in a database.
2. The communication system according to claim 1 , wherein the context of the user comprises at least one of a schedule of the user, a conversation of the user, or feeling information of the user.
3. The communication system according to claim 1 , wherein the processing circuitry is further configured to generate the second question based on a relation between the user and the agent.
4. The communication system according to claim 3 , wherein
the processing circuitry is further configured to generate the second question based on a reliability of the agent, and
the reliability of the agent is based on a rank assigned to the agent by the user.
5. The communication system according to claim 4 , wherein the processing circuitry is further configured to modify an expression of the second question based on the reliability of the agent.
6. The communication system according to claim 1 , wherein the processing circuitry is further configured to generate the second question based on an attribute of the user.
7. The communication system according to claim 6 , wherein
the attribute of the user is one of a personality trait or habit of the user, and
the attribute is based on one of biological information of the user, voice information of the user, a captured image that is associated with a wearable terminal of the user, or a conversation of the user.
8. The communication system according to claim 7 , wherein
the processing circuitry is further configured to generate a number of questions to request the feedback, and
the number of questions is generated based on the attribute of the user.
9. The communication system according to claim 1 , wherein the processing circuitry is further configured to:
calculate a positive determination value of the specific experience based on the feedback and the feeling of the user, wherein the positive determination value is calculated at a time of acquisition of the feedback; and
accumulate the positive determination value of the specific experience in the database.
10. The communication system according to claim 1 , wherein the processing circuitry is further configured to store the request information in association with the feedback.
11. A communication method, comprising:
in a communication system:
receiving request information to request a feedback on a specific experience of a user;
estimating that the user has the specific experience based on an acquisition of a response of the user to a first question, wherein the first question is associated with the specific experience of the user;
controlling, based on the request information and the estimation, transmission of a second question to a client terminal of the user at a timing, wherein
the timing is associated with a context of the user, and
the context of the user is based on at least one of a change in a feeling of the user in a specific period or a change in a situation of the user;
controlling reception of the feedback of the user from the client terminal based on the transmission of the second question, wherein the second question is output as speech of an agent in the client terminal; and
accumulating the feedback received from the client terminal in a database.
12. The communication method according to claim 11 , wherein the context of the user comprises at least one of a schedule of the user, a conversation of the user, or feeling information of the user.
13. The communication method according to claim 11 , further comprising generating the second question based on a relation between the user and the agent.
14. The communication method according to claim 13 , further comprising generating the second question based on a reliability of the agent, wherein the reliability of the agent is based on a rank assigned to the agent by the user.
15. The communication method according to claim 14 , further comprising modifying an expression of the second question based on the reliability of the agent.
16. The communication method according to claim 11 , further comprising generating the second question based on an attribute of the user.
17. The communication method according to claim 16 , wherein
the attribute of the user is one of a personality trait or habit of the user, and
the attribute is based on one of biological information of the user, voice information of the user, a captured image that is associated with a wearable terminal of the user, or a conversation of the user.
18. The communication method according to claim 17 , further comprising generating a number of questions to request the feedback, wherein the number of questions is generated based on the attribute of the user.
19. The communication method according to claim 11 , further comprising:
calculating a positive determination value of the specific experience based on the feedback and the feeling of the user, wherein the positive determination value is calculated at a time of acquisition of the feedback; and
accumulating the positive determination value of the specific experience in the database.
20. The communication method according to claim 11 , further comprising storing the request information in association with the feedback.
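Claims 9 and 19 combine the feedback with the feeling of the user at the time of acquisition into a positive determination value. One way such a score could be computed is sketched below; the word lists, the clamping, and the equal weighting of text polarity and sensed feeling are assumptions, not part of the claims.

```python
# Illustrative sentiment lexicons (assumed for this sketch).
POSITIVE_WORDS = {"great", "good", "delicious", "fun"}
NEGATIVE_WORDS = {"bad", "boring", "awful"}


def positive_determination_value(feedback_text: str, feeling: float) -> float:
    """Combine feedback-text polarity with the feeling of the user
    (-1.0 .. 1.0) measured at the time the feedback was acquired."""
    words = feedback_text.lower().split()
    polarity = (sum(w in POSITIVE_WORDS for w in words)
                - sum(w in NEGATIVE_WORDS for w in words))
    text_score = max(-1.0, min(1.0, polarity / 2))
    # Equal weighting of the two signals is an assumption of this sketch.
    return 0.5 * text_score + 0.5 * feeling
```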
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/682,106 US20220189479A1 (en) | 2016-01-25 | 2022-02-28 | Communication system and communication control method |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016011664 | 2016-01-25 | ||
JP2016-011664 | 2016-01-25 | ||
PCT/JP2016/081954 WO2017130496A1 (en) | 2016-01-25 | 2016-10-27 | Communication system and communication control method |
US201816069005A | 2018-07-10 | 2018-07-10 | |
US17/682,106 US20220189479A1 (en) | 2016-01-25 | 2022-02-28 | Communication system and communication control method |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/069,005 Continuation US11295736B2 (en) | 2016-01-25 | 2016-10-27 | Communication system and communication control method |
PCT/JP2016/081954 Continuation WO2017130496A1 (en) | 2016-01-25 | 2016-10-27 | Communication system and communication control method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220189479A1 true US20220189479A1 (en) | 2022-06-16 |
Family
ID=59398954
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/069,005 Active 2036-11-21 US11295736B2 (en) | 2016-01-25 | 2016-10-27 | Communication system and communication control method |
US17/682,106 Pending US20220189479A1 (en) | 2016-01-25 | 2022-02-28 | Communication system and communication control method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/069,005 Active 2036-11-21 US11295736B2 (en) | 2016-01-25 | 2016-10-27 | Communication system and communication control method |
Country Status (3)
Country | Link |
---|---|
US (2) | US11295736B2 (en) |
CN (1) | CN108475404B (en) |
WO (1) | WO2017130496A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3107019A1 (en) * | 2015-06-16 | 2016-12-21 | Lazaryev Oleksiy | Method, computer program product and system for the exchange of health data |
US11302317B2 (en) * | 2017-03-24 | 2022-04-12 | Sony Corporation | Information processing apparatus and information processing method to attract interest of targets using voice utterance |
CN108600911B (en) | 2018-03-30 | 2021-05-18 | 联想(北京)有限公司 | Output method and electronic equipment |
JP7131077B2 (en) * | 2018-05-24 | 2022-09-06 | カシオ計算機株式会社 | CONVERSATION DEVICE, ROBOT, CONVERSATION DEVICE CONTROL METHOD AND PROGRAM |
CN108986804A (en) * | 2018-06-29 | 2018-12-11 | 北京百度网讯科技有限公司 | Man-machine dialogue system method, apparatus, user terminal, processing server and system |
KR20200039982A (en) * | 2018-10-08 | 2020-04-17 | 현대자동차주식회사 | multi device system, AND CONTROL METHOD THEREOF |
JP6993314B2 (en) * | 2018-11-09 | 2022-01-13 | 株式会社日立製作所 | Dialogue systems, devices, and programs |
JP6577126B1 (en) * | 2018-12-18 | 2019-09-18 | 株式会社Acd | Avatar display control method |
CN112015852A (en) * | 2019-05-31 | 2020-12-01 | 微软技术许可有限责任公司 | Providing responses in a session about an event |
CN114430831A (en) * | 2019-09-19 | 2022-05-03 | 株式会社钟化 | Information processing device and information processing program |
JP7218816B2 (en) * | 2019-10-03 | 2023-02-07 | 日本電信電話株式会社 | DIALOGUE METHOD, DIALOGUE SYSTEM, DIALOGUE DEVICE, AND PROGRAM |
US11694039B1 (en) | 2021-01-22 | 2023-07-04 | Walgreen Co. | Intelligent automated order-based customer dialogue system |
US11797753B2 (en) * | 2022-06-27 | 2023-10-24 | Univerzita Palackého v Olomouci | System and method for adapting text-based data structures to text samples |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060122834A1 (en) * | 2004-12-03 | 2006-06-08 | Bennett Ian M | Emotion detection device & method for use in distributed systems |
US20100274632A1 (en) * | 2007-09-04 | 2010-10-28 | Radford Institute Australia Pty Ltd | Customer satisfaction monitoring system |
US20130173687A1 (en) * | 2012-01-03 | 2013-07-04 | Teletech Holdings, Inc. | Method for providing support services using consumer selected specialists and specialist ratings |
US20140143157A1 (en) * | 2012-11-21 | 2014-05-22 | Verint Americas Inc. | Design and Analysis of Customer Feedback Surveys |
US20140222512A1 (en) * | 2013-02-01 | 2014-08-07 | Goodsnitch, Inc. | Receiving, tracking and analyzing business intelligence data |
US20140267651A1 (en) * | 2013-03-15 | 2014-09-18 | Orcam Technologies Ltd. | Apparatus and method for using background change to determine context |
US20150124952A1 (en) * | 2013-11-05 | 2015-05-07 | Bank Of America Corporation | Determining most effective call parameters and presenting to representative |
US20150356579A1 (en) * | 2014-06-04 | 2015-12-10 | SureCritic, Inc. | Intelligent customer-centric feedback management |
US20160117699A1 (en) * | 2013-05-14 | 2016-04-28 | Gut Feeling Laboratory Inc. | Questionnaire system, questionnaire response device, questionnaire response method, and questionnaire response program |
US20160180360A1 (en) * | 2014-12-18 | 2016-06-23 | Edatanetworks Inc. | Devices, systems and methods for managing feedback in a network of computing resources |
US20160266724A1 (en) * | 2015-03-13 | 2016-09-15 | Rockwell Automation Technologies, Inc. | In-context user feedback probe |
US20160300275A1 (en) * | 2015-04-07 | 2016-10-13 | International Business Machines Corporation | Rating Aggregation and Propagation Mechanism for Hierarchical Services and Products |
US9536269B2 (en) * | 2011-01-19 | 2017-01-03 | 24/7 Customer, Inc. | Method and apparatus for analyzing and applying data related to customer interactions with social media |
US9659301B1 (en) * | 2009-08-19 | 2017-05-23 | Allstate Insurance Company | Roadside assistance |
US9860355B2 (en) * | 2015-11-23 | 2018-01-02 | International Business Machines Corporation | Call context metadata |
US10068221B1 (en) * | 2014-10-29 | 2018-09-04 | Walgreen Co. | Using a mobile computing device camera to trigger state-based actions |
US10217142B1 (en) * | 2010-09-23 | 2019-02-26 | Tribal Technologies, Inc. | Selective solicitation of user feedback for digital goods markets |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0981632A (en) | 1995-09-13 | 1997-03-28 | Toshiba Corp | Information publication device |
JP2001215993A (en) | 2000-01-31 | 2001-08-10 | Sony Corp | Device and method for interactive processing and recording medium |
JP2002216026A (en) | 2000-11-17 | 2002-08-02 | Sony Corp | Information communication system, agent terminal, information distribution system, storage medium with agent program stored, storage medium with agent access program stored, storage medium with exclusive processing program stored, agent program, agent access program and exclusive processing program |
WO2004072883A1 (en) | 2003-02-12 | 2004-08-26 | Hitachi, Ltd. | Usability evaluation support method and system |
JP4074243B2 (en) | 2003-12-26 | 2008-04-09 | 株式会社東芝 | Content providing apparatus and method, and program |
JP2005309604A (en) | 2004-04-19 | 2005-11-04 | Digital Dream:Kk | Interview type personal information management method |
JP2005339368A (en) | 2004-05-28 | 2005-12-08 | Ntt Docomo Inc | Emotion grasping system and emotion grasping method |
CN103593054B (en) * | 2013-11-25 | 2018-04-20 | 北京光年无限科技有限公司 | A kind of combination Emotion identification and the question answering system of output |
CN104464733B (en) * | 2014-10-28 | 2019-09-20 | 百度在线网络技术(北京)有限公司 | A kind of more scene management method and devices of voice dialogue |
CN104392720A (en) * | 2014-12-01 | 2015-03-04 | 江西洪都航空工业集团有限责任公司 | Voice interaction method of intelligent service robot |
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
2016
- 2016-10-27 WO PCT/JP2016/081954 patent/WO2017130496A1/en active Application Filing
- 2016-10-27 US US16/069,005 patent/US11295736B2/en active Active
- 2016-10-27 CN CN201680079320.8A patent/CN108475404B/en active Active
2022
- 2022-02-28 US US17/682,106 patent/US20220189479A1/en active Pending
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060122834A1 (en) * | 2004-12-03 | 2006-06-08 | Bennett Ian M | Emotion detection device & method for use in distributed systems |
US20100274632A1 (en) * | 2007-09-04 | 2010-10-28 | Radford Institute Australia Pty Ltd | Customer satisfaction monitoring system |
US9659301B1 (en) * | 2009-08-19 | 2017-05-23 | Allstate Insurance Company | Roadside assistance |
US10217142B1 (en) * | 2010-09-23 | 2019-02-26 | Tribal Technologies, Inc. | Selective solicitation of user feedback for digital goods markets |
US9536269B2 (en) * | 2011-01-19 | 2017-01-03 | 24/7 Customer, Inc. | Method and apparatus for analyzing and applying data related to customer interactions with social media |
US20130173687A1 (en) * | 2012-01-03 | 2013-07-04 | Teletech Holdings, Inc. | Method for providing support services using consumer selected specialists and specialist ratings |
US20140143157A1 (en) * | 2012-11-21 | 2014-05-22 | Verint Americas Inc. | Design and Analysis of Customer Feedback Surveys |
US20140222512A1 (en) * | 2013-02-01 | 2014-08-07 | Goodsnitch, Inc. | Receiving, tracking and analyzing business intelligence data |
US20140267651A1 (en) * | 2013-03-15 | 2014-09-18 | Orcam Technologies Ltd. | Apparatus and method for using background change to determine context |
US20160117699A1 (en) * | 2013-05-14 | 2016-04-28 | Gut Feeling Laboratory Inc. | Questionnaire system, questionnaire response device, questionnaire response method, and questionnaire response program |
US20150124952A1 (en) * | 2013-11-05 | 2015-05-07 | Bank Of America Corporation | Determining most effective call parameters and presenting to representative |
US20150356579A1 (en) * | 2014-06-04 | 2015-12-10 | SureCritic, Inc. | Intelligent customer-centric feedback management |
US10068221B1 (en) * | 2014-10-29 | 2018-09-04 | Walgreen Co. | Using a mobile computing device camera to trigger state-based actions |
US20160180360A1 (en) * | 2014-12-18 | 2016-06-23 | Edatanetworks Inc. | Devices, systems and methods for managing feedback in a network of computing resources |
US20160266724A1 (en) * | 2015-03-13 | 2016-09-15 | Rockwell Automation Technologies, Inc. | In-context user feedback probe |
US20160300275A1 (en) * | 2015-04-07 | 2016-10-13 | International Business Machines Corporation | Rating Aggregation and Propagation Mechanism for Hierarchical Services and Products |
US9860355B2 (en) * | 2015-11-23 | 2018-01-02 | International Business Machines Corporation | Call context metadata |
Also Published As
Publication number | Publication date |
---|---|
CN108475404A (en) | 2018-08-31 |
US11295736B2 (en) | 2022-04-05 |
CN108475404B (en) | 2023-02-10 |
US20190027142A1 (en) | 2019-01-24 |
WO2017130496A1 (en) | 2017-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220189479A1 (en) | Communication system and communication control method | |
US11327556B2 (en) | Information processing system, client terminal, information processing method, and recording medium | |
US11159462B2 (en) | Communication system and communication control method | |
US10146882B1 (en) | Systems and methods for online matching using non-self-identified data | |
US20240185853A1 (en) | System and method for adapted interactive experiences | |
JP7396396B2 (en) | Information processing device, information processing method, and program | |
US11646026B2 (en) | Information processing system, and information processing method | |
US20210406956A1 (en) | Communication system and communication control method | |
US11595331B2 (en) | Communication system and communication control method | |
JP7524896B2 (en) | Information processing system, information processing method, and program | |
US20240212826A1 (en) | Artificial conversation experience | |
CN110214301B (en) | Information processing apparatus, information processing method, and program | |
JP2020160641A (en) | Virtual person selection device, virtual person selection system and program | |
US20180121624A1 (en) | Methods and apparatus for personalising content in a health management system | |
KR102101311B1 (en) | Method and apparatus for providing virtual reality including virtual pet | |
JP2022127234A (en) | Information processing method, information processing system, and program | |
CN114971755A (en) | Information processing apparatus, information processing method, and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |