JP5897240B2 - Customer service system and conversation server


Info

Publication number
JP5897240B2
JP5897240B2 (application JP2009150146A)
Authority
JP
Japan
Prior art keywords
conversation
sentence
answer
unit
plan
Prior art date
Legal status
Active
Application number
JP2009150146A
Other languages
Japanese (ja)
Other versions
JP2010073191A (en)
Inventor
黄 声揚
勝倉 裕
Original Assignee
株式会社ユニバーサルエンターテインメント (Universal Entertainment Corporation)
Priority date
Filing date
Publication date
Priority to JP2008212190
Application filed by 株式会社ユニバーサルエンターテインメント
Priority to JP2009150146A
Priority claimed from US12/542,170 (US8374859B2)
Publication of JP2010073191A
Publication of JP5897240B2
Application granted
Status: Active
Anticipated expiration

Description

  The present invention relates to a customer service system, and more particularly to a customer service system using an apparatus that can automatically output an answer in response to a user's utterance and establish a conversation with the user.

  With the spread and development of communication networks and network communication devices, the distribution of goods and services through networks has expanded. Along with this, customer-facing services (services that provide answers and advice in response to customer requests, such as support services) have also come to be provided between customers and service providers via networks.

As forms of customer service via a network, FAQ pages posted on websites and offline e-mail support have been offered, but these were unsatisfactory in that they could not respond in real time. Customer services using a real-time chat system have therefore been provided (for example, Non-Patent Document 1).
This is a service in which an operator called an agent responds to questions from customers by chat, using a database called an online Q&A search system (Talisma Knowledge Management).

Non-Patent Document 1: Vital Information, Inc., "Talisma CIM Channels", [online], [retrieved April 30, 2009], Internet <URL: http://www.vitals.co.jp/solution/tcim.html>

  However, customer service using the conventional method described above requires, for each business, the cost of preparing and maintaining a database for chat operators, comparable to the cost of the manuals used for telephone support, in addition to the cost of the operators themselves. It therefore offers no significant advantage in saving labor costs.

  SUMMARY OF THE INVENTION An object of the present invention is to provide a customer service that can respond to customers in real time and give satisfaction, while suppressing the increase in cost of preparing chat operators and the databases the chat operators use.

As means for solving the above problems, the present invention has the following features.
The present invention is proposed as a customer service system .
The system comprises a first means (for example, a conversation device) that transmits a user utterance and receives an answer sentence, and an answer processing means.
The answer processing means includes:
a first function that, when an arbitrary user utterance is transmitted from the first means or when a certain period of time elapses without an utterance, determines a first answer sentence based on a conversation scenario, and repeats the process of transmitting the determined first answer sentence and the operation control information associated with the first answer sentence to the first means until a first specific user utterance is transmitted from the first means; and, when the first specific user utterance is transmitted from the first means, transmits a second answer sentence and the operation control information associated with the second answer sentence to the first means;
a second function that, for a third user utterance transmitted from the first means, determines a third answer sentence based on the conversation scenario and transmits the determined third answer sentence and the operation control information associated with the third answer sentence to the first means; that, if no third answer sentence for the third user utterance can be found in the conversation scenario, transmits the third user utterance to an expert who handles answers to it, receives the corresponding response content, and transmits the received response content to the first means; and that stores a conversation log including the third user utterance, the answer sentences, and the response content, transmits the conversation log, and receives and stores a conversation scenario generated based on the conversation log; and
a third function that: when the answer to the user utterance can be handled within the current plan, determines the basic control state as first control information and determines the answer specified by the next-plan designation information;
when the user utterance requests termination of the current conversation, determines the basic control state as second control information and terminates the conversation;
when no answer sentence to the user utterance can be provided within the current plan, determines the basic control state as third control information and determines an answer sentence from another plan different from the current plan; and
when the user's intention is not clear from the user utterance, determines the basic control state as fourth control information and prompts the user for a further utterance.
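
As a rough illustration, the four basic control states could be modeled as follows (a minimal sketch; the enum and its member names are invented for this example, not taken from the patent):

```python
# Sketch of the four basic control states handled by the third function.
# The names are illustrative only; the patent defines the four behaviors,
# not these identifiers.
from enum import Enum, auto

class BasicControlState(Enum):
    CONTINUE_PLAN = auto()     # 1st: answer within the current plan (next-plan info)
    END_CONVERSATION = auto()  # 2nd: the user asked to end the conversation
    SWITCH_PLAN = auto()       # 3rd: current plan has no answer; use another plan
    CLARIFY_INTENT = auto()    # 4th: intention unclear; prompt a further utterance
```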

  According to the present invention, it is possible to respond to inquiries and questions from customers without depending on a chat operator, and also to handle unknown or complicated questions that the system cannot answer by itself.

The customer service system may further have the following features.
The system may further comprise: a fourth means (for example, an expert-side terminal device) that receives the user utterance, receives an expert's reply to it, and transmits the response content to the third means; a fifth means (for example, a conversation log DB) that receives and stores a conversation log including the user utterance from the third means and the response content transmitted from the fourth means; and a sixth means (for example, a conversation scenario editing device) that generates a conversation scenario based on the user utterances and the corresponding response contents in the conversation log stored in the fifth means, and transmits the generated conversation scenario to the third means.
The customer service system may further have the following features.
The system may further have a deletion means for deleting part of the contents of the conversation scenario stored in the third means.
The customer service system may further have the following features.
The system may further have a replacement means for replacing part or all of the conversation scenario stored in the third means with a new conversation scenario.
The customer service system may further have the following features.
The system may further comprise: a seventh means that, when an arbitrary user utterance is transmitted from the first means or when a certain period of time elapses without an utterance, determines a fourth answer sentence based on the conversation scenario, and, if a second specific user utterance is transmitted from the first means while the determined fourth answer sentence and the operation control information associated with the fourth answer sentence are being transmitted to the first means, transmits a fifth answer sentence and the operation control information associated with the fifth answer sentence to the first means;
an eighth means that, after the seventh means has transmitted the fourth answer sentence and its associated operation control information to the first means, when an arbitrary user utterance is transmitted from the first means or a certain period of time elapses in silence, determines a sixth answer sentence based on the conversation scenario, and, if a third specific user utterance is transmitted from the first means while the determined sixth answer sentence and its associated operation control information are being transmitted to the first means, transmits a seventh answer sentence and the operation control information associated with the seventh answer sentence to the first means; and
a ninth means that, after the eighth means has transmitted the sixth answer sentence and its associated operation control information to the first means, when an arbitrary user utterance is transmitted from the first means or a certain period of time elapses in silence, determines an eighth answer sentence based on the conversation scenario, and, if a fourth specific user utterance is transmitted from the first means while the determined eighth answer sentence and its associated operation control information are being transmitted to the first means, transmits a ninth answer sentence and the operation control information associated with the ninth answer sentence to the first means.
After the ninth means has transmitted the eighth answer sentence and its associated operation control information to the first means, the seventh means may again, when an arbitrary user utterance is transmitted from the first means or a certain period of time elapses without an utterance, determine the fourth answer sentence based on the conversation scenario and, if the second specific user utterance is transmitted from the first means while the determined fourth answer sentence and its associated operation control information are being transmitted to the first means, transmit the fifth answer sentence and the operation control information associated with the fifth answer sentence to the first means.

  According to this invention, it is possible to provide users with answers that draw on an expert's knowledge even for customer questions that the system cannot answer, and, by feeding the expert's answers back into the conversation scenario, to reduce the recurrence of situations in which the same question cannot be answered.

  ADVANTAGE OF THE INVENTION According to this invention, a customer service that can respond to customers in real time and give satisfaction can be provided while suppressing the increase in cost of preparing chat operators and the databases the chat operators use.

Hereinafter, embodiments of the present invention will be described with reference to the drawings.
The first embodiment is proposed as an automatic conversation system that outputs answers in response to a user's utterances based on a conversation scenario prepared in advance, together with a conversation scenario editing apparatus that generates and edits the conversation scenario.

[1. Configuration example of automatic conversation system and conversation scenario editing device]
Hereinafter, configuration examples of the automatic conversation system and the conversation scenario editing apparatus will be described. FIG. 1 is a block diagram illustrating a configuration example of the automatic conversation system 1. The automatic conversation system 1 includes a conversation device 10, a conversation server 20 connected to the conversation device 10, and a conversation scenario editing device 30 that generates and edits a conversation scenario used by the conversation server 20.

  When the user inputs an utterance, the conversation device 10 transmits the utterance content to the conversation server 20. When the conversation server 20 receives the utterance content, it determines, based on the conversation scenario, an answer that replies to the utterance content and operation control information that corresponds to the answer and describes the operation to be executed by the conversation device 10, and outputs the answer and the operation control information to the conversation device 10. The conversation scenario editing device 30 generates and edits the conversation scenario 40 and outputs the generated or edited conversation scenario. The output conversation scenario 40 is stored in the conversation server 20.

Each of the above apparatuses is described in detail below.
[1.1. Conversation device]
The conversation device 10 acquires a user's utterance (user utterance) as input, transmits the input content (hereinafter referred to as an input sentence) to the conversation server 20, receives the answer sentence and operation control information returned from the conversation server 20, and, based on the received content, outputs the answer and performs an operation according to the operation control information.

  The conversation device 10 is an information processing apparatus including an arithmetic processing unit (CPU), a main memory (RAM), a read-only memory (ROM), an input/output device (I/O), and, if necessary, an external storage device such as a hard disk device, or an instrument or toy incorporating such an information processing apparatus: for example, a computer, a mobile phone, a so-called Internet home appliance, or a robot. A program is stored in the ROM or the hard disk device of the conversation device 10, and the conversation device is realized by loading this program into the main memory and executing it on the CPU. The program need not necessarily be stored in a storage device inside the information processing apparatus; it may be provided from an external device (for example, an ASP (application service provider) server) and loaded into the main memory.

  FIG. 2 is a block diagram illustrating a configuration example of the conversation device 10. The conversation device 10 includes an input unit 11, a conversation processing unit 12 connected to the input unit 11, an operation control unit 13 connected to the conversation processing unit 12, and an output unit 14 connected to the conversation processing unit 12 and the operation control unit 13. The conversation processing unit 12 can communicate with the conversation server 20.

  The input unit 11 has a function of receiving the user's utterance content (input sentence), converting it into a signal that the conversation processing unit 12 can process, such as an electric signal, and passing it on. The input unit 11 is, for example, a keyboard, a pointing device, a touch panel, a microphone, or a combination of these.

  The conversation processing unit 12 sends the input sentence received from the input unit 11 to the conversation server 20 and requests the conversation server 20 to transmit an answer sentence corresponding to the input sentence and the operation control information corresponding to that answer sentence. When the conversation processing unit 12 receives the answer sentence and the operation control information from the conversation server 20, it passes the answer sentence to the output unit 14 for output, and passes the operation control information to the operation control unit 13.

  The operation control unit 13 executes the specified operation based on the operation control information passed from the conversation processing unit 12. If the designated operation is display by the output unit 14 (for example, playback of a designated motion), the operation control unit 13 causes the output unit 14 to execute it. If the designated operation is the output of an answer sentence different from the answer sentence acquired from the conversation server 20 (for example, replacing the answer sentence "What do you want to talk about?" acquired from the conversation server 20 with "Please say something!"), the operation control unit 13 causes the output unit 14 to output that answer sentence.

  The output unit 14 has a function of outputting the answer sentence in a manner the user can recognize. The present invention places no restriction on how the answer sentence is output. The output unit 14 is, for example, a liquid crystal display device when the answer sentence is provided to the user as character information, or an artificial voice generation device and a speaker when the answer sentence is provided to the user as voice information.

[1.2. Conversation server]
The conversation server 20 is a device having a function of determining, based on the conversation scenario, an answer that replies to the utterance content and operation control information that corresponds to the answer and describes the operation to be executed by the conversation device 10, and of outputting the answer and the operation control information to the conversation device 10.

  The conversation server 20 includes an arithmetic processing unit (CPU), a main memory (RAM), a read-only memory (ROM), an input/output device (I/O), and, if necessary, an external storage device such as a hard disk device; it is, for example, a computer, a workstation, or a server device. A program is stored in the ROM or the hard disk device of the conversation server 20, and the conversation server is realized by loading the program into the main memory and executing it on the CPU. The program need not necessarily be stored in a storage device inside the information processing apparatus; it may be provided from an external device (for example, an ASP (application service provider) server) and loaded into the main memory.

  The conversation device 10 and the conversation server 20 may be connected by wire or wirelessly, and may be connected via a communication network such as a LAN, a wireless LAN, or the Internet (a plurality of communication networks may be combined). Further, the conversation device 10 and the conversation server 20 do not necessarily have to be independent devices; the present invention can also be realized with the conversation device 10 and the conversation server 20 implemented as the same device.

  FIG. 3 is a block diagram illustrating a configuration example of the conversation server 20. The conversation server 20 includes an answer processing unit 21 that can communicate with the conversation device 10, and a semantic interpretation dictionary unit 23 and a conversation scenario storage unit 22 connected to the answer processing unit 21.

  The answer processing unit 21 receives an input sentence from the conversation device 10, selects or determines an answer sentence corresponding to the input sentence based on the conversation scenario stored in the conversation scenario storage unit 22, and transmits the determined answer sentence and the operation control information associated with it to the conversation device 10. Further, the answer processing unit 21 refers to the semantic interpretation dictionary stored in the semantic interpretation dictionary unit 23 to obtain synonyms or paraphrases of the input sentence, and selects or determines the answer sentence based on those synonyms or paraphrases.

  The semantic interpretation dictionary unit 23 has a function of storing a semantic interpretation dictionary used to paraphrase input sentences when matching them to answer sentences (for example, expansion by synonyms). The semantic interpretation dictionary corresponds to a database with a thesaurus-like function.

  The conversation scenario storage unit 22 has a function of storing the conversation scenario 40 generated or edited by the conversation scenario editing device 30. The conversation scenario 40 itself is described later.

[1.3. Conversation scenario editing device]
The conversation scenario editing device 30 has a function of newly generating a conversation scenario to be used by the conversation server 20 described above, and of generating modified conversation scenarios by changing a generated conversation scenario, adding content to it, or deleting part of its content.

  The conversation scenario editing device 30 includes an arithmetic processing unit (CPU), a main memory (RAM), a read-only memory (ROM), an input/output device (I/O), and, if necessary, an external storage device such as a hard disk device; it is, for example, a computer or a workstation. A program is stored in the ROM or the hard disk device of the conversation scenario editing device 30, and the device is realized by loading this program into the main memory and executing it on the CPU. The program need not necessarily be stored in a storage device inside the information processing apparatus; it may be provided from an external device (for example, an ASP (application service provider) server) and loaded into the main memory.

  FIG. 4 is a block diagram illustrating a configuration example of the conversation scenario editing apparatus 30. The conversation scenario editing device 30 includes an input unit 31, an editor unit 32 connected to the input unit 31, an output unit 34 and a conversation scenario holding unit 33 connected to the editor unit 32.

  The input unit 31 has a function of receiving user input, converting it into a signal that the editor unit 32 can process, such as an electric signal, and passing the converted signal on. The input unit 31 is, for example, a keyboard, a pointing device, a touch panel, a microphone, or a combination of these.

  The output unit 34 has a function of outputting the contents of the conversation scenario during editing or after editing in a manner that can be recognized by the user (operator) of the conversation scenario editing apparatus 30. The output unit 34 is, for example, a liquid crystal display device.

  The editor unit 32 has a function of generating data as a conversation scenario and editing (adding, changing, deleting) data according to the content input from the input unit 31. The content of the conversation scenario being edited is displayed on the output unit 34 so that the operator can grasp the content of the conversation scenario in real time. In addition, the editor unit 32 outputs the conversation scenario data whose editing has been completed to the conversation scenario holding unit 33.

  In addition, the editor unit 32 may have a function of checking whether appropriate state transition relationships are maintained in the generated conversation scenario and, if there is a violation, generating a message informing the operator that a violation has occurred and identifying the input sentence or answer sentence where it occurred, and displaying the message on the output unit.

  The editor unit 32 may further include a semantic interpretation dictionary unit corresponding to the semantic interpretation dictionary unit 23 of the conversation server 20. Using this dictionary, when the conversation scenario contains input sentences or answer sentences with duplicated meanings, the editor unit 32 may have a function of organizing or integrating them, or of urging the operator to organize or integrate them.

  The conversation scenario holding unit 33 has a function of storing or holding the conversation scenario data received from the editor unit 32 in a form that can be read later. The conversation scenario data stored in the conversation scenario holding unit 33 is sent to the conversation scenario storage unit 22 of the conversation server 20 as necessary or at a predetermined timing. The transfer of the conversation scenario from the conversation scenario holding unit 33 to the conversation scenario storage unit 22 may be performed via a storage medium, or via a communication network or a communication cable.

[1.3.1. About conversation scenario]
Here, the conversation scenario 40 will be described. The conversation scenario in the present invention has the following features.

(1) Answer sentences are "objects" and user utterances (input sentences) are "morphisms".
With this feature, the flow of conversation defined by a conversation scenario can be expressed as a "state transition diagram". The conversation scenario of the present invention can output an answer sentence for any input sentence (user utterance) by using the <other> mechanism described later. Moreover, it can respond to a user's "silence" (no input) through the <timer> mechanism described later (silence can be handled as a morphism).

  FIG. 5 is a state transition diagram showing an example of a conversation scenario. In the figure, the ellipse frames X1, X2, X3, and X4 are answer sentences; these correspond to "objects". The sentences displayed near the arrows are input sentences; these correspond to "morphisms". <other> in the figure indicates any input sentence from X1 other than "I like it" and "I don't like it". <timer> indicates the state in which the user has remained silent for a predetermined period of time. The notation "<other>|<timer>" means <other> or <timer>.

  In the example shown in FIG. 5, the morphism "I want to eat something" leads to the object X1, the answer sentence "Do you like ramen?". After the answer sentence X1 is output, if the first morphism "I don't like it" occurs, the scenario transitions to answer sentence X4, "Sorry! Let's change the topic.". On the other hand, after the answer sentence X1 is output, if the second morphism "I like it" occurs, the scenario transitions to answer sentence X3, "I will introduce you to a delicious restaurant.". Finally, after the answer sentence X1 is output, if an input other than the first and second morphisms occurs, or if the user stays silent for a certain period of time, the scenario transitions to answer sentence X2, "Do you like or dislike ramen?".

  When the conversation scenario of FIG. 5 is expressed as data, the contents are, for example, as shown in FIG. 6. Here, "X1 (utterance A) X2" is an answer string, and describes that the answer state X1 transitions to the answer state X2 upon the utterance A.
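
As a concrete illustration, such an answer string can be held as a transition table in which each object (answer sentence) maps morphisms (input sentences, plus the special <other> and <timer> entries) to the next object. The following is a minimal sketch of the FIG. 5 scenario, not the patent's implementation; all identifiers are invented for this example:

```python
# Minimal sketch of a conversation scenario as a state transition table.
# Objects (states) are answer sentences; morphisms (edges) are user inputs,
# with "<other>" catching any unmatched input and "<timer>" catching silence.

OTHER, TIMER = "<other>", "<timer>"

ANSWERS = {
    "X1": "Do you like ramen?",
    "X2": "Do you like or dislike ramen?",
    "X3": "I will introduce you to a delicious restaurant.",
    "X4": "Sorry! Let's change the topic.",
}

# Answer strings of FIG. 6, e.g. "X1 (I like it) X3", as a nested dict.
TRANSITIONS = {
    "X1": {"I like it": "X3", "I don't like it": "X4", OTHER: "X2", TIMER: "X2"},
}

def next_state(state: str, user_input: str | None) -> str | None:
    """Follow one morphism: a None input means the silence timer fired."""
    edges = TRANSITIONS.get(state, {})
    if user_input is None:
        return edges.get(TIMER)
    return edges.get(user_input, edges.get(OTHER))

state = "X1"
print(ANSWERS[state])                  # -> Do you like ramen?
state = next_state(state, "hmm...")    # unmatched input takes the <other> edge
print(ANSWERS[state])                  # -> Do you like or dislike ramen?
```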

(2) Composition can be defined for morphisms. This feature makes it possible to accept an utterance that branches off from the main scenario, and to return to the main scenario after the branch. Therefore, the creator of a conversation scenario can construct a conversation flow, a "story", envisioned for the scenario, and can make the conversation system carry out a conversation along that story.

  FIG. 7 is a state transition diagram showing an example of a conversation scenario including a composition of morphisms. The symbols and notations in the figure are the same as in FIG. 5. In the conversation scenario of this example, if the first morphism "I don't like it" occurs after the output of answer sentence X1 "Do you like ramen?", the scenario transitions to answer sentence X3 "So? Ramen is delicious.". On the other hand, if a morphism other than the first occurs, or if the user stays silent for a certain period of time, the scenario transitions to answer sentence X2 "I will introduce you to a really delicious restaurant.".

  After the above answer sentence X3 "So? Ramen is delicious.", only one morphism, <other>|<timer>, is specified, so any input sentence (user utterance), or the passage of a certain period of time, makes a transition to answer sentence X2, "I will introduce you to a really delicious restaurant.".

  Because a conversation scenario can include such a composition of morphisms, the present invention makes it possible to lead the conversation toward the utterance one wants to insist on while still respecting the other party's utterances.

When the conversation scenario of FIG. 7 is expressed as an answer string, the content is as shown in FIG. 8. Here X2' is a quotation of X2: the quotation source of X2' is X2. Formally, this is equivalent to the objects X1 and X2 being connected by the morphism "(I don't like it) X3 (<other>|<timer>)". This morphism is the composition of the morphism "I don't like it" and the morphism "<other>|<timer>".
(3) Unit elements can be defined. In the conversation scenario of the present invention, unit elements can be defined. A "unit element" means a morphism that does not change the object. The ability to define unit elements enables the following.

(A) A “forced answer” can be made to a user utterance.
FIG. 9 is a state transition diagram illustrating an example of a conversation scenario in which a forced answer is made. In this example, when the answer sentence X1 "I like ramen. Ramen is the essence of gourmet food." is output, the first morphism <other> is assigned NULL, so whatever input sentence (user utterance) occurs, it is ignored and X1 "I like ramen. Ramen is the essence of gourmet food." is forcibly output again. On the other hand, after the output of answer sentence X1, the second morphism <timer> makes the transition to answer sentence X2 "I will introduce you to a really delicious restaurant.".

In this example, ignoring the other party's utterance is expressed as "NULL". In the example shown in FIG. 9, <other> is assigned NULL in order to ignore all utterances, but it is also possible to ignore only "I don't like it".
When the conversation scenario of FIG. 9 is expressed as an answer string, the content is as shown in FIG. 10. Here X1' is a quotation of X1. The quoting X1' has the same transition destinations as the quoted X1; in this sense X1 and X1' are isomorphic. In this case, the morphism "(<other>)" is a morphism from X1 to X1, and is a unit element.
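
In the transition-table sketch introduced above (identifiers again invented for illustration), a NULL morphism is simply a self-loop: the state, and therefore the answer sentence, does not change until the timer fires.

```python
# Forced answer (FIG. 9): the <other> edge loops back to X1 itself (a unit
# element, i.e. the NULL morphism), so every user input is ignored and X1 is
# repeated; only silence (<timer>) advances the scenario to X2.
OTHER, TIMER = "<other>", "<timer>"

TRANSITIONS_FORCED = {
    "X1": {OTHER: "X1", TIMER: "X2"},
}

def step(state, user_input):
    edges = TRANSITIONS_FORCED[state]
    key = TIMER if user_input is None else user_input
    return edges.get(key, edges.get(OTHER))

assert step("X1", "anything at all") == "X1"   # utterance ignored
assert step("X1", None) == "X2"                # silence moves on
```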

(B) A “persistent answer” can be made to the user utterance.
FIG. 11 is a state transition diagram illustrating an example of a conversation scenario in which a "persistent answer" is given to a user utterance. In the example of FIG. 11, after the answer sentence X1 "Do you like or dislike ramen?", if the first morphism "I don't like it" occurs, the scenario transitions to answer sentence X3 "So? That ramen is delicious.". On the other hand, if the second morphism "I like it" occurs after the answer sentence X1 is output, the scenario transitions to answer sentence X2 "I will introduce you to a really delicious restaurant.". Finally, after the answer sentence X1 is output, if an input other than the first and second morphisms occurs, or if a certain period of time elapses while the user is silent, the scenario returns to answer sentence X1 "Do you like or dislike ramen?". In this way, the user can be forced to choose between "like" and "dislike".

When the conversation scenario of FIG. 11 is expressed as an answer string, the content is as shown in FIG. 12. Here X1' is a quotation of X1. The quoting X1' has the same transition destinations as the quoted X1; in this sense X1 and X1' are isomorphic. In this case, the morphism "(<other>|<timer>)" corresponds to the morphism from X1 to X1 and is a unit element.

(C) A "closed-loop answer" can be constructed from "unit elements formed by composition".
With this feature, the other party can be prompted to speak within a closed loop. FIG. 13 is a state transition diagram illustrating an example of a conversation scenario in which a "closed-loop answer" is constructed from "unit elements formed by composition". In this example, a closed loop is constructed from the answer sentences X1, X2, X3, and X4, and the conversation flow can be controlled by this closed loop. When the conversation scenario of FIG. 13 is expressed as an answer string, the content is as shown in FIG. 14. Here too, the composed morphism from X1 back to X1,
(<other>|<timer>) X2 (<other>|<timer>) X3 (<other>|<timer>) X4 (<other>|<timer>),
is a unit element. The unit element in this case constitutes a "closed loop".
This completes the description of item (3), "unit elements can be defined".

(4) The associative law holds for the composition of morphisms. This feature makes it possible to construct answer strings S1 and S2 along two different paths for an answer string S corresponding to a certain morphism, and to treat them as equal. In that case, if S is an answer string related to a certain problem, S1 and S2 are answer strings that give different interpretations of S and provide information relevant to solving the problem. Because of this feature, the conversation scenario according to the present invention can deal with logically structured user utterances.

FIG. 15 shows a state transition diagram of an example of a conversation scenario in which the associative law holds for the composition of morphisms. When the conversation scenario of FIG. 15 is expressed as an answer string, the content is as shown in FIG. 16. Here X2' and X4' are quotations of X2 and X4, respectively. Formally, the following equation holds, the two sides corresponding to the two ways of grouping the composition:
(hint) X3 (XX is) X4 (<other>|<timer>)
= [(hint) X3] [(XX is) X4 (<other>|<timer>)]
= [(hint) X3 (XX is) X4] [(<other>|<timer>)]

(5) Commutative diagrams can be drawn. This feature makes it possible to define a morphism reaching any given object. For this reason, a goal can be set for a scenario, and the scenario can be grasped as a whole.

(6) Others. A range of discourse in which, conversely, input sentences would be treated as objects and answer sentences as morphisms differs from the search mechanism described here and cannot be given the same treatment; such a range of discourse is not dealt with in the present invention.

[1.4. Positioning of conversation scenario editing device]
Here, the positioning of the conversation scenario editing apparatus 30 of the present invention will be summarized.
(1) A conversation scenario having objects and morphisms can be given the following characteristics.
・The answer sentences are objects and the input sentences are morphisms (state transition)
・Responding to the input sentence while leading to the answer sentence one wants to insist on (context maintenance: composition)
・Answering regardless of the input sentence (forced answer: quotation)
・Repeating until the other party makes the required utterance (persistent answer: quotation)
・Prompting input sentences in a closed loop (closed loop: unit element)
・Conversation that leads to problem solving (problem solving: associative law)
・Conversation toward a goal (conversation with a goal: commutative diagram)

Note that the above characteristics can also be organized in terms of answer strings. The conversation scenario editing device 30 has a function of expressing the above characteristics of a conversation scenario as answer strings.
By using the above conversation scenario, the conversation server 20 need only perform a search. That is, the conversation server grasps the current state as an object (answer sentence) of the conversation scenario; when a user utterance occurs, the conversation server 20 searches, with semantic analysis, for the best-matching morphism (input sentence), and sets the next state to the object (answer sentence) corresponding to the found morphism (input sentence).
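
A minimal sketch of this server-side search follows (invented identifiers; the semantic analysis is reduced to a toy synonym lookup standing in for the semantic interpretation dictionary):

```python
# Sketch of the server-side lookup: given the current object (answer
# sentence) and a user utterance, find the best-matching morphism, expanding
# the utterance with a thesaurus-like dictionary before falling back to
# the catch-all <other> morphism.

OTHER, TIMER = "<other>", "<timer>"

SYNONYMS = {"I love it": "I like it"}  # toy semantic interpretation dictionary

TRANSITIONS = {
    "X1": {"I like it": "X3", "I don't like it": "X4", OTHER: "X2", TIMER: "X2"},
}

def search_answer(state: str, utterance: str | None) -> str:
    edges = TRANSITIONS[state]
    if utterance is None:                 # silence: the <timer> morphism
        return edges[TIMER]
    if utterance in edges:                # exact match
        return edges[utterance]
    canonical = SYNONYMS.get(utterance)   # paraphrase via the dictionary
    if canonical in edges:
        return edges[canonical]
    return edges[OTHER]                   # catch-all morphism

print(search_answer("X1", "I love it"))   # -> X3 (via synonym expansion)
```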

  The above conversation scenario need not only be expressed as state transition diagrams and data based on them (FIGS. 6, 8, 10, etc.); it may also be generated and edited using a GUI such as an outline editor, as shown in FIG. 17.

[2. Operation example of the conversation scenario editing device]
Next, an operation example of the conversation scenario editing device 30 will be described.
The conversation scenario editing apparatus 30 according to the present embodiment can establish conversations with the user on a plurality of different subjects (conversation themes). FIG. 18 is a diagram illustrating a data configuration example of a conversation scenario stored in the conversation scenario holding unit 33 and the conversation scenario storage unit 22 (hereinafter both are simply referred to as the conversation scenario holding unit 33).

  The conversation scenario holding unit 33 can hold individual conversation scenario data for each domain 200 corresponding to a discourse area or subject (conversation theme) 201. For example, suppose it holds conversation scenario data for a "weather" domain and a "coffee beans" domain. When the user utters something about the weather, the conversation server 20, more specifically the answer processing unit 21, searches for an answer sentence (also called a system utterance) corresponding to the input sentence (also called a user utterance) with priority given to the conversation scenario data of the "weather" domain, and outputs a system utterance responding to the user utterance. On the other hand, when the user utters something about "coffee beans", the answer processing unit 21 searches for the system utterance corresponding to the user utterance with priority given to the conversation scenario data of the "coffee beans" domain, and outputs the system utterance that responds to it.

  Each domain 200 holds user utterance sentences 210 and system utterance sentences 220 prepared as the automatic conversation system's answers to those user utterance sentences. In the example shown in FIG. 18, a user utterance sentence 210-1 and the system utterance sentence 220-1 associated with it are recorded; a user utterance sentence 210-2, assumed to be uttered by the user in response to the system utterance sentence 220-1, is recorded; and a system utterance sentence 220-2, prepared as the automatic conversation system's answer to the user utterance sentence 210-2, is recorded.

For example, the above conversation scenario produces a user-system conversation such as the following.
User utterance sentence 210-1: "It's good weather."
System utterance sentence 220-1: "Do you like good weather?"
User utterance sentence 210-2: "Yes, I like it."
System utterance sentence 220-2: "Do you hate rainy days?"
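
This alternating chain of user and system utterances can be sketched as a simple data structure (names invented for this example, not taken from the patent):

```python
# Sketch of the domain layout of FIG. 18: each domain pairs expected user
# utterances with the system utterances prepared as answers, forming
# alternating chains in scenario order.

from dataclasses import dataclass, field

@dataclass
class Domain:
    name: str                                   # conversation theme, e.g. "weather"
    turns: list[tuple[str, str]] = field(default_factory=list)

weather = Domain("weather", turns=[
    ("It's good weather.", "Do you like good weather?"),
    ("Yes, I like it.", "Do you hate rainy days?"),
])

def reply(domain: Domain, user_utterance: str) -> str | None:
    """Search the domain for the prepared answer to a user utterance."""
    for user_sent, system_sent in domain.turns:
        if user_sent == user_utterance:
            return system_sent
    return None

print(reply(weather, "Yes, I like it."))        # -> Do you hate rainy days?
```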

  The conversation scenario shown in FIG. 18 is of the simplest form. In a conversation scenario handled by this automatic conversation system, it is also possible to assign a plurality of user utterance sentences to one system utterance sentence, so that different user replies to the same system utterance can each be answered.

  The conversation scenario editing apparatus 30 has a function of generating conversation scenario data consisting of a new domain 200 to be stored in the conversation scenario holding unit 33 and the user utterance sentences 210 and system utterance sentences 220 of that domain 200, and of storing the data in the conversation scenario holding unit 33.

[3. Example of conversation scenario input]
Next, an example of inputting a conversation scenario will be described. FIG. 19 to FIG. 23 are diagrams showing an example of transition of the input screen when a conversation scenario is input for a certain domain 200.

  FIG. 19 shows an example of an input interface screen generated by the conversation scenario editing device 30. Here, it is assumed that the domain 200 is about “coffee beans”.

  The conversation scenario editing device 30, more specifically the editor unit 32, generates a window 300 that serves as the input interface and causes the output unit 34 to display it. A display area 301 is provided in the window 300, and user utterance sentences and system utterance sentences are entered there as the user operates the input unit 31. In the example of FIG. 19, the domain name 302 is displayed, and the screen is waiting for input of the conversation scenario to be stored in the domain 200.

  FIG. 20 is a screen example in a state where a user utterance sentence 401 that is the start of a conversation scenario stored in the domain 200 is input.

  When the automatic conversation is actually executed, if the answer processing unit 21 of the conversation server 20 receives an utterance whose content matches or can be equated with the user utterance sentence 401 "About coffee beans" described here, it selects from the conversation scenario storage unit 22 the domain 200 whose domain name 302 is "coffee beans" as the domain 200 from which to extract the system utterance sentence responding to the user utterance, and system utterances will be selected with priority from this domain 200.

  The user who inputs the conversation scenario then enters a system utterance sentence that answers the user utterance sentence 401. FIG. 21 shows a display example of the window 300 in a state where the system utterance sentence 501 for the user utterance sentence 401 "About coffee beans" has been input by the user. In this example, a conversation scenario is assumed in which, for the user utterance sentence 401 "About coffee beans", the automatic conversation system issues the scenario answer sentence 501, the question: "I will answer about taste characteristics. Which do you want to know about: 'Mocha', 'Blue Mountain', or 'Kilimanjaro'?"

  Next, the user who inputs the conversation scenario enters an expected user utterance sentence in reply to the scenario answer sentence 501. FIG. 22 shows a display example of the window 300 in a state where an expected user utterance sentence 601 has been input for the scenario answer sentence 501. In this example, it is assumed that the user answers "Blue Mountain" to the question "I will answer about taste characteristics. Which do you want to know about: 'Mocha', 'Blue Mountain', or 'Kilimanjaro'?", and the user utterance sentence 601 "Blue Mountain" is input.

Next, the user who inputs the conversation scenario enters a system utterance sentence for the user utterance sentence 601. FIG. 23 shows a display example of the window 300 in a state where a system utterance sentence 701 for the user utterance sentence 601 has been input. The person inputting the conversation scenario enters the system utterance sentence 701 as the answer to the user utterance sentence 601.
Such a conversation scenario enables the automatic conversation system to return an answer when the user wants to know about the Blue Mountain coffee bean. Note that the person inputting the conversation scenario can continue entering user utterance sentences and system utterance sentences so that the conversation between the user and the automatic conversation system continues.

  The conversation scenario (a set of user utterance sentences and system utterance sentences) input as described above is written into and stored in the conversation scenario holding unit 33 by the editor unit 32. The conversation scenario is then transferred to the conversation scenario storage unit 22 of the conversation server 20. When transferred to the conversation scenario storage unit 22, the conversation scenario may be converted and ported into a form suitable for the conversation server 20.

  The answer processing unit 21 of the conversation server 20 can output a scenario answer to the user utterance with reference to the new conversation scenario stored in the conversation scenario storage unit 22.

[4. Modified example]
The present embodiment remains valid even if modified as follows.
(1) Modified example of the conversation scenario editing device. FIG. 24 is a functional block diagram of a conversation scenario editing device 30X according to this modified example. The conversation scenario editing device 30X basically has the same configuration as the conversation scenario editing device 30 described above, except that it includes a dynamic knowledge generation unit 35 connected to the conversation scenario holding unit 33. The same reference marks are attached to the same components, and their description is omitted.

  The dynamic knowledge generation unit 35 has a function of generating dynamic knowledge 40X based on the conversation scenario 40 stored in the conversation scenario holding unit 33. The dynamic knowledge 40X is data reconstructed from the conversation scenario 40, which consists of answer strings, so that the conversation server 20 can search for the morphisms (input sentences) and the objects (answer sentences) faster and more efficiently.

  According to such a modification, it is possible to reduce the processing load on the conversation server 20 and to return a reply sentence at high speed.

[5. Another example of conversation server configuration]
The present invention can also be realized when the conversation server 20 and the answer processing unit 21 adopt the following configurations. A configuration example of the conversation server 20, and more specifically of the answer processing unit 21, is described below. FIG. 25 is an enlarged block diagram of the answer processing unit 21, illustrating a specific configuration example of the conversation control unit 300 and the sentence analysis unit 400. The answer processing unit 21 includes a conversation control unit 300, a sentence analysis unit 400, and a conversation database 500. The conversation database 500 has a function of storing the conversation scenario 40 or the dynamic knowledge 40X.

[5.1. Sentence analysis unit]
Next, a configuration example of the sentence analysis unit 400 will be described with reference to FIG. 25.
The sentence analysis unit 400 analyzes the character string specified by the input unit 100 or the speech recognition unit 200. In this embodiment, as shown in FIG. 25, the sentence analysis unit 400 includes a character string specifying unit 410, a morpheme extraction unit 420, a morpheme database 430, an input type determination unit 440, and an utterance type database 450. The character string specifying unit 410 divides a series of character strings specified by the input unit 100 or the speech recognition unit 200 into single sentences. A single sentence here means a segment into which the character string is divided as finely as possible without destroying its grammatical meaning. Specifically, when a series of character strings contains a time interval of a certain length or more, the character string specifying unit 410 divides the character string at that point. The character string specifying unit 410 outputs the divided character strings to the morpheme extraction unit 420 and the input type determination unit 440. Note that "character string" in the following description means a character string for a single sentence.

[5.1.1. Morpheme extraction unit]
The morpheme extraction unit 420 extracts, from the character string of a single sentence divided by the character string specifying unit 410, each morpheme constituting a minimum unit of the character string, as first morpheme information. Here, in this embodiment, a morpheme means a minimum unit of word structure represented in a character string. Examples of minimum units of word structure include parts of speech such as nouns, adjectives, and verbs.

  As shown in FIG. 26, each morpheme can be expressed as m1, m2, m3, ... in the present embodiment. FIG. 26 is a diagram illustrating the relationship between a character string and the morphemes extracted from it. The morpheme extraction unit 420, having received a character string from the character string specifying unit 410, collates the input character string against the morpheme groups stored in advance in the morpheme database 430 (prepared as a morpheme dictionary describing, for each morpheme belonging to each part-of-speech classification, its entry word, reading, part of speech, conjugation forms, and so on). The morpheme extraction unit 420 then extracts from the character string each morpheme (m1, m2, ...) that matches one of the stored morpheme groups. The elements (n1, n2, n3, ...) other than the extracted morphemes include, for example, auxiliary verbs.

  The morpheme extraction unit 420 outputs the extracted morphemes to the topic specifying information search unit 320 as first morpheme information. Note that the first morpheme information need not be structured. Here, "structured" means classifying and arranging the morphemes contained in a character string based on parts of speech, for example converting a character string that is an utterance sentence into data in which morphemes are arranged in a predetermined order such as "subject + object + predicate". Of course, using structured first morpheme information would not interfere with the implementation of this embodiment.
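
As a rough sketch of this step (a toy dictionary and whitespace tokenization stand in for a real morphological analyzer; all names are invented), morpheme extraction can be viewed as matching the tokens of the input sentence against a dictionary keyed by entry word:

```python
# Toy sketch of morpheme extraction: collate a sentence against a morpheme
# dictionary and keep the entries that match (the first morpheme information).

MORPHEME_DICT = {
    "I": "noun", "like": "verb", "Sato": "noun", "ramen": "noun",
}

def extract_morphemes(sentence: str) -> list[tuple[str, str]]:
    """Return (morpheme, part-of-speech) pairs found in the dictionary;
    unmatched tokens (e.g. auxiliary words) are simply skipped."""
    first_morpheme_info = []
    for token in sentence.replace(".", "").split():
        if token in MORPHEME_DICT:
            first_morpheme_info.append((token, MORPHEME_DICT[token]))
    return first_morpheme_info

print(extract_morphemes("I like Sato."))
# -> [('I', 'noun'), ('like', 'verb'), ('Sato', 'noun')]
```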

[5.1.2. Input type determination unit]
The input type determination unit 440 determines the type of utterance content (utterance type) based on the character string specified by the character string specifying unit 410. The utterance type is information specifying the type of utterance content, and in the present embodiment means, for example, the "uttered sentence types" shown in FIG. 27. FIG. 27 is a diagram illustrating examples of the "uttered sentence types", the two-letter codes representing each type, and utterance sentences corresponding to each type.

  In this embodiment, as shown in FIG. 27, the "uttered sentence types" include declaration sentences (D; Declaration), time sentences (T; Time), location sentences (L; Location), and negation sentences (N; Negation). A sentence of each type takes the form of an affirmative sentence or a question sentence. A "declaration sentence" means a sentence indicating the user's opinion or idea; in the present embodiment it includes, for example, sentences such as "I like Sato" shown in FIG. 27. A "location sentence" means a sentence involving the concept of a location. A "time sentence" means a sentence involving the concept of time. A "negation sentence" means a sentence used to negate a declaration sentence. Example sentences for the "uttered sentence types" are as shown in FIG. 27.

  In order for the input type determination unit 440 to determine the "uttered sentence type", this embodiment uses, as shown in FIG. 28, a definition expression dictionary for determining that a sentence is a declaration sentence, a negation expression dictionary for determining that a sentence is a negation sentence, and so on. Specifically, the input type determination unit 440, having received a character string from the character string specifying unit 410, collates the character string against each dictionary stored in the utterance type database 450, and extracts from the character string the elements relevant to each dictionary.

  The input type determination unit 440 determines the "uttered sentence type" based on the extracted elements. For example, when an element declaring a certain event is included in the character string, the input type determination unit 440 determines the character string containing that element to be a declaration sentence. The input type determination unit 440 outputs the determined "uttered sentence type" to the answer acquisition unit 380.
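
A minimal sketch of this classification follows (the dictionaries are invented stand-ins for the definition-expression and negation-expression dictionaries of the utterance type database 450):

```python
# Sketch of utterance-type determination: scan the sentence against small
# expression dictionaries and append "A" (affirmative) or "Q" (question).

UTTERANCE_DICTIONARIES = {
    "D": ["i like", "i think"],       # declaration expressions
    "N": ["don't", "not", "hate"],    # negation expressions
    "T": ["yesterday", "today"],      # time expressions
    "L": ["where", "in tokyo"],       # location expressions
}

def determine_type(sentence: str) -> str:
    """Return a two-letter code such as 'DA' (declaration, affirmative)."""
    s = sentence.lower()
    kind = "D"                        # default: declaration
    for code, expressions in UTTERANCE_DICTIONARIES.items():
        if any(expr in s for expr in expressions):
            kind = code
            break
    mood = "Q" if sentence.rstrip().endswith("?") else "A"
    return kind + mood

print(determine_type("I like Sato."))          # -> DA
print(determine_type("Don't you like Sato?"))  # -> NQ
```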

[5.1.3. Conversation database]
Next, a data configuration example of data stored in the conversation database 500 will be described with reference to FIG. FIG. 29 is a conceptual diagram illustrating a configuration example of data stored in the conversation database 500.

  As shown in FIG. 29, the conversation database 500 stores in advance a plurality of pieces of topic specifying information 810 for specifying topics. Each piece of topic specifying information 810 may be associated with other topic specifying information 810. For example, in the example shown in FIG. 29, when topic specifying information C (810) is specified, the other topic specifying information A (810), B (810), and D (810) associated with topic specifying information C (810) are stored so that they can be determined.

  Specifically, in the present embodiment, topic specifying information 810 means "keywords" that are relevant to the input content expected to be entered by users or to the answer sentences given to users.

  One or more topic titles 820 are stored in association with each piece of topic specifying information 810. A topic title 820 is composed of morphemes consisting of one character, a plurality of character strings, or a combination of these. An answer sentence 830 to the user is stored in association with each topic title 820, and a plurality of answer types indicating the type of the answer sentence 830 are associated with the answer sentence 830.

Next, the association between a piece of topic specifying information 810 and other pieces of topic specifying information 810 will be described. FIG. 30 is a diagram illustrating the association between certain topic specifying information 810A and other topic specifying information 810B, 810C1 to 810C4, and 810D1 to 810D3. In the following description, "stored in association" means that, when information X is read, information Y associated with it can also be read; for example, the state in which information for reading information Y (for example, a pointer indicating the storage address of information Y, or a physical or logical memory address of the storage destination of information Y) is stored in the data of information X is called "information Y is stored in association with information X".

  In the example shown in FIG. 30, a piece of topic specifying information can be stored in association with other topic specifying information as a higher concept, a lower concept, a synonym, and the like (partly omitted in the example of this figure). In the example shown in this figure, topic specifying information 810B (= "entertainment") is stored in association with topic specifying information 810A (= "movie") as topic specifying information of a higher concept, and is stored in the hierarchy above the topic specifying information 810A ("movie").

  Further, for the topic specifying information 810A (= "movie"), topic specifying information 810C1 (= "director"), 810C2 (= "starring"), 810C3 (= "distribution company"), 810C4 (= "screening time"), and topic specifying information 810D1 (= "Seven Samurai"), 810D2 (= "Ran"), 810D3 (= "Yojimbo"), ... are stored in association with the topic specifying information 810A.

  In addition, synonyms 900 are associated with the topic specifying information 810A. In this example, "work", "content", and "cinema" are stored as synonyms of the keyword "movie" that constitutes topic specifying information 810A. By defining such synonyms, even when an utterance sentence does not include the keyword "movie" but does include "work", "content", or "cinema", the topic specifying information 810A can be treated as being included in the utterance sentence.

  When the answer processing unit 21 identifies a certain piece of topic specifying information 810 by referring to the stored contents of the conversation database 500, it can search for and extract at high speed the other topic specifying information 810 stored in association with it, as well as the topic titles 820, answer sentences 830, and the like of that topic specifying information 810.
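
A minimal sketch of this topic-specifying-information graph follows (identifiers invented for illustration): each keyword node links to higher and lower concepts and synonyms, so related topics can be reached directly.

```python
# Sketch of the topic graph of FIGS. 29-30: keyword nodes with higher/lower
# concept links and synonyms, matched against an utterance.

from dataclasses import dataclass, field

@dataclass
class TopicInfo:
    keyword: str
    higher: list[str] = field(default_factory=list)   # higher concepts
    lower: list[str] = field(default_factory=list)    # lower concepts
    synonyms: list[str] = field(default_factory=list) # e.g. "cinema" ~ "movie"

TOPICS = {
    "movie": TopicInfo(
        "movie",
        higher=["entertainment"],
        lower=["director", "starring", "Seven Samurai", "Ran"],
        synonyms=["work", "content", "cinema"],
    ),
}

def find_topic(utterance: str) -> TopicInfo | None:
    """Match an utterance to a topic, falling back to synonym hits."""
    for topic in TOPICS.values():
        if topic.keyword in utterance or any(s in utterance for s in topic.synonyms):
            return topic
    return None

print(find_topic("That cinema was long.").keyword)   # -> movie
```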

  Next, a data configuration example of the topic title 820 (also referred to as “second morpheme information”) will be described with reference to FIG. FIG. 31 is a diagram illustrating a data configuration example of the topic title 820.

Each piece of topic specifying information 810D1, 810D2, 810D3, ... has a plurality of different topic titles: topic titles 8201, 8202, ..., topic titles 8203, 8204, ..., topic titles 8205, 8206, ..., respectively. In the present embodiment, as shown in FIG. 31, each topic title 820 is information composed of first specific information 1001, second specific information 1002, and third specific information 1003. Here, the first specific information 1001 means, in this embodiment, the main morpheme constituting the topic; an example is the subject of a sentence. The second specific information 1002 means, in this embodiment, a morpheme closely related to the first specific information 1001; an example is an object. The third specific information 1003 means, in this embodiment, a morpheme indicating the movement of a certain object, or a morpheme modifying a noun or the like; examples include a verb, an adverb, and an adjective. The meanings of the first specific information 1001, the second specific information 1002, and the third specific information 1003 need not be limited to the above; this embodiment is established even when other meanings (other parts of speech) are given to them, as long as the contents of a sentence can be grasped from them.

For example, the subject is "Seven Samurai" and the adjective is "interesting", as shown in FIG. 31, the topic title (second morpheme information) 820 2 is the first specification information 1001 morpheme " It consists of “Seven Samurai” and the morpheme “Funny” which is the third specific information 1003. Incidentally, this topic title 820 2 not included morpheme corresponding to the second identification information 1002, the symbol for indicating that there is no corresponding morpheme "*" is stored as the second specification information 1002 .

The topic title 820 2 (Seven Samurai; *; Interesting) has the meaning of “Seven Samurai is interesting”. In the parentheses constituting the topic title 820, the first specific information 1001, the second specific information 1002, and the third specific information 1003 are in the following order from the left. In addition, in the topic title 820, when there is no morpheme included in the first to third specific information, “*” is indicated for the portion.

  The specific information constituting the topic title 820 is not limited to three like the first to third specific information as described above. For example, other specific information (fourth specific information, and fourth specific information, and And more).
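A topic title of this form lends itself to a simple triple representation. The sketch below assumes the triple layout and the "*" convention described above; the function names are invented for illustration.

    WILDCARD = "*"   # marks an absent morpheme in a topic title 820

    def make_topic_title(first=WILDCARD, second=WILDCARD, third=WILDCARD):
        # (first specific info 1001; second specific info 1002; third specific info 1003)
        return (first, second, third)

    def title_matches(title, morphemes):
        # Every slot that is not "*" must appear among the extracted morphemes.
        return all(slot == WILDCARD or slot in morphemes for slot in title)

    title = make_topic_title(first="Seven Samurai", third="interesting")
    print(title_matches(title, {"Seven Samurai", "interesting"}))  # True
    print(title_matches(title, {"Seven Samurai", "long"}))         # False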

  Next, the answer sentence 830 will be described with reference to FIG. 32. In the present embodiment, as shown in FIG. 32, the answer sentences 830 are classified into types (answer types) such as declaration (D; Declaration), time (T; Time), location (L; Location), and negation (N; Negation) so that an answer corresponding to the type of utterance sentence uttered by the user can be made, and an answer sentence is prepared for each type. An affirmative sentence is marked "A" and a question sentence is marked "Q".

  A data configuration example of the topic identification information 810 will be described with reference to FIG. 33. FIG. 33 shows a specific example of the topic titles 820 and answer sentences 830 associated with certain topic identification information 810, "Sato".

  The topic identification information 810 "Sato" is associated with a plurality of topic titles (820) 1-1, 1-2, .... Each of the topic titles (820) 1-1, 1-2, ... is associated with answer sentences (830) 1-1, 1-2, .... The answer sentence 830 is prepared for each answer type.

  If the topic title (820) 1-1 is (Sato; *; likes) {these are the morphemes extracted from "I like Sato"}, the answer sentences (830) 1-1 corresponding to the topic title (820) 1-1 include (DA; declaration affirmative sentence "I also like Sato") and (TA; time affirmative sentence "I like Sato when he is standing at bat"). The answer acquisition unit 380 described later acquires one answer sentence 830 associated with the topic title 820 while referring to the output of the input type determination unit 440.

  For each answer sentence, next plan designation information 840 is defined in association with it; this is information for designating the answer sentence (referred to as the "next answer sentence") to be preferentially output in response to the user's next utterance. The next plan designation information 840 may be any information that can identify the next answer sentence; for example, it may be an answer sentence ID that identifies at least one answer sentence among all the answer sentences stored in the conversation database 500.

  In the present embodiment, the next plan designation information 840 is described as information specifying the next answer sentence in units of answer sentences (for example, an answer sentence ID). However, the next plan designation information 840 may instead be information specifying the next answer sentence in units of topic titles 820 or topic identification information 810 (in this case a plurality of answer sentences are designated as next answer sentences, so they are called a "next answer sentence group"; only one of the answer sentences included in the group is actually output as the next answer sentence). For example, the present embodiment is established even if a topic title ID or a topic identification information ID is used as the next plan designation information.
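As a rough picture of this organization, the sketch below keys each answer sentence 830 by its topic title and answer type, and attaches a next-answer ID standing in for the next plan designation information 840. The dictionary layout and the IDs are assumptions made for illustration, not the stored format of the conversation database 500.

    # Answer sentences 830 keyed by (topic title, answer type), each carrying a
    # next-answer ID standing in for next plan designation information 840.
    answers = {
        ("Sato; *; likes", "DA"): {"text": "I also like Sato.",
                                   "next_id": "A-0102"},
        ("Sato; *; likes", "TA"): {"text": "I like Sato when he is standing at bat.",
                                   "next_id": "A-0102"},
    }

    def get_answer(topic_title: str, utterance_type: str):
        # Pick the answer sentence whose answer type matches the utterance type.
        return answers.get((topic_title, utterance_type))

    print(get_answer("Sato; *; likes", "DA")["text"])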

[5.1.4. Conversation control unit]
Here, returning to FIG. 25, a configuration example of the conversation control unit 300 will be described.
The conversation control unit 300 has a function of controlling data transfer between the constituent elements within the answer processing unit 21 (the speech recognition unit 200, sentence analysis unit 400, conversation database 500, output unit 600, and speech recognition dictionary storage unit 700), and a function of determining and outputting an answer sentence in response to a user utterance.

  In the present embodiment, the conversation control unit 300 includes a management unit 310, a plan conversation processing unit 320, a discourse space conversation control processing unit 330, and a CA conversation processing unit 340, as shown in FIG. 25. Hereinafter, these components will be described.

[5.1.4.1. Management unit]
The management unit 310 has a function of storing the discourse history and updating it as necessary. In response to requests from the topic identification information search unit 350, the abbreviated sentence complementing unit 360, the topic search unit 370, and the answer acquisition unit 380, the management unit 310 also has a function of passing all or part of the stored discourse history to those units.

[5.1.4.2. Plan conversation processing unit]
The plan conversation processing unit 320 has a function of executing a plan and conducting a conversation with the user according to that plan. A "plan" is to provide the user with predetermined answers in a predetermined order. Hereinafter, the plan conversation processing unit 320 will be described.

  The plan conversation processing unit 320 has a function of outputting a predetermined answer according to a predetermined order in response to a user utterance.

  FIG. 34 is a conceptual diagram for explaining a plan. As shown in FIG. 34, various plans 1402, such as plan 1, plan 2, plan 3, and plan 4, are prepared in advance in the plan space 1401. The plan space 1401 is the set of plans 1402 stored in the conversation database 500. When the apparatus is activated or a conversation starts, the answer processing unit 21 selects a plan predetermined for starting, or selects one of the plans 1402 from the plan space 1401 as appropriate according to the content of each user utterance, and outputs an answer sentence for the user utterance using the selected plan 1402.

  FIG. 35 is a diagram illustrating a configuration example of the plan 1402. The plan 1402 has an answer sentence 1501 and next plan designation information 1502 associated with it. The next plan designation information 1502 is information for specifying the plan 1402 that includes the answer sentence scheduled to be output to the user after the answer sentence 1501 included in this plan 1402 (that scheduled sentence is referred to as the next candidate answer sentence). In this example, plan 1 has an answer sentence A (1501), output by the answer processing unit 21 when plan 1 is executed, and next plan designation information 1502 associated with the answer sentence A (1501). The next plan designation information 1502 is the information "ID: 002" identifying the plan 1402 that has the answer sentence B (1501), which is the next candidate answer sentence for the answer sentence A (1501). Similarly, next plan designation information 1502 is defined for the answer sentence B (1501), and when the answer sentence B (1501) is output, the plan 2 (1402) including its next candidate answer sentence is designated. In this way, the plans 1402 are linked in a chain by the next plan designation information 1502, realizing a plan conversation in which a series of continuous contents is output to the user. In other words, by dividing the content one wants to convey to the user (an explanation, guidance, a questionnaire, etc.) into a plurality of answer sentences, preparing the order of those answer sentences in advance, and storing them as a plan, these answer sentences can be provided to the user in order in response to the user's utterances. Note that the answer sentence 1501 included in the plan 1402 designated by the next plan designation information 1502 does not necessarily have to be output immediately when there is a user utterance responding to the output of the immediately preceding answer sentence; it may be output after a conversation about another topic has intervened between the user and the apparatus.

  The answer sentence 1501 shown in FIG. 35 corresponds to any one of the answer sentence character strings of the answer sentences 830 shown in FIG. 33, and the next plan designation information 1502 shown in FIG. 35 corresponds to the next plan designation information 840.

  Note that the linkage of the plans 1402 is not limited to the one-dimensional arrangement shown in FIG. 35. FIG. 36 is a diagram illustrating an example of plans 1402 linked differently from FIG. 35. In the example shown in FIG. 36, plan 1 (1402) has two pieces of next plan designation information 1502 so that two next candidate answer sentences 1501, that is, two plans 1402, can be designated. The two pieces of next plan designation information 1502 are provided so that two plans 1402 are determined as plans having the next candidate answer sentence when a certain answer sentence A (1501) is output: plan 2 (1402) having the answer sentence B (1501) and plan 3 (1402) having the answer sentence C (1501). The answer sentences B and C are selective alternatives: if one is output, the other is not output, and plan 1 (1402) ends. In this way, the linkage of the plans 1402 is not limited to a one-dimensional sequence and may be tree-shaped or net-shaped.

  Note that the number of next candidate answer sentences each plan has is not limited. Further, a plan 1402 at which a story ends may have no next plan designation information 1502.
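A sketch of this chained structure, under the assumption that plans are addressed by string IDs, might look as follows; the Plan class and its field names are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        plan_id: str
        answer: str                                    # answer sentence 1501
        next_ids: list = field(default_factory=list)   # next plan designation info 1502

    plan_space = {
        "001": Plan("001", "answer sentence A", next_ids=["002", "003"]),
        "002": Plan("002", "answer sentence B"),   # B and C are alternatives:
        "003": Plan("003", "answer sentence C"),   # only one of them is output
    }

    def next_candidates(plan: Plan):
        # Plans designated by the next plan designation information.
        return [plan_space[i] for i in plan.next_ids]

    print([p.answer for p in next_candidates(plan_space["001"])])

A plan with an empty next_ids list corresponds to the end of a story.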

FIG. 37 shows a specific example of a series of plans 1402. This series of plans 1402₁ to 1402₄ corresponds to four answer sentences 1501₁ to 1501₄ for notifying the user of information on risk management. Together, the four answer sentences 1501₁ to 1501₄ constitute one coherent story (an explanatory text). The plans 1402₁ to 1402₄ have ID data 1702₁ to 1702₄ of "1000-01", "1000-02", "1000-03", and "1000-04", respectively. The number after the hyphen in the ID data is information indicating the output order. The plans 1402₁ to 1402₄ also have next plan designation information 1502₁ to 1502₄, respectively. The content of the next plan designation information 1502₄ is the data "1000-0F"; the number "0F" after the hyphen is information indicating that there is no plan scheduled to be output next, that is, that this answer sentence is the end of the series (the explanatory text).

In this example, when the user utterance is "Tell me about crisis management when a large earthquake occurs", the plan conversation processing unit 320 starts executing this series of plans. That is, upon accepting the user utterance "Tell me about crisis management when a large earthquake occurs", the plan conversation processing unit 320 searches the plan space 1401 and checks whether there is a plan 1402 having an answer sentence 1501₁ corresponding to this user utterance. In this example, the user utterance character string 1701₁ corresponding to "Tell me about crisis management when a large earthquake occurs" is assumed to correspond to the plan 1402₁.

Upon finding the plan 1402₁, the plan conversation processing unit 320 acquires the answer sentence 1501₁ included in the plan 1402₁, outputs this answer sentence 1501₁ as the answer to the user utterance, and identifies the next candidate answer sentence using the next plan designation information 1502₁.

Then, upon receiving a user utterance via the input unit 100 and the speech recognition unit 200 after the output of the answer sentence 1501₁, the plan conversation processing unit 320 moves to the execution of the plan 1402₂. That is, the plan conversation processing unit 320 determines whether to execute the plan 1402₂ designated by the next plan designation information 1502₁, in other words, whether to output the second answer sentence 1501₂. Specifically, the plan conversation processing unit 320 compares the accepted user utterance with the user utterance character string 1701₂ (also referred to as an example sentence) or the topic title 820 (not shown in FIG. 37) associated with the answer sentence 1501₂, and determines whether they match. If they match, it outputs the second answer sentence 1501₂. Since next plan designation information 1502₂ is described in the plan 1402₂ including the second answer sentence 1501₂, the next candidate answer sentence is specified in turn.

Similarly, in response to the user utterances made thereafter, the plan conversation processing unit 320 moves in order to the plan 1402₃ and the plan 1402₄, and can output the third answer sentence 1501₃ and the fourth answer sentence 1501₄. Since the fourth answer sentence 1501₄ is the final answer sentence, when the output of the fourth answer sentence 1501₄ is completed, the plan conversation processing unit 320 ends the plan execution.

As described above, by executing the plans 1402₁ to 1402₄ one after another, the conversation contents prepared in advance can be provided to the user in a predetermined order.
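The following toy run walks a chain of plan IDs in the style of FIG. 37. The answer texts are placeholders (not the patent's risk-management sentences), and accepting each subsequent user utterance by matching against example sentences 1701 or topic titles 820 is deliberately omitted.

    plans = {
        # id: (answer sentence placeholder, next plan designation info)
        "1000-01": ("First answer of the explanatory text.", "1000-02"),
        "1000-02": ("Second answer of the explanatory text.", "1000-03"),
        "1000-03": ("Third answer of the explanatory text.", "1000-04"),
        "1000-04": ("Final answer of the explanatory text.", "1000-0F"),
    }

    current = "1000-01"           # selected because the first user utterance matched
    while current != "1000-0F":   # "0F" marks the end of the series
        answer, next_id = plans[current]
        print(answer)
        # In the embodiment, each subsequent user utterance is checked against
        # the example sentence 1701 / topic title 820 of the next plan before
        # moving on; that check is omitted in this toy run.
        current = next_id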

[5.1.4.3. Discourse space conversation control processing unit]
Returning to FIG. 25, the description of the configuration example of the conversation control unit 300 will be continued.
The discourse space conversation control processing unit 330 includes a topic identification information search unit 350, an abbreviated sentence complementing unit 360, a topic search unit 370, and an answer acquisition unit 380. The management unit 310 controls the whole of the conversation control unit 300.

The "discourse history" is information for specifying the topic and subject of the conversation between the user and the answer processing unit 21, and includes at least one of the "focused topic identification information", "focused topic title", "user input sentence topic identification information", and "answer sentence topic identification information" described later. The "focused topic identification information", "focused topic title", and "answer sentence topic identification information" included in the discourse history are not limited to those determined by the immediately preceding conversation; they may be the focused topic identification information, focused topic title, and answer sentence topic identification information from a past predetermined period, or a cumulative record thereof.
Hereinafter, each of these units constituting the discourse space conversation control processing unit 330 will be described.

[5.1.4.3.1. Topic identification information search unit]
The topic identification information search unit 350 collates the first morpheme information extracted by the morpheme extraction unit 420 with the pieces of topic identification information, and searches those pieces for topic identification information matching a morpheme constituting the first morpheme information. Specifically, when the first morpheme information input from the morpheme extraction unit 420 consists of the two morphemes "Sato" and "like", the topic identification information search unit 350 collates the input first morpheme information with the topic identification information group.

  Having performed this collation, if the focused topic title 820focus (the topic title searched up to the previous time, written "820focus" to distinguish it from other topic titles) includes a morpheme constituting the first morpheme information (for example, "Sato"), the topic identification information search unit 350 outputs the focused topic title 820focus to the answer acquisition unit 380. On the other hand, when no morpheme constituting the first morpheme information is included in the focused topic title 820focus, the topic identification information search unit 350 determines user input sentence topic identification information based on the first morpheme information, and outputs the input first morpheme information and the user input sentence topic identification information to the abbreviated sentence complementing unit 360. The "user input sentence topic identification information" is topic identification information corresponding to the morpheme, among the morphemes included in the first morpheme information, that corresponds to the content the user is talking about, or that may correspond to the content the user is talking about.

[5.1.4.3.2. Abbreviated sentence complementing unit]
The abbreviated sentence complementing unit 360 generates a plurality of kinds of complemented first morpheme information by complementing the first morpheme information using the topic identification information 810 searched up to the previous time (hereinafter referred to as "focused topic identification information") and the topic identification information 810 included in the previous answer sentence (hereinafter referred to as "answer sentence topic identification information"). For example, when the user utterance is the sentence "I like", the abbreviated sentence complementing unit 360 includes the focused topic identification information "Sato" in the first morpheme information "like" and generates the complemented first morpheme information "Sato, like".

  That is, if the first morpheme information is "W" and the set of the focused topic identification information and the answer sentence topic identification information is "D", the abbreviated sentence complementing unit 360 generates complemented first morpheme information by including elements of the set "D" in the first morpheme information "W".

  Thereby, when a sentence composed using the first morpheme information is an abbreviated sentence and is not clear as Japanese, the abbreviated sentence complementing unit 360 can use the set "D" to include its elements (for example, "Sato") in the first morpheme information "W". As a result, the abbreviated sentence complementing unit 360 can turn the first morpheme information "like" into the complemented first morpheme information "Sato, like". The complemented first morpheme information "Sato, like" corresponds to the user utterance "I like Sato".

  That is, even when the content of the user's utterance is an abbreviated sentence, the abbreviated sentence complementing unit 360 can complement it using the set "D". As a result, even if a sentence composed of the first morpheme information is an abbreviated sentence, the abbreviated sentence complementing unit 360 can make the sentence proper Japanese.

  In addition, the abbreviated sentence complementing unit 360 searches for a topic title 820 that matches the complemented first morpheme information based on the set "D". When a topic title 820 matching the complemented first morpheme information is found, the abbreviated sentence complementing unit 360 outputs that topic title 820 to the answer acquisition unit 380. The answer acquisition unit 380 can then output the answer sentence 830 best suited to the content of the user's utterance based on the appropriate topic title 820 searched by the abbreviated sentence complementing unit 360.

  Note that the abbreviated sentence complementing unit 360 is not limited to including elements of the set "D" in the first morpheme information. Based on the focused topic title, the abbreviated sentence complementing unit 360 may include, in the first morpheme information, a morpheme contained in any of the first specific information, the second specific information, or the third specific information constituting that topic title.
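A minimal sketch of this complementation, assuming "W" and "D" are simple sets of morphemes as in the notation above (the function name is invented):

    def complement(w: set, d: set) -> list:
        # Generate complemented first morpheme information candidates: "W" itself
        # plus one candidate per element of "D" added to "W".
        return [w] + [w | {elem} for elem in sorted(d)]

    W = {"like"}            # first morpheme information of the utterance "I like"
    D = {"Sato", "movie"}   # focused topic info + answer sentence topic info
    for candidate in complement(W, D):
        print(candidate)    # e.g. {"Sato", "like"} corresponds to "I like Sato"

Each candidate is then collated with the topic titles 820 associated with "D" to find a match.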

[5.1.4.3.3. Topic search unit]
When a topic title 820 is not determined by the abbreviated sentence complementing unit 360, the topic search unit 370 collates the first morpheme information with the topic titles 820 corresponding to the user input sentence topic identification information, and searches those topic titles 820 for the topic title 820 best suited to the first morpheme information.

  Specifically, the topic search unit 370, to which a search command signal is input from the abbreviated sentence complementing unit 360, searches, based on the user input sentence topic identification information and the first morpheme information included in the input search command signal, for the topic title 820 best suited to the first morpheme information among the topic titles associated with that user input sentence topic identification information. The topic search unit 370 outputs the searched topic title 820 to the answer acquisition unit 380 as a search result signal.

  FIG. 33, shown above, gives a specific example of the topic titles 820 and answer sentences 830 associated with certain topic identification information 810 (= "Sato"). As shown in FIG. 33, since the input first morpheme information "Sato, like" includes the topic identification information 810 (= "Sato"), the topic search unit 370 first identifies the topic identification information 810 (= "Sato") and then collates the input first morpheme information "Sato, like" with each topic title (820) 1-1, 1-2, ... associated with the topic identification information 810 (= "Sato").

  Based on the collation result, the topic search unit 370 identifies, from among the topic titles (820) 1-1, 1-2, ..., the topic title (820) 1-1 (Sato; *; likes) that matches the input first morpheme information "Sato, like". The topic search unit 370 outputs the searched topic title (820) 1-1 (Sato; *; likes) to the answer acquisition unit 380 as a search result signal.

[5.1.4.3.4. Answer acquisition unit]
The answer acquisition unit 380 acquires the answer sentence 830 associated with the topic title 820 searched by the abbreviated sentence complementing unit 360 or the topic search unit 370. In addition, based on the topic title 820 searched by the topic search unit 370, the answer acquisition unit 380 collates each answer type associated with that topic title 820 against the utterance type determined by the input type determination unit 440. Having performed this collation, the answer acquisition unit 380 searches the answer types for the answer type that matches the determined utterance type.

  In the example shown in FIG. 33, when the topic title searched by the topic search unit 370 is topic title 1-1 (Sato; *; likes), the answer acquisition unit 380 identifies, from among the answer sentences 1-1 (DA, TA, etc.) associated with topic title 1-1, the answer type (DA) matching the "utterance sentence type" (for example, DA) determined by the input type determination unit 440. Having identified this answer type (DA), the answer acquisition unit 380 acquires the answer sentence 1-1 associated with the answer type (DA) ("I also like Sato.").

  Here, of "DA", "TA", and so on, "A" means the affirmative form. Therefore, when "A" is included in the utterance type or the answer type, it indicates that a certain matter is affirmed. Types such as "DQ" and "TQ" can also be included in the utterance types and the answer types; of these, "Q" means a question about a certain matter.

  When the utterance type is the question form (Q), the answer sentence associated with the corresponding answer type is composed in the affirmative form (A). An example of an answer sentence created in this affirmative form (A) is a sentence that answers the questioned item. For example, when the utterance sentence is "Have you operated a slot machine?", the utterance type of this utterance sentence is the question form (Q). An example of the answer sentence associated with this question form (Q) is "I have operated a slot machine" (affirmative form (A)).

  On the other hand, when the utterance type is the affirmative form (A), the answer sentence associated with the corresponding answer type is composed in the question form (Q). Examples of answer sentences created in the question form (Q) are a question sentence asked back in response to the utterance content and a question sentence asking about a specific matter. For example, when the utterance sentence is "I am playing with a slot machine", the utterance type of this utterance sentence is the affirmative form (A). An example of the answer sentence associated with this affirmative form (A) is "Is playing pachinko your hobby?" (a question sentence (Q) asking about a specific matter).

  The answer acquisition unit 380 outputs the acquired answer sentence 830 to the management unit 310 as an answer sentence signal. The management unit 310, to which the answer sentence signal is input from the answer acquisition unit 380, outputs the input answer sentence signal to the output unit 600.
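The pairing just described (a question answered with an affirmation, an affirmation answered with a question) can be sketched as a small lookup. The table contents echo the examples above; everything else is invented for illustration.

    # Answer sentences keyed by (event type, sentence form) of the utterance:
    # D = declaration, Q = question, A = affirmative.
    answer_table = {
        ("D", "Q"): "I have operated a slot machine.",   # question -> affirmation
        ("D", "A"): "Is playing pachinko your hobby?",   # affirmation -> question
    }

    def answer_for(event: str, form: str) -> str:
        # The stored answer is in the opposite form to the utterance.
        return answer_table[(event, form)]

    print(answer_for("D", "Q"))   # reply to "Have you operated a slot machine?"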

[5.1.4.4. CA conversation processing unit]
The CA conversation processing unit 340 has a function of outputting, when neither the plan conversation processing unit 320 nor the discourse space conversation control processing unit 330 has determined an answer sentence for the user utterance, an answer sentence that allows the conversation with the user to continue according to the content of the user utterance.
This is the end of the description of the configuration example of the answer processing unit 21.

[5.2. Conversation control method]
The answer processing unit 21 having the above configuration executes the conversation control method by operating as follows. The operation of the answer processing unit 21 according to the present embodiment, more specifically of the conversation control unit 300, will now be described.

  FIG. 38 is a flowchart illustrating an example of the main process of the conversation control unit 300. This main process is executed every time the conversation control unit 300 accepts a user utterance; by performing it, an answer sentence for the user utterance is output and a conversation between the conversation device 10 and the conversation server 20 (its answer processing unit 21) is established.

  When entering the main process, the conversation control unit 300, more specifically the plan conversation processing unit 320, first executes a plan conversation control process (S1801). The plan conversation control process is a process for executing a plan.

  FIGS. 39 and 40 are flowcharts showing an example of the plan conversation control process. An example of the plan conversation control process will be described below with reference to FIGS. 39 and 40.

When the plan conversation control process is started, the plan conversation processing unit 320 first checks the basic control state information (S1901). The basic control state information, including whether or not the execution of a plan 1402 has been completed, is stored in a predetermined storage area.
The basic control state information has a role of describing the basic control state of the plan.

  FIG. 41 is a diagram showing four basic control states that can occur for a type of plan called a scenario. Hereinafter, each state will be described.

(1) Union: This basic control state is set when the user utterance matches the plan 1402 being executed, more specifically the topic title 820 or example sentence 1701 corresponding to that plan 1402. In this case, the plan conversation processing unit 320 ends that plan 1402 and moves to the plan 1402 corresponding to the answer sentence 1501 designated by the next plan designation information 1502.

(2) Discard: This basic control state is set when the content of the user utterance is determined to request termination of the plan 1402, or when the user's interest is determined to have shifted to a matter other than the plan being executed. When the basic control state information indicates discard, the plan conversation processing unit 320 searches for a plan 1402, other than the discarded one, corresponding to the user utterance; if such a plan exists, its execution is started, and if not, plan execution is ended.

(3) Maintenance: This basic control state is described in the basic control state information when the user utterance corresponds neither to the topic title 820 (see FIG. 33) nor to the example sentence 1701 (see FIG. 37) corresponding to the plan 1402 being executed, and the user utterance is also determined not to correspond to the basic control state "discard".

  In this basic control state, upon receiving a user utterance the plan conversation processing unit 320 first considers whether to resume the pending/suspended plan 1402. If the user utterance is not suited to resuming that plan 1402, for example if it corresponds neither to the topic title 820 nor to the example sentence 1701 corresponding to that plan 1402, the plan conversation processing unit 320 starts execution of another plan 1402, performs the discourse space conversation control process (S1902) described later, or the like. If the user utterance is suited to resuming the plan 1402, an answer sentence 1501 is output based on the stored next plan designation information 1502.

  When the basic control state is "maintenance", the plan conversation processing unit 320 searches for another plan 1402 or performs the discourse space conversation control process described later, so that an answer other than the answer sentence 1501 corresponding to the pending plan 1402 can be output; however, if the user utterance again relates to that plan 1402, its execution is resumed.

(4) Continuation: This basic control state is set when the user utterance does not correspond to the answer sentence 1501 included in the plan 1402 being executed, the content of the user utterance does not correspond to the basic control state "discard", and the user's intention as interpreted from the user utterance is unclear.

  When the basic control state is "continuation", upon accepting a user utterance the plan conversation processing unit 320 first considers whether to resume the pending/suspended plan 1402; if the user utterance is not suited to resuming it, the CA conversation control process described later is performed so that an answer sentence drawing out a further utterance from the user can be output.
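As an orientation aid, the sketch below encodes the four basic control states as an enum and maps each to the reaction described above. The dispatch is a simplification of the flow in FIGS. 39 and 40, and all names are invented.

    from enum import Enum

    class BasicControlState(Enum):
        UNION = "union"                 # utterance matched the plan being executed
        DISCARD = "discard"             # termination requested / interest shifted
        MAINTENANCE = "maintenance"     # plan pending, may be resumed
        CONTINUATION = "continuation"   # user's intention unclear

    def react(state: BasicControlState) -> str:
        if state is BasicControlState.UNION:
            return "move to the plan designated by the next plan designation info"
        if state is BasicControlState.DISCARD:
            return "search the plan space for another plan matching the utterance"
        if state is BasicControlState.MAINTENANCE:
            return "consider resuming the pending plan"
        return "hand over to the CA conversation control process"

    print(react(BasicControlState.MAINTENANCE))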

  Returning to FIG. 39, the description of the plan conversation control process will be continued.

  The plan conversation processing unit 320, referring to the basic control state information, determines whether the basic control state indicated by it is "union" (S1902). If the basic control state is determined to be "union" (S1902, Yes), the plan conversation processing unit 320 determines whether the previously output answer sentence 1501 is the final answer sentence in the plan 1402 being executed indicated by the basic control state information (S1903).

  If it is determined that the final answer sentence 1501 has been output (S1903, Yes), all the contents to be answered in that plan 1402 have already been conveyed to the user; therefore, in order to determine whether another new plan 1402 should be started, the plan conversation processing unit 320 searches the plan space for a plan 1402 corresponding to the user utterance (S1904). If no plan 1402 corresponding to the user utterance is found as a result of this search (S1905, No), there is no plan 1402 to provide to the user, and the plan conversation processing unit 320 ends the plan conversation control process as it is.

  On the other hand, if a plan 1402 corresponding to the user utterance is found as a result of this search (S1905, Yes), the plan conversation processing unit 320 moves to that plan 1402 (S1906). Since a plan 1402 to be provided to the user exists, execution of that plan 1402 (output of the answer sentence 1501 it contains) is started.

  Next, the plan conversation processing unit 320 outputs the answer sentence 1501 of that plan 1402 (S1908). The output answer sentence 1501 is the answer to the user utterance, and the plan conversation processing unit 320 thereby provides the information to be conveyed to the user.

  After the answer sentence output process (S1908), the plan conversation processing unit 320 ends the plan conversation control process.

  On the other hand, if in the determination of S1903 the previously output answer sentence 1501 is not the final answer sentence 1501 (S1903, No), the plan conversation processing unit 320 moves to the plan 1402 corresponding to the answer sentence 1501 following the previously output one, that is, the answer sentence 1501 specified by the next plan designation information 1502 (S1907).

  Thereafter, the plan conversation processing unit 320 outputs the answer sentence 1501 included in that plan 1402 as the answer to the user utterance (S1908). The output answer sentence 1501 is the answer to the user utterance, and the plan conversation processing unit 320 thereby provides the information to be conveyed to the user. After the answer sentence output process (S1908), the plan conversation processing unit 320 ends the plan conversation control process.

  In the determination process of S1902, if the basic control state is not "union" (S1902, No), the plan conversation processing unit 320 determines whether the basic control state indicated by the basic control state information is "discard" (S1909). If it is determined to be "discard" (S1909, Yes), there is no plan 1402 to be continued, so the plan conversation processing unit 320 searches the plan space 1401 for a plan 1402 corresponding to the user utterance in order to determine whether another new plan 1402 to be started exists (S1904). Thereafter, the plan conversation processing unit 320 executes the processing of S1905 to S1908 in the same manner as in the case of S1903 (Yes) described above.

  On the other hand, if in the determination of S1909 the basic control state indicated by the basic control state information is not "discard" (S1909, No), the plan conversation processing unit 320 further determines whether the basic control state indicated by the basic control state information is "maintenance" (S1910).

  When the basic control state indicated by the basic control state information is "maintenance" (S1910, Yes), the plan conversation processing unit 320 examines whether the user has again shown interest in the pending/suspended plan 1402, and if so, resumes that plan 1402. That is, the plan conversation processing unit 320 examines the pending/suspended plan 1402 (FIG. 40; S2001) and determines whether the user utterance corresponds to that plan 1402 (S2002).

  When it is determined that the user utterance corresponds to that plan 1402 (S2002, Yes), the plan conversation processing unit 320 moves to the plan 1402 corresponding to the user utterance (S2003) and then executes the answer sentence output process (FIG. 39; S1908) so as to output the answer sentence 1501 included in that plan 1402. By operating in this way, the plan conversation processing unit 320 can resume the pending/suspended plan 1402 according to the user's utterance, so that all the contents included in the prepared plan 1402 can be conveyed to the user.

  On the other hand, when it is determined in S2002 (see FIG. 40) that the pending/suspended plan 1402 does not correspond to the user utterance (S2002, No), the plan conversation processing unit 320 searches the plan space 1401 for a plan 1402 corresponding to the user utterance in order to determine whether another new plan 1402 to be started exists (FIG. 39; S1904). Thereafter, the plan conversation processing unit 320 executes the processing of S1905 to S1908 in the same manner as in the case of S1903 (Yes) described above.

  In the determination of S1910, when the basic control state indicated by the basic control state information is not "maintenance" (S1910, No), the basic control state indicated by the basic control state information is "continuation". In this case, the plan conversation processing unit 320 ends the plan conversation control process without outputting an answer sentence.
This is the end of the description of the plan conversation control process.

Returning to FIG. 38, the description of the main process is continued.
When the plan conversation control process (S1801) ends, the conversation control unit 300 starts the discourse space conversation control process (S1802). However, when an answer sentence has been output in the plan conversation control process (S1801), the conversation control unit 300 performs neither the discourse space conversation control process (S1802) nor the CA conversation control process (S1803) described later; it performs the basic control information update process (S1804) and ends the main process.

FIG. 42 is a flowchart showing an example of the discourse space conversation control process according to the present embodiment.
First, the input unit 100 performs a step of acquiring the utterance content from the user (step S2201). Specifically, the input unit 100 acquires the voice that constitutes the utterance content of the user. The input unit 100 outputs the acquired voice to the voice recognition unit 200 as a voice signal. Note that the input unit 100 may acquire a character string input from the user (for example, character data input in a text format) instead of the voice from the user. In this case, the input unit 100 is not a microphone but a character input device such as a keyboard or a touch panel.

  Next, the speech recognition unit 200 performs the step of specifying a character string corresponding to the utterance content based on the utterance content acquired by the input unit 100 (step S2202). Specifically, the speech recognition unit 200, to which the speech signal is input from the input unit 100, specifies a word hypothesis (candidate) corresponding to that speech signal based on the input speech signal. The speech recognition unit 200 then acquires the character string associated with the specified word hypothesis (candidate) and outputs the acquired character string to the conversation control unit 300, more specifically to the discourse space conversation control processing unit 330, as a character string signal.

  Then, the character string specifying unit 410 performs the step of dividing the series of character strings specified by the speech recognition unit 200 into sentences (step S2203). Specifically, when there is a pause of a certain time interval or more in the input series of character strings, the character string specifying unit 410, to which the character string signal (or a morpheme signal) is input from the management unit 310, delimits the character string at that part. The character string specifying unit 410 outputs the divided character strings to the morpheme extraction unit 420 and the input type determination unit 440. When the input character string is one entered from a keyboard, the character string specifying unit 410 preferably delimits the character string at punctuation marks, spaces, and the like.

  Thereafter, the morpheme extraction unit 420 performs the step of extracting the morphemes constituting the minimum units of the character string as first morpheme information, based on the character string specified by the character string specifying unit 410 (step S2204). Specifically, the morpheme extraction unit 420, to which the character string is input from the character string specifying unit 410, collates the input character string with a morpheme group stored in advance in the morpheme database 430. In the present embodiment, this morpheme group is prepared as a morpheme dictionary describing, for each morpheme belonging to each part-of-speech classification, its entry word, reading, part of speech, conjugated forms, and the like.

  Having performed this collation, the morpheme extraction unit 420 extracts from the input character string the morphemes (m1, m2, ...) matching morphemes included in the pre-stored morpheme group. The morpheme extraction unit 420 outputs the extracted morphemes to the topic identification information search unit 350 as first morpheme information.

  Next, the input type determination unit 440 performs the step of determining the "utterance sentence type" based on the morphemes constituting the one sentence specified by the character string specifying unit 410 (step S2205). Specifically, the input type determination unit 440, to which the character string is input from the character string specifying unit 410, collates the input character string with the dictionaries stored in the utterance type database 450 and extracts from the character string the elements related to those dictionaries. Having extracted the elements, the input type determination unit 440 determines, based on them, to which "utterance sentence type" they belong. The input type determination unit 440 outputs the determined "utterance sentence type" (utterance type) to the answer acquisition unit 380.

  Then, the topic identification information search unit 350 performs the step of comparing the first morpheme information extracted by the morpheme extraction unit 420 with the focused topic title 820focus (step S2206).

  When a morpheme constituting the first morpheme information matches the focused topic title 820focus, the topic identification information search unit 350 outputs that topic title 820 to the answer acquisition unit 380. On the other hand, when no morpheme constituting the first morpheme information matches the topic title 820, the topic identification information search unit 350 outputs the input first morpheme information and the user input sentence topic identification information to the abbreviated sentence complementing unit 360 as a search command signal.

  Thereafter, based on the first morpheme information input from the topic identification information search unit 350, the abbreviated sentence complementing unit 360 performs the step of including the focused topic identification information and the answer sentence topic identification information in the input first morpheme information (step S2207). Specifically, if the first morpheme information is "W" and the set of the focused topic identification information and the answer sentence topic identification information is "D", the abbreviated sentence complementing unit 360 generates complemented first morpheme information by including elements of the set "D" in the first morpheme information "W", collates the complemented first morpheme information with all the topic titles 820 associated with the set "D", and searches for a topic title 820 matching the complemented first morpheme information. If there is a topic title 820 matching the complemented first morpheme information, the abbreviated sentence complementing unit 360 outputs that topic title 820 to the answer acquisition unit 380. On the other hand, when no topic title 820 matching the complemented first morpheme information is found, the abbreviated sentence complementing unit 360 passes the first morpheme information and the user input sentence topic identification information to the topic search unit 370.

  Next, the topic search unit 370 performs the step of collating the first morpheme information with the user input sentence topic identification information and searching the topic titles 820 for a topic title 820 suited to the first morpheme information (step S2208). Specifically, the topic search unit 370, to which the search command signal is input from the abbreviated sentence complementing unit 360, searches, based on the user input sentence topic identification information and the first morpheme information included in the input search command signal, for a topic title 820 suited to the first morpheme information among the topic titles 820 associated with that user input sentence topic identification information. The topic search unit 370 outputs the topic title 820 obtained as the result of the search to the answer acquisition unit 380 as a search result signal.

  Next, based on the topic title 820 searched by the topic identification information search unit 350, the abbreviated sentence complementing unit 360, or the topic search unit 370, the answer acquisition unit 380 compares the user's utterance type determined by the sentence analysis unit 400 with each answer type associated with that topic title 820, and selects an answer sentence 830 (step S2209).

  Specifically, the answer sentence 830 is selected as follows. The answer acquisition unit 380, to which the search result signal is input from the topic search unit 370 and the "utterance sentence type" is input from the input type determination unit 440, identifies, based on the "topic title" corresponding to the input search result signal and the input "utterance sentence type", the answer type (such as DA) matching that "utterance sentence type" from the answer type group associated with that "topic title".

  Subsequently, the reply acquisition unit 380 outputs the reply sentence 830 acquired in step S2209 to the output unit 600 via the management unit 310 (step S2210). The output unit 600 that has received the answer sentence from the management unit 310 outputs the input answer sentence 830.
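Steps S2201 to S2210 form a pipeline: acquire the utterance, delimit it, extract morphemes, determine the utterance sentence type, search the topic titles, and pick the matching answer. The fragment below strings toy stand-ins for these units together; the one-entry morpheme "dictionary" and the matching rules are simplifications invented for illustration, not the embodiment's actual logic.

    MORPHEME_DICT = {"Sato", "like"}   # toy stand-in for the morpheme database 430

    def extract_morphemes(sentence: str) -> set:      # morpheme extraction unit 420
        words = sentence.replace(",", " ").replace("?", " ").split()
        return {w for w in words if w in MORPHEME_DICT}

    def determine_type(sentence: str) -> str:         # input type determination unit 440
        return "DQ" if sentence.rstrip().endswith("?") else "DA"

    def acquire_answer(morphemes: set, utype: str) -> str:   # units 350/360/370/380
        if {"Sato", "like"} <= morphemes and utype == "DA":
            return "I also like Sato."    # answer of type DA for (Sato; *; likes)
        return "(no matching topic title)"

    utterance = "I like Sato"
    print(acquire_answer(extract_morphemes(utterance), determine_type(utterance)))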

  This is the end of the description of the discourse space conversation control process. Returning to FIG. 38, the description of the main process is resumed.

  When the discourse space conversation control process ends, the conversation control unit 300 executes the CA conversation control process (S1803). However, when an answer sentence has been output in the plan conversation control process (S1801) or the discourse space conversation control process (S1802), the conversation control unit 300 does not perform the CA conversation control process (S1803); it performs the basic control information update process (S1804) and ends the main process.

  The CA conversation control process (S1803) is a process of determining whether the user utterance is "explaining something", "confirming something", "condemning or attacking", or "other than these", and outputting an answer sentence corresponding to the content of the user utterance and the determination result. By performing this CA conversation control process, even when an answer sentence suited to the user utterance can be output in neither the plan conversation control process nor the discourse space conversation control process, a "bridging" answer sentence that keeps the flow of the conversation with the user from being interrupted, that is, one that allows the conversation to continue, can be output.

  Next, the conversation control unit 300 performs the basic control information update process (S1804). In this process, the conversation control unit 300, more specifically the management unit 310, sets the basic control information to "union" when the plan conversation processing unit 320 has output an answer sentence, to "discard" when the plan conversation processing unit 320 has stopped outputting an answer sentence, to "maintenance" when the discourse space conversation control processing unit 330 has output an answer sentence, and to "continuation" when the CA conversation processing unit 340 has output an answer sentence.

  The basic control information set in the basic control information update process is referred to in the above-described plan conversation control process (S1801), and is used for continuation and resumption of the plan.

  As described above, by executing the main process every time a user utterance is accepted, the answer processing unit 21 can execute plans prepared in advance according to the user's utterances and can also respond appropriately to topics not included in a plan.
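A condensed sketch of this main process follows, with the three control processes tried in order and the basic control information recorded afterwards (S1801 to S1804). The three handlers are stubs invented for illustration, and the real discard/maintenance bookkeeping is richer than shown.

    def plan_conversation_control(utterance):      # S1801
        return None   # in this toy run no plan matches the utterance

    def discourse_space_control(utterance):        # S1802
        return "I also like Sato." if "Sato" in utterance else None

    def ca_conversation_control(utterance):        # S1803: bridging answer
        return "I see. Please tell me more."

    def main_process(utterance):
        handlers = [(plan_conversation_control, "union"),
                    (discourse_space_control, "maintenance"),
                    (ca_conversation_control, "continuation")]
        for handler, state in handlers:
            answer = handler(utterance)
            if answer is not None:
                return answer, state               # S1804: update basic control info
        return None, "discard"

    print(main_process("I like Sato"))   # ('I also like Sato.', 'maintenance')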

[6. Second Embodiment]
Next, a second embodiment of the present invention will be described. The second embodiment is proposed as a customer service system.
[6.1. Configuration example]
A customer service system that is one embodiment of the present invention will be described below. The customer service system is a system that transmits answers and information via a network in response to questions and inquiries from customers received via the network.

FIG. 43 is a block diagram showing a configuration example of the customer service system according to the present embodiment.
The customer service system 100 comprises a conversation device 10 functioning as a user terminal connectable to a wide area network (WAN) 110, a conversation server 20A connected to the wide area network 110, and an expert-side terminal device 130, a conversation scenario editing device 30, a conversation log database (abbreviated as DB) 140, and a conversation log analysis device 150 connected to a local area network (LAN) 120. The conversation server 20A can communicate with the expert-side terminal device 130, the conversation scenario editing device 30, the conversation log DB 140, and the conversation log analysis device 150 via the local communication network 120.

  In the above configuration example, the conversation server 20A communicates with the expert-side terminal device 130, the conversation scenario editing device 30, the conversation log DB 140, and the conversation log analysis device 150 via the local communication network 120; however, the present invention can also be realized when these devices communicate with one another via the wide area communication network 110 or another wide area communication network.

  The "expert" referred to here means a person who plays the role of answering questions and inquiries from users, and need not necessarily be a person with expert knowledge.

The components of the customer service system 100 are described below.
[6.1.1. Conversation device]
The conversation device 10, corresponding to the first means of the present invention, is a device with which a user (customer) transmits a question or inquiry as a user utterance (input sentence) to the conversation server 20A and receives the answer sentence from the conversation server 20A. Since the conversation device 10 of this customer service system has the same configuration as the conversation device 10 of the first embodiment, a detailed description of its configuration example is omitted.

[6.1.2. Conversation server]
The conversation server 20A, corresponding to the second means of the present invention, has the following functions: a function of determining an answer sentence for the user utterance transmitted from the conversation device 10 based on the conversation scenario 40, and transmitting the determined answer sentence and the operation control information corresponding to it to the conversation device 10; a function of, when an answer sentence for the user utterance cannot be found in the conversation scenario, transmitting the content of the user utterance to the expert-side terminal device 130 so that an expert can answer it, receiving the answer content transmitted from the expert-side terminal device 130 in response, and transmitting the received answer content to the conversation device 10; a function of storing the user utterances, answer sentences, and answer contents from the expert-side terminal device 130 in time series and transmitting the stored contents (referred to as the "conversation log") to the conversation log DB 140; and a function of receiving the conversation scenario 40 transmitted from the conversation scenario editing device 30 and adding it to, or replacing, the conversation scenario 40 already stored.

  The conversation server 20A is realized by an information processing apparatus including an arithmetic processing unit (CPU), a main memory (RAM), a read-only memory (ROM), input/output devices (I/O), and, if necessary, an external storage device such as a hard disk device. The information processing apparatus is, for example, a PC, a workstation, or a server. The conversation server 20A may also be configured by connecting a plurality of information processing apparatuses via a network.

  FIG. 44 is a functional block diagram showing a configuration example of the conversation server 20A. Since the conversation server 20A in the present embodiment has components in common with the conversation server 20 described above, those common components are denoted by the same reference numerals and detailed descriptions of them are omitted.

  The conversation server 20A includes an answer processing unit 21; an answer relay unit 24 and a log collection unit 25 connected to the answer processing unit 21; a semantic interpretation dictionary unit 23 and a conversation scenario storage unit 22 connected to the answer processing unit 21; and a conversation scenario update unit 26 connected to the conversation scenario storage unit 22.

  Since the answer processing unit 21, the semantic interpretation dictionary unit 23, and the conversation scenario storage unit 22 are components having the same functions as those of the conversation server 20 of the first embodiment, their description is omitted. However, the answer processing unit 21 here additionally has a function of passing user utterances, answer sentences, and answer contents to, and receiving them from, the answer relay unit 24 and the log collection unit 25.

  The answer relay unit 24 has a function of communicating with the expert-side terminal device 130 to transmit the content of the user utterance received from the answer processing unit 21 to the expert-side terminal device 130, receiving the answer content transmitted from the expert-side terminal device 130 in response, and passing the received answer content to the answer processing unit 21.

  The log collection unit 25 acquires from the answer processing unit 21 the user utterances received by the answer processing unit 21, the answer contents from the expert-side terminal device 130, and the answer sentences that the answer processing unit 21 transmits to the conversation device 10, and transmits them to the conversation log DB 140 as a conversation log. The timing for transmitting the conversation log may be any timing determined by the conversation server 20A, the timing at which a transmission request is received from the conversation log DB 140, or some other timing (for example, when an operator executes the conversation log transmission processing).

The conversation scenario update unit 26 has a function of adding a new conversation scenario 40 to the conversation scenario 40 stored in the conversation scenario storage unit 22, or replacing part or all of it. For example, a new conversation scenario (called an "additional conversation scenario" for distinction) composed of a question given as a user utterance and the expert's answer to that question is generated by the conversation scenario editing device 30; the conversation scenario update unit 26 receives the additional conversation scenario from the conversation scenario editing device 30 and stores it as an addition to the conversation scenario 40 already stored in the conversation scenario storage unit 22. After this processing, when the conversation server 20A again receives that question as a user utterance, the answer processing unit 21 becomes able to transmit to the conversation device 10, based on the additional conversation scenario portion, an answer sentence having the same content as the expert's answer.
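As a rough sketch of this update, assuming the stored scenario can be viewed as a mapping from user utterances to answer sentences (the dictionary layout and the sample question/answer pairs are invented placeholders):

    conversation_scenario = {
        "How do I reset my password?": "Please use the reset link on the login page.",
    }

    def update_scenario(scenario: dict, question: str, expert_answer: str) -> None:
        # Merge an additional conversation scenario built from a user question
        # and the expert's answer; afterwards the answer processing unit can
        # answer the same question directly, without relaying it to the expert.
        scenario[question] = expert_answer

    update_scenario(conversation_scenario,
                    "Can I change the delivery date?",
                    "Yes, up to two days before shipment.")
    print(len(conversation_scenario))   # 2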
This concludes the description of the configuration example of the conversation server 20A.

Returning to FIG. 43, description of the components of the customer service system 100 will be continued.
[6.1.3. Expert side terminal device]
The expert-side terminal device 130, which corresponds to the third means of the present invention, is an apparatus that receives the user utterance transmitted (transferred) from the conversation server 20A, presents the content of the user utterance to its operator (the expert) and prompts the expert to input an answer, and, when the answer is input, transmits the answer as data to the conversation server 20A.

  The expert-side terminal device 130 may be any device as long as it can receive user utterances and transmit answer contents; for example, it may be a personal computer, a mobile communication device (mobile phone), or a dedicated terminal device.
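For illustration, the device's behavior can be reduced to a receive-display-input-send loop. The sketch below is hypothetical; the `connection` object and console I/O stand in for whatever interface an actual expert-side terminal device 130 would use.

```python
def expert_terminal_loop(connection):
    """Illustrative main loop of the expert-side terminal device 130."""
    while True:
        user_utterance = connection.receive()     # from conversation server 20A
        print("User question:", user_utterance)   # display to the waiting expert
        answer = input("Enter your answer: ")     # expert types an answer
        connection.send(answer)                   # answer data back to server 20A
```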

[6.1.4. Conversation scenario editing device]
Since the conversation scenario editing device 30, which corresponds to the fifth means of the present invention, is the same device as the conversation scenario editing device 30 in the first embodiment, a detailed description of its configuration is omitted here. The conversation scenario editing device 30 according to the present embodiment additionally has a function of acquiring a conversation log from the conversation log DB 140, in particular a conversation log including answer contents transmitted from the expert-side terminal device 130, editing it to generate a conversation scenario 40, and sending the generated scenario to the conversation server 20A so that the conversation scenario is added to or updated.

[6.1.5. Conversation log DB]
The conversation log DB 140, which corresponds to the fourth means of the present invention, is an apparatus having a function of receiving and storing the conversation log transmitted from the conversation server 20A. It is realized by an information processing apparatus that includes an arithmetic processing unit (CPU), a main memory (RAM), a read-only memory (ROM), an input/output device (I/O), and, if necessary, an external storage device such as a hard disk device. The information processing apparatus is, for example, a PC, a workstation, or a server. The conversation log DB 140 may also be configured by connecting a plurality of information processing apparatuses via a network.

[6.1.6. Conversation log analyzer]
The conversation log analysis device 150 receives the conversation log from the conversation log DB 140 and analyzes it to generate conversation tendency statistics (for example, statistical data on the number of accesses for each question).
This concludes the description of the configuration example of the customer service system 100.

[6.2. Operation of customer service system]
Next, the operation of the customer service system 100 will be described.
[6.2.1. Operation when conversation server can answer based on conversation scenario]
FIG. 45 is a sequence diagram illustrating an operation example of the customer service system 100 when a user utterance is accepted and the conversation server 20A can answer based on a conversation scenario.

  First, the user accesses the conversation server 20A from the conversation device 10 to establish communication, and then inputs a user utterance (assumed to be a question in this example) into the conversation device 10. The conversation device 10 transmits the user utterance to the conversation server 20A (S3010). Having received the user utterance, the conversation server 20A extracts an answer sentence and the operation control information corresponding to that answer sentence based on the conversation scenario stored in the conversation scenario storage unit 22 (S3020). The conversation server 20A transmits the extracted answer sentence and operation control information to the conversation device 10 (S3030). The conversation device 10 displays the received answer sentence and provides the answer content to the user (S3040). The user thus obtains an answer to the question.

  Meanwhile, the conversation server 20A records the user utterance and the answer sentence as a conversation log (S3050) and transmits the conversation log to the conversation log DB 140 (S3060). The conversation log DB 140 stores the received conversation log (S3070).
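Steps S3010 through S3070 can be condensed into a short sketch. It reuses the `LogCollector` sketch above and assumes, purely for illustration, that a scenario lookup is a dictionary lookup keyed by the question text:

```python
def handle_utterance(scenario: dict, log_collector, user_utterance: str):
    """Sketch of the scenario-answer path of FIG. 45."""
    entry = scenario.get(user_utterance)           # S3020: extract an answer
    if entry is None:
        return None                                # expert path (FIG. 46)
    answer_sentence, operation_control_info = entry
    # S3030/S3040 would transmit these to conversation device 10 here.
    log_collector.collect(user_utterance, answer_sentence)  # S3050-S3060
    return answer_sentence, operation_control_info
```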

[Operation when conversation server asks expert terminal device for answer]
FIG. 46 is a sequence diagram showing an operation example of the customer service system 100 when the conversation server 20A determines that there is no appropriate answer in the conversation scenario and asks the expert-side terminal device 130 for an answer.

  The user accesses the conversation server 20A from the conversation device 10 to establish communication, and then inputs a user utterance (again assumed to be a question in this example) into the conversation device 10. The conversation device 10 transmits the user utterance to the conversation server 20A (S3110). Having received the user utterance, the conversation server 20A searches for an answer sentence and the operation control information corresponding to that answer sentence based on the conversation scenario 40 stored in the conversation scenario storage unit 22 (S3120). Here it is assumed that no appropriate answer sentence exists in the existing conversation scenario 40. The conversation server 20A therefore establishes communication with the expert-side terminal device 130, transmits the user utterance received from the conversation device 10 in the preceding step S3110 to the expert-side terminal device 130 (S3130), and asks the expert waiting at the expert-side terminal device 130 to answer the question that is the user utterance.

  Upon receiving the user utterance, the expert-side terminal device 130 displays its content (for example, displays the text of the user utterance on a liquid crystal display device) (S3140). The expert prepares an answer to the question by consulting his or her own knowledge or a separately prepared database, and inputs the answer into the expert-side terminal device 130 (S3150). When the answer is input, the expert-side terminal device 130 transmits it as data to the conversation server 20A (S3160).

  Having received the answer data from the expert-side terminal device 130, the conversation server 20A transmits the received answer to the conversation device 10 (S3170). The conversation device 10 displays the received answer sentence and provides the answer content to the user (S3180). The user thus obtains an answer to the question.

  Meanwhile, the conversation server 20A records the user utterance and the answer as a conversation log (S3190) and transmits this conversation log to the conversation log DB 140 (S3200). The conversation log DB 140 stores the received conversation log (S3210).

  Thereafter, the conversation log DB 140 transmits to the conversation scenario editing device 30 the conversation log transmitted in step S3200, that is, the conversation log containing the answer transmitted from the expert-side terminal device 130 and the user utterance (question) paired with it (S3220). The conversation scenario editing device 30 that has received the conversation log generates a conversation scenario 40 based on it (S3230). The conversation scenario 40 may be generated by an operator of the conversation scenario editing device 30; alternatively, an automatic editing program may be installed in the conversation scenario editing device 30 so that the conversation scenario is generated by that program.
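A minimal sketch of what an automatic editing program might do with a single log entry, reusing the field names assumed in the `LogCollector` sketch (these names are not from the patent):

```python
def scenario_from_log(log_entry: dict) -> dict:
    """Turn one logged question/expert-answer pair into an additional scenario."""
    question = log_entry["utterance"]
    expert_answer = log_entry["answer_content"]
    # Operation control information is left empty in this sketch.
    return {question: (expert_answer, {})}
```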

  The conversation scenario editing device 30 transmits the conversation scenario 40 generated in the preceding step S3230 to the conversation server 20A (S3240). The conversation server 20A that has received the conversation scenario 40 stores it in its own conversation scenario storage unit 22 and updates the conversation scenario (S3250). As a result, when a question similar to the user utterance transmitted in step S3110 is received again, the conversation server 20A extracts the answer sentence and operation control information from the conversation scenario 40 and can provide an answer to the user without asking an expert for one.
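Putting the pieces together, the fallback-and-learn loop of FIG. 46 might look like the following sketch, which reuses the illustrative helpers defined above (`relay_to_expert`, `update_scenario`, `scenario_from_log`); the step numbers in the comments map to the sequence diagram:

```python
def answer_with_fallback(scenario, expert_transport, log_collector, utterance):
    """Try the scenario first; otherwise ask the expert, log, and learn."""
    entry = scenario.get(utterance)                        # S3120: scenario search
    if entry is not None:
        return entry[0]                                    # answer sentence found
    answer = relay_to_expert(expert_transport, utterance)  # S3130-S3160
    log_collector.collect(utterance, None, answer)         # S3190-S3200
    update_scenario(scenario, scenario_from_log(           # S3230-S3250
        {"utterance": utterance, "answer_content": answer}))
    return answer                                          # S3170
```

After one pass through the expert path, the same question is answered from the updated scenario alone, which is the labor-saving effect the embodiment aims at.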

[6.2.2. Conversation log analysis]
Next, an operation example of conversation log analysis will be described.
FIG. 47 is a sequence diagram illustrating an operation example when the customer service system 100 analyzes a conversation log.

  First, the conversation log analysis device 150 sends a conversation log transmission request to the conversation log DB 140 (S3310). The conversation log DB 140 transmits the conversation log to the conversation log analysis device 150 (S3320). The conversation log analysis device 150 analyzes the received conversation log (S3330) and outputs the analysis result (S3340). The analysis result provides information usable for marketing, such as users' interests and reactions broken down by user attribute.
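For example, the access-count statistic mentioned above reduces to counting identical questions in the log. A minimal sketch, assuming each log entry is a dict with an "utterance" field as in the earlier sketches:

```python
from collections import Counter

def access_counts(conversation_logs: list[dict]) -> Counter:
    """Count how often each question appears in the conversation log."""
    return Counter(entry["utterance"] for entry in conversation_logs)

logs = [{"utterance": "Is there a refund policy?"},
        {"utterance": "Is there a refund policy?"},
        {"utterance": "How do I reset my password?"}]
print(access_counts(logs).most_common())  # most-asked questions first
```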

  This concludes the description of the operation of the customer service system 100.

[Brief description of drawings]
FIG. 1: Block diagram showing a configuration example of an automatic conversation system
FIG. 2: Block diagram showing a configuration example of a conversation device
FIG. 3: Block diagram showing a configuration example of a conversation server
FIG. 4: Block diagram showing a configuration example of a conversation scenario editing device
FIG. 5: State transition diagram showing an example of a conversation scenario corresponding to a discourse area
FIG. 6: Diagram showing an example in which the conversation scenario of FIG. 5 is expressed as data
FIG. 7: State transition diagram showing an example of a conversation scenario including composition
FIG. 8: Diagram showing an example in which the conversation scenario of FIG. 7 is expressed as data
FIG. 9: State transition diagram showing an example of a conversation scenario in which a forced answer is performed using the NULL function
FIG. 10: Diagram showing an example in which the conversation scenario of FIG. 9 is expressed as data
FIG. 11: State transition diagram showing an example of a conversation scenario in which a "persistent answer" is given to a user utterance by the citation function
FIG. 12: Diagram showing an example in which the conversation scenario of FIG. 11 is expressed as data
FIG. 13: State transition diagram showing an example of a conversation scenario in which a "closed-loop answer" is constructed from "unit elements configured by composition"
FIG. 14: Diagram showing an example in which the conversation scenario of FIG. 13 is expressed as data
FIG. 15: State transition diagram showing an example of a conversation scenario in which a coupling (associative) law holds for composition
FIG. 16: Diagram showing an example in which the conversation scenario of FIG. 15 is expressed as data
FIG. 17: Diagram showing an example of the editing screen of the conversation scenario editing device
FIG. 18: Diagram showing an example of the data structure of the conversation scenario holding unit
FIG. 19: Diagram showing an example of an input screen for conversation scenario data generation by the conversation scenario editing device
FIG. 20: Diagram, following FIG. 19, showing an example of an input screen for conversation scenario data generation by the conversation scenario editing device
FIG. 21: Diagram, following FIG. 20, showing an example of an input screen for conversation scenario data generation by the conversation scenario editing device
FIG. 22: Diagram, following FIG. 21, showing an example of an input screen for conversation scenario data generation by the conversation scenario editing device
FIG. 23: Diagram, following FIG. 22, showing an example of an input screen for conversation scenario data generation by the conversation scenario editing device
FIG. 24: Functional block diagram showing a modified configuration example of the conversation scenario editing device
FIG. 25: Functional block diagram of the answer processing unit
FIG. 26: Diagram showing the relationship between a character string and the morphemes extracted from the character string
FIG. 27: Diagram showing examples of utterance sentence types, the two-letter alphabets indicating those types, and utterance sentences corresponding to the types
FIG. 28: Diagram showing the relationship between sentence types and the dictionaries used to determine those types
FIG. 29: Conceptual diagram showing an example of the data structure of data stored in the conversation database
FIG. 30: Diagram showing the association between certain topic specific information and other topic specific information
FIG. 31: Diagram showing an example of the data structure of topic titles (also called "second morpheme information")
FIG. 32: Diagram for explaining an example of the data structure of answer sentences
FIG. 33: Diagram showing specific examples of topic titles, answer sentences, and next-plan designation information associated with certain topic specific information
FIG. 34: Conceptual diagram for explaining the plan space
FIG. 35: Diagram showing an example plan
FIG. 36: Diagram showing another example plan
FIG. 37: Diagram showing a specific example of plan conversation processing
FIG. 38: Flowchart showing an example of the main process of the conversation control unit
FIG. 39: Flowchart showing an example of the plan conversation control process
FIG. 40: Flowchart, following FIG. 39, showing an example of the plan conversation control process
FIG. 41: Diagram showing the basic control states
FIG. 42: Flowchart showing an example of the discourse space conversation control process
FIG. 43: Block diagram showing a configuration example of a customer service system
FIG. 44: Functional block diagram showing a configuration example of the conversation server according to the second embodiment
FIG. 45: Sequence diagram showing an operation example of the customer service system when a user utterance is accepted and the conversation server can answer based on a conversation scenario
FIG. 46: Sequence diagram showing an operation example of the customer service system when the conversation server determines that there is no appropriate answer in the conversation scenario and asks the expert-side terminal device for an answer
FIG. 47: Sequence diagram showing an operation example when the customer service system analyzes conversation logs

DESCRIPTION OF SYMBOLS
1 ... Automatic conversation apparatus
10 ... Conversation device
20, 20A ... Conversation server
30 ... Conversation scenario editing device
40 ... Conversation scenario
100 ... Customer service system
130 ... Expert-side terminal device
140 ... Conversation log DB
150 ... Conversation log analysis device

Claims (1)

  1. A customer response system having a first means for transmitting a user utterance and receiving an answer sentence thereof, and an answer processing means ,
    The answer processing means includes
    When an arbitrary user utterance is transmitted from the first means or when a certain period of time elapses without speech, the first answer sentence is determined based on the conversation scenario, and the determined first answer sentence and the first answer sentence The process of transmitting the operation control information associated with one answer sentence to the first means is repeated until a first specific user utterance is transmitted from the first means, and from the first means, If the first specific user utterance is transmitted, a first function of transmitting an operation control information correspondence to the second reply sentence and the second reply sentence to the first means,
    For the third user utterance transmitted from the first means, a third answer sentence is determined based on a conversation scenario, and is associated with the determined third answer sentence and the third answer sentence. If the third answer sentence for the third user utterance cannot be found from the conversation scenario, the expert responds to the answer to the third user utterance. The third user utterance is transmitted, the response content corresponding thereto is received, the received response content is transmitted to the first means, and the third user utterance, the response sentence, and the response content are stored. A second function for transmitting a conversation log and receiving and storing a conversation scenario generated based on the conversation log;
    If the response to the user utterance can be handled by the current plan , the basic control state is determined as the first control information, the response specified in the next plan specification information is determined,
    When the user utterance requests termination of the current conversation, the basic control state is determined as the second control information, the conversation is terminated,
    If an answer sentence to the user utterance cannot correspond to the current plan , the basic control state is determined as the third control information, an answer sentence is determined from another plan different from the current plan ,
    And a third function for further determining the basic control state as the fourth control information when the user's intention is not clear from the user utterance, and further extracting the user utterance.
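As a non-authoritative illustration of the third function's four basic control states, the sketch below encodes the four decisions as an enum; the predicates passed in and the order in which they are tested are assumptions for the example, not the claimed method:

```python
from enum import Enum

class BasicControlState(Enum):
    FIRST = 1    # answer from the current plan (next-plan designation information)
    SECOND = 2   # terminate the conversation
    THIRD = 3    # answer from another plan
    FOURTH = 4   # intention unclear: extract a further user utterance

def decide_state(current_plan_can_answer: bool,
                 requests_termination: bool,
                 intention_clear: bool) -> BasicControlState:
    if requests_termination:
        return BasicControlState.SECOND
    if current_plan_can_answer:
        return BasicControlState.FIRST
    if intention_clear:
        return BasicControlState.THIRD
    return BasicControlState.FOURTH
```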
JP2009150146A 2008-08-20 2009-06-24 Customer service system and conversation server Active JP5897240B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2008212190 2008-08-20
JP2008212190 2008-08-20
JP2009150146A JP5897240B2 (en) 2008-08-20 2009-06-24 Customer service system and conversation server

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2009150146A JP5897240B2 (en) 2008-08-20 2009-06-24 Customer service system and conversation server
US12/542,170 US8374859B2 (en) 2008-08-20 2009-08-17 Automatic answering device, automatic answering system, conversation scenario editing device, conversation server, and automatic answering method
CN 200910167065 CN101656800B (en) 2008-08-20 2009-08-19 Automatic answering device and method thereof, conversation scenario editing device, conversation server
EP09168152.8A EP2157571B1 (en) 2008-08-20 2009-08-19 Automatic answering device, automatic answering system, conversation scenario editing device, conversation server, and automatic answering method

Publications (2)

Publication Number Publication Date
JP2010073191A JP2010073191A (en) 2010-04-02
JP5897240B2 true JP5897240B2 (en) 2016-03-30

Family

ID=41710876

Family Applications (2)

Application Number Title Priority Date Filing Date
JP2009150146A Active JP5897240B2 (en) 2008-08-20 2009-06-24 Customer service system and conversation server
JP2009150147A Active JP5829000B2 (en) 2008-08-20 2009-06-24 Conversation scenario editing device

Family Applications After (1)

Application Number Title Priority Date Filing Date
JP2009150147A Active JP5829000B2 (en) 2008-08-20 2009-06-24 Conversation scenario editing device

Country Status (2)

Country Link
JP (2) JP5897240B2 (en)
CN (1) CN101656800B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013072887A (en) * 2011-09-26 2013-04-22 Toshiba Corp Interactive device
WO2014020835A1 (en) * 2012-07-31 2014-02-06 日本電気株式会社 Agent control system, method, and program
KR101508429B1 (en) * 2013-08-22 2015-04-07 주식회사 엘지씨엔에스 System and method for providing agent service to user terminal
JP2015129793A (en) * 2014-01-06 2015-07-16 株式会社デンソー Voice recognition apparatus
JP6255274B2 (en) * 2014-02-19 2017-12-27 シャープ株式会社 Information processing apparatus, voice dialogue apparatus, and control program
JP2015184563A (en) * 2014-03-25 2015-10-22 シャープ株式会社 Interactive household electrical system, server device, interactive household electrical appliance, method for household electrical system to interact, and program for realizing the same by computer
JP6271361B2 (en) * 2014-07-18 2018-01-31 ヤフー株式会社 Information processing apparatus, information processing method, and information processing program
KR20160136837A (en) 2015-05-21 2016-11-30 라인 가부시키가이샤 Method, system and recording medium for providing content in messenger
JP2017146782A (en) * 2016-02-17 2017-08-24 ソニー株式会社 Information processing apparatus, information processing method, and program
CN109997128A (en) * 2016-11-25 2019-07-09 株式会社东芝 Knowledge architecture application system and program
JP2018159729A (en) * 2017-03-22 2018-10-11 株式会社東芝 Interaction system construction support device, method and program
US20180330252A1 (en) 2017-05-12 2018-11-15 Fujitsu Limited Interaction scenario display control method and information processing apparatus
JPWO2019026716A1 (en) * 2017-08-04 2020-08-20 ソニー株式会社 Information processing apparatus and information processing method
JP6695850B2 (en) * 2017-12-27 2020-05-20 ヤフー株式会社 Information processing apparatus, information processing method, and information processing program

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3378595B2 (en) * 1992-09-30 2003-02-17 株式会社日立製作所 Spoken dialogue system and dialogue progress control method thereof
JPH1125174A (en) * 1997-07-02 1999-01-29 Nec Corp System and method for automatically sending answer in help disk system and recording medium recording automatic answer sending program
JP3178426B2 (en) * 1998-07-29 2001-06-18 日本電気株式会社 Natural language dialogue system and natural language dialogue program recording medium
IL142363D0 (en) * 1998-10-02 2002-03-10 Ibm System and method for providing network coordinated conversational services
US6314402B1 (en) * 1999-04-23 2001-11-06 Nuance Communications Method and apparatus for creating modifiable and combinable speech objects for acquiring information from a speaker in an interactive voice response system
JP2001236401A (en) * 2000-02-24 2001-08-31 Nec Eng Ltd Device and method for answering by help desk system and recording medium with control program recording thereon
JP2001273310A (en) * 2000-03-27 2001-10-05 Livlib Co Ltd Various inquiry/answer service system through internet and intranet
JP3654850B2 (en) * 2000-05-17 2005-06-02 松下電器産業株式会社 Information retrieval system
US7194409B2 (en) * 2000-11-30 2007-03-20 Bruce Balentine Method and system for preventing error amplification in natural language dialogues
JP4336808B2 (en) * 2000-11-30 2009-09-30 富士通株式会社 Spoken dialogue program generation system and recording medium
JP3450823B2 (en) * 2000-12-01 2003-09-29 株式会社ナムコ Simulated conversation system, simulated conversation method, and information storage medium
JP2002169818A (en) * 2000-12-04 2002-06-14 Sanyo Electric Co Ltd Device and system for supporting user
US6882723B1 (en) * 2001-03-05 2005-04-19 Verizon Corporate Services Group Inc. Apparatus and method for quantifying an automation benefit of an automated response system
JP2002287791A (en) * 2001-03-21 2002-10-04 Global Data System Co Ltd Intellectual interactive device based on voice recognition using expert system and its method
JP2002324019A (en) * 2001-04-24 2002-11-08 Sons Ltd Virtual world presentment method, virtual world presentment system, user terminal that can be used for these, server and computer program
JP2004054883A (en) * 2001-11-13 2004-02-19 Equos Research Co Ltd Onboard agent system and interactive operation control system
JP4132962B2 (en) * 2002-05-16 2008-08-13 パイオニア株式会社 Interactive information providing apparatus, interactive information providing program, and storage medium storing the same
JP3945356B2 (en) * 2002-09-17 2007-07-18 株式会社デンソー Spoken dialogue apparatus and program
US7606714B2 (en) * 2003-02-11 2009-10-20 Microsoft Corporation Natural language classification within an automated response system
JP2004355386A (en) * 2003-05-29 2004-12-16 Nippon Telegr & Teleph Corp <Ntt> Method and apparatus for repeating question conversation in question-answer system, question conversation repeating program and recording medium with question conversation repeating program recorded thereon
JP4408665B2 (en) * 2003-08-11 2010-02-03 富士通株式会社 Speech recognition apparatus for speech recognition, speech data collection method for speech recognition, and computer program
JP2006133296A (en) * 2004-11-02 2006-05-25 Matsushita Electric Ind Co Ltd Voice interactive device
JP2006277519A (en) * 2005-03-30 2006-10-12 Toshiba Corp Interaction device, interaction scenario editing device, interaction method and program
JP2007114621A (en) * 2005-10-21 2007-05-10 Aruze Corp Conversation controller
JP4849662B2 (en) * 2005-10-21 2012-01-11 株式会社ピートゥピーエー Conversation control device
JP2008052449A (en) * 2006-08-23 2008-03-06 Synapse Communications Kk Interactive agent system and method
CN101075435B (en) * 2007-04-19 2011-05-18 深圳先进技术研究院 Intelligent chatting system and its realizing method
CN101122972A (en) * 2007-09-01 2008-02-13 腾讯科技(深圳)有限公司 Virtual pet chatting system, method and virtual pet server for answering question

Also Published As

Publication number Publication date
CN101656800B (en) 2013-07-24
JP2010073191A (en) 2010-04-02
CN101656800A (en) 2010-02-24
JP5829000B2 (en) 2015-12-09
JP2010073192A (en) 2010-04-02

Similar Documents

Publication Publication Date Title
Blodgett et al. Demographic dialectal variation in social media: A case study of African-American English
US10572589B2 (en) Cognitive matching of narrative data
US10567329B2 (en) Methods and apparatus for inserting content into conversations in on-line and digital environments
Meteer Expressibility and the problem of efficient text planning
Thelwall The Heart and soul of the web? Sentiment strength detection in the social web with SentiStrength
Bajaj et al. Ms marco: A human generated machine reading comprehension dataset
US10579657B2 (en) Answering questions via a persona-based natural language processing (NLP) system
KR101881114B1 (en) Identifying tasks in messages
US8892419B2 (en) System and methods for semiautomatic generation and tuning of natural language interaction applications
US8738558B2 (en) Method and computer program product for providing a response to a statement of a user
US8386265B2 (en) Language translation with emotion metadata
Sabou et al. Crowdsourcing research opportunities: lessons from natural language processing
Bird et al. Natural language processing with Python: analyzing text with the natural language toolkit
US8346563B1 (en) System and methods for delivering advanced natural language interaction applications
McDonald et al. Use fewer instances of the letter “i”: Toward writing style anonymization
Wu et al. Emotion recognition from text using semantic labels and separable mixture models
US8260616B2 (en) System and method for audio content generation
US8554540B2 (en) Topic map based indexing and searching apparatus
US20130325992A1 (en) Methods and apparatus for determining outcomes of on-line conversations and similar discourses through analysis of expressions of sentiment during the conversations
US8407049B2 (en) Systems and methods for conversation enhancement
US8521512B2 (en) Systems and methods for natural language communication with a computer
Ku et al. Mining opinions from the Web: Beyond relevance retrieval
JP4901738B2 (en) Machine learning
US8954849B2 (en) Communication support method, system, and server device
US7503007B2 (en) Context enhanced messaging and collaboration system

Legal Events

RD02: Notification of acceptance of power of attorney (JAPANESE INTERMEDIATE CODE: A7422); effective date: 20110224
RD02: Notification of acceptance of power of attorney (JAPANESE INTERMEDIATE CODE: A7422); effective date: 20120509
A621: Written request for application examination (JAPANESE INTERMEDIATE CODE: A621); effective date: 20120509
A977: Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007); effective date: 20130619
A131: Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131); effective date: 20130709
A521: Written amendment (JAPANESE INTERMEDIATE CODE: A523); effective date: 20130909
A131: Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131); effective date: 20140212
A521: Written amendment (JAPANESE INTERMEDIATE CODE: A523); effective date: 20140411
A02: Decision of refusal (JAPANESE INTERMEDIATE CODE: A02); effective date: 20140916
A521: Written amendment (JAPANESE INTERMEDIATE CODE: A523); effective date: 20141120
A911: Transfer of reconsideration by examiner before appeal (zenchi) (JAPANESE INTERMEDIATE CODE: A911); effective date: 20141128
A912: Removal of reconsideration by examiner before appeal (zenchi) (JAPANESE INTERMEDIATE CODE: A912); effective date: 20150109
RD04: Notification of resignation of power of attorney (JAPANESE INTERMEDIATE CODE: A7424); effective date: 20150130
A521: Written amendment (JAPANESE INTERMEDIATE CODE: A523); effective date: 20151218
A61: First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61); effective date: 20160302
R150: Certificate of patent or registration of utility model (JAPANESE INTERMEDIATE CODE: R150); ref document number: 5897240; country of ref document: JP
R250: Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)
R250: Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)