WO2022118869A1 - Information processing method, information processing device, information processing system, and computer program - Google Patents

Information processing method, information processing device, information processing system, and computer program

Info

Publication number
WO2022118869A1
WO2022118869A1 (PCT/JP2021/044021)
Authority
WO
WIPO (PCT)
Prior art keywords
sentence
model
user
conversation
input
Prior art date
Application number
PCT/JP2021/044021
Other languages
French (fr)
Japanese (ja)
Inventor
謙一 原田
直也 宮本
貴治 伊藤
友博 山田
和也 鵜野
Original Assignee
株式会社Rath
株式会社オージス総研
Priority date
Filing date
Publication date
Application filed by 株式会社Rath and 株式会社オージス総研
Priority to JP2022566952A priority Critical patent/JPWO2022118869A1/ja
Publication of WO2022118869A1 publication Critical patent/WO2022118869A1/en

Classifications

    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06F  ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00  Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90  Details of database functions independent of the retrieved data types
    • G06F 16/903  Querying
    • G06F 16/9035  Filtering based on additional data, e.g. user or group profiles

Definitions

  • The present invention relates to an information processing method, an information processing device, an information processing system, and a computer program for an artificial intelligence agent that converses with a human.
  • Agents based on AI (Artificial Intelligence) have come into use for conversing with humans.
  • Such agents are used for searching FAQs (Frequently Asked Questions) related to a specific service and as chatbots for such services (Patent Document 1 and the like).
  • An artificial intelligence agent is expected to acquire characteristics through the attributes of the corpus it is trained on, but it is difficult for the operator to control those characteristics arbitrarily.
  • In addition, since an artificial intelligence agent used as an FAQ chatbot converses passively with an unspecified number of people, it is difficult for the agent to accumulate information useful for conversation with a specific user.
  • The present invention has been made in view of such circumstances, and its purpose is to provide an information processing method, an information processing device, an information processing system, and a computer program that improve the conversation between a user and an artificial intelligence agent so that it becomes a natural conversation that takes each party's characteristics into account.
  • The information processing method of an embodiment of the present disclosure uses a conversation model trained to output a model answer sentence when an input sentence from a user is input, and a database storing conversation rules related to the conversation model.
  • When the computer accepts an input sentence from the user and inputs the accepted input sentence into the conversation model, the model answer sentence is acquired; when the accepted input sentence or the acquired model answer sentence matches a conversation rule stored in the database, a rule answer sentence based on the corresponding conversation rule is created, and the model answer sentence or the rule answer sentence is output.
  • The information processing method of the present disclosure relates to an agent system that carries out a conversation with a user in natural language by using a conversation model trained by machine learning.
  • Based on at least one of the input sentence entered by the user and the model answer sentence output when that input sentence is given to the conversation model, the computer determines whether to use the model answer sentence from the conversation model.
  • When it does not use the model answer sentence, the computer creates a rule-based rule answer sentence.
  • That is, conversation based on the model trained by deep learning is not used exclusively; where necessary, the conversation is controlled to be rule-based.
  • As a result, the quality of the conversation with the artificial intelligence agent improves: the conversation reflects the user's profile and becomes a natural one that takes each party's characteristics into account.
  • FIG. 1 is a schematic diagram showing a configuration example of the agent system 100.
  • The agent system 100 includes a server 1, an intermediate server 2, and a plurality of terminals 3.
  • The devices are communicably connected via a network N such as the Internet.
  • Server 1 is a server computer capable of various information processing and transmission / reception of information.
  • By learning predetermined training data, the server 1 has generated a trained machine-learning model (the conversation model 50 described later) that outputs a response sentence from the agent when an input sentence (utterance) from the user is input.
  • The server 1 inputs an input sentence from the user into the model, generates an answer sentence, and outputs it.
  • the terminal 3 is an information processing terminal used by each user (user of the agent system 100), and is, for example, a smartphone, a personal computer, a tablet terminal, or the like.
  • the terminal 3 displays an image of a character (two-dimensional or three-dimensional animation set as a conversation partner of the user) corresponding to the agent, and accepts input of an input sentence from the user.
  • The server 1 generates an answer sentence for the input sentence entered at the terminal 3 and outputs it to the terminal 3, where it is displayed as a response by the character.
  • The intermediate server 2 is a server computer located between the server 1 and the terminal 3; it transmits the input sentence entered at the terminal 3 to the server 1, and the answer sentence generated by the server 1 to the terminal 3.
  • To improve the quality of the conversation, the intermediate server 2 outputs answer sentences by combining not only the machine-learning model but also rule-based conversation.
  • In the following, the server 1 and the intermediate server 2 are described as separate devices, but their functions may be realized in a single device.
  • FIG. 2 is a block diagram showing a configuration example of the server 1.
  • the server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
  • The control unit 11 has one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), or GPUs (Graphics Processing Units), and performs various information processing and control processing by reading and executing the program P1 stored in the auxiliary storage unit 14.
  • The main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing.
  • the communication unit 13 is a communication module for performing processing related to communication, and transmits / receives information to / from the outside.
  • the auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores the program P1 and other data necessary for the control unit 11 to execute processing. Further, the auxiliary storage unit 14 stores the conversation model 50.
  • The conversation model 50 is a machine-learning model trained on predetermined training data; it is trained to output an answer sentence when an input sentence from the user is input.
  • FIG. 3 is a block diagram showing a configuration example of the intermediate server 2.
  • the intermediate server 2 includes a control unit 21, a main storage unit 22, a communication unit 23, and an auxiliary storage unit 24.
  • the control unit 21 is an arithmetic processing device such as a CPU, and performs various information processing, control processing, and the like by reading and executing the program P2 stored in the auxiliary storage unit 24.
  • the main storage unit 22 is a temporary storage area such as a RAM, and temporarily stores data necessary for the control unit 21 to execute arithmetic processing.
  • the communication unit 23 is a communication module for performing processing related to communication, and transmits / receives information to / from the outside.
  • The auxiliary storage unit 24 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores the program P2 and other data necessary for the control unit 21 to execute processing. The auxiliary storage unit 24 further stores the rule DB 241, the prohibited word DB 242, and the user DB 243.
  • the rule DB 241 is a database for storing conversation rules related to the conversation model 50.
  • the prohibited word DB 242 is a database for storing words prohibited in conversation.
  • the user DB 243 is a database that stores user information including user profiles (attributes including names, families, fields of interest, hobbies, etc.) in association with user IDs.
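The three databases above can be pictured with a small sketch. The patent describes only their roles, not concrete schemas, so every field name and sample value below is an assumption for illustration only:

```python
# Hypothetical in-memory stand-ins for the intermediate server's databases.
# The patent describes only their roles; field names and values are assumptions.

# Rule DB 241: conversation rules related to the conversation model.
rule_db = [
    {"type": "family",                                    # setting/profile type
     "words": ["father", "mother", "brother", "sister"],  # trigger words
     "reply": "I don't have a <sibling word>. Do you have a <sibling word>?"},
]

# Prohibited word DB 242: words that must not appear in a conversation.
prohibited_word_db = {"<discriminatory term>", "<violent term>"}

# User DB 243: profiles (name, family, fields of interest, hobbies, ...)
# stored in association with user IDs.
user_db = {
    "user-001": {"name": "<user name>", "hobby": ["jogging"], "habit": ["breakfast"]},
}
```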
  • FIG. 4 is a block diagram showing a configuration example of the terminal 3.
  • the terminal 3 includes a control unit 31, a main storage unit 32, a communication unit 33, a display unit 34, an input unit 35, a voice output unit 36, an auxiliary storage unit 37, and a voice input unit 38.
  • the control unit 31 is an arithmetic processing device such as a CPU, and performs various information processing, control processing, and the like by reading and executing the program P3 stored in the auxiliary storage unit 37.
  • the main storage unit 32 is a temporary storage area such as a RAM, and temporarily stores data necessary for the control unit 31 to execute arithmetic processing.
  • the communication unit 33 is a communication module for performing processing related to communication, and transmits / receives information to / from the outside.
  • the display unit 34 is a display screen such as a liquid crystal display and displays an image.
  • the input unit 35 is an operation interface such as a mechanical key, and receives an operation input from the user.
  • the voice output unit 36 is a speaker that outputs voice, and outputs the voice given by the control unit 31.
  • the voice input unit 38 is a microphone for inputting voice, and inputs voice spoken by the user to perform voice recognition.
  • the control unit 31 receives the result of voice recognition from the voice input unit 38 and can acquire it as a text (or a phonetic symbol).
  • the auxiliary storage unit 37 is a non-volatile storage device such as a hard disk and a large-capacity memory, and stores the program P3 and other data necessary for the control unit 31 to execute processing.
  • The server 1 generates the conversation model 50 based on training data (a corpus) consisting of pairs of input sentences and answer sentences, and generates an answer sentence by inputting an input sentence into the conversation model 50.
  • However, depending on the learned training data and the input sentence, the conversation model 50 risks generating inappropriate answer sentences (for example, answer sentences contrary to public order and morals, contrary to the purpose of the conversation, or contrary to the character's settings).
  • Therefore, the intermediate server 2 determines, according to the input sentence from the user and/or the answer sentence from the conversation model 50, whether a rule-based conversation (answer) should be performed instead of the machine-learning model's conversation.
  • The intermediate server 2 defines, in the rule DB 241 and the prohibited word DB 242, input sentences that should be answered on a rule basis, words that are inappropriate in an answer sentence, and the like, and judges whether a rule-based answer should be given by comparing against those rules.
  • When a rule-based answer should be given, the intermediate server 2 outputs an answer sentence according to the conversation rules of the rule DB 241, regardless of the answer sentence obtained from the conversation model 50.
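The division of labor just described can be sketched as follows. `analyze`, `query_conversation_model`, and `create_rule_answer` are hypothetical stand-ins for the processing the text attributes to the intermediate server 2 and the server 1, not names taken from the patent:

```python
def respond(input_sentence, analyze, query_conversation_model, create_rule_answer):
    """Return an answer sentence, preferring the learned conversation model but
    falling back to a rule answer sentence when the analysis says so."""
    # Analyze the input sentence against the rule DB and prohibited word DB.
    if not analyze(input_sentence):
        return create_rule_answer(input_sentence)
    # Obtain the model answer sentence from the conversation model on server 1.
    model_answer = query_conversation_model(input_sentence)
    # Analyze the model answer sentence the same way before using it.
    if not analyze(model_answer):
        return create_rule_answer(input_sentence)
    return model_answer

# Toy stand-ins: reject any sentence containing the word "forbidden".
ok = respond("Nice weather today.",
             analyze=lambda s: "forbidden" not in s,
             query_conversation_model=lambda s: "Yes, it is sunny.",
             create_rule_answer=lambda s: "Let's change the subject.")
blocked = respond("Tell me something forbidden.",
                  analyze=lambda s: "forbidden" not in s,
                  query_conversation_model=lambda s: "Yes, it is sunny.",
                  create_rule_answer=lambda s: "Let's change the subject.")
```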
  • FIGS. 5 and 6 are flowcharts showing an example of the processing procedure in the agent system 100.
  • When the user activates the program P3 on the terminal 3, the following processing is started and is executed repeatedly as long as input from the user continues.
  • the control unit 31 of the terminal 3 causes the display unit 34 to display the character set as the conversation partner of the user based on the program P3 (step S301).
  • the control unit 31 of the terminal 3 acquires an input sentence that is the result of voice recognition for the voice input by the voice input unit 38 (step S302).
  • the voice input unit 38 recognizes, for example, the voice picked up by the microphone during the period when the key included in the input unit 35 of the terminal 3 is pressed.
  • the voice input unit 38 may recognize the subsequent voice when the voice including a specific keyword is input.
  • the voice input unit 38 may recognize only the voice of a specific user (owner of the terminal 3) by using the voiceprint information.
  • the control unit 31 transmits an input sentence including a text or a phonetic symbol to the intermediate server 2 (step S303).
  • When the control unit 21 of the intermediate server 2 receives the input sentence (step S201), it executes an analysis process that compares the received input sentence with the DB group including the rule DB 241 and the prohibited word DB 242 (step S202). The details of the analysis process are described later.
  • The control unit 21 determines, based on the result of the analysis process, whether to input the received input sentence into the conversation model 50 (step S203).
  • When it determines that the input sentence should be input (S203: YES), the control unit 21 transmits the accepted input sentence to the server 1 (step S204).
  • When the server 1 receives the input sentence (step S101), the control unit 11 inputs the received input sentence into the conversation model 50 (step S102) and acquires the model answer sentence output from the conversation model 50 (step S103). The server 1 transmits the acquired model answer sentence to the intermediate server 2 as a response to step S101 (step S104).
  • The control unit 21 of the intermediate server 2 receives the model answer sentence transmitted from the server 1 (step S205) and executes an analysis process that compares the model answer sentence with the DB group including the rule DB 241 and the prohibited word DB 242 (step S206). The details of the analysis process are described later.
  • the control unit 21 determines whether or not to use the model answer sentence obtained from the conversation model 50 based on the result of the analysis process (step S207).
  • When it determines that the model answer sentence should be used (S207: YES), the control unit 21 converts the model answer sentence obtained from the conversation model 50 into the character's dialogue (step S208) and transmits the converted model answer sentence to the terminal 3 (step S209).
  • The conversion in step S208 may be performed per character, for example by conversion to polite speech, addition of predetermined inflections, or conversion to predetermined phrases, or may be performed through a per-character dialogue conversion model.
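As a minimal sketch of the phrase-substitution option in step S208 (the style table below is purely illustrative; the patent equally allows inflection rules or a learned per-character conversion model):

```python
def convert_for_character(answer, style_table):
    """Convert an answer sentence into a character's manner of speech by
    simple phrase substitution, one of the options named for step S208."""
    for plain, styled in style_table.items():
        answer = answer.replace(plain, styled)
    return answer

# Hypothetical style table for a polite character.
polite = {"I want": "I would like", "Yes": "Yes, certainly"}
line = convert_for_character("I want to go jogging", polite)
```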
  • When it is determined in step S203 that the input sentence is not input to the conversation model 50 (S203: NO), or when it is determined in step S207 that the model answer sentence is not used (S207: NO), the control unit 21 creates a rule answer sentence corresponding to the input sentence based on the rule DB 241 (step S210).
  • The rule answer sentence may be created by replacing a specific word in the model answer sentence, or a fixed phrase preset in the rule DB 241 for the input sentence may be used as the rule answer sentence.
  • The control unit 21 converts the created rule answer sentence into the character's dialogue (step S211) and transmits it to the terminal 3 (step S212).
  • The terminal 3 receives the model answer sentence or the rule answer sentence transmitted from the intermediate server 2 (step S304), and the control unit 31 outputs the character's line to the display unit 34 or the voice output unit 36 according to the received answer sentence (step S305).
  • the control unit 31 returns the process to step S302 and accepts an input statement from the user until the program P3 ends.
  • FIG. 7 is a flowchart showing an example of the analysis processing procedure.
  • The flowchart of FIG. 7 corresponds to the details of the processing of steps S202 and S206 by the intermediate server 2 in FIGS. 5 and 6.
  • The control unit 21 of the intermediate server 2 determines whether the received input sentence (S201) or model answer sentence (S205) includes a prohibited word stored in the prohibited word DB 242 (step S601).
  • The prohibited words include words relating to violations of public order and morals and words judged to be discriminatory expressions, and are stored in the prohibited word DB 242.
  • The prohibited words may also include words judged to be political or violent expressions.
  • When a prohibited word is included (S601: YES), the control unit 21 determines that the answer should be made on a rule basis (step S602) and returns the process to the next step, S203 or S207.
  • When no prohibited word is included (S601: NO), the control unit 21 collates the received input sentence with the rules, stored in advance in the rule DB 241, for input sentences that should be answered on a rule basis (step S603). As a result of the collation, the control unit 21 determines whether the input sentence matches a rule-based input sentence that should be answered (step S604); if it is determined to match (S604: YES), the control unit 21 determines that the answer should be a fixed phrase based on the rule (step S605) and returns the process to the next step, S203 or S207.
  • Step S603 may be replaced with a process in which the control unit 21 calculates a value indicating the consistency (similarity) between the received input sentence and the rule DB 241.
  • In that case, the control unit 21 may determine whether the input sentence conforms to a rule, and hence whether to use the model answer sentence, according to whether the calculated value is equal to or greater than a predetermined value.
  • In step S604, for example, when the field of the service is medical care, the control unit 21 determines that an input sentence or model answer sentence concerning the health of the user or the user's family conforms to a rule.
  • Likewise, depending on the field of the service, the control unit 21 determines that an input sentence or model answer sentence concerning schedules, techniques, or tools conforms to a rule.
  • When it is determined that the input sentence does not match (S604: NO), the control unit 21 determines whether the input sentence or model answer sentence relates to the settings of the character set as the conversation partner (step S606). In step S606, when a plurality of characters are set, the control unit 21 may determine that the sentence relates to the settings when it relates to the setting data of any of the characters. This is so that the answer can be given as a rule answer sentence that does not contradict the character's settings (attributes).
  • When it is determined in step S606 that the sentence relates to the setting data (S606: YES), the control unit 21 determines that a fixed phrase based on the character's settings should be answered (step S607) and returns the process to the next step, S203 or S207.
  • When it is determined in step S606 that the sentence does not relate to the setting data (S606: NO), the control unit 21 determines that the input sentence is input to the conversation model 50, or that the model answer sentence is used (step S608), and returns the process to the next step, S203 or S207.
  • In step S606, the control unit 21 may instead calculate a numerical value for the consistency between the input sentence or model answer sentence and the character settings in the rule DB 241, and determine whether the sentence relates to the character settings according to whether the calculated value is equal to or greater than a predetermined value. In this case, if the consistency is determined to be high, the control unit 21 determines that the input sentence is input to the conversation model 50, or that the model answer sentence from the conversation model 50 is used.
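The analysis procedure of FIG. 7 can be sketched as follows. The rule structures and the simple substring matching are assumptions standing in for the collation (or similarity scoring) described in steps S601 to S608:

```python
def analyze(sentence, prohibited_words, rule_patterns, character_settings):
    """Classify a sentence following FIG. 7. Returns one of:
    'rule'      - contains a prohibited word; answer on a rule basis (S601/S602)
    'fixed'     - matches a rule-DB pattern; answer with a fixed phrase (S604/S605)
    'character' - touches a character setting; answer from the setting (S606/S607)
    'model'     - otherwise, use the conversation model (S608)"""
    lowered = sentence.lower()
    if any(w in prohibited_words for w in lowered.split()):      # S601
        return "rule"                                            # S602
    if any(p in lowered for p in rule_patterns):                 # S603/S604
        return "fixed"                                           # S605
    if any(s in lowered for s in character_settings):            # S606
        return "character"                                       # S607
    return "model"                                               # S608

decision = analyze(
    "Do you have a brother?",
    prohibited_words={"<slur>"},
    rule_patterns={"how is the weather"},
    character_settings={"brother", "sister", "hometown"},
)
```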
  • FIG. 8 is an explanatory diagram showing an example of conversation in the agent system 100.
  • FIG. 8 shows an example of a screen displayed on the display unit 34 of the terminal 3.
  • In this example, a text conversation, that is, a conversation with the agent as a chatbot, is executed.
  • The text of the input sentence entered by the user is displayed as a balloon-shaped image from the left side, and the text of the answer sentence (or utterance sentence) from the character set as the conversation partner is displayed as a balloon-shaped image from the right side.
  • In FIG. 8, the input sentence entered by the user asks, for example, "Do you have a brother?".
  • For this input sentence, the control unit 21 of the intermediate server 2 determines in step S606 of the analysis process that the sentence relates to the settings of the character set as the conversation partner, and decides to answer with a rule answer sentence.
  • The character settings may include, for each type such as "race", "gender", "age", "origin", "family", or "school", a setting value such as "race: human".
  • The rule DB 241 includes "words" for determining that a sentence relates to each type of setting.
  • For example, the rule DB 241 includes "father", "mother", "older brother", "older sister", "younger sister", "younger brother", and the like as words for determining that a sentence relates to the type "family".
  • The rule DB 241 also includes, for example, "hometown", "home country", "birthplace", "parents' home", and the like in association with the type "origin".
  • In step S606, the control unit 21 can therefore recognize that the input sentence from the user relates to the character's settings, in particular that it includes a word related to "family".
  • When it determines that the input sentence "relates to the character settings", the control unit 21 creates, from the character setting data "no siblings (brothers/sisters)" included in the rule DB 241, the rule answer sentence "I don't have a <sibling word>. Do you, <user>, have a <sibling word>?".
  • The angle-bracketed part is filled with the word, or its type, that was the basis for judging that the input sentence related to the setting.
  • In the example of FIG. 8, the control unit 21 converts the <sibling word> part of the rule answer sentence into the word "brother" based on the character settings and answers accordingly.
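The placeholder substitution described above might look like the following sketch. The template syntax `<sibling word>` mirrors the example in the text, while the matching logic and the word list are assumptions:

```python
# Assumed list of sibling words; longer phrases first so they match before "brother".
SIBLING_WORDS = ["older brother", "younger brother", "older sister",
                 "younger sister", "brother", "sister"]

def create_sibling_rule_answer(input_sentence, template):
    """Fill the template's <sibling word> slot with the word that caused the
    input sentence to be judged as touching the character's 'family' setting."""
    matched = next((w for w in SIBLING_WORDS if w in input_sentence.lower()), None)
    if matched is None:
        return None
    return template.replace("<sibling word>", matched)

answer = create_sibling_rule_answer(
    "Do you have a brother?",
    "I don't have a <sibling word>. Do you, <user>, have a <sibling word>?",
)
```

The `<user>` slot is left untouched here; in the system it would presumably be filled from the user DB 243.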
  • In the agent system 100, it is thus determined whether the model answer sentence should be used for the input sentence from the user; if it is determined not to use the model answer sentence, a rule answer sentence corresponding to the input sentence is generated and output. For example, if the input sentence contains a prohibited word, an inappropriate answer sentence could be output unintentionally, so it is determined that the model answer sentence is not used and the rule answer sentence is used instead. In other cases, it is determined that the model answer sentence is used. That is, instead of always using the model answer sentence, the system switches between answering with the rule answer sentence and answering with the model answer sentence.
  • This makes it possible to realize a more natural conversation by using the model answer sentence from the conversation model 50 where appropriate, while avoiding inconsistencies with the character settings and preventing the agent's answer sentences from becoming violent or discriminatory.
  • In the second embodiment, the agent system 100 stores the user's data as a profile in the user DB 243 in order to get to know the user through natural conversation with the agent.
  • The agent system 100 then carries out conversations that draw on the profile stored in the user DB 243.
  • The rule DB 241 also includes a rule that, when the input sentence relates to the user's profile, a rule answer sentence is used.
  • Since the configuration of the agent system 100 of the second embodiment is the same as that of the first embodiment except for details of the processing in the intermediate server 2, common components are given the same reference numerals and detailed description of them is omitted.
  • FIGS. 9 to 11 are flowcharts showing an example of the processing procedure in the agent system 100 of the second embodiment.
  • the procedures common to the procedures shown in the flowcharts of FIGS. 5 and 6 are assigned the same step numbers and detailed description thereof will be omitted.
  • The control unit 21 of the intermediate server 2 accepts an input sentence (S201).
  • In the second embodiment, the control unit 21 of the intermediate server 2 receives the user ID from the terminal 3 together with the input sentence.
  • The control unit 21 determines whether the user's profile can be extracted from the received input sentence (step S221).
  • In step S221, the control unit 21 morphologically analyzes the input sentence from the user; if the subject is in the first person, it can be inferred that the user is talking about him- or herself, so it is determined that the user's profile can be extracted.
  • The control unit 21 may also execute the process of step S221 using "words" for determining that a sentence relates to each profile type, such as "gender", "age", "residence", "origin", "habit", "family", "school", or "hobby".
  • The rule DB 241 includes "father", "mother", "older brother", "older sister", "younger sister", "younger brother", and the like as words for determining that a sentence relates to "family".
  • The rule DB 241 includes, for example, "breakfast", "lunch", "snack", and the like in association with "habit".
  • The rule DB 241 includes, for example, "cooking", "music", "sports (by type)", "game", and the like in association with "hobby".
  • By collating the input sentence with these words, the control unit 21 can determine whether the user's profile can be extracted.
  • The control unit 21 may also determine whether the profile is to be extracted by judging, through pattern analysis after language processing, whether the conversation (conversation set) is one from which a profile can be extracted.
  • When it determines that extraction is possible (S221: YES), the control unit 21 extracts data related to the user's profile from the received input sentence (step S222).
  • In step S222, the control unit 21 extracts, as data related to the profile, the "word" that was the basis for determining that the profile could be extracted.
  • Alternatively, the control unit 21 may use the entire input sentence as data related to the profile.
  • In step S222, for example, when the input sentence includes "(I)", "breakfast", and "eat", the control unit 21 extracts "breakfast" as data related to the type "habit" in the user's profile.
  • Likewise, when the input sentence includes "jogging", the control unit 21 extracts "jogging" as data related to the type "hobby" in the user's profile.
  • In this case, the control unit 21 may also store the term as data related to the type "habit" in the user's profile.
  • When the input sentence includes "younger brother", the control unit 21 extracts "younger brother" as data related to the type "family" in the user's profile.
  • The control unit 21 stores the extracted data in the user DB 243 in association with the user ID (step S223), and proceeds to step S202.
  • The control unit 21 stores, for example, "breakfast" and "eats (yes)" in association with the type "habit". In the case of "habit", the control unit 21 may also store the date of each entry.
  • The user's profile is built up as the data stored in the user DB 243 accumulates through step S223.
  • If it is determined in step S221 that extraction is not possible (S221: NO), the control unit 21 proceeds directly to step S202.
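A sketch of the profile extraction in steps S221 to S223 follows. The keyword tables are assumptions; the patent mentions morphological analysis and first-person detection, which are simplified here to keyword matching:

```python
# Hypothetical keyword tables mapping profile types to trigger words
# (cf. the words the rule DB 241 associates with each type).
PROFILE_KEYWORDS = {
    "family": {"father", "mother", "brother", "sister"},
    "habit":  {"breakfast", "lunch", "snack"},
    "hobby":  {"cooking", "music", "jogging", "game"},
}

def extract_profile(input_sentence):
    """Return the {type: word} entries extractable from one input sentence (S222)."""
    found = {}
    lowered = input_sentence.lower()
    for ptype, words in PROFILE_KEYWORDS.items():
        for w in words:
            if w in lowered:
                found[ptype] = w
    return found

user_db = {}  # stand-in for the user DB 243, keyed by user ID

def store_profile(user_db, user_id, input_sentence):
    """Store extracted profile data in association with the user ID (S223)."""
    profile = user_db.setdefault(user_id, {})
    for ptype, word in extract_profile(input_sentence).items():
        profile.setdefault(ptype, []).append(word)

store_profile(user_db, "user-001", "I am also jogging every morning")
```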
  • In the second embodiment, a correction process that corrects the model answer sentence for the user is further executed based on the user profile created as described above.
  • When the control unit 21 determines that the model answer sentence is to be used (S207: YES), it corrects the model answer sentence based on the user profile stored in the user DB 243 (step S224).
  • The control unit 21 then converts the corrected model answer sentence for the character (step S225).
  • In step S224, for example, suppose the model answer sentence is the phrase "I want to play sports", and the item "jogging" is stored under the type "hobby" in the user's profile.
  • In that case, the "sports" part of the answer sentence can be corrected to "jogging".
  • The model answer sentence is then corrected to "I want to go jogging".
  • In this way, the control unit 21 can correct the answer sentence into a more empathetic one, such as "I want to go jogging too".
  • In the second embodiment, correction of the rule answer sentence is also executed.
  • When a rule answer sentence is to be used, the control unit 21 corrects the rule answer sentence based on the user profile stored in the user DB 243 (step S226), converts it into the character's dialogue (S211), and transmits it to the terminal 3 (S212).
  • As in step S224, the control unit 21 corrects the rule answer sentence using the terms stored in the user DB 243 under profile types such as "hobby" and "habit".
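The correction of steps S224 and S226 can be sketched as a phrase-level substitution driven by the stored profile. The mapping from generic phrases to profile types, and the verb-adjusting templates (which approximate the patent's "play sports" to "go jogging" example), are assumptions:

```python
# Assumed mapping from generic phrases in an answer to (profile type, template).
# The template marks where the user's own term is substituted in.
GENERIC_PHRASES = {
    "play sports": ("hobby", "go {}"),
    "have a meal": ("habit", "have {}"),
}

def correct_answer(answer, profile):
    """Replace generic phrases in a model or rule answer sentence with the
    concrete term stored in the user's profile (steps S224 and S226)."""
    for phrase, (ptype, template) in GENERIC_PHRASES.items():
        terms = profile.get(ptype)
        if terms and phrase in answer:
            answer = answer.replace(phrase, template.format(terms[-1]))
    return answer

corrected = correct_answer("I want to play sports", {"hobby": ["jogging"]})
```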
  • FIGS. 12 and 13 are explanatory diagrams showing an example of conversation in the agent system 100 of the second embodiment.
  • FIGS. 12 and 13 are screen examples displayed on the display unit 34 of the terminal 3, similar to the screen example of FIG. 8 in the first embodiment, and a text conversation is executed.
  • In FIG. 12, the character opens with the utterance "Good morning. Have you eaten breakfast?", and the user's reply is input.
  • From this exchange, the control unit 21 of the intermediate server 2 stores "breakfast" in the user DB 243 in association with the type "habit".
  • Next, the input sentence "I am also jogging every morning" is input from the user.
  • The control unit 21 determines that a profile can be extracted because the input sentence contains the word "every morning", indicating a "cycle", and the word "jogging", a kind of "sports", and stores "every morning" and "jogging" in the user DB 243 as profile data.
  • The screen example of FIG. 13 shows a conversation that takes place after the conversation shown in the screen example of FIG. 12.
  • During a conversation between the user and the character set as the conversation partner, the input sentence "I wonder what the weather will be like on the weekend" is input from the user.
  • The rule DB 241 may include a rule that, when an input sentence asking about the weather, such as "Will it be sunny tomorrow?" or "How is the weather on the weekend?", is input and an outdoor activity (camping, barbecue, golf, jogging, walking, soccer, baseball, training, etc.) is registered under the type "hobby" in the user's profile, the answer includes "Have you been doing <"hobby" word> recently?".
  • In this case, the control unit 21 determines through the analysis process of step S202 that a rule-based answer should be given (S602), creates the rule answer sentence "Have you been doing <"hobby" word> recently?" (S210), reads the registered "hobby" word from the user profile in the user DB 243, and corrects the rule answer sentence to "Jogging lately?" (S226). In the example of FIG. 13, this is then converted into polite speech according to the character settings (S211), and "Have you been jogging recently?" is output. In this way, the previously extracted user profile can be used in responses from the agent (character).
• Accumulating the user's profile in the user DB 243 through conversation in this way makes the conversation between the user and the agent (character) more natural.
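The profile-based correction of a rule answer sentence described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the names `user_db` and `fill_template`, and the `<type>` placeholder syntax, are hypothetical stand-ins for the user DB 243 and the correction of steps S210/S225.

```python
import re

# Hypothetical stand-in for the user DB 243: user ID -> profile type -> words
user_db = {
    "user-001": {"hobby": ["jogging"], "habit": ["eating breakfast"]},
}

def fill_template(template: str, user_id: str, db: dict) -> str:
    """Replace each <type> placeholder with the first registered word of that type."""
    def repl(match: re.Match) -> str:
        profile_type = match.group(1)
        words = db.get(user_id, {}).get(profile_type)
        # Leave the placeholder untouched when no profile word is registered
        return words[0] if words else match.group(0)
    return re.sub(r"<(\w+)>", repl, template)

# "Recently <hobby>?" is corrected to "Recently jogging?" as in step S225
rule_answer = fill_template("Recently <hobby>?", "user-001", user_db)
```

A subsequent conversion step (S211) would then restyle the corrected sentence for the character, e.g. into a polite form.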
• Next, a conversation started by the agent character will be described.
• FIG. 14 is a flowchart showing an example of a processing procedure for carrying out a conversation starting from the agent.
• Based on the program P3, the control unit 31 of the terminal 3 starts the following processing at a timing determined to be an utterance timing. Whether or not it is an utterance timing can be determined by, for example, whether or not a set time (alarm) has been reached, or whether or not a predetermined time has passed since the end of the previous conversation.
  • the control unit 31 of the terminal 3 requests the intermediate server 2 to speak a character set as a conversation partner of the user (step S311). In step S311 the control unit 31 also transmits the user ID of the user.
  • the control unit 21 of the intermediate server 2 receives the utterance request (step S231) and creates an utterance sentence using the words and phrases of the profile associated with the user ID of the user DB 243 (step S232).
• The control unit 21 selects, based on the time, a phrase related to "meal", for example, "eating breakfast", from the profile type "habit". If the habit "eating breakfast" is stored as the user profile, the control unit 21 generates the utterance sentence "Did you eat breakfast?" in step S232.
  • control unit 21 may receive the position information of the terminal 3 together and create an utterance sentence based on the time and the position information.
  • the control unit 21 may acquire the weather information from an external server via the network N and create a call regarding the weather as an utterance sentence.
• The control unit 21 converts the created utterance sentence into a line for the character (step S233), and transmits the converted utterance sentence to the terminal 3 (step S234). In this case, the conversation model 50 does not have to be used.
  • the intermediate server 2 executes the process of steps S231-S234 each time it receives an utterance request.
• The control unit 31 of the terminal 3 receives the utterance sentence (step S312), and causes the character to output the line on the display unit 34 or the voice output unit 36 according to the received utterance sentence (step S313). After that, the agent system 100 executes the processing procedure shown in the flowcharts of FIGS. 9 to 11.
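The agent-initiated flow above (an utterance-timing check, then creation of an utterance sentence from the profile type "habit" based on the time) might be sketched as below. The idle threshold, the profile contents, and the function names are hypothetical illustrations, not the patent's actual logic.

```python
import datetime

# Hypothetical stand-in for the user DB 243
user_db = {"user-001": {"habit": ["eating breakfast"]}}

def is_utterance_timing(now: datetime.datetime,
                        last_conversation_end: datetime.datetime,
                        idle: datetime.timedelta = datetime.timedelta(hours=3)) -> bool:
    """Utterance timing: a predetermined time has passed since the previous conversation."""
    return now - last_conversation_end >= idle

def create_utterance(user_id: str, now: datetime.datetime) -> str:
    """Create an utterance sentence from the 'habit' profile, based on the time of day."""
    habits = user_db.get(user_id, {}).get("habit", [])
    if now.hour < 10 and "eating breakfast" in habits:  # morning: ask about the habit
        return "Did you eat breakfast?"
    return "How are you doing?"  # fallback call when no matching habit is stored
```

Position or weather information, as the text notes, could be folded into `create_utterance` in the same way.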
  • FIG. 15 is an explanatory diagram showing an example of conversation based on a profile in the agent system 100 of the second embodiment.
  • FIG. 15 is a screen example displayed on the display unit 34 of the terminal 3, similar to the screen example shown in FIG. 8 of the first embodiment, and a text conversation is executed.
  • the utterance is started from the agent side, that is, the call is made to the user.
• In this way, it is possible for the agent side to actively make a call that reflects not only information temporarily obtained in the flow of the conversation but also the user's background profile. This makes it possible to realize more realistic (more natural) communication between the user and the agent (character).
  • FIG. 16 is a block diagram showing a configuration example of the intermediate server 2 of the third embodiment.
  • the content DB 248 is stored in the auxiliary storage unit 24 of the intermediate server 2.
• The content DB 248 stores questionnaires for users, advertising content for users, and the like. Each content item is associated with a corresponding type, for example, tags such as "meal", "habit", "health", and "hobby", and is searchable.
  • the auxiliary storage unit 24 of the intermediate server 2 of the third embodiment stores the first model 245 and the second model 246 for profile extraction (the profile extraction model 247 is configured by the two models).
• The first model 245 is a machine learning model trained to output, when an input sentence received from the user is input, a vector that quantifies the content and meaning of the input sentence.
  • the first model 245 has been trained to output a vector expressing the content / meaning of the input sentence as a numerical value.
  • the first model 245 may be configured to use the output from a natural language processing model such as BERT as a vector, for example.
• The first model 245 is trained to output similar vectors for sentences with similar content / meaning and different vectors for sentences with different content / meaning, and sentences from which data related to the user profile can be extracted are vectorized in advance by the first model 245.
• The intermediate server 2 can determine whether or not a profile can be extracted from an input sentence depending on whether or not the similarity between the vector of a prepared sentence from which data related to the user's profile can be extracted and the vector output from the first model 245 when the input sentence is input is equal to or greater than a predetermined value.
• The configuration of the first model 245 is not limited to the above. For example, the first model 245 may be combined with a classifier and trained to output the possibility that data related to the user's profile can be extracted (the similarity to an extractable sentence).
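As a rough sketch of the determination described above, the following replaces the first model 245 with a toy bag-of-words "vectorizer" and cosine similarity. The threshold value and the reference sentence are invented for illustration; a real implementation would use a trained natural language processing model such as BERT.

```python
import math
from collections import Counter

def embed(sentence: str) -> Counter:
    """Toy stand-in for the first model 245: a bag-of-words 'vector'."""
    return Counter(sentence.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Sentences from which profile data can be extracted, vectorized in advance
reference_vectors = [embed("i go jogging every morning")]
THRESHOLD = 0.5  # hypothetical "predetermined value"

def is_extraction_target(input_sentence: str) -> bool:
    """True when the input sentence is similar enough to an extractable sentence."""
    v = embed(input_sentence)
    return any(cosine(v, ref) >= THRESHOLD for ref in reference_vectors)
```

The same interface also covers the classifier variant mentioned above: only the body of `is_extraction_target` would change.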
  • the second model 246 is a machine learning model that has been trained to extract words related to a profile from an input sentence when an input sentence is input.
  • the second model 246 is learned to output data indicating a preset word / phrase to be extracted, the type of the word / phrase, and an appearance position in the input sentence when an input sentence is input.
  • the intermediate server 2 inputs an input sentence determined to be an extraction target to the second model 246, and acquires words and phrases output from the second model 246 based on the data.
  • FIG. 17 is a schematic diagram of the second model 246.
  • the second model 246 includes an input layer that accepts an input sentence, an intermediate layer that performs an operation, and an output layer that outputs data indicating a phrase, a phrase type, and an appearance position in the input sentence.
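The output of the second model 246 (word, word type, and appearance position) can be mimicked with a simple dictionary lookup, as sketched below. The phrase dictionary is hypothetical, standing in for what the trained model would have learned; a real second model 246 would be a trained extraction (NER-style) network as described.

```python
# Hypothetical phrase dictionary: word -> profile type
PHRASE_TYPES = {"every morning": "cycle", "jogging": "sports", "breakfast": "meal"}

def extract_phrases(sentence: str) -> list:
    """Return (word, type, start_position) triples found in the input sentence,
    ordered by appearance position — the shape of the second model 246's output."""
    results = []
    lowered = sentence.lower()
    for word, phrase_type in PHRASE_TYPES.items():
        pos = lowered.find(word)
        if pos != -1:
            results.append((word, phrase_type, pos))
    return sorted(results, key=lambda t: t[2])

triples = extract_phrases("I am also jogging every morning")
```

Each extracted triple would then be stored in the user DB 243 in association with the user ID, as in step S245.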
• The configuration of the agent system 100 of the third embodiment is the same as that of the agent system 100 of the first embodiment. Among the configurations of the agent system 100 of the third embodiment, those common to the first embodiment are designated by the same reference numerals, and detailed description thereof will be omitted.
  • FIGS. 18 and 19 are flowcharts showing an example of a processing procedure during a conversation by the intermediate server 2 of the third embodiment.
• The processing procedure in the terminal 3 and the server 1 is omitted, but the processing in the terminal 3 and the server 1 is the same as the procedure shown in the flowcharts of FIGS. 5 and 6 of the first embodiment.
  • the procedures common to the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6 are given the same step numbers, and detailed description thereof will be omitted.
  • the control unit 21 of the intermediate server 2 accepts an input statement (S201).
• As in the second embodiment, the control unit 21 of the intermediate server 2 receives the user ID from the terminal 3 together with the input sentence.
  • the control unit 21 inputs the received input sentence to the first model 245 of the profile extraction model 247 (step S241).
• The control unit 21 determines whether or not the input sentence is a target for extracting the user's profile (whether or not the input sentence is a sentence from which data related to the user's profile can be extracted) depending on whether or not the similarity output from the first model 245 is equal to or higher than a predetermined value (step S242).
• When it is determined that the input sentence is an extraction target (S242: YES), the control unit 21 inputs the received input sentence to the second model 246 (step S243), and acquires the phrase output from the second model 246 (step S244).
  • the control unit 21 stores the acquired words and phrases in association with the user ID in the user DB 243 (step S245), and proceeds to the process in step S202.
• By the process of step S245, the profile of the user who input the input sentence is created and updated.
• When it is determined in step S242 that the input sentence is not an extraction target (S242: NO), the control unit 21 creates, as a rule answer sentence, an answer sentence to the input sentence that is also an utterance sentence asking about the profile (a question that can be answered with YES / NO) (step S246). Prior to the process of step S246, the control unit 21 may determine whether or not the input sentence contains a prohibited word, and if it is determined that the input sentence contains a prohibited word, may create a rule answer sentence based on the rule base.
• In step S246, the control unit 21 creates, as a rule answer sentence, an utterance sentence of a question that can be answered with YES / NO (a closed question).
• For example, the control unit 21 creates rule answer sentences such as "Do you eat breakfast every day?", "Did you sleep well last night?", "Do you like dogs?", "Do you like cats?", and "Do you live in Tokyo?".
  • the sentences asking these profiles are stored in the rule DB 241 in advance, and the control unit 21 creates a rule answer sentence by selecting one of them.
  • the control unit 21 converts the created rule answer sentence for a character (step S247), and transmits the converted rule answer sentence to the terminal 3 (step S248).
• The control unit 21 receives an input sentence (response sentence) from the user in reply to the answer sentence transmitted in step S248 (step S249).
  • the input sentence received in step S249 is an answer to the effect of YES or NO.
• The control unit 21 stores the content of the input sentence (the YES or NO response result) in the user DB 243 as a profile, together with the content of the question asked to the user in step S246 (step S250).
• The control unit 21 selects content (content to be provided to the user) from the content DB 248 based on the profile stored in step S250 (step S251). As described above, the content is Web-based content for conducting a questionnaire survey of users, a video related to advertising, or the like.
  • the control unit 21 transmits the selected content to the terminal 3 (step S252), and ends the process.
  • the content selected by the intermediate server 2 is output to the display unit 34 of the terminal 3, and the content is executed (reproduced) by the user.
  • FIGS. 20 and 21 are flowcharts showing another example of the processing procedure using the first model 245 and the second model 246.
  • the same step numbers are assigned to the procedures common to the processing procedures shown in the flowcharts of FIGS. 18 and 19, and detailed description thereof will be omitted.
• When the control unit 21 of the intermediate server 2 determines in step S242 that the input sentence is not an extraction target (S242: NO), it creates, as a rule answer sentence to the input sentence, an utterance sentence of a question asking the user about the profile (an open question that cannot be answered with YES / NO and asks about the target itself) (step S256).
• The rule answer sentence created in step S256 is, for example, a sentence such as "Which do you skip every day, breakfast or dinner?", "Which do you like, dogs or cats?", or "What are your hobbies?".
  • the utterance sentences asking these profiles are stored in the rule DB 241 in advance, and the control unit 21 creates a rule answer sentence by selecting one of them.
• The control unit 21 transmits the rule answer sentence asking the user to the terminal 3 after conversion (S247, S248), and when the input sentence that is the answer is received (S249), inputs the input sentence to the second model 246 (step S257).
  • the control unit 21 acquires a phrase output from the second model 246 (step S258).
  • the control unit 21 stores the acquired words and phrases in association with the user ID in the user DB 243 (step S259), and proceeds to the process in step S251.
  • FIG. 22 is a block diagram showing a configuration example of the intermediate server 2 of the modified example. As shown in FIG. 22, the intermediate server 2 stores a plurality of profile extraction models 247 that are different from each other. Each profile extraction model 247 is learned with different training data depending on the purpose, such as whether it is a conversation that asks for a hobby or a conversation that asks for a physical condition.
  • the intermediate server 2 of the modified example selects the profile extraction model 247 according to the purpose of the input sentence received from the terminal 3 in step S201 or S249, and then performs the subsequent processing (S241, S243, S257, etc.). Run.
• According to the third embodiment, the user profile can be acquired in a more natural conversation. As an application of the agent system 100, it is therefore possible to narrow down the field of the profile to be extracted. For example, when the agent system 100 enabling natural conversation is used for the medical / long-term care field, types that allow determining whether or not an input sentence is a target for extracting a profile related to the health of the user and the user's family, and their "words and phrases", may be set. For example, "body temperature", "blood pressure", "pulse" and the like may be set as words and phrases of the type "health state".
• Similarly, when the agent system 100 enabling natural conversation is used for business support, types that allow determining whether or not an input sentence is a target for extracting the user's business-related profile, such as schedule and technical field, and their "words and phrases", may be set.
• In the third embodiment, the profile extraction model 247 has been described as being stored in the intermediate server 2.
  • the profile extraction model 247 may be stored in the server 1 and used by the intermediate server 2.
  • FIG. 23 is a block diagram showing a configuration example of the server 1 of the fourth embodiment.
  • the configurations common to those of the first embodiment are designated by the same reference numerals and detailed description thereof will be omitted.
  • the auxiliary storage unit 14 of the server 1 of the fourth embodiment stores the topic determination model 51 in addition to the conversation model 50.
  • the topic determination model 51 is a machine learning model in which predetermined training data for determining a topic has been learned.
  • the topic determination model 51 is a model that outputs data for determining a topic at that time each time an input sentence from a user and an answer sentence of an agent are input in order.
  • the predetermined training data learned by the topic determination model 51 is a set of an input sentence or an answer sentence and data for identifying a known topic.
• The data for identifying a topic may be a topic tag, or may be a vector in which each preset topic tag is a dimension and the numerical value of the dimension corresponding to each topic tag represents the likelihood that the conversation concerns that topic.
  • the topic determined by the topic determination model 51 may include identification data of words that do not appear in the input sentence and the answer sentence.
  • FIG. 24 is a schematic diagram of the topic determination model 51.
  • the topic determination model 51 outputs data for determining what the topic is at that time each time an input sentence or a conversational sentence is input.
  • the topic determination model 51 outputs data as a vector indicating a topic.
• In FIG. 24, the vector indicating a topic is represented as a bar graph: for each piece of identification data (for example, a keyword) identifying a topic, the length of the bar represents the likelihood of that topic (the magnitude of the numerical value of the corresponding dimension).
  • FIG. 24 shows how the probability of each topic's identification data changes as the conversation progresses.
  • the conversation model 50 of the fourth embodiment is learned to output a model answer sentence when the topic of the conversation is input together with the input sentence. Topics may be identified by identification data.
  • the conversation model 50 of the fourth embodiment has already learned training data including an input sentence and identification data output when the input sentence is input to the topic determination model 51.
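A toy version of the turn-by-turn topic vector of FIG. 24 could look like the following. The topic tags, keywords, and scoring are invented for illustration and bear no relation to the trained topic determination model 51; they only show the shape of the interface (feed each sentence in order, get back a topic-likelihood vector).

```python
from collections import defaultdict

# Hypothetical topic tags and their keywords
TOPIC_KEYWORDS = {
    "weather": {"weather", "sunny", "rain"},
    "sports": {"jogging", "soccer", "baseball"},
}

class TopicDeterminer:
    """Toy stand-in for the topic determination model 51: each sentence fed in
    updates a vector whose dimensions are the preset topic tags."""
    def __init__(self):
        self.scores = defaultdict(float)

    def feed(self, sentence: str) -> dict:
        """Input one input/answer sentence; return the current topic vector."""
        for word in sentence.lower().split():
            for topic, keywords in TOPIC_KEYWORDS.items():
                if word.strip("?.,!") in keywords:
                    self.scores[topic] += 1.0
        total = sum(self.scores.values()) or 1.0
        return {t: s / total for t, s in self.scores.items()}

def current_topic(vector: dict) -> str:
    """Pick the dimension with the highest likelihood."""
    return max(vector, key=vector.get) if vector else "unknown"
```

Feeding both sides of the conversation turn by turn, as the text recommends, lets the vector drift as the topic changes.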
  • FIGS. 25 and 26 are flowcharts showing an example of the processing procedure of the intermediate server 2 and the server 1 of the fourth embodiment.
• Although the processing procedure in the terminal 3 is omitted in the flowcharts of FIGS. 25 and 26, the processing in the terminal 3 is the same as the procedure shown in the flowcharts of FIGS. 5 and 6 of the first embodiment.
  • the same step number is assigned to the procedure common to the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6, and detailed description thereof will be omitted.
• When the control unit 21 of the intermediate server 2 accepts an input sentence (S201) and determines, as a result of the analysis process (S202), that the input sentence is to be input to the conversation model 50 (S203: YES), the control unit 21 transmits a topic determination request together with the input sentence to the server 1 (step S264).
  • the server 1 receives the input sentence and the topic determination request (step S161), and the control unit 11 inputs the received input sentence to the topic determination model 51 (step S162).
  • the topic determination model 51 outputs data for determining what the topic is in response to the input of the input sentence, and the control unit 11 acquires the data output from the topic determination model 51 (step S163).
  • the control unit 11 determines a topic based on the data obtained in step S163 (step S164), and transmits data for identifying the determined topic to the intermediate server 2 (step S165).
• When the intermediate server 2 receives the data for identifying the topic (step S265), it transmits the input sentence and the topic identification data to the server 1 (step S266).
  • the server 1 receives the input sentence and the topic identification data (step S166), and the control unit 11 inputs the received input sentence and the topic identification data into the conversation model 50 (step S167).
  • the server 1 acquires the model answer sentence output from the conversation model 50 (S103), and transmits the acquired model answer sentence to the intermediate server 2 (S104).
• The control unit 21 notifies the server 1 that the model answer sentence received in step S205 is used (step S267).
  • the control unit 21 may transmit the model answer sentence itself.
  • the control unit 21 advances the process to step S208 and transmits the model answer sentence to the terminal 3.
• After transmitting the model answer sentence (S104), the server 1 determines whether or not a notification that the transmitted model answer sentence is used has been received from the intermediate server 2 (step S168). When it is determined that the notification has been received (S168: YES), the control unit 11 inputs the model answer sentence from the conversation model 50 into the topic determination model 51 (step S169), and ends the processing for one set of an input sentence and an answer sentence. It is preferable to input the two-way conversation into the topic determination model 51 turn by turn in this manner to obtain a determination.
• When the control unit 21 of the intermediate server 2 determines that the model answer sentence is not used (S207: NO), the rule answer sentence created in step S210 is transmitted to the server 1 so as to be input to the topic determination model 51 (step S268), and the control unit 21 advances the process to step S211. If it is determined in step S203 that the input sentence is not input to the conversation model 50 for reasons such as being offensive to public order and morals (S203: NO), the control unit 21 may skip the process of step S268. In addition, if the input sentence from the user has not yet been transmitted, the control unit 21 may transmit both the input sentence and the rule answer sentence at this point so that they are input to the topic determination model 51.
  • the topic determination model 51 By using the topic determination model 51 in this way, it is possible to have a conversation along the flow of the topic, improve the quality of the conversation, and realize a more natural conversation.
• In the fourth embodiment, the topic determination model 51 has been described as a type into which conversations following a time series are continuously input, as shown in FIG. 24. However, it may be a type that determines a topic at each time point independently. In this case, the processing of steps S168-S171 of the server 1 and the processing of steps S267 and S268 of the intermediate server 2 may be omitted.
• In the fifth embodiment, the agent system 100 functions as a concierge for the user.
  • the configuration of the agent system 100 of the fifth embodiment is the same as the configuration of the agent system 100 of the first embodiment except for the details of the processing procedure of the intermediate server 2 described below. Therefore, in the description of the fifth embodiment, the same reference numerals are given to the configurations common to the configurations of the first embodiment, and detailed description thereof will be omitted.
  • FIG. 27 is a flowchart showing an example of a processing procedure during a conversation by the intermediate server 2 of the fifth embodiment.
  • the processing procedure in the terminal 3 and the server 1 is omitted in the flowchart of FIG. 27, the processing in the terminal 3 and the server 1 is the same as the procedure shown in the flowcharts of FIGS. 5 and 6 of the first embodiment. Further, in the flowchart of FIG. 27, the same step number is assigned to the procedure common to the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6, and detailed description thereof will be omitted.
  • control unit 21 determines whether or not to execute the search based on the input statement from the user (step S271).
  • the control unit 21 may determine whether or not the input sentence is a "question form” or a "question”.
  • the control unit 21 may determine whether or not the input sentence is a "statement asking about matters related to weather or season”.
  • the control unit 21 may create and use a model for determining whether or not it is an input sentence to be searched by machine learning, as in the profile extraction model 247 of the third embodiment.
• When it is determined that the search is to be executed (S271: YES), the control unit 21 creates a search term based on the received input sentence and the user's profile (step S272).
• The control unit 21 may use the input sentence as a search term as it is; for example, when the input sentence is "Is it sunny on the weekend?", the input sentence itself may be used as the search term.
• When the input sentence is a "greeting of the time", the control unit 21 may create "weather", "news", and the like as search terms depending on the date and time.
  • the control unit 21 may create a search term such as "sports" of the user's "hobby" from the user profile of the user DB 243 together with the wording of "news”.
  • the control unit 21 executes a search using the created search term (step S273), creates a rule answer sentence using the search result (step S274), and proceeds to the process in step S211.
• The control unit 21 may execute the search using a search engine and a dictionary provided in the intermediate server 2, or may execute the search via the network N using an external search service, a map information providing service, a weather information providing service, a public transportation information providing service, or the like.
• Based on the search result, the control unit 21 creates a rule answer sentence such as "The weather in (the user's "residence" district) is ...".
• When it is determined that the search is not to be executed (S271: NO), the control unit 21 advances the process to step S203.
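The search-term creation of step S272, which combines the input sentence with the user's profile, might be sketched as follows. The trigger words, the profile contents, and the function name are hypothetical illustrations; the actual determination of step S271 could equally be a learned model, as the text notes.

```python
# Hypothetical stand-in for the user DB 243
user_db = {"user-001": {"hobby": ["sports"], "residence": ["Tokyo"]}}

def create_search_terms(input_sentence: str, user_id: str) -> list:
    """Derive search terms from the input sentence, supplemented by the profile."""
    lowered = input_sentence.lower()
    terms = []
    if "weather" in lowered or "sunny" in lowered:
        # Weather question: search the weather for the user's registered district
        terms.append("weather")
        terms.extend(user_db.get(user_id, {}).get("residence", []))
    if lowered.startswith(("good morning", "hello")):
        # "Greeting of the time": search news matching the user's hobby
        terms.append("news")
        terms.extend(user_db.get(user_id, {}).get("hobby", []))
    return terms
```

The resulting terms would then be passed to an internal search engine or an external service (step S273), and the result folded into a rule answer sentence (step S274).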
• By using the input sentence in this way, the agent system 100 can exert a function as a concierge for each user, such as guessing, on behalf of the user, what the user wants to know and executing a search.
  • the processing procedure of FIG. 27 can also be combined with the processing procedure of the second to fourth embodiments.
• The agent system 100 of the sixth embodiment also functions as a concierge (secretary) for the user, as in the fifth embodiment.
• In the sixth embodiment, the agent system 100 can be used not only for one-to-one conversations between each user's terminal 3 and the agent, but also as a system that supports communication with other users.
  • the number of characters is not limited to one, and a plurality of characters may be set.
  • the intermediate server 2 stores the conversation history with the agent (character) in the user DB 243 in association with the user ID for each user.
  • the intermediate server 2 may refer to the past conversation history and reflect it in the conversation.
  • FIG. 28 is a block diagram showing a configuration example of the terminal 3 of the sixth embodiment.
  • the auxiliary storage unit 37 of the terminal 3 stores the communication program P32 for sharing messages, schedules, data, etc. between users in addition to the program P3.
  • the communication program P32 is an application program capable of communicating between a user and another user.
• The communication program P32 is, for example, a message exchange application program, a chat program, a video call program, or the like.
  • the intermediate server 2 can cooperate with the communication program P32 by transmitting the control statement to the communication program P32 of the terminal 3.
  • FIG. 29 is a flowchart showing an example of a processing procedure during a conversation by the intermediate server 2 of the sixth embodiment.
  • the processing procedure in the terminal 3 and the server 1 is omitted in the flowchart of FIG. 29, the processing in the terminal 3 and the server 1 is the same as the procedure shown in the flowcharts of FIGS. 5 and 6 of the first embodiment. Further, in the flowchart of FIG. 29, the same step number is assigned to the procedure common to the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6, and detailed description thereof will be omitted.
• The control unit 21 determines whether or not the input sentence received from the user relates to another user (whether or not the content relates to another user) (step S281).
• For example, the control unit 21 may determine whether or not the input sentence includes the name (nickname) of another user registered in the user DB 243 in association with the user.
• Examples of such input sentences are "What is <other user's name>'s schedule for next week?", "Is <other user's name> fine?", and "<other user's name> is more familiar with this story".
• Alternatively, the control unit 21 can determine that the input sentence relates to another user depending on whether or not it conforms to a predetermined rule defined in the rule DB 241.
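The nickname-based determination of step S281 can be sketched as a simple lookup. The registration table and user IDs below are hypothetical stand-ins for the associations stored in the user DB 243.

```python
# Hypothetical registrations: user ID -> {nickname of another user: that user's ID}
registered_nicknames = {
    "user-001": {"Taro": "user-002", "Hanako": "user-003"},
}

def find_mentioned_user(input_sentence: str, user_id: str):
    """Return the user ID of another user whose registered nickname appears
    in the input sentence (step S282), or None when no other user is mentioned."""
    for nickname, other_id in registered_nicknames.get(user_id, {}).items():
        if nickname in input_sentence:
            return other_id
    return None
```

A rule-based check against the rule DB 241 (e.g. pattern matching on question forms) could serve as a fallback when no nickname is found.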
• When it is determined that the input sentence relates to another user (S281: YES), the control unit 21 identifies the user ID of the other user (step S282).
• The control unit 21 creates a control statement regarding communication with the other user, which activates the communication program P32 addressed to the user corresponding to the user ID specified in step S282 (step S283), and transmits it to the terminal 3 of the user (step S284). As a result, the communication program P32 is activated on the user's terminal 3, and communication with the other user can be executed.
  • the control statement created in step S283 may be for reserving a video call with another user in the communication program P32.
• The control statement created in step S283 may cause the communication program P32 to transmit a message (message sentence) reporting the input sentence from the user to the other user, such as "Mr. A said, '<other user's name>, how are you?'".
• The control statement created in step S283 may cause the communication program P32 to send a chat inquiring, "Mr. A wants to know whether <other user's name>'s schedule for X month, Y day is free."
• The control statement may be one that causes the communication program P32 to send a message asking about the profile of another user, such as "Do you know <other user's name>'s hobby?".
• The control statement may be one that causes the communication program P32 to send a message stating that "<other user's name> seems to be familiar with Z.".
• After the processing of steps S283 and S284, the control unit 21 creates a rule answer sentence based on the profile or conversation history stored in the user DB 243 in association with the specified user ID (step S285).
• In step S285, for example, the control unit 21 creates, as a rule answer sentence, a sentence reporting a recent conversation based on the conversation history between the other user and the agent.
  • the control unit 21 may create a sentence that only reports that the message has been sent as a rule answer sentence.
  • the control unit 21 advances the process to step S211, converts the created rule answer sentence for the character (S211), and transmits the converted rule answer sentence to the terminal 3 (S212).
• Note that, instead of steps S283 and S284, the control unit 21 may transmit the input sentence to the server 1 (S204), and, when it is determined that the model answer sentence from the conversation model 50 is used (S207: YES), correct the model answer sentence based on the profile or conversation history stored in the user DB 243 in association with the user ID of the other user.
• When it is determined that the input sentence does not relate to another user (S281: NO), the control unit 21 advances the process to step S203.
• (Modification 1 of the sixth embodiment) In the first modification, instead of invoking the communication program P32, the one-to-many relationship that the agent system 100 has with other users is used.
  • FIG. 30 is a flowchart showing an example of a processing procedure during a conversation by the intermediate server 2 in the modification 1 of the sixth embodiment. Since the flowchart of FIG. 30 is the same as the flowchart of FIG. 29 except that steps S293 and S294 shown below are different, the common procedure is given the same step number and detailed description thereof will be omitted.
• The control unit 21 of the intermediate server 2 creates an utterance sentence addressed to the user (the other user) corresponding to the user ID specified in step S282 (step S293), and transmits the created utterance sentence toward the other user (specifically, to the terminal 3 used by the other user) (step S294). As a result, a notification from the agent system 100 is output to the terminal 3 of the other user.
  • The utterance sentence of step S293 may simply be a sentence (a message sentence) reporting the input sentence from the user, such as "Mr. A said, 'How are you, Mr. (other user)?'".
  • Alternatively, the utterance sentence of step S293 may be an inquiry such as "Mr. A wants to know whether Mr. (other user)'s schedule on day Y of month X is free."
  • The control unit 21 creates a rule answer sentence for reporting to the user that the message has been sent to the other user (step S295).
  • The control unit 21 may create the rule answer sentence together with a report of a recent conversation, based on, for example, the conversation history of the other user.
  • The agent system 100 can thus mediate communication between users, for example in situations where direct communication between users feels like facing a wall. In this way, the agent system 100 can function as a concierge (secretary) for each user.
  • (Modification 2 of the sixth embodiment) When the agent system 100 uses a one-to-many relationship with other users and a conversation-partner character is set for each user, the process may be executed so that an inquiry is made to the character corresponding to the other user. That is, communication between users is established through communication between the concierges of the respective users.
  • FIG. 31 is a flowchart showing an example of the processing procedure during a conversation by the intermediate server 2 in Modification 2 of the sixth embodiment. The flowchart of FIG. 31 is the same as the flowchart of FIG. 29 except for steps S296-S298 described below; common steps are given the same step numbers and detailed description thereof is omitted.
  • The control unit 21 determines whether or not the input sentence received from the user relates to another character (a character different from the character set as the user's conversation partner) (step S296).
  • When the input sentence relates to the other character, the control unit 21 creates a rule answer sentence based on the conversation history between the other character and the user and on the conversation history between the other character and the other user (step S297).
  • The control unit 21 may create, for example, a rule answer sentence explaining the character's settings based on the setting data of the other character stored in the rule DB 241.
  • The control unit 21 may also create a sentence that reports a recent conversation between the other user and the other character, based on the conversation history between the other character and the other user.
  • The control unit 21 then advances the process to step S211, converts the created rule answer sentence into the character's lines (S211), and transmits the converted rule answer sentence to the terminal 3 (S212).
  • Instead of step S297, the control unit 21 may send the input sentence to the server 1 (S204); when it determines that the model answer sentence from the conversation model 50 is to be used (S207: YES), it may correct the model answer sentence based on the setting data of the other character.
  • If it is determined in step S296 that the input sentence does not relate to another character (S296: NO), the control unit 21 advances the process to step S203.
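The branch of step S296 can be sketched as a simple mention check. This is an illustrative sketch only; the character names and the substring-matching approach are assumptions, not part of the disclosure.

```python
# Hypothetical sketch of the check in step S296: does the input sentence
# mention a character other than the user's own conversation partner?
# The character names here are illustrative assumptions.
CHARACTERS = {"alice", "bob"}

def mentions_other_character(input_sentence, own_character):
    lowered = input_sentence.lower()
    for name in sorted(CHARACTERS - {own_character}):
        if name in lowered:
            return name   # the other character the input relates to
    return None           # S296: NO -> proceed to step S203
```

When a name is returned, the rule answer of step S297 would be built from that character's setting data and conversation histories.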

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Machine Translation (AREA)

Abstract

This information processing method relates to an agent system that uses a conversation model trained through machine learning to conduct a conversation with a user in natural language. The information processing method uses a conversation model trained to output a model response statement when an input statement from a user is input, and a database that stores conversation rules relating to the conversation model. In the information processing method, a computer receives an input statement from a user; acquires a model response statement by inputting the received input statement into the conversation model; determines, on the basis of a comparison between the conversation rules stored in the database and the received input statement or the acquired model response statement, whether to use the model response statement from the conversation model in response to the input statement; creates a rule response statement based on the conversation rule corresponding to the input statement if it is determined that the model response statement will not be used; and outputs either the model response statement or the rule response statement.

Description

Information processing method, information processing device, information processing system, and computer program
The present invention relates to an information processing method, an information processing device, an information processing system, and a computer program for an artificial intelligence agent that converses with a human.
With the development of natural language processing technology, artificial intelligence (AI)-based agents have been realized that can return a natural-language response to a natural-language input from a user. Such agents are used for searching FAQs (Frequently Asked Questions) about a specific service and as chatbots related to such services (Patent Document 1, etc.).
Japanese Unexamined Patent Application Publication No. 2015-056069
An artificial intelligence agent is expected to acquire its characteristics from the attributes of the corpus it is trained on, but it is difficult for an operator to control those characteristics deliberately. In addition, since an artificial intelligence agent used as a chatbot, such as for FAQs, converses with an unspecified number of people and is passive, it is difficult for the agent side to accumulate information useful for conversation with a specific user.
The present invention has been made in view of such circumstances, and an object thereof is to provide an information processing method, an information processing device, an information processing system, and a computer program with which the conversation between a user and an artificial intelligence agent is improved so as to become a natural conversation that takes each other's characteristics into account.
In an information processing method according to one embodiment of the present disclosure, using a conversation model trained to output a model answer sentence when an input sentence from a user is input, and a database storing conversation rules related to the conversation model, a computer: receives an input sentence from a user; acquires a model answer sentence obtained by inputting the received input sentence into the conversation model; determines, based on a comparison of the received input sentence or the acquired model answer sentence with the conversation rules stored in the database, whether or not to use the model answer sentence from the conversation model for the input sentence; creates, when it is determined not to use the model answer sentence, a rule answer sentence based on the conversation rule corresponding to the input sentence; and outputs either the model answer sentence or the rule answer sentence.
The information processing method of the present disclosure relates to an agent system that conducts a natural-language conversation with a user using a conversation model trained by machine learning. In this method, the computer determines, for at least one of the input sentence entered by the user and the model answer sentence output when that input sentence is input into the conversation model, whether or not to use the model answer sentence from the conversation model. When the model answer sentence is not used, the computer creates a rule-based rule answer sentence.
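The decision flow described above can be sketched as follows. This is a minimal illustration under stated assumptions: the model is stubbed as any callable, and the rule patterns and fallback text are invented for the example, not contents of the disclosure.

```python
# Hypothetical sketch of the disclosed decision flow: use the trained
# conversation model's answer unless the input sentence or the model
# answer matches a stored conversation rule. All names are illustrative.

def matches_rules(sentence, rules):
    """True if the sentence matches any stored conversation-rule pattern."""
    lowered = sentence.lower()
    return any(pattern in lowered for pattern in rules)

def answer(input_sentence, model, rules,
           rule_answer="Let's talk about something else."):
    model_answer = model(input_sentence)          # model answer sentence
    # Compare the input sentence or the model answer against the rules.
    if matches_rules(input_sentence, rules) or matches_rules(model_answer, rules):
        return rule_answer                        # rule answer sentence
    return model_answer                           # model answer sentence
```

For example, `answer("hello", lambda s: "hi there", {"politics"})` passes the model answer through, while an input matching a rule pattern yields the rule answer instead.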
According to the present disclosure, not all of the conversation is based on the model trained by deep learning; where necessary, the conversation is controlled so as to become rule-based. This improves the quality of the conversation with the artificial intelligence agent so that it becomes a conversation based on the user's profile and a natural conversation that takes each other's characteristics into account.
A schematic diagram showing a configuration example of the agent system.
A block diagram showing a configuration example of the server.
A block diagram showing a configuration example of the intermediate server.
A block diagram showing a configuration example of the terminal.
A flowchart showing an example of the processing procedure in the agent system.
A flowchart showing an example of the processing procedure in the agent system.
A flowchart showing an example of the analysis processing procedure.
An explanatory diagram showing a conversation example in the agent system.
A flowchart showing an example of the processing procedure in the agent system of the second embodiment.
A flowchart showing an example of the processing procedure in the agent system of the second embodiment.
A flowchart showing an example of the processing procedure in the agent system of the second embodiment.
An explanatory diagram showing a conversation example in the agent system of the second embodiment.
An explanatory diagram showing a conversation example in the agent system of the second embodiment.
A flowchart showing an example of a processing procedure for conducting a conversation initiated by the agent.
An explanatory diagram showing a profile-based conversation example in the agent system of the second embodiment.
A block diagram showing a configuration example of the intermediate server of the third embodiment.
A schematic diagram of the second model.
A flowchart showing an example of the processing procedure during a conversation by the intermediate server of the third embodiment.
A flowchart showing an example of the processing procedure during a conversation by the intermediate server of the third embodiment.
A flowchart showing another example of a processing procedure using the first model and the second model.
A flowchart showing another example of a processing procedure using the first model and the second model.
A block diagram showing a configuration example of the intermediate server of a modification of the third embodiment.
A block diagram showing a configuration example of the server of the fourth embodiment.
A schematic diagram of the topic determination model.
A flowchart showing an example of the processing procedure of the intermediate server and the server of the fourth embodiment.
A flowchart showing an example of the processing procedure of the intermediate server and the server of the fourth embodiment.
A flowchart showing an example of the processing procedure during a conversation by the intermediate server of the fifth embodiment.
A block diagram showing a configuration example of the terminal of the sixth embodiment.
A flowchart showing an example of the processing procedure during a conversation by the intermediate server of the sixth embodiment.
A flowchart showing an example of the processing procedure during a conversation by the intermediate server in Modification 1 of the sixth embodiment.
A flowchart showing an example of the processing procedure during a conversation by the intermediate server in Modification 2 of the sixth embodiment.
The present disclosure will be specifically described below with reference to the drawings showing embodiments thereof. In the following embodiments, an agent system to which the information processing method of the present disclosure is applied will be described.
(First Embodiment)
FIG. 1 is a schematic diagram showing a configuration example of the agent system 100. In this embodiment, an agent system 100 that realizes conversation with a user by using an artificial intelligence agent that simulates conversational responses will be described. The agent system 100 includes a server 1, an intermediate server 2, and a plurality of terminals 3, 3, 3, .... The devices are communicatively connected via a network N such as the Internet.
The server 1 is a server computer capable of various kinds of information processing and of transmitting and receiving information. In the present embodiment, the server 1 has generated a trained machine learning model (the conversation model 50 described later) by learning predetermined training data so that, when an input sentence (utterance) from a user is input, it outputs an answer sentence from the agent to that input sentence. The server 1 inputs the user's input sentence into the model, generates an answer sentence, and outputs it.
The terminal 3 is an information processing terminal used by each user (a user of the agent system 100), such as a smartphone, a personal computer, or a tablet terminal. The terminal 3 displays an image of a character corresponding to the agent (a two-dimensional or three-dimensional animation set as the user's conversation partner) and accepts input sentences from the user. The server 1 generates an answer sentence to the input sentence entered on the terminal 3 and outputs it to the terminal 3, which displays it as a response by the character.
The intermediate server 2 is a server computer located between the server 1 and the terminal 3; it transmits the input sentence entered on the terminal 3 to the server 1, and transmits the answer sentence generated by the server 1 to the terminal 3. In the first embodiment, in order to improve the quality of the conversation, the intermediate server 2 outputs answer sentences by combining rule-based conversation with the machine learning model.
In the following description, the server 1 and the intermediate server 2 are described as separate devices, but their respective functions may be realized in a single device.
FIG. 2 is a block diagram showing a configuration example of the server 1. The server 1 includes a control unit 11, a main storage unit 12, a communication unit 13, and an auxiliary storage unit 14.
The control unit 11 has one or more arithmetic processing units such as CPUs (Central Processing Units), MPUs (Micro-Processing Units), or GPUs (Graphics Processing Units), and performs various kinds of information processing and control processing by reading and executing the program P1 stored in the auxiliary storage unit 14. The main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory, and temporarily stores data necessary for the control unit 11 to execute arithmetic processing. The communication unit 13 is a communication module for performing processing related to communication, and transmits and receives information to and from the outside.
The auxiliary storage unit 14 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores the program P1 and other data necessary for the control unit 11 to execute processing. The auxiliary storage unit 14 also stores the conversation model 50. The conversation model 50 is a machine learning model trained on predetermined training data so as to output an answer sentence to an input sentence when an input sentence from a user is input.
FIG. 3 is a block diagram showing a configuration example of the intermediate server 2. The intermediate server 2 includes a control unit 21, a main storage unit 22, a communication unit 23, and an auxiliary storage unit 24.
The control unit 21 is an arithmetic processing device such as a CPU, and performs various kinds of information processing and control processing by reading and executing the program P2 stored in the auxiliary storage unit 24. The main storage unit 22 is a temporary storage area such as a RAM, and temporarily stores data necessary for the control unit 21 to execute arithmetic processing. The communication unit 23 is a communication module for performing processing related to communication, and transmits and receives information to and from the outside.
The auxiliary storage unit 24 is a non-volatile storage area such as a large-capacity memory or a hard disk, and stores the program P2 and other data necessary for the control unit 21 to execute processing. The auxiliary storage unit 24 also stores the rule DB 241, the prohibited word DB 242, and the user DB 243. The rule DB 241 is a database storing conversation rules related to the conversation model 50. The prohibited word DB 242 is a database storing words prohibited in conversation. The user DB 243 is a database storing user information, including each user's profile (attributes including name, family, fields of interest, hobbies, etc.), in association with a user ID.
FIG. 4 is a block diagram showing a configuration example of the terminal 3. The terminal 3 includes a control unit 31, a main storage unit 32, a communication unit 33, a display unit 34, an input unit 35, a voice output unit 36, an auxiliary storage unit 37, and a voice input unit 38.
The control unit 31 is an arithmetic processing device such as a CPU, and performs various kinds of information processing and control processing by reading and executing the program P3 stored in the auxiliary storage unit 37. The main storage unit 32 is a temporary storage area such as a RAM, and temporarily stores data necessary for the control unit 31 to execute arithmetic processing. The communication unit 33 is a communication module for performing processing related to communication, and transmits and receives information to and from the outside. The display unit 34 is a display screen such as a liquid crystal display, and displays images. The input unit 35 is an operation interface such as mechanical keys, and receives operation input from the user. The voice output unit 36 is a speaker that outputs the voice given by the control unit 31. The voice input unit 38 is a microphone that takes in the voice uttered by the user and executes speech recognition. The control unit 31 receives the speech recognition result from the voice input unit 38 and can acquire it as text (or phonetic symbols). The auxiliary storage unit 37 is a non-volatile storage device such as a hard disk or a large-capacity memory, and stores the program P3 and other data necessary for the control unit 31 to execute processing.
As described above, the server 1 generates the conversation model 50 based on training data (a corpus) consisting of pairs of input sentences and answer sentences, and generates an answer sentence by inputting an input sentence into the conversation model 50. However, it is difficult for a machine learning model to fully reproduce answers suitable for a real-world service. Depending on the learned training data and the input sentence, the conversation model 50 may generate an inappropriate answer sentence (for example, an answer sentence contrary to public order and morals, contrary to the purpose of the conversation, or contrary to the character's settings).
Therefore, the intermediate server 2 determines, according to the input sentence from the user and/or the answer sentence from the conversation model 50, whether a rule-based conversation (answer) should be performed instead of a conversation by the machine learning model. The intermediate server 2 defines, in the rule DB 241 and the prohibited word DB 242, input sentences that should be answered on a rule basis, words inappropriate for answer sentences, and the like, and determines by comparison with these rules whether to answer on a rule basis. When it determines that a rule-based answer should be given, the intermediate server 2 outputs an answer sentence according to the predetermined conversation rules of the rule DB 241 rather than the answer sentence obtained from the conversation model 50.
FIGS. 5 and 6 are flowcharts showing an example of the processing procedure in the agent system 100. When the user starts the program P3 on the terminal 3, the following processing is started and is repeatedly executed as long as input from the user continues.
The control unit 31 of the terminal 3 causes the display unit 34 to display the character set as the user's conversation partner, based on the program P3 (step S301).
The control unit 31 of the terminal 3 acquires an input sentence that is the result of speech recognition of the voice input through the voice input unit 38 (step S302). For example, the voice input unit 38 recognizes the voice picked up by the microphone while a key included in the input unit 35 of the terminal 3 is being pressed. The voice input unit 38 may recognize subsequent speech when a voice containing a specific keyword is input. The voice input unit 38 may also use voiceprint information to recognize only the voice of a specific user (the owner of the terminal 3).
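The capture conditions described above (a held key, or a spoken keyword) can be sketched as a simple gate. This is an illustrative assumption only; the wake word and function name are invented for the example, and the voiceprint check is omitted.

```python
# Hypothetical sketch of the speech-capture gating described above:
# recognize speech only while a key is held down, or once a wake
# keyword has been heard. The wake word is an illustrative assumption.
WAKE_WORD = "hey agent"

def should_recognize(key_pressed, transcript_so_far):
    """Gate speech recognition on a held key or a spoken wake word."""
    if key_pressed:
        return True
    return WAKE_WORD in transcript_so_far.lower()
```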
The control unit 31 transmits the input sentence, including the text or phonetic symbols, to the intermediate server 2 (step S303).
When the control unit 21 of the intermediate server 2 receives the input sentence (step S201), it executes an analysis process that compares the received input sentence with a group of DBs including the rule DB 241 and the prohibited word DB 242 (step S202). Details of the analysis process will be described later.
Based on the result of the analysis process, the control unit 21 determines whether or not to input the received input sentence into the conversation model 50 (step S203).
When it is determined that the input sentence is to be input into the conversation model 50 (S203: YES), the control unit 21 transmits the received input sentence to the server 1 (step S204).
The server 1 receives the input sentence (step S101); the control unit 11 inputs the received input sentence into the conversation model 50 (step S102) and acquires the model answer sentence output from the conversation model 50 (step S103). The server 1 transmits the acquired model answer sentence to the intermediate server 2 as a response to step S101 (step S104).
The control unit 21 of the intermediate server 2 receives the model answer sentence transmitted from the server 1 (step S205), and executes an analysis process that compares the model answer sentence with the group of DBs including the rule DB 241 and the prohibited word DB 242 (step S206). Details of the analysis process will be described later.
Based on the result of the analysis process, the control unit 21 determines whether or not to use the model answer sentence obtained from the conversation model 50 (step S207).
When it is determined that the model answer sentence is to be used (S207: YES), the control unit 21 converts the model answer sentence obtained from the conversation model 50 into the character's lines (step S208), and transmits the converted model answer sentence to the terminal 3 (step S209). The conversion of step S208 may be performed for each character by conversion into polite language, addition of a predetermined sentence ending, or conversion into a predetermined turn of phrase, or may be performed through a per-character line conversion model.
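The simpler, non-model variant of the step S208 conversion (word substitution plus a character-specific sentence ending) can be sketched as follows. The character names and style entries are illustrative assumptions, not settings from the disclosure.

```python
# Hypothetical sketch of the per-character line conversion of step S208:
# swap specific words and append a character-specific sentence ending.
# The character styles below are illustrative assumptions.
CHARACTER_STYLES = {
    "polite_butler": {"suffix": ", sir.", "replace": {"yeah": "yes"}},
    "cheerful_kid":  {"suffix": "!",      "replace": {}},
}

def to_character_line(answer, character):
    style = CHARACTER_STYLES[character]
    for src, dst in style["replace"].items():
        answer = answer.replace(src, dst)     # predetermined phrase swaps
    return answer.rstrip(".!") + style["suffix"]  # sentence-ending style
```

A trained per-character line conversion model, as the text also allows, would replace this table-driven function.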
When it is determined in step S203 that the input sentence is not to be input into the conversation model 50 (S203: NO), or when it is determined in step S207 that the model answer sentence is not to be used (S207: NO), the control unit 21 creates a rule answer sentence corresponding to the input sentence based on the rule DB 241 (step S210). In step S210, the rule answer sentence may be created by replacing specific words in the model answer sentence, or a fixed phrase preset for the input sentence defined in the rule DB 241 may be used as the rule answer sentence.
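The two ways of creating the rule answer sentence in step S210 can be sketched together. The pattern table and substitution table below are illustrative assumptions standing in for the rule DB contents.

```python
# Hypothetical sketch of step S210: return a preset fixed phrase when the
# input matches a rule-DB pattern; otherwise derive the rule answer by
# replacing specific words in the model answer. Contents are illustrative.
FIXED_PHRASES = {"how old are you": "That's a secret."}
WORD_SWAPS = {"stupid": "silly"}

def rule_answer(input_sentence, model_answer):
    lowered = input_sentence.lower()
    for pattern, phrase in FIXED_PHRASES.items():
        if pattern in lowered:
            return phrase                 # preset fixed phrase
    for bad, good in WORD_SWAPS.items():
        model_answer = model_answer.replace(bad, good)  # word substitution
    return model_answer
```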
The control unit 21 converts the created rule answer sentence into the character's lines (step S211) and transmits it to the terminal 3 (step S212).
The terminal 3 receives the model answer sentence or rule answer sentence transmitted from the intermediate server 2 (step S304), and the control unit 31 causes the display unit 34 or the voice output unit 36 to output the received answer sentence as the character's lines (step S305). The control unit 31 then returns the process to step S302 and accepts input sentences from the user until the program P3 ends.
 Next, the analysis process will be described. FIG. 7 is a flowchart showing an example of the analysis processing procedure. The flowchart of FIG. 7 corresponds to the details of the processing of steps S202 and S206 performed by the intermediate server 2 in FIGS. 5 and 6.
 The control unit 21 of the intermediate server 2 determines whether the received input sentence (S201) or the model answer sentence (S205) contains a prohibited word stored in the prohibited word DB 242 (step S601). The prohibited words include words relating to violations of public order and morals and words likely to be judged discriminatory expressions, and are stored in the prohibited word DB 242. The prohibited words may additionally include words likely to be judged political or violent expressions.
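 The check of step S601 amounts to a lookup against the prohibited word DB 242; a minimal sketch (the placeholder word list is an invented stand-in for the DB contents):

```python
# Placeholder entries standing in for the prohibited word DB 242.
PROHIBITED_WORDS = {"slur_example", "violent_example"}

def contains_prohibited_word(sentence):
    """Return True if any prohibited word appears in the sentence (step S601)."""
    lowered = sentence.lower()
    return any(word in lowered for word in PROHIBITED_WORDS)
```

A production system would likely normalize the text (morphological analysis, synonym expansion) before matching rather than using raw substring search.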
 When it is determined that a prohibited word is included (S601: YES), the control unit 21 determines that the answer should be rule-based (step S602) and returns the process to the next step, S203 or S207.
 When it is determined that no prohibited word is included (S601: NO), the control unit 21 collates the received input sentence against the definitions, stored in advance in the rule DB 241, of input sentences that should be answered on a rule basis (step S603). Based on the result of the collation, the control unit 21 determines whether the input sentence matches such a definition (step S604). When it is determined to match (S604: YES), the control unit 21 determines that the answer should be a fixed phrase based on the definition (step S605) and returns the process to the next step, S203 or S207.
 Step S603 may be replaced with a process in which the control unit 21 calculates a value indicating the consistency (similarity) between the received input sentence and the definitions in the rule DB 241. In that case, in step S604, the control unit 21 may determine whether the definition is matched according to whether the calculated value is equal to or greater than a predetermined value, and thereby decide whether to use the model answer sentence. In step S604, when the configured domain is medical care, the control unit 21 determines that input sentences or model answer sentences concerning the health of the user or the user's family match the definitions. When the configured domain is business support, the control unit 21 determines that input sentences or model answer sentences concerning schedules, technologies, or tools match the definitions.
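 One possible realization of this similarity-based variant of steps S603/S604 is sketched below. The word-overlap score and the 0.5 threshold are illustrative assumptions; the description does not fix a scoring method or threshold.

```python
def rule_similarity(input_sentence, rule_sentence):
    """Crude word-overlap (Jaccard) similarity between the input sentence and
    one rule-DB sentence; a stand-in for whatever scoring S603 would use."""
    a = set(input_sentence.lower().split())
    b = set(rule_sentence.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def matches_rule(input_sentence, rule_sentences, threshold=0.5):
    """S604 variant: the definition is matched when the best similarity
    against any rule sentence reaches the threshold."""
    best = max((rule_similarity(input_sentence, r) for r in rule_sentences),
               default=0.0)
    return best >= threshold
```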
 When it is determined that the input sentence does not match any definition of input sentences to be answered on a rule basis (S604: NO), the control unit 21 determines whether the input sentence or the model answer sentence relates to the settings of the character configured as the conversation partner (step S606). In step S606, when a plurality of characters are configured, the control unit 21 may make this determination based on whether the input sentence relates to the setting data of any one of those characters. This is done so that the answer is a rule answer sentence that does not contradict the character's settings (attributes).
 When it is determined in step S606 that the sentence relates to the setting data (S606: YES), the control unit 21 determines that the answer should be a fixed phrase based on the character's settings (step S607) and returns the process to the next step, S203 or S207.
 When it is determined in step S606 that the sentence does not relate to the setting data (S606: NO), the control unit 21 determines that the input sentence should be input to the conversation model 50, or that the model answer sentence should be used (step S608). The control unit 21 then returns the process to the next step, S203 or S207.
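 The decision sequence of steps S601 through S608 described above can be sketched as a single function. The helper predicates here are placeholders for the DB lookups the description names (prohibited word DB 242, rule DB 241, character settings); the function only illustrates the ordering of the checks.

```python
from enum import Enum, auto

class Verdict(Enum):
    RULE_PROHIBITED = auto()  # S602: prohibited word -> rule-based answer
    RULE_FIXED = auto()       # S605: matches a rule-DB definition -> fixed phrase
    RULE_CHARACTER = auto()   # S607: concerns character settings -> setting-based phrase
    USE_MODEL = auto()        # S608: pass to / accept the conversation model

def analyze(sentence, has_prohibited_word, matches_rule_db, concerns_character):
    """Apply the S601/S604/S606 checks in order and return the verdict."""
    if has_prohibited_word(sentence):   # S601
        return Verdict.RULE_PROHIBITED
    if matches_rule_db(sentence):       # S603/S604
        return Verdict.RULE_FIXED
    if concerns_character(sentence):    # S606
        return Verdict.RULE_CHARACTER
    return Verdict.USE_MODEL            # S608
```

The same function serves both call sites (S202 for the input sentence, S206 for the model answer sentence), matching the shared flowchart of FIG. 7.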
 In step S606, the control unit 21 may determine whether the sentence relates to the character settings by calculating, as a numerical value, the consistency between the input sentence or model answer sentence and the character settings in the rule DB 241, and judging whether the calculated value is equal to or greater than a predetermined value. In this case, when the consistency is judged to be high, the control unit 21 determines that the sentence should be input to the conversation model 50, or that the model answer sentence from the conversation model 50 should be used.
 FIG. 8 is an explanatory diagram showing an example of a conversation in the agent system 100. FIG. 8 shows an example of a screen displayed on the display unit 34 of the terminal 3. In the example of FIG. 8, a text conversation, that is, a conversation with the agent acting as a chatbot, is executed in the agent system 100. On the screen of FIG. 8, the text of the input sentence entered by the user is displayed in a speech-bubble image on the left, and the text of the answer sentence (or utterance sentence) from the character configured as the conversation partner is displayed in a speech-bubble image on the right. Here, the input sentence entered by the user is "Oniisan genki?" ("How is your older brother?"). For this input sentence asking about "(your) older brother", the control unit 21 of the intermediate server 2 determines in step S606 of the analysis process that it relates to the settings of the character configured as the conversation partner, and determines that a rule answer sentence should be used.
 The rule DB 241 may contain, as settings for each character, entries such as "race" = "human" corresponding to types such as "race", "gender", "age", "origin", "family", and "school". The rule DB 241 also contains, in association with each type, the "words" used to determine that a sentence relates to that setting. For example, the rule DB 241 contains terms such as "father", "mother", "older brother", "older sister", "younger sister", and "younger brother" for determining that a sentence relates to the type "family", and contains terms such as "hometown", "home country", "birthplace", and "parents' home" in association with the type "origin".
 In the case of FIG. 8, in step S606, the control unit 21 can recognize that the input sentence from the user relates to the character settings, specifically that it contains a term related to "family". Based on the character setting data "no siblings (brothers/sisters)" contained in the rule DB 241 for the case where the sentence is judged to "relate to the character settings", the control unit 21 answers with the fixed phrase "I have no siblings. Do you have a <sibling word>, <user>?". The content of the angle brackets (< >) is filled with the word, or its type, that served as the basis for judging that the input sentence relates to the settings. In this case, since the word "oniisan" is a synonym of "older brother" associated with the type "family", the control unit 21 inserts the word "older brother" into the brackets and converts the rule answer sentence based on the character's settings before answering.
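 The bracket-filling behavior of FIG. 8 can be sketched as follows. The synonym table and the template text are illustrative English stand-ins for the Japanese terms and fixed phrase in the description.

```python
# Maps sibling synonyms to the canonical family word, standing in for the
# "family"-type word list of rule DB 241.
FAMILY_SYNONYMS = {
    "oniisan": "older brother",
    "big brother": "older brother",
    "little sister": "younger sister",
}

# Fixed phrase for the character setting "no siblings"; {sibling} is the
# <sibling word> slot.
TEMPLATE = "I have no siblings. Do you have {sibling}s?"

def sibling_rule_answer(input_sentence):
    """If the input mentions a sibling term, fill it into the fixed phrase;
    otherwise return None so other rules can apply."""
    lowered = input_sentence.lower()
    for synonym, canonical in FAMILY_SYNONYMS.items():
        if synonym in lowered:
            return TEMPLATE.format(sibling=canonical)
    return None
```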
 As in the processing described above, the agent system 100 determines whether to use a model answer sentence for an input sentence from the user, and when it is determined not to use one, a rule answer sentence corresponding to the input sentence is output. For example, when the input sentence contains a prohibited word, an inappropriate answer sentence might be output unintentionally, so it is determined that the model answer sentence is not to be used and the rule answer sentence is used instead. In other cases, it is determined that the model answer sentence is to be used. In other words, rather than uniformly using model answer sentences, the system switches between the pattern of answering with a rule answer sentence and the pattern of answering with a model answer sentence. This makes it possible to avoid unintentionally outputting an inappropriate answer sentence that depends on the training data the model has learned, the input sentence, and so on. For example, the user is prevented from receiving an answer that contradicts the character's settings. It is also possible to avoid receiving an answer from the conversation partner in the agent system 100 that violates public order and morals, or having the conversation drift into extreme ideas.
 As a result, it becomes possible to realize a more natural conversation by appropriately using the model answer sentences from the conversation model 50, without contradicting the character's settings and while preventing the answer sentences from the agent from becoming violent or discriminatory.
 (Second Embodiment)
 In the second embodiment, the agent system 100 stores the user's data as a profile in the user DB 243 in order to come to understand the user based on natural conversation with the agent, that is, in a natural manner. The agent system 100 then carries out conversations that incorporate the profile stored in the user DB 243. In the second embodiment, the rule DB 241 also contains a rule that when the input sentence relates to the user's profile, a rule answer sentence is to be used.
 Since the configuration of the agent system 100 of the second embodiment is the same as that of the agent system 100 of the first embodiment except for the details of the processing in the intermediate server 2, common components are given the same reference numerals and detailed description thereof is omitted.
 FIGS. 9 to 11 are flowcharts showing an example of the processing procedure in the agent system 100 of the second embodiment. Among the procedures shown in the flowcharts of FIGS. 9 to 11, those common to the procedures shown in the flowcharts of FIGS. 5 and 6 are given the same step numbers and detailed description thereof is omitted.
 The control unit 21 of the intermediate server 2 receives an input sentence (S201). In step S201, the control unit 21 of the intermediate server 2 of the second embodiment receives a user ID from the terminal 3 together with the input sentence. The control unit 21 determines whether the received input sentence is a target for extracting the user's profile (step S221).
 In step S221, the control unit 21, for example, performs morphological analysis on the input sentence from the user; when the subject is in the first person, it can be inferred that the user is talking about himself or herself, so the control unit 21 judges that the user's profile can be extracted. The control unit 21 may execute the process of S221 using "words" for judging that a sentence relates to the profile, defined for each profile type such as "gender", "age", "residence", "origin", "habit", "family", "school", and "hobby". The rule DB 241 contains terms such as "father", "mother", "older brother", "older sister", "younger sister", and "younger brother" for judging relevance to "family"; terms such as "breakfast", "morning meal", "lunch", and "snack" in association with "habit"; and terms such as "cooking", "music", "(a kind of) sport", and "game" in association with "hobby". Depending on whether these words appear in the input sentence together with expressions by which the user refers to himself or herself ("boku wa", "watashi wa", "ore wa", "boku no", "watashi no", "ore no", that is, "I" or "my"), the control unit 21 can judge whether the user's profile can be extracted. The control unit 21 may also judge whether the sentence is a profile extraction target by determining, through pattern analysis after language processing or the like, whether the exchange forms a question-and-answer conversation (conversation set) from which a profile can be extracted.
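 A minimal sketch of the word-based S221 check follows. The first-person markers and category word lists are invented English stand-ins for the Japanese terms listed above, and simple token matching replaces the morphological analysis the description assumes.

```python
FIRST_PERSON = {"i", "my", "i'm"}  # stand-ins for boku/watashi/ore forms
PROFILE_WORDS = {
    "habit": {"breakfast", "lunch", "snack"},
    "hobby": {"cooking", "music", "jogging", "game"},
    "family": {"father", "mother", "brother", "sister"},
}

def profile_extraction_target(sentence):
    """Return the matched profile type when the sentence talks about the user
    in the first person and contains a category word (S221), else None."""
    tokens = set(sentence.lower().replace("?", "").split())
    if not tokens & FIRST_PERSON:
        return None  # subject is not the user
    for category, words in PROFILE_WORDS.items():
        if tokens & words:
            return category
    return None
```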
 When it is determined that the sentence is an extraction target (S221: YES), the control unit 21 extracts data related to the user's profile from the received input sentence (step S222). In step S222, the control unit 21 extracts, as profile data, the "word" that served as the basis for judging that a profile could be extracted. The control unit 21 may also treat the entire input sentence as profile data.
 In step S222, for example, when the input sentence contains "(I) breakfast" and "eat", the control unit 21 extracts "breakfast" as data for the type "habit" in the user's profile. Likewise, when the input sentence contains the word "(I) jogging (a sport)", the control unit 21 extracts "jogging" as data for the type "hobby" in the user's profile. When a term relating to periodicity, such as "every morning", is attached to the input sentence, the control unit 21 may store that term as data for the type "habit" in the user's profile. When the input sentence contains the words "my / younger brother", the control unit 21 extracts "younger brother" as data for the type "family" in the user's profile.
 The control unit 21 stores the extracted data in the user DB 243 in association with the user ID (step S223) and advances the process to step S202. In step S223, the control unit 21 stores, for example, "breakfast" = "eat (yes)" in association with the type "habit". In the case of "habit", the control unit 21 may also store the associated date. As the data stored in the user DB 243 grows through step S223, the user's profile is built up.
 When it is determined in step S221 that no profile can be extracted (S221: NO), the control unit 21 advances the process directly to step S202.
 In the second embodiment, a correction process is further executed that corrects the model answer sentence for the user based on the user profile created as described above. When the control unit 21 determines that the model answer sentence is to be used (S207: YES), it corrects the model answer sentence based on the user's profile stored in the user DB 243 (step S224). The control unit 21 then converts the corrected model answer sentence for the character (step S225). In step S224, for example, when the model answer sentence reads "I want to try playing a sport", and "sport" is stored under the type "hobby" in the user's profile together with the specific activity "jogging", the control unit 21 can correct the "a sport" part of the answer sentence to "jogging". As a result, the model answer sentence is corrected to "I want to try jogging". The control unit 21 can also correct it to "I want to try jogging, too", an answer sentence that elicits more empathy.
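 The substitution in step S224 can be sketched as a generic-to-specific replacement driven by the stored profile. The mapping below is an illustrative assumption; how a real implementation aligns the generic phrase in the answer with a profile entry is left open by the description.

```python
def correct_with_profile(answer, profile):
    """S224 sketch: replace generic phrases in the model answer with the more
    specific terms stored in the user's profile (user DB 243)."""
    for generic, specific in profile.items():
        if generic in answer:
            answer = answer.replace(generic, specific)
    return answer
```

For example, with a profile mapping `{"a sport": "jogging"}`, the answer "I want to try playing a sport" is corrected to mention jogging instead.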
 In the second embodiment, correction of rule answer sentences is also executed. When the control unit 21 has created a rule answer sentence (S210), it corrects the rule answer sentence based on the user's profile stored in the user DB 243 (step S226), converts it into the character's dialogue (S211), and transmits it to the terminal 3 (S212). In step S226, as in step S224, the control unit 21 corrects the rule answer sentence using the terms stored in the user DB 243 under profile types such as "hobby" and "habit".
 FIGS. 12 and 13 are explanatory diagrams showing conversation examples in the agent system 100 of the second embodiment. Like the screen example shown in FIG. 8 of the first embodiment, FIGS. 12 and 13 show screen examples displayed on the display unit 34 of the terminal 3, in which a text conversation is carried out.
 In the screen example of FIG. 12, during a conversation between the user and the character configured as the conversation partner, the character poses the utterance sentence "Good morning / Did you eat breakfast?". When the user enters an input sentence from which a profile indicating a "habit" can be extracted, such as "I eat breakfast", the control unit 21 of the intermediate server 2 stores "breakfast" from this input sentence in the user DB 243 in association with the type "habit". In the screen example of FIG. 12, the user also enters the input sentence "I also go jogging every morning". Since this input sentence contains the word "every morning", indicating a "cycle", and the word "jogging", a kind of "sport", the control unit 21 judges that a profile can be extracted and stores the profile data "every morning" and "jogging" in the user DB 243.
 The screen example of FIG. 13 shows a conversation that takes place after the conversation shown in the screen example of FIG. 12. In the screen example of FIG. 13, during a conversation between the user and the character configured as the conversation partner, the user enters the input sentence "I wonder what the weather will be like this weekend". The rule DB 241 may contain a rule that, when an input sentence asking about the weather, such as "Will it be sunny tomorrow?" or "What will the weekend weather be like?", is entered and an outdoor activity (camping, barbecue, golf, jogging, walking, soccer, baseball, trains, etc.) is registered under the type "hobby" in the user's profile, the reply is "Have you been <hobby word> recently?". When such a rule is set, the control unit 21 determines through the analysis process of step S202 that a rule-based answer should be given (S602), creates the rule answer sentence "Have you been <hobby word> recently?" (S210), reads the registered "hobby" word from the user's profile in the user DB 243, and corrects the rule answer sentence to "Have you been jogging recently?" (S226). In the example of FIG. 13, the sentence is further converted into a polite style of speech in accordance with the character's settings (S211) before being output. In this way, a previously extracted user profile can be used in the replies from the agent (character).
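 The weather rule of FIG. 13 combines an input-pattern condition with a profile condition; a sketch follows. The trigger words, the outdoor-hobby list, and the reply template are illustrative stand-ins for the rule DB 241 entries described above.

```python
WEATHER_TRIGGERS = ("weather", "sunny", "rain")
OUTDOOR_HOBBIES = {"camping", "barbecue", "golf", "jogging",
                   "walking", "soccer", "baseball"}

def weather_rule_answer(input_sentence, profile):
    """Fire only when the input asks about the weather AND an outdoor hobby
    is registered in the user's profile, as in the FIG. 13 rule."""
    if not any(t in input_sentence.lower() for t in WEATHER_TRIGGERS):
        return None  # input-pattern condition not met
    outdoor = profile.get("hobby", set()) & OUTDOOR_HOBBIES
    if not outdoor:
        return None  # profile condition not met
    hobby = sorted(outdoor)[0]  # deterministic pick if several are registered
    return f"Have you been {hobby} recently?"
```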
 In this way, it is possible to realize conversations that incorporate the user's profile while storing the profile in the user DB 243 and updating it successively. For example, a rule answer sentence or model answer sentence that presupposes "Tokyo" can be corrected to the city name registered as the user's "residence". In this case, the user gains a sense of attachment, the feeling that the agent understands and remembers him or her, and satisfaction with the conversation with the character (and, by extension, with the agent system 100) improves.
 Accumulating the user's profile in the user DB 243 through conversation makes the conversation between the user and the agent (character) more natural. As described next, when the starting point is an utterance from the character who is the user's conversation partner, it is also possible to create this utterance sentence using words based on the user's profile and output it to the user.
 FIG. 14 is a flowchart showing an example of a processing procedure for carrying out a conversation initiated by the agent. Based on the program P3, the control unit 31 of the terminal 3 starts the following process at a timing it judges to be an utterance timing. Whether it is an utterance timing can be judged by, for example, whether a set time (alarm) has been reached, or whether a predetermined time has elapsed since the end of the previous conversation.
 The control unit 31 of the terminal 3 requests the intermediate server 2 to produce an utterance by the character configured as the user's conversation partner (step S311). In step S311, the control unit 31 transmits the user's user ID together with the request.
 The control unit 21 of the intermediate server 2 receives the utterance request (step S231) and creates an utterance sentence using the words of the profile associated with the user ID in the user DB 243 (step S232). In step S232, the control unit 21, for example, selects a "meal" phrase such as "eat breakfast" from the profile type "habit" based on the time of day. When the habit of "eating breakfast" is stored in the user's profile, the control unit 21 generates the utterance sentence "Did you eat breakfast?" in step S232.
 In step S232, the control unit 21 may also receive the position information of the terminal 3 and create the utterance sentence based on the time and the position information. The control unit 21 may also acquire weather information from an external server via the network N and create, as the utterance sentence, a remark about the weather.
 The control unit 21 converts the created utterance sentence into the character's dialogue (step S233) and transmits the converted utterance sentence to the terminal 3 (step S234). In this case, the conversation model 50 need not be used. The intermediate server 2 executes the processing of steps S231 to S234 each time it receives an utterance request.
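 The profile-driven utterance creation of step S232 can be sketched as follows. The time windows, the habit-to-phrase choices, and the fallback line are assumptions; the description only specifies that the phrase is selected from the profile based on the time of day.

```python
import datetime

def create_utterance(profile, now):
    """S232 sketch: build an agent-initiated opening line from the user's
    stored profile (user DB 243) and the current time of day."""
    if "breakfast" in profile.get("habit", set()) and now < datetime.time(10, 0):
        return "Did you eat breakfast?"
    if "jogging" in profile.get("hobby", set()) and now < datetime.time(9, 0):
        return "Did you go jogging this morning?"
    return "How is your day going?"  # generic fallback when no habit applies
```

The result would then pass through the same per-character dialogue conversion as any other answer sentence (step S233).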
 The control unit 31 of the terminal 3 receives the utterance sentence (step S312) and causes the character to output the received utterance sentence as dialogue on the display unit 34 or the audio output unit 36 (step S313). Thereafter, the agent system 100 executes the processing procedure shown in the flowcharts of FIGS. 9 to 11.
 In this way, since data on the profile acquired through conversation can be stored in the user DB 243, conversations in line with the user's profile can be established.
 FIG. 15 is an explanatory diagram showing an example of a profile-based conversation in the agent system 100 of the second embodiment. Like the screen example shown in FIG. 8 of the first embodiment, FIG. 15 shows a screen example displayed on the display unit 34 of the terminal 3, in which a text conversation is carried out. In the example of FIG. 15, the utterance originates from the agent side, that is, the agent calls out to the user. In this way, the agent side can actively make remarks that reflect not only information obtained transiently in the flow of conversation but also the user's background profile. This makes it possible to realize communication between the user and the agent (character) that is closer to reality (more natural).
 (Third Embodiment)
 In the third embodiment, the processing of the intermediate server 2 advances the conversation so as to actively extract the user's profile. FIG. 16 is a block diagram showing a configuration example of the intermediate server 2 of the third embodiment. In the third embodiment, a content DB 248 is stored in the auxiliary storage unit 24 of the intermediate server 2. The content DB 248 stores questionnaires for users, advertising content for users, and the like. Each piece of content is associated with tags for the corresponding types, such as "meal", "habit", "health", and "hobby", so that it is searchable.
 The auxiliary storage unit 24 of the intermediate server 2 of the third embodiment stores a first model 245 and a second model 246 for profile extraction (the two models together constitute a profile extraction model 247).
 The first model 245 is a machine learning model trained so that, when an input sentence accepted from the user is input, it outputs a vector expressing the content and meaning of that input sentence numerically. The first model 245 may, for example, be configured to use the output of a natural language processing model such as BERT as the vector. The first model 245 is trained so that sentences with similar content and meaning yield similar vectors while sentences with different content and meaning yield different vectors, and sentences from which data on the user's profile can be extracted are vectorized with the first model 245 in advance. The intermediate server 2 can judge whether a profile can be extracted from an input sentence according to whether the similarity between these pre-prepared vectors of profile-extractable sentences and the vector output from the first model 245 when the input sentence is input is equal to or greater than a predetermined value. The configuration of the first model 245 is not limited to the above. For example, the first model 245 may be combined with a classifier and trained to output the likelihood that the input is a sentence from which data on the user's profile can be extracted (its similarity to extractable sentences).
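As an illustration only (not part of the claimed configuration), the similarity judgment described above can be sketched as follows. The toy bag-of-words embedding, the reference sentences, and the threshold value are all assumptions standing in for the trained first model 245 and the actual predetermined value:

```python
import math

def embed(sentence):
    """Toy stand-in for the first model 245: a bag-of-words count vector."""
    vec = {}
    for w in sentence.lower().split():
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Sentences known to contain extractable profile data, vectorized in
# advance as the embodiment describes (contents are illustrative).
REFERENCE_SENTENCES = ["i like dogs", "i live in tokyo", "my hobby is tennis"]
REFERENCE_VECTORS = [embed(s) for s in REFERENCE_SENTENCES]
THRESHOLD = 0.5  # the "predetermined value"; an assumed figure

def is_extraction_target(input_sentence):
    """True if the input is similar enough to some profile-extractable sentence."""
    v = embed(input_sentence)
    return max(cosine(v, r) for r in REFERENCE_VECTORS) >= THRESHOLD
```

In a real deployment the `embed` function would be replaced by the trained encoder, and the reference vectors would be computed once and cached.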
 The second model 246 is a machine learning model trained to extract, when an input sentence is input, words and phrases relating to the profile from that input sentence. The second model 246 is trained to output data indicating a preset word or phrase to be extracted, the type of that word or phrase, and its position of appearance in the input sentence. The intermediate server 2 inputs an input sentence judged to be an extraction target into the second model 246 and, based on the output data, acquires the words and phrases output from the second model 246.
 FIG. 17 is a schematic diagram of the second model 246. The second model 246 includes an input layer that accepts an input sentence, intermediate layers that perform computation, and an output layer that outputs data indicating the word or phrase, its type, and its position of appearance in the input sentence.
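For illustration only, the output format described above (word/phrase, type, and position of appearance) can be sketched as follows. The dictionary lookup is an assumed stand-in for the trained sequence-labeling behavior of the second model 246, and the lexicon entries are illustrative:

```python
# Illustrative lexicon mapping extractable phrases to their profile type;
# a trained second model 246 would predict these labels instead.
PROFILE_LEXICON = {
    "dogs": "preference",
    "tokyo": "residence",
    "tennis": "hobby",
}

def extract_profile_phrases(input_sentence):
    """Return (phrase, type, position) records, ordered by position of appearance."""
    results = []
    lowered = input_sentence.lower()
    for phrase, kind in PROFILE_LEXICON.items():
        pos = lowered.find(phrase)
        if pos != -1:
            results.append({"phrase": phrase, "type": kind, "position": pos})
    return sorted(results, key=lambda r: r["position"])
```

The acquired records could then be stored in the user DB 243 in association with the user ID, as in step S245.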
 Except for the profile extraction model 247 shown in FIGS. 16 and 17 and the details of the processing procedure that uses it, the configuration of the agent system 100 of the third embodiment is the same as that of the agent system 100 of the first embodiment. Among the components of the agent system 100 of the third embodiment, those shared with the first embodiment are given the same reference numerals and detailed description thereof is omitted.
 FIGS. 18 and 19 are flowcharts showing an example of the processing procedure during a conversation by the intermediate server 2 of the third embodiment. Although the flowcharts of FIGS. 18 and 19 omit the processing procedures in the terminal 3 and the server 1, the processing in the terminal 3 and the server 1 is the same as the procedures shown in the flowcharts of FIGS. 5 and 6 of the first embodiment. In the flowcharts of FIGS. 18 and 19, steps shared with the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6 are given the same step numbers and detailed description thereof is omitted.
 The control unit 21 of the intermediate server 2 accepts an input sentence (S201). In step S201, the control unit 21 of the intermediate server 2 of the third embodiment accepts a user ID from the terminal 3 together with the input sentence.
 The control unit 21 inputs the accepted input sentence into the first model 245 of the profile extraction model 247 (step S241). Depending on whether the similarity output from the first model 245 is equal to or greater than the predetermined value, the control unit 21 judges whether the input sentence is a target for extracting the user's profile, i.e., whether data on the user's profile can be extracted from the input sentence (step S242).
 When the input sentence is judged to be an extraction target (S242: YES), the control unit 21 inputs the accepted input sentence into the second model 246 (step S243) and acquires the words and phrases output from the second model 246 (step S244). The control unit 21 stores the acquired words and phrases in the user DB 243 in association with the user ID (step S245) and advances the process to step S202. Through step S245, the profile of the user who entered the input sentence is created and updated.
 When the input sentence is judged in step S242 not to be an extraction target (S242: NO), the control unit 21 creates, as a rule answer sentence, an utterance sentence that both replies to the input sentence and asks about the profile (a question answerable with YES/NO) (step S246). Before the process of step S246, the control unit 21 may judge whether the input sentence contains a prohibited word and, when it is judged to contain one, create a rule answer sentence based on the rule base.
 In the process of step S246, the control unit 21 creates, as the rule answer sentence, an utterance sentence posing a question answerable with YES/NO (a closed question). The control unit 21 creates rule answer sentences such as "Do you eat breakfast every day?", "Did you sleep well last night?", "Do you like dogs?", "Do you like cats?", and "Do you live in Tokyo?". These profile-asking sentences are stored in the rule DB 241 in advance, and the control unit 21 creates a rule answer sentence by selecting one of them.
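A minimal sketch of this selection, for illustration: the question list stands in for the rule DB 241, and the repeat-avoidance behavior is an assumption, since the embodiment only specifies that one of the pre-stored questions is selected:

```python
import random

# Closed (yes/no) questions assumed to be stored in the rule DB 241.
CLOSED_QUESTIONS = [
    "Do you eat breakfast every day?",
    "Did you sleep well last night?",
    "Do you like dogs?",
]

def make_rule_answer(already_asked):
    """Select a closed question not yet asked; repeat avoidance is an assumption."""
    candidates = [q for q in CLOSED_QUESTIONS if q not in already_asked]
    return random.choice(candidates) if candidates else None
```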
 The control unit 21 converts the created question, i.e., the rule answer sentence, for the character (step S247) and transmits the converted rule answer sentence to the terminal 3 (step S248).
 The control unit 21 accepts the user's input sentence (reply sentence) to the answer sentence transmitted in step S212 (step S249). The input sentence accepted in step S249 is an answer to the effect of YES or NO. The control unit 21 stores the content of the input sentence (the YES or NO reply result) in the user DB 243 as a profile, together with the content of the answer sentence with which the user was asked in step S246 (step S250).
 Based on the profile stored in step S250, the control unit 21 selects content (content to be provided to the user) from the content DB 248 (step S251). As described above, the content is web-based content for conducting a questionnaire survey of users, a video related to advertising, or the like. The control unit 21 transmits the selected content to the terminal 3 (step S252) and ends the process. The content selected by the intermediate server 2 is output to the display unit 34 of the terminal 3, and the content is presented (played back) to the user.
 FIGS. 20 and 21 are flowcharts showing another example of a processing procedure using the first model 245 and the second model 246. In the flowcharts of FIGS. 20 and 21, steps shared with the processing procedure shown in the flowcharts of FIGS. 18 and 19 are given the same step numbers and detailed description thereof is omitted.
 In this other example, when the input sentence is judged in step S242 not to be an extraction target (S242: NO), the control unit 21 of the intermediate server 2 creates, as the rule answer sentence to the input sentence, an utterance sentence asking the user about the profile (a question that cannot be answered with YES/NO but instead asks about the subject itself, i.e., an open question) (step S256).
 The question in step S256 is, for example, a sentence such as "Which do you never skip, breakfast or dinner?", "Which do you like better, dogs or cats?", or "What are your hobbies?". These profile-asking utterance sentences are stored in the rule DB 241 in advance, and the control unit 21 creates a rule answer sentence by selecting one of them.
 The control unit 21 converts the rule answer sentence questioning the user and transmits it to the terminal 3 (S247, S248), and upon accepting the input sentence that answers it (S249), inputs that input sentence into the second model 246 (step S257). The control unit 21 acquires the words and phrases output from the second model 246 (step S258). The control unit 21 stores the acquired words and phrases in the user DB 243 in association with the user ID (step S259) and advances the process to step S251.
 (Modification of the third embodiment)
 In the third embodiment, a single profile extraction model 247 comprising the first model 245 and the second model 246 was described as being used to extract profiles for conversations of all genres. However, extraction accuracy can be expected to improve by training with different training data according to the purpose of the conversation and the type of profile to be extracted. FIG. 22 is a block diagram showing a configuration example of the intermediate server 2 of this modification. As shown in FIG. 22, the intermediate server 2 stores a plurality of mutually different profile extraction models 247. Each profile extraction model 247 is trained with different training data according to its purpose, such as whether the conversation is meant to elicit the user's hobbies or the user's physical condition.
 The intermediate server 2 of this modification selects a profile extraction model 247 according to the purpose of the input sentence accepted from the terminal 3 in step S201 or S249, and then executes the subsequent processing (S241, S243, S257, etc.).
 As shown in the third embodiment, the agent system 100 can acquire the user's profile through more natural conversation. As an application of the agent system 100, it is therefore also possible to narrow the field of the profiles to be extracted. For example, when the agent system 100, which enables natural conversation, is used in the medical and nursing care field, types and associated "words and phrases" may be set that make it possible to judge whether an input is a target for extracting profiles concerning the health of the user and the user's family. For example, "body temperature", "blood pressure", "pulse", and the like may be set as words and phrases of the type "health condition". In another example, when the agent system 100 is used to enable natural conversation for business support, types and "words and phrases" may be set that make it possible to judge whether an input is a target for extracting the user's business-related profile, such as schedules and technical fields.
 In the third embodiment and its modification, the profile extraction model 247 was described as being stored in the intermediate server 2. However, the profile extraction model 247 may be stored in the server 1 and used by the intermediate server 2.
 (Fourth Embodiment)
 In the fourth embodiment, the topic of the conversation is reflected in the conversation model 50. FIG. 23 is a block diagram showing a configuration example of the server 1 of the fourth embodiment. Among the components of the agent system 100 of the fourth embodiment, those shared with the first embodiment are given the same reference numerals and detailed description thereof is omitted.
 The auxiliary storage unit 14 of the server 1 of the fourth embodiment stores a topic determination model 51 in addition to the conversation model 50. The topic determination model 51 is a machine learning model that has been trained on predetermined training data for determining topics. The topic determination model 51 is a model that outputs data for determining the topic at each point in time, every time an input sentence from the user and an answer sentence from the agent are input in turn. The predetermined training data learned by the topic determination model 51 consists of pairs of an input sentence or answer sentence and data identifying a known topic. The data identifying a topic may be a topic tag, or it may be a vector that has preset topic tags as its dimensions and expresses which topic the conversation concerns by the magnitude of the numerical value of the dimension corresponding to each topic tag. The topic determined by the topic determination model 51 may include identification data of words that do not appear in the input sentences and answer sentences themselves.
 FIG. 24 is a schematic diagram of the topic determination model 51. Each time an input sentence or conversational sentence is input, the topic determination model 51 outputs data for determining what the topic is at that point. In the schematic diagram of FIG. 24, the topic determination model 51 outputs the data as a vector indicating the topic. In FIG. 24, the vector indicating the topic is depicted by the length of a bar graph for each piece of identification data (for example, a keyword) identifying a topic, representing the likelihood of that topic (the magnitude of the numerical value of that dimension). FIG. 24 shows how the likelihood for each topic's identification data changes as the conversation progresses.
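The topic vector of FIG. 24 can be sketched, for illustration, with topic tags as dimensions. Keyword counting stands in for the trained topic determination model 51, and the tags and keywords are assumptions:

```python
# Illustrative topic tags and keywords; a trained topic determination
# model 51 would learn these associations rather than use fixed sets.
TOPIC_KEYWORDS = {
    "food":   {"breakfast", "dinner", "eat"},
    "pets":   {"dog", "cat"},
    "travel": {"tokyo", "trip"},
}

class TopicTracker:
    """Accumulates topic evidence as input and answer sentences arrive in turn."""

    def __init__(self):
        self.counts = {tag: 0 for tag in TOPIC_KEYWORDS}

    def feed(self, sentence):
        """Feed one sentence; return the current topic vector (normalized scores)."""
        for w in sentence.lower().split():
            for tag, kws in TOPIC_KEYWORDS.items():
                if w in kws:
                    self.counts[tag] += 1
        total = sum(self.counts.values())
        return {t: (c / total if total else 0.0) for t, c in self.counts.items()}
```

Feeding both the user's input sentences and the agent's answer sentences in sequence, as the embodiment describes, lets the vector shift as the conversation progresses.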
 The conversation model 50 of the fourth embodiment is trained to output a model answer sentence when the topic of the conversation is input together with the input sentence. The topic is preferably identified by identification data. The conversation model 50 of the fourth embodiment has been trained on training data that includes input sentences and the identification data output when each input sentence is input into the topic determination model 51.
 FIGS. 25 and 26 are flowcharts showing an example of the processing procedure of the intermediate server 2 and the server 1 of the fourth embodiment. Although the flowcharts of FIGS. 25 and 26 omit the processing procedure in the terminal 3, the processing in the terminal 3 is the same as the procedures shown in the flowcharts of FIGS. 5 and 6 of the first embodiment. In the flowcharts of FIGS. 25 and 26, steps shared with the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6 are given the same step numbers and detailed description thereof is omitted.
 The control unit 21 of the intermediate server 2 accepts an input sentence (S201), and when it is determined as a result of the analysis process (S202) that the input sentence is to be input into the conversation model 50 (S203: YES), transmits the input sentence together with a topic determination request to the server 1 (step S264).
 The server 1 receives the input sentence and the topic determination request (step S161), and the control unit 11 inputs the received input sentence into the topic determination model 51 (step S162). The topic determination model 51 outputs data for determining what the topic is in response to the input of the input sentence, and the control unit 11 acquires the data output from the topic determination model 51 (step S163). Based on the data obtained in step S163, the control unit 11 determines the topic (step S164) and transmits data identifying the determined topic to the intermediate server 2 (step S165).
 Upon receiving the data identifying the topic (step S265), the intermediate server 2 transmits the input sentence and the topic identification data to the server 1 (step S266).
 The server 1 receives the input sentence and the topic identification data (step S166), and the control unit 11 inputs the received input sentence and topic identification data into the conversation model 50 (step S167). The server 1 acquires the model answer sentence output from the conversation model 50 (S103) and transmits the acquired model answer sentence to the intermediate server 2 (S104).
 After receiving the model answer sentence from the server 1 (S205), when the intermediate server 2 determines that the model answer sentence is to be used (S207: YES), the control unit 21 notifies the server 1 that the model answer sentence received in step S205 will be used (step S267). The control unit 21 may transmit the model answer sentence itself. The control unit 21 advances the process to step S208 and transmits the model answer sentence to the terminal 3.
 After transmitting the model answer sentence (S104), the server 1 judges whether a notification that the transmitted model answer sentence will be used has been received from the intermediate server 2 (step S168). When it is judged that the notification has been received (S168: YES), the control unit 11 inputs the model answer sentence from the conversation model 50 into the topic determination model 51 (step S169) and ends the processing for this one pair of input sentence and answer sentence. It is preferable to input both sides of the conversation into the topic determination model 51 in sequence and obtain determinations.
 To input both sides of the conversation into the topic determination model 51, when the control unit 21 of the intermediate server 2 determines that the model answer sentence will not be used (S207: NO), it transmits the rule answer sentence created in step S210 to the server 1 (step S268). The control unit 21 then advances the process to step S211. Note that when it is determined in step S203 that the input sentence is not to be input into the conversation model 50, for reasons such as being offensive to public order and morals (S203: NO), the control unit 21 may skip the process of step S268. In addition, when the input sentence from the user has not yet been transmitted, the control unit 21 may at this point transmit the input sentence and the rule answer sentence so that they are input into the topic determination model 51.
 By using the topic determination model 51 in this way, conversation that follows the flow of the topic becomes possible, the quality of the conversation improves, and a more natural conversation can be realized.
 In the fourth embodiment, the topic determination model 51 was described as a type into which the conversation is input continuously in time series, as shown in FIG. 24. However, it may be a type that determines the topic at each point in time independently. In that case, the processing of steps S168-S171 of the server 1 and the processing of steps S267 and S268 of the intermediate server 2 need not be executed.
 (Fifth Embodiment)
 In the fifth embodiment, the agent system 100 is made to function as the user's concierge. The configuration of the agent system 100 of the fifth embodiment is the same as that of the agent system 100 of the first embodiment except for the details of the processing procedure of the intermediate server 2 described below. Accordingly, in the description of the fifth embodiment, components shared with the first embodiment are given the same reference numerals and detailed description thereof is omitted.
 FIG. 27 is a flowchart showing an example of the processing procedure during a conversation by the intermediate server 2 of the fifth embodiment. Although the flowchart of FIG. 27 omits the processing procedures in the terminal 3 and the server 1, the processing in the terminal 3 and the server 1 is the same as the procedures shown in the flowcharts of FIGS. 5 and 6 of the first embodiment. In the flowchart of FIG. 27, steps shared with the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6 are given the same step numbers and detailed description thereof is omitted.
 After executing the analysis process of step S202, the control unit 21 judges whether a search based on the user's input sentence should be executed (step S271). In step S271, the control unit 21 may judge this by whether the input sentence is in question form or is an inquiry. The control unit 21 may also judge by whether the input sentence is a sentence asking about matters related to the weather or the season. Alternatively, as with the profile extraction model 247 of the third embodiment, the control unit 21 may create in advance, by machine learning, a model that determines whether an input sentence should trigger a search, and use that model.
 When it is judged that a search should be executed (S271: YES), the control unit 21 creates search terms based on the accepted input sentence and the user's profile (step S272). In step S272, when the input sentence is in question form, the control unit 21 may use the input sentence itself as the search term. For example, when the input sentence is "Will it be sunny this weekend?", the control unit 21 may create, together with the phrase "Will it be sunny this weekend?", a word indicating the user's "residence" area from the user's profile in the user DB 243 as a search term. When the input sentence is a seasonal greeting, the control unit 21 may create "weather", "news", and the like as search terms depending on the date. The control unit 21 may also create, together with the word "news", the user's "hobby" such as "sports" from the user's profile in the user DB 243 as a search term.
 The control unit 21 executes a search with the created search terms (step S273), creates a rule answer sentence using the search results (step S274), and advances the process to step S211. In step S273, the control unit 21 may execute the search using a search engine and dictionary provided in the intermediate server 2, or may execute the search via the network N using an external search service, a map information service, a weather information service, a public transportation information service, or the like. In step S274, the control unit 21 creates a rule answer sentence such as "The weather in (the user's 'residence' district) is ...".
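Step S272 can be sketched, for illustration, as follows. The profile fields, the stored values, and the question-form heuristic (checking for a trailing "?") are assumptions made for this example:

```python
# Illustrative user profiles, standing in for the user DB 243.
USER_PROFILES = {
    "user01": {"residence": "Tokyo", "hobby": "soccer"},
}

def build_search_query(user_id, input_sentence):
    """Create search terms from the input sentence plus the user's profile."""
    profile = USER_PROFILES.get(user_id, {})
    if input_sentence.endswith("?"):
        # Question-form input: use the sentence itself as a search term,
        # supplementing weather questions with the user's residence area.
        terms = [input_sentence.rstrip("?")]
        if "weather" in input_sentence.lower() and "residence" in profile:
            terms.append(profile["residence"])
        return " ".join(terms)
    # Otherwise (e.g., a seasonal greeting): fall back to profile-driven terms.
    return " ".join(["news", profile.get("hobby", "")]).strip()
```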
 When it is judged that a search should not be executed (S271: NO), the control unit 21 advances the process to step S203.
 In this way, the agent system 100 can act as a concierge for the individual user, for example by inferring from the input sentence what the user wants to know and executing a search on the user's behalf. The processing procedure of FIG. 27 can also be combined with the processing procedures of the second to fourth embodiments.
 (Sixth Embodiment)
 Like the fifth embodiment, the agent system 100 of the sixth embodiment functions as the user's concierge. The agent system 100 of the sixth embodiment further serves, as a concierge (secretary), not only for one-to-one conversation between each user's terminal 3 and the agent but also as a system that supports communication with other users. The number of characters is not limited to one; a plurality of characters may be set.
 In the sixth embodiment, the intermediate server 2 stores, in the user DB 243, the conversation history with the agent (character) for each user in association with the user ID. The intermediate server 2 may refer to past conversation histories and reflect them in the conversation.
 FIG. 28 is a block diagram showing a configuration example of the terminal 3 of the sixth embodiment. In the sixth embodiment, the auxiliary storage unit 37 of the terminal 3 stores, in addition to the program P3, a communication program P32 for sharing messages, schedules, data, and the like among users. The communication program P32 is an application program that enables the user to communicate with other users, such as a message exchange application program, a chat program, or a video call program. The intermediate server 2 can cooperate with the communication program P32 by transmitting control statements to the communication program P32 of the terminal 3.
 FIG. 29 is a flowchart showing an example of the processing procedure during a conversation by the intermediate server 2 of the sixth embodiment. Although the flowchart of FIG. 29 omits the processing procedures in the terminal 3 and the server 1, the processing in the terminal 3 and the server 1 is the same as the procedures shown in the flowcharts of FIGS. 5 and 6 of the first embodiment. In the flowchart of FIG. 29, steps shared with the processing procedure of the intermediate server 2 shown in the flowcharts of FIGS. 5 and 6 are given the same step numbers and detailed description thereof is omitted.
 After executing the analysis process of step S202, the control unit 21 judges whether the accepted input sentence from the user concerns another user (whether its content relates to another user) (step S281). In step S281, the control unit 21 may judge this, for example, by whether the input sentence contains the name (nickname) of another user registered in the user DB 243 and associated with the user. The control unit 21 can judge that the input sentence concerns another user according to whether it matches predetermined rules defined in the rule DB 241, such as "What is <other user's name>'s schedule next week?", "Is <other user's name> doing well?", or "<Other user's name> knows more about this topic."
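The rule matching of step S281 can be sketched as follows, for illustration only. The registered contact names and the patterns are illustrative stand-ins for the names in the user DB 243 and the rules defined in the rule DB 241:

```python
import re

# Contacts registered per user (stand-in for the user DB 243) and
# rule patterns (stand-in for the rule DB 241); both are illustrative.
KNOWN_CONTACTS = {"user01": ["Hanako", "Taro"]}
PATTERNS = [
    r"what is {name}'s schedule",
    r"is {name} doing well",
]

def mentioned_contact(user_id, input_sentence):
    """Return the contact's name if the input concerns another user, else None."""
    lowered = input_sentence.lower()
    for name in KNOWN_CONTACTS.get(user_id, []):
        for pat in PATTERNS:
            if re.search(pat.format(name=name.lower()), lowered):
                return name
    return None
```

When a name is returned, the corresponding user ID would be identified (step S282) and a control statement for the communication program P32 created (step S283).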
 When it is determined that the input sentence relates to another user (S281: YES), the control unit 21 identifies the user ID of that other user (step S282).
 The control unit 21 creates a control statement concerning communication with the other user, which launches the communication program P32 addressed to the user corresponding to the user ID identified in step S282 (step S283), and transmits it to the terminal 3 of the user who entered the input sentence (step S284). As a result, the communication program P32 is launched on that user's terminal 3, and communication with the other user becomes possible.
 The control statement created in step S283 may be one for reserving a video call with the other user in the communication program P32. It may cause the communication program P32 to send the other user a message (relayed message) reporting the user's input sentence, such as "A-san said, 'Is <other user's name>-san doing well?'" It may cause the communication program P32 to send a chat inquiry such as "A-san would like to know whether <other user's name>-san's schedule is free on day Y of month X." The control statement may also cause the communication program P32 to send a message asking about the other user's profile, such as "Do you know <other user's name>-san's hobbies?", or a message such as "<other user's name> seems to know a lot about Z."
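As one possible concrete form, the control statement of step S283 could be serialized as a small JSON payload that tells the communication program P32 which action to perform. The schema, action names, and IDs below are illustrative assumptions; the specification only requires that the intermediate server 2 send some control statement:

```python
import json

def make_control_statement(action: str, sender_id: str,
                           target_user_id: str, text: str) -> str:
    """Build a control statement for the communication program P32.
    The JSON schema and action vocabulary are hypothetical."""
    allowed = {"relay_message", "reserve_video_call", "ask_profile"}
    if action not in allowed:
        raise ValueError(f"unsupported action: {action}")
    return json.dumps(
        {"program": "P32", "action": action, "from": sender_id,
         "to": target_user_id, "text": text},
        ensure_ascii=False)

stmt = make_control_statement(
    "relay_message", "U001", "U002",
    "A-san said: 'Hanako-san, how are you?'")
print(stmt)
```

Keeping the action vocabulary closed (relay, video-call reservation, profile question) matches the enumerated examples in the text and lets the terminal reject malformed requests early.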
 After the processing of steps S283 and S284, the control unit 21 creates a rule answer sentence based on the profile or the conversation history stored in the user DB 243 in association with the identified user ID (step S285).
 In step S285, the control unit 21, for example, reports a recent conversation based on the conversation history between the other user and the agent, and creates, as the rule answer sentence, a sentence reporting that the message was relayed. The control unit 21 may instead create, as the rule answer sentence, a sentence that only reports that the message was relayed.
 The control unit 21 then advances the process to step S211, converts the created rule answer sentence for the character (S211), and transmits the converted rule answer sentence to the terminal 3 (S212).
 Instead of performing steps S283 and S284, the control unit 21 may transmit the input sentence to the server 1 (S204) and, when it is determined that the model answer sentence from the conversation model 50 is to be used (S207: YES), correct the model answer sentence based on the profile or the conversation history stored in the user DB 243 in association with the user ID of the other user.
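The correction mentioned here could be as simple as grounding the model answer in stored profile facts. The append-only strategy and field names below are illustrative assumptions; the specification does not fix a correction method:

```python
def correct_model_answer(model_answer: str, profile: dict[str, str]) -> str:
    """Correct a model answer sentence using the other user's stored
    profile by appending profile-grounded remarks. A real system could
    instead rewrite spans that contradict the profile."""
    if not profile:
        return model_answer
    facts = "; ".join(f"their {key} is {value}"
                      for key, value in profile.items())
    return f"{model_answer} By the way, I remember that {facts}."

# Hypothetical profile for the other user, as stored in the user DB 243.
profile = {"hobby": "tennis", "favorite food": "soba"}
print(correct_model_answer("I think Hanako is doing fine.", profile))
```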
 When it is determined that the input sentence does not relate to another user (S281: NO), the control unit 21 advances the process to step S203.
 (Modification 1 of the sixth embodiment)
 In Modification 1, instead of launching the communication program P32, the one-to-many relationship between the agent system 100 and its other users is used.
 FIG. 30 is a flowchart showing an example of the processing procedure during a conversation performed by the intermediate server 2 in Modification 1 of the sixth embodiment. The flowchart of FIG. 30 is the same as that of FIG. 29 except for steps S293 and S294 described below; common steps are given the same step numbers, and their detailed description is omitted.
 In Modification 1, the control unit 21 of the intermediate server 2 creates an utterance sentence addressed to the user (the other user) corresponding to the user ID identified in step S282 (step S293), and transmits the created utterance sentence toward the other user, specifically to the terminal 3 used by the other user (step S294). As a result, the notification from the agent system 100 is output on the other user's terminal 3. The utterance sentence of step S293 may simply be a sentence (relayed message) reporting the user's input, such as "A-san said, 'Is (other user)-san doing well?'", or an inquiry such as "A-san would like to know whether (other user)-san's schedule is free on day Y of month X."
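Steps S293 and S294 amount to composing a template utterance and queuing it for the other user's terminal. The templates and the in-memory outbox below are a minimal stand-in for the actual delivery path, which the specification does not detail:

```python
# Stand-in for per-terminal delivery queues (hypothetical).
outboxes: dict[str, list[str]] = {}

def make_relay_utterance(sender_name: str, quoted_input: str) -> str:
    """Compose the message-style utterance of step S293 (template is hypothetical)."""
    return f'{sender_name}-san said: "{quoted_input}"'

def send_to_terminal(user_id: str, utterance: str) -> None:
    """Step S294: send the utterance toward the other user's terminal 3."""
    outboxes.setdefault(user_id, []).append(utterance)

send_to_terminal("U002", make_relay_utterance("A", "Hanako-san, how are you?"))
print(outboxes["U002"][0])  # A-san said: "Hanako-san, how are you?"
```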
 The control unit 21 creates a rule answer sentence reporting to the user that a message was sent to the other user (step S295). In step S295, the control unit 21 may, for example, create the rule answer sentence together with a report of a recent conversation based on the other user's conversation history.
 In this way, for example in situations where users feel a barrier to communicating with each other directly, the agent system 100 can take over the communication between users. The agent system 100 can thus function as a concierge (secretary) for each user.
 (Modification 2 of the sixth embodiment)
 In Modification 2, the agent system 100 uses the one-to-many relationship with other users and, when a conversation-partner character is set for each user, may execute processing so that it can also respond to questions directed at the character corresponding to another user. That is, communication between users is established through communication between the concierges of the individual users.
 FIG. 31 is a flowchart showing an example of the processing procedure during a conversation performed by the intermediate server 2 in Modification 2 of the sixth embodiment. The flowchart of FIG. 31 is the same as that of FIG. 29 except for steps S296 to S298 described below; common steps are given the same step numbers, and their detailed description is omitted.
 In Modification 2, after executing the analysis process of step S202, the control unit 21 determines whether the received input sentence from the user relates to a character other than the character set as the user's conversation partner, that is, whether its content concerns another character (step S296).
 When it is determined that the input sentence relates to another character (S296: YES), the control unit 21 creates a rule answer sentence based on the conversation history between the other character and the user and on the conversation history between the other character and another user (step S297).
 In step S297, the control unit 21 may, for example, create a rule answer sentence explaining the settings of the target character based on the setting data of that character stored in the rule DB 241. The control unit 21 may also create a sentence reporting a recent conversation between the other user and the other character based on the conversation history between the target character and the other user.
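A rule answer of the kind step S297 describes can be assembled from the character's setting data and the tail of its conversation history. The field names, templates, and fallback sentence below are hypothetical:

```python
def make_rule_answer_about_character(
        setting: dict[str, str],
        recent_turns: list[tuple[str, str]]) -> str:
    """Build a rule answer about another character from its setting data
    (rule DB 241) and its recent conversation history (step S297)."""
    parts = []
    name = setting.get("name", "That character")
    if "description" in setting:
        parts.append(f"{name} is {setting['description']}.")
    if recent_turns:  # report only the most recent exchange
        speaker, text = recent_turns[-1]
        parts.append(f'Recently, {speaker} said: "{text}".')
    return " ".join(parts) if parts else "I don't know much about that character yet."

print(make_rule_answer_about_character(
    {"name": "Momo", "description": "a cheerful dog character"},
    [("B-san", "Good morning, Momo!")]))
```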
 The control unit 21 then advances the process to step S211, converts the created rule answer sentence for the character (S211), and transmits the converted rule answer sentence to the terminal 3 (S212).
 Instead of performing step S297, the control unit 21 may transmit the input sentence to the server 1 (S204) and, when it is determined that the model answer sentence from the conversation model 50 is to be used (S207: YES), correct the model answer sentence based on the setting data of the other character.
 When it is determined in step S296 that the input sentence does not relate to another character (S296: NO), the process proceeds to step S203.
 The embodiments disclosed above are illustrative in all respects and are not restrictive. The scope of the present invention is indicated by the claims, and includes all modifications within the meaning and scope equivalent to the claims.
 1 Server
 2 Intermediate server
 21 Control unit
 P2 Program
 24 Auxiliary storage unit
 241 Rule DB
 242 Prohibited word DB
 243 User DB
 247 Profile extraction model
 3 Terminal
 34 Display unit
 35 Input unit
 P3 Program
 P32 Communication program
 50 Conversation model
 51 Topic determination model

Claims (19)

  1.  An information processing method using a conversation model trained to output a model answer sentence when an input sentence from a user is input, and a database storing conversation rules relating to the conversation model, wherein a computer:
     receives an input sentence from a user;
     acquires a model answer sentence obtained by inputting the received input sentence into the conversation model;
     determines, based on a comparison of the received input sentence or the acquired model answer sentence with the conversation rules stored in the database, whether to use the model answer sentence of the conversation model for the input sentence;
     creates, when it is determined not to use the model answer sentence, a rule answer sentence corresponding to the input sentence based on the conversation rules; and
     outputs the model answer sentence or the rule answer sentence.
  2.  The information processing method according to claim 1, wherein the computer:
     determines whether the received input sentence is a target for extraction of the user's profile;
     extracts, when the input sentence is determined to be an extraction target, data relating to the user's profile from the input sentence; and
     creates the user's profile in the database from the extracted data.
  3.  The information processing method according to claim 2, wherein the computer:
     corrects the model answer sentence or the rule answer sentence for the user based on the user's profile; and
     outputs the corrected model answer sentence or rule answer sentence.
  4.  The information processing method according to claim 2 or 3, wherein the computer:
     creates an utterance sentence asking about the user's profile;
     receives an input sentence that is the user's answer to the utterance sentence; and
     updates the user's profile with the received input sentence.
  5.  The information processing method according to claim 2 or 3, using a profile extraction model trained to output, when an input sentence is input, data relating to extraction of the profile from the input sentence,
     wherein the computer determines whether the input sentence is a target for extraction of the user's profile based on data obtained by inputting the input sentence received from the user into the profile extraction model.
  6.  The information processing method according to any one of claims 2 to 5, wherein the computer uses a word based on the user's profile in an utterance sentence originating from a character set as the user's conversation partner, and outputs the utterance sentence containing the word to the user.
  7.  The information processing method according to any one of claims 2 to 6, wherein the computer:
     selects content based on the user's profile; and
     provides the selected content to the user.
  8.  The information processing method according to any one of claims 1 to 7, wherein the conversation model is trained to output a model answer sentence upon input of a conversation topic together with an input sentence, and the computer:
     sequentially determines the topic of the conversation based on the input sentence from the user and an answer sentence or utterance sentence from a character set as the user's conversation partner; and
     inputs the input sentence and the determined topic into the conversation model.
  9.  The information processing method according to any one of claims 1 to 8, wherein the conversation rules include setting data of a character set as the user's conversation partner.
  10.  The information processing method according to any one of claims 1 to 9, wherein the computer calculates a value indicating the consistency between the received input sentence or the acquired model answer sentence and the conversation rules, and determines whether to use the model answer sentence of the conversation model according to whether the calculated value is equal to or greater than a predetermined value.
  11.  The information processing method according to any one of claims 1 to 10, wherein the computer:
     determines whether a search based on the input sentence from the user should be executed;
     creates, when it is determined that a search should be executed, a search term from the input sentence; and
     creates an answer sentence using the search results for the created search term.
  12.  The information processing method according to any one of claims 1 to 11, wherein the computer:
     determines whether the input sentence from the user relates to a character other than the character set as the user's conversation partner; and
     creates, when it is determined that the input sentence relates to the other character, a rule answer sentence, or corrects the model answer sentence, based on the conversation history between the other character and the user or the setting data of the other character.
  13.  The information processing method according to any one of claims 1 to 11, wherein the computer:
     determines whether the input sentence from the user relates to another user; and
     creates, when it is determined that the input sentence relates to the other user, a rule answer sentence, or corrects the model answer sentence, based on the conversation history of the other user or the profile of the other user.
  14.  The information processing method according to claim 13, wherein the computer, when it is determined that the input sentence relates to the other user, creates an utterance sentence addressed to the other user and outputs it toward the other user.
  15.  The information processing method according to claim 13, wherein the computer, when it is determined that the input sentence relates to the other user:
     launches a communication application for communicating with the other user; and
     outputs a control statement relating to communication with the other user.
  16.  The information processing method according to any one of claims 1 to 15, wherein the computer inputs text or speech based on the model answer sentence or the rule answer sentence into a conversion model for a character set as the user's conversation partner, and outputs the converted text or speech.
  17.  An information processing device comprising:
     a storage unit storing a conversation model trained to output a model answer sentence when an input sentence from a user is input, and a database including conversation rules relating to the conversation model; and
     a processing unit that executes processing on the input sentence,
     wherein the processing unit:
     receives an input sentence from a user;
     acquires a model answer sentence obtained by inputting the received input sentence into the conversation model;
     determines, based on a comparison of the received input sentence or the acquired model answer sentence with the conversation rules stored in the database, whether to use the model answer sentence of the conversation model for the input sentence;
     creates, when it is determined not to use the conversation model, a rule answer sentence corresponding to the input sentence based on the conversation rules; and
     outputs the model answer sentence or the rule answer sentence.
  18.  An information processing system comprising:
     a first device storing a conversation model trained to output a model answer sentence when an input sentence from a user is input, and a database including conversation rules relating to the conversation model; and
     a second device comprising a display unit, an operation unit, and a processing unit that receives an input sentence from a user and executes processing on the input sentence,
     wherein the second device:
     receives an input sentence from the user via the operation unit;
     acquires a model answer sentence obtained by inputting the received input sentence into the conversation model of the first device;
     acquires the conversation rules stored in the database of the first device;
     determines, based on a comparison of the received input sentence or the acquired model answer sentence with the conversation rules, whether to use the model answer sentence of the conversation model for the input sentence;
     creates, when it is determined not to use the model answer sentence, a rule answer sentence corresponding to the input sentence based on the conversation rules; and
     outputs the model answer sentence or the rule answer sentence.
  19.  A computer program causing a computer, using a conversation model trained to output a model answer sentence when an input sentence from a user is input, and a database storing conversation rules relating to the conversation model, to execute processing to:
     receive an input sentence from a user;
     acquire a model answer sentence obtained by inputting the received input sentence into the conversation model;
     determine, based on a comparison of the received input sentence or the acquired model answer sentence with the conversation rules stored in the database, whether to use the model answer sentence of the conversation model for the input sentence;
     create, when it is determined not to use the model answer sentence, a rule answer sentence corresponding to the input sentence based on the conversation rules; and
     output the model answer sentence or the rule answer sentence.
PCT/JP2021/044021 2020-12-02 2021-12-01 Information processing method, information processing device, information processing system, and computer program WO2022118869A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022566952A JPWO2022118869A1 (en) 2020-12-02 2021-12-01

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063120358P 2020-12-02 2020-12-02
US63/120,358 2020-12-02

Publications (1)

Publication Number Publication Date
WO2022118869A1 true WO2022118869A1 (en) 2022-06-09

Family

ID=81853199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/044021 WO2022118869A1 (en) 2020-12-02 2021-12-01 Information processing method, information processing device, information processing system, and computer program

Country Status (2)

Country Link
JP (1) JPWO2022118869A1 (en)
WO (1) WO2022118869A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020038510A (en) * 2018-09-04 2020-03-12 株式会社セントメディア Information processing device and program
JP2020118842A (en) * 2019-01-23 2020-08-06 株式会社日立製作所 Interaction device and interaction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KURACHI, Yoichi; IKUKAWA, Shinji; HARA, Hideki. "AI chatbot that realizes advanced customer contact". Fujitsu Science Review, vol. 69, no. 1, January 2018, pp. 16-22. ISSN 0016-2515 (XP055937083). *

Also Published As

Publication number Publication date
JPWO2022118869A1 (en) 2022-06-09


Legal Events

121 — EP: the EPO has been informed by WIPO that EP was designated in this application (ref. document number 21900620; country of ref document: EP; kind code of ref document: A1)
ENP — Entry into the national phase (ref. document number 2022566952; country of ref document: JP; kind code of ref document: A)
NENP — Non-entry into the national phase (ref country code: DE)
122 — EP: PCT application non-entry in European phase (ref. document number 21900620; country of ref document: EP; kind code of ref document: A1)