WO2024024065A1 - Sensitivity transformation method, sensitivity transformation device, and sensitivity transformation program - Google Patents


Info

Publication number
WO2024024065A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2022/029208
Other languages
French (fr)
Japanese (ja)
Inventor
陽子 徳永
巌樹 戸嶋
史朗 小澤
Original Assignee
日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Application filed by 日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Priority to PCT/JP2022/029208
Publication of WO2024024065A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/30: Semantic analysis

Definitions

  • the present invention relates to a sensibility conversion method, a sensibility conversion device, and a sensibility conversion program.
  • the present invention has been made in view of the above, and aims to eliminate the gap in understanding and recognition between the speaker and the interlocutor in conversational communication.
  • the sensibility conversion method is a sensibility conversion method performed by a sensibility conversion device, wherein the sensibility conversion device has a storage unit that stores history data in which information representing past situations of a speaker and an interlocutor is associated with expressions representing sensibilities in those situations.
  • the method includes an extraction step of extracting an expression representing sensibility from a sentence uttered by the speaker, an estimation step of using the history data to estimate information representing the situation corresponding to the extracted expression, and an identification step of using the history data to identify an expression representing the interlocutor's sensibility that corresponds to the estimated situation.
  • FIG. 1 is a schematic diagram illustrating a schematic configuration of a sensitivity conversion device.
  • FIG. 2 is a diagram illustrating the data structure of attribute data.
  • FIG. 3 is a diagram for explaining the processing of the sensitivity conversion device.
  • FIG. 4 is a flowchart showing the sensibility conversion processing procedure.
  • FIG. 5 is a diagram illustrating a computer that executes the sensitivity conversion program.
  • the sensibility conversion device of this embodiment takes as input a text in which a speaker has verbalized his or her own sensibility in a certain situation, and converts and outputs it as text that evokes the same or a similar sensibility in the recipient.
  • the sensibility conversion device is used, for example, to express the sensibility that a speaker feels about a situation itself.
  • the sensibility conversion device may also be used, when the situation changes due to some trigger or over time, to predict the future situation and express the sensibility felt about that predicted future situation.
  • sensibility expression is a verbal expression of the sensibility felt by the speaker based on the input situation or trigger. For example, expressions such as "Oh no, I think I'm going to lose", which are expressed based on the match situation and the opponent's attack, fall under this category. Even if people have the same sensibilities, the way they express their sensibilities in words may differ depending on the person.
  • a bearish (pessimistic) person may say "I'm going to lose" when only slightly behind, while a bullish (optimistic) person may say "I think I'll be fine" in the same position, and may not say "I think I'm going to lose" even when one step closer to defeat.
  • if the two interact without knowing each other's personality, then even when the pessimistic person says "I think I'm going to lose," the optimistic person cannot understand why the other feels so inferior, and there is a risk of miscommunication.
  • in such a case, the sensibility conversion device can, for example, tell the optimistic person that what the other person just said, "I think I'm going to lose," corresponds to "I think it's going to be okay" in the listener's own terms. This fosters understanding of others and of diversity, such as recognizing differences in how the situation is perceived, in the range of linguistic expression, and in the other person's personality, and makes it possible to reduce miscommunication.
  • the sensibility conversion device described below is applied to a situation in which the topic of the dialogue is shogi, and players a and b, who form a team of two, discuss their next move with each other. Note that it is assumed the opponent cannot hear the dialogue within the team, as in remote shogi played over a computer or against an AI opponent.
  • FIG. 1 is a schematic diagram illustrating a schematic configuration of a sensitivity conversion device.
  • the sensibility conversion device 10 of this embodiment is realized by a general-purpose computer such as a personal computer, and includes an input unit 11, an output unit 12, a communication control unit 13, a storage unit 14, and a control unit 15.
  • the input unit 11 is realized using an input device such as a keyboard or a mouse, and inputs various instruction information such as starting processing to the control unit 15 in response to input operations by the user.
  • the output unit 12 is realized by a display device such as a liquid crystal display, a printing device such as a printer, an information communication device, and the like.
  • the communication control unit 13 is realized by a NIC (Network Interface Card) or the like, and controls communication via a network between the control unit 15 and external devices such as a server or a management device that manages various data.
  • the storage unit 14 is realized by a semiconductor memory element such as a RAM (Random Access Memory) or flash memory, or by a storage device such as a hard disk or an optical disk. Note that the storage unit 14 may be configured to communicate with the control unit 15 via the communication control unit 13. In the present embodiment, the storage unit 14 stores, for example, the history data 14a used in the sensibility conversion process described later, as well as the first model 14b and the second model 14c that are generated and updated in that process.
  • FIG. 2 is a diagram illustrating the data structure of history data.
  • the history data 14a associates information representing past situations of the speaker and the interlocutor with expressions representing sensibilities in the situations.
  • information such as the board state (win rate), the board state (move number), and the board state (pieces in hand) is recorded as the past shogi situation.
  • emotional expressions representing the sensibilities of players a and b in these situations, such as "scary" and "dangerous", are also associated with them.
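As a rough illustration, the history data 14a described above can be sketched as a list of records pairing situation features with the expression a player used in that situation. The field names and values below are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HistoryRecord:
    """One entry of the history data (14a): a past situation and the
    emotional expression a given player used in it (fields illustrative)."""
    player: str        # "a" or "b"
    win_rate: float    # AI-evaluated win rate for the board, 0.0-1.0
    move_number: int   # how many moves into the game
    pieces_in_hand: int
    expression: str    # the verbalized sensibility, e.g. "scary"

# A toy history for players a and b in comparable situations.
history = [
    HistoryRecord("a", 0.42, 35, 2, "scary"),
    HistoryRecord("b", 0.42, 35, 2, "quite dangerous"),
    HistoryRecord("a", 0.70, 20, 4, "looking good"),
    HistoryRecord("b", 0.70, 20, 4, "should be fine"),
]

def expressions_of(player: str) -> list[str]:
    """All expressions a given player has used, in recorded order."""
    return [r.expression for r in history if r.player == player]
```

Note that the same situation features (win rate 0.42 at move 35) map to different expressions for players a and b, which is exactly the asymmetry the device exploits.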
  • the information representing the situation may be information that changes over time, for example.
  • the information representing the situation may be information that changes depending on the trigger for changing the situation.
  • an evaluation value by AI may be used as the information representing the situation.
  • the situation is exemplified by points gained or lost, and the trigger that changes the situation is exemplified by the opponent's attack pattern.
  • the situation and the trigger that changes it represent the environment that caused the speaker to feel something; the speaker is assumed to form a sensibility based on this environment and to verbalize it as an utterance.
  • the speaker's utterances may be input in advance as text, as in a text chat, or may be spoken utterances transcribed by speech recognition or manually.
  • FIG. 3 is a diagram for explaining the processing of the sensitivity conversion device.
  • the sensibility conversion device 10 of this embodiment converts the sensibility that player a feels in the current shogi situation x, immediately after the opponent plays the next move y (for example, ３六歩, "pawn to 36") that acts as the trigger changing situation x, into the sensibility of player b and conveys it to player b.
  • for example, player a's emotional expression "scary" is converted into player b's emotional expression "quite dangerous".
  • the current shogi situation x represents the objectively observable current state, and is expressed as individual values, or as a vector combining multiple values, such as the win-rate value assigned by an AI to the current position, the placement of the pieces, the types and numbers of pieces in hand, and the game record so far.
  • the control unit 15 is implemented using a CPU (Central Processing Unit), an NP (Network Processor), an FPGA (Field Programmable Gate Array), etc., and executes a processing program stored in a memory.
  • the control unit 15 functions as an acquisition unit 15a, an extraction unit 15b, an estimation unit 15c, a specification unit 15d, and a conversion unit 15e, as illustrated in FIG.
  • these functional units may be implemented in different hardware.
  • the acquisition unit 15a may be implemented in hardware different from other functional units.
  • the control unit 15 may include other functional units.
  • the acquisition unit 15a acquires an uttered sentence that verbalizes the speaker's sensibility. For example, the acquisition unit 15a acquires an utterance t_a in which player a verbalizes what he or she feels.
  • the extracting unit 15b extracts expressions representing sensibilities (sensitivity expressions) from sentences uttered by the speaker. Specifically, the extraction unit 15b divides the uttered sentence into words by morphological analysis, and extracts emotional expressions from the words. For example, the extraction unit 15b may refer to a pre-built dictionary of words as emotional expressions and extract words corresponding to the dictionary as emotional expressions. Alternatively, the extraction unit 15b may extract adjectives that often express the degree or feeling of something as emotional expressions.
  • the extraction unit 15b outputs, from the input utterance t_a, player a's emotional expression w_{a,x,y} for the move y in the situation x.
  • the extraction unit 15b may transfer all the words divided by morphological analysis to the estimation unit 15c, which will be described below, without extracting emotional expressions. In that case, the estimation unit 15c may process only the emotional expression.
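The dictionary-based extraction described above can be sketched as follows. A real Japanese implementation would use a morphological analyzer such as MeCab for the word-splitting step; here plain whitespace tokenization and a tiny English dictionary stand in for it, and both the dictionary contents and function names are illustrative.

```python
# Sketch of the extraction unit (15b): split the utterance into words and
# keep those found in a pre-built dictionary of sensibility expressions.
# Whitespace tokenization stands in for morphological analysis.

EMOTION_DICTIONARY = {"scary", "dangerous", "fine", "worried"}  # illustrative

def extract_expressions(utterance: str) -> list[str]:
    """Return the emotional expressions found in the utterance, in order."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    return [w for w in words if w in EMOTION_DICTIONARY]
```

For example, `extract_expressions("That move is scary, I'm worried")` picks out the two dictionary words and ignores the rest.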
  • the estimation unit 15c uses the history data 14a to estimate information representing the situation corresponding to the extracted emotional expression.
  • specifically, the estimation unit 15c estimates the information representing the situation corresponding to the extracted emotional expression by using the first model 14b, which is trained on the history data 14a to output information representing the corresponding situation when the speaker's emotional expression is input.
  • in other words, using the first model 14b trained on the history data 14a, which records what kinds of utterances the speaker has made in what situations in the past, the estimation unit 15c estimates what kind of situation the speaker is describing with the input emotional word or word group.
  • the identification unit 15d uses the history data 14a to identify the emotional expression of the interlocutor that corresponds to the information representing the estimated situation.
  • specifically, the identification unit 15d identifies the interlocutor's emotional expression corresponding to the estimated situation by using the second model 14c, which is trained on the history data 14a to output the interlocutor's emotional expression in a situation when information representing that situation is input.
  • in other words, using the second model 14c trained on the history data 14a, which records what kinds of utterances the recipient (interlocutor) has made in what situations in the past, the identification unit 15d predicts how the interlocutor would verbalize the situation in words or word groups.
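The two-step estimation and identification above can be sketched as a composition of two mappings: the first model recovers an evaluation value from the speaker's expression, and the second model picks the interlocutor's expression for the closest known value. The lookup tables and the nearest-neighbour choice are toy stand-ins for the learned models, not the patent's actual training procedure.

```python
# Toy stand-ins for the two learned models: the first model (14b) maps the
# speaker's expression back to an evaluation value, and the second model
# (14c) maps an evaluation value to the interlocutor's expression by
# nearest neighbour over illustrative data.

FIRST_MODEL_A = {"scary": 0.42, "looking good": 0.70}               # a's expression -> value
SECOND_MODEL_B = {0.42: "quite dangerous", 0.70: "should be fine"}  # value -> b's expression

def estimate_situation(expression_a: str) -> float:
    """Estimation unit 15c: recover the evaluation value behind a's words."""
    return FIRST_MODEL_A[expression_a]

def identify_expression(value: float) -> str:
    """Identification unit 15d: pick b's expression for the closest value."""
    nearest = min(SECOND_MODEL_B, key=lambda v: abs(v - value))
    return SECOND_MODEL_B[nearest]

def convert(expression_a: str) -> str:
    """Full conversion: a's expression -> situation -> b's expression."""
    return identify_expression(estimate_situation(expression_a))
```

With this sketch, `convert("scary")` routes through the shared evaluation value 0.42 and comes out as b's wording for the same situation.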
  • the history data 14a includes a set of board conditions and the player's sensibility expression for the next move for past games between players a and b.
  • the value representing the situation is an evaluation value, such as a win rate, evaluated by an AI for the current board and expressed as a number between 0 and 1.
  • in this example the opponent is an AI; the AI that calculates the evaluation value and the opponent AI may be the same or different.
  • the evaluation value P(y|x) of the next move y in a certain situation x is the same value in the same situation, regardless of the player.
  • alternatively, an evaluation value P(y|x, a) weighted for each player may be used as the evaluation value calculated by the AI. For example, for player a, who prefers attack-oriented tactics, a certain amount is added to P(y|x, a) when the board favors attacking.
  • the evaluation value may also be calculated for each board position in player a's past games, taking into account evaluation values annotated by the player himself or the final game results.
  • a function M_shogi,a(x, y) that outputs an evaluation value from the board x and the move y may be defined, and this may be used as the evaluation value P(y|x, a).
  • instead of calculating per player, the evaluation value may be calculated using a function M_shogi,all(x, y) over all players for whom game data exists.
  • by using M_shogi,player(x, y) as an evaluation value derived from each player's own experience, finer differences in each player's sensibility can be expressed through different linguistic expressions.
  • the data spaces of players a and b are formed from each player's past game experience, so the data is sparse and the probability that the same situation x appears again is low; the evaluation value may therefore not be computable. To address this, personas may be defined based on user attributes such as school, skill level, years of experience, and age, and evaluation values M_shogi,ã(x, y) may be defined for players grouped by persona. This makes it possible to calculate the evaluation value from the history data 14a of past games of people whose sensibilities resemble player a's. Furthermore, if no matching situation x exists in the past, a value with little influence, such as the median of the possible values, may be used as the evaluation value.
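The fallback just described, looking up the persona group's history and defaulting to a low-influence median when the situation is unseen, can be sketched as below. The keys, values, and function name are illustrative assumptions.

```python
import statistics

# Sketch of the persona-based evaluation with a median fallback: look up
# (situation, move) in the persona group's game history; if the situation
# never appeared, fall back to the median of the values observed so far.

persona_history = {  # (situation, move) -> evaluation value for the persona group
    ("x1", "y1"): 0.30,
    ("x2", "y1"): 0.55,
    ("x3", "y2"): 0.80,
}

def evaluation_value(situation: str, move: str) -> float:
    """Return the stored value, or a neutral median for unseen situations."""
    if (situation, move) in persona_history:
        return persona_history[(situation, move)]
    return statistics.median(persona_history.values())
```

The median keeps an unseen situation from pulling the conversion toward either extreme of the 0 to 1 range.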
  • the emotional expression of player a regarding move y in situation x is converted into text that allows player b to have the same or similar sensitivity as player a.
  • the emotional expression by player a is expressed by equation (1), using a mapping function F that differs for each player.
  • the function F_a takes as input player a's win probability P for move y in situation x, and yields player a's linguistic expression f_a.
  • F_a is equivalent to defining, as a function, the process by which player a takes in information about the current situation and verbalizes what he or she feels.
  • the function F_b takes as input player b's win probability P' for move y in situation x, and yields player b's linguistic expression f_b.
  • F_b is equivalent to defining, as a function, the process by which player b verbalizes what he or she feels.
  • the inverse function F_a^-1 of F_a can be regarded as a function that maps a verbalized emotional expression back to the situation that caused player a to have that sensibility. Inputting player a's emotional expression into this function is written F_a^-1(w_{a,x,y}), which can be said to represent the situation underlying player a's emotional expression.
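In the notation of the surrounding text, the relationship can be summarized as follows; the excerpt does not reproduce equation (1) itself, so this is a reconstruction consistent with the definitions of F_a, F_b, and F_a^-1 above.

```latex
% Player a verbalizes the evaluation value of move y in situation x:
w_{a,x,y} = F_a\bigl(P(y \mid x, a)\bigr) \qquad \text{(1)}
% The device maps a's words back to the situation and re-verbalizes it for b:
w_{b,x,y} = F_b\bigl(F_a^{-1}(w_{a,x,y})\bigr)
```

The second line is the whole conversion: recover the situation from a's expression, then re-express it in b's terms.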
  • F a ⁇ 1 and F b are modeled by machine learning using the historical data 14a.
  • F_a^-1 is the first model 14b, trained to receive player a's emotional expression as input and to output an evaluation value P(y|x, a).
  • F_b is the second model 14c, trained to receive an evaluation value P'(y|x, b) as input and to output player b's emotional expression.
  • the machine learning method is not particularly limited.
  • the emotional expression input to F_a^-1 may be a simple bag-of-words representation, or it may include numerical values or vectors expressing a word's meaning, a value representing positive or negative sentiment, or other variables that explain the meaning and usage of the word as a sensibility expression.
  • the conversion unit 15e converts the uttered sentence using the identified expression of the interlocutor's sensibility. For example, only the part of the speaker's utterance that was converted as an emotional expression is rewritten, so that the grammar remains correct. That is, the conversion unit 15e formats the emotional expression output by the identification unit 15d so as not to damage the original context, and presents it to player b.
  • the conversion unit 15e may, for example, replace a part of the original uttered sentence with the converted emotional expression and present it so that the grammar becomes natural.
  • the conversion unit 15e may instead display the converted emotional expression alongside the emotional-expression portion of the original utterance.
  • the conversion unit 15e may present the converted emotional expression in real time, or may execute the emotional conversion process in advance and present it as feedback when reviewing later.
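The two presentation modes above, in-place replacement and side-by-side display, can be sketched as simple string operations. Function names and the bracket annotation format are illustrative.

```python
# Sketch of the conversion unit (15e): rewrite only the emotional-expression
# part of the original utterance, leaving the rest of the sentence intact.

def rewrite(utterance: str, original_expr: str, converted_expr: str) -> str:
    """Replacement mode: swap the emotional expression in place."""
    return utterance.replace(original_expr, converted_expr)

def annotate(utterance: str, original_expr: str, converted_expr: str) -> str:
    """Parallel mode: show the converted expression alongside the original."""
    return utterance.replace(original_expr, f"{original_expr} [{converted_expr}]")
```

A production version would also need to repair grammar around the replacement, which a bare `str.replace` does not attempt.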
  • FIG. 4 is a flowchart showing the sensibility conversion processing procedure.
  • the flowchart in FIG. 4 is started, for example, at the timing when there is an input instructing the start of processing.
  • the acquisition unit 15a acquires an uttered sentence that verbalizes the speaker's sensibilities. Further, the extracting unit 15b extracts an expression representing sensitivity (sensitivity expression) from the utterance (step S1). Specifically, the extraction unit 15b divides the utterance into words by morphological analysis, and extracts emotional expressions from the words.
  • next, the estimation unit 15c uses the history data 14a to estimate information representing the situation corresponding to the extracted emotional expression (step S2). For example, the estimation unit 15c estimates this information by using the first model 14b, which is trained on the history data 14a to output information representing the corresponding situation when the speaker's emotional expression is input.
  • in other words, using the first model 14b trained on the history data 14a, which records what kinds of utterances the speaker has made in what situations in the past, the estimation unit 15c estimates what kind of situation the speaker is describing with the input emotional word or word group.
  • next, the identification unit 15d uses the history data 14a to identify the interlocutor's emotional expression corresponding to the estimated situation (step S3). For example, the identification unit 15d identifies it by using the second model 14c, which is trained on the history data 14a to output the interlocutor's emotional expression in a situation when information representing that situation is input.
  • in other words, using the second model 14c trained on the history data 14a, which records what kinds of utterances the recipient (interlocutor) has made in what situations in the past, the identification unit 15d predicts how the interlocutor would verbalize the situation in words or word groups.
  • the conversion unit 15e converts the uttered sentence using the identified expression of the interlocutor's sensibility (step S4). For example, only the part of the speaker's utterance that was converted as an emotional expression is rewritten, so that the grammar remains correct. That is, the conversion unit 15e formats the emotional expression output by the identification unit 15d so as not to damage the original context, and presents it to the interlocutor. This completes the series of sensibility conversion processes.
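The whole procedure, steps S1 through S4, can be sketched end to end with every component reduced to a toy stand-in: dictionary scan for extraction, table lookups for the two models, and in-place rewriting for the conversion. All names and data here are illustrative assumptions.

```python
# End-to-end sketch of the sensibility conversion procedure (steps S1-S4).

EXPRESSIONS_A = {"scary": 0.42}            # S2: speaker's expression -> situation value
EXPRESSIONS_B = {0.42: "quite dangerous"}  # S3: situation value -> interlocutor's expression

def sensibility_convert(utterance: str) -> str:
    """Find a known expression (S1), map it through the situation (S2, S3),
    and rewrite the utterance in the interlocutor's terms (S4)."""
    for expr, value in EXPRESSIONS_A.items():
        if expr in utterance:
            converted = EXPRESSIONS_B[value]
            return utterance.replace(expr, converted)
    return utterance  # nothing to convert
```

For example, `sensibility_convert("This position is scary")` yields the same sentence with b's wording substituted, while an utterance with no known expression passes through unchanged.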
  • the storage unit 14 stores history data in which information representing past situations of the speaker and the interlocutor is associated with expressions representing sensibilities in the situation.
  • the extracting unit 15b extracts expressions representing sensibilities from the utterances by the speaker.
  • the estimation unit 15c estimates information representing a situation corresponding to the extracted expression representing the sensitivity, using the history data 14a.
  • the identification unit 15d uses the history data 14a to identify an expression representing the sensibility of the interlocutor that corresponds to the information representing the estimated situation.
  • specifically, the estimation unit 15c uses the first model 14b, trained on the history data 14a to output information representing the corresponding situation when an expression representing the speaker's sensibility is input, to estimate the information representing the situation corresponding to the extracted expression. Further, the identification unit 15d uses the second model 14c, trained on the history data 14a to output an expression representing the interlocutor's sensibility in a situation when information representing that situation is input, to identify the expression representing the interlocutor's sensibility corresponding to the estimated situation.
  • in this way, the sensibility conversion device 10 draws on the experience of the speaker and interlocutor in the dialogue, the experience of personas with similar sensibilities, and data on the current situation, such as AI evaluation values in competitive games like shogi, to convert the sensibility expressed by the speaker into an expression that lets the interlocutor feel the same way.
  • the information representing the situation may be information that changes over time.
  • the information representing the situation may be information that changes depending on the trigger for changing the situation.
  • the sensibility conversion device 10 can convert the sensibility expression even when the situation changes.
  • an evaluation value obtained by AI may be used as the information representing the situation.
  • the situation can be expressed simply by using an objective evaluation value, such as one produced by an AI, as the information representing the situation.
  • for example, if AI evaluation values are available for both shogi and a sport, it becomes possible to convert words said in shogi into words said during a match in that sport.
  • the conversion unit 15e converts the uttered sentence using the identified expression of the interlocutor's sensibility. This makes it possible for the interlocutor to accurately grasp the speaker's emotional expression and to eliminate discrepancies in understanding and recognition between the speaker and the interlocutor.
  • the sensibility conversion device 10 can be implemented by installing a sensibility conversion program that executes the above-mentioned sensibility conversion process into a desired computer as packaged software or online software.
  • the information processing device can be made to function as the sensibility conversion device 10.
  • information processing devices include mobile communication terminals such as smartphones, mobile phones, and PHSs (Personal Handyphone Systems), as well as slate terminals such as PDAs (Personal Digital Assistants).
  • the functions of the sensibility conversion device 10 may be implemented in a cloud server.
  • FIG. 5 is a diagram showing an example of a computer that executes the sensibility conversion program.
  • Computer 1000 includes, for example, memory 1010, CPU 1020, hard disk drive interface 1030, disk drive interface 1040, serial port interface 1050, video adapter 1060, and network interface 1070. These parts are connected by a bus 1080.
  • the memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM 1012.
  • the ROM 1011 stores, for example, a boot program such as BIOS (Basic Input Output System).
  • Hard disk drive interface 1030 is connected to hard disk drive 1031.
  • Disk drive interface 1040 is connected to disk drive 1041.
  • a removable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive 1041, for example.
  • a mouse 1051 and a keyboard 1052 are connected to the serial port interface 1050.
  • a display 1061 is connected to the video adapter 1060.
  • the hard disk drive 1031 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094. Each piece of information described in the above embodiments is stored in, for example, the hard disk drive 1031 or the memory 1010.
  • the sensibility conversion program is stored in the hard disk drive 1031, for example, as a program module 1093 in which commands to be executed by the computer 1000 are written. Specifically, a program module 1093 in which each process executed by the sensibility conversion device 10 described in the above embodiment is described is stored in the hard disk drive 1031.
  • data used for information processing by the sensibility conversion program is stored as program data 1094 in, for example, the hard disk drive 1031.
  • the CPU 1020 reads out the program module 1093 and program data 1094 stored in the hard disk drive 1031 to the RAM 1012 as necessary, and executes each of the above-described procedures.
  • the program module 1093 and program data 1094 related to the sensibility conversion program are not limited to being stored in the hard disk drive 1031; for example, they may be stored in a removable storage medium and read out by the CPU 1020 via the disk drive 1041 or the like.
  • alternatively, the program module 1093 and program data 1094 related to the sensibility conversion program may be stored in another computer connected via a network such as a LAN (Local Area Network) or a WAN (Wide Area Network), and read out by the CPU 1020 via the network interface 1070.
  • in the above, a game was used as an example to make the problem concrete, but communication that has a history and broadly involves consensus building can be modeled with a similar framework.
  • in that case, instead of an AI evaluation value, the degree of agreement with a company's decision-making policy, the selection tendencies of people in general, and the like can be used.


Abstract

A storage unit (14) stores history data (14a) in which information indicating a past situation of a speaker and an interlocutor is associated with an expression expressing sensitivity in that situation. An extraction unit (15b) extracts an expression expressing sensitivity from a phrase uttered by the speaker. An estimation unit (15c) uses the history data (14a) to estimate information expressing a situation corresponding to the extracted expression expressing sensitivity. An identification unit (15d) uses the history data (14a) to identify an expression expressing the sensitivity of the interlocutor corresponding to the estimated information expressing a situation.

Description

Sensibility conversion method, sensibility conversion device, and sensibility conversion program
The present invention relates to a sensibility conversion method, a sensibility conversion device, and a sensibility conversion program.
Conventionally, techniques are known for converting linguistic expressions in dialogue to suit the other party. For example, techniques such as multilingual translation convert text from one language to another while preserving its meaning (see Patent Document 1). A technique is also known for converting a given text into an expression that is easier to empathize with (see Non-Patent Document 1).
Japanese Patent Application Publication No. 2021-039501
However, with conventional technology it is difficult to resolve the gap in understanding and recognition between the speaker and the receiver in conversational communication. For example, the "good feeling" said by the speaker and the "good feeling" felt by the receiver may differ in degree. This discrepancy is thought to arise because the sensibilities of the speaker and the receiver differ. Sensibility differs from person to person and is formed by various factors, such as a person's innate character, past experiences, and relationships with others. Discrepancies in understanding and recognition are a cause of miscommunication and can become a major problem in conversations such as work meetings and everyday chats.
The present invention has been made in view of the above, and aims to eliminate the gap in understanding and recognition between the speaker and the interlocutor in conversational communication.
In order to solve the above problems and achieve the object, the sensibility conversion method according to the present invention is performed by a sensibility conversion device having a storage unit that stores history data in which information representing past situations of a speaker and an interlocutor is associated with expressions representing sensibilities in those situations. The method includes an extraction step of extracting an expression representing sensibility from a sentence uttered by the speaker; an estimation step of using the history data to estimate information representing the situation corresponding to the extracted expression; and an identification step of using the history data to identify an expression representing the interlocutor's sensibility that corresponds to the estimated situation.
 本発明によれば、対話によるコミュニケーションにおいて、発話者と対話者との理解・認識のずれを解消することが可能となる。 According to the present invention, it is possible to eliminate the gap in understanding and recognition between the speaker and the interlocutor in communication through dialogue.
図1は、感性変換装置の概略構成を例示する模式図である。FIG. 1 is a schematic diagram illustrating a schematic configuration of a sensitivity conversion device. 図2は、履歴データのデータ構成を例示する図である。FIG. 2 is a diagram illustrating the data structure of history data. 図3は、感性変換装置の処理を説明するための図である。FIG. 3 is a diagram for explaining the processing of the sensitivity conversion device. 図4は、感性変換処理手順を示すフローチャートである。FIG. 4 is a flowchart showing the sensibility conversion processing procedure. 図5は、感性変換プログラムを実行するコンピュータを例示する図である。FIG. 5 is a diagram illustrating a computer that executes the sensitivity conversion program.
 以下、図面を参照して、本発明の一実施形態を詳細に説明する。なお、この実施形態により本発明が限定されるものではない。また、図面の記載において、同一部分には同一の符号を付して示している。 Hereinafter, one embodiment of the present invention will be described in detail with reference to the drawings. Note that the present invention is not limited to this embodiment. In addition, in the description of the drawings, the same parts are denoted by the same reference numerals.
[感性変換装置の概要]
 本実施形態の感性変換装置は、言語によるコミュニケーションにおいて、発話者がある状況において自分の感性を言語化した内容をテキスト化したものを入力すると、受け手が発話者と同じあるいは類似の感性を持つようなテキストに変換して出力する。
[Overview of sensitivity conversion device]
In verbal communication, the sensibility conversion device of this embodiment is capable of inputting a textual version of the speaker's own sensibilities in a certain situation, so that the recipient has the same or similar sensibilities as the speaker. Convert and output to text.
 感性変換装置は、例えば、ある状況において、その状況そのものについて発話者が感じた感性を表現する場合に用いられる。また、感性変換装置は、状況が何らかのきっかけを機に、または時系列に変化する場合に、未来の状況を予測して、予測した未来の状況について感じた感性を表現する場合に用いられてもよい。 The sensibility conversion device is used, for example, when expressing the sensibility the speaker felt about a certain situation itself. The sensibility conversion device may also be used, when the situation changes in response to some trigger or over time, to predict a future situation and express the sensibility felt about that predicted future situation.
 例えば、複数人でゲームのチームプレイを行っている最中に、ゲームの現在の状況について感じたことを述べる場合や、今後起こり得る自チームやライバルチームの行動を予測し、予測した未来の状況について感じたことを述べる場合にも用いることが可能である。あるいは、複数人でスポーツを観戦している際に、現在の戦況について感じたことを述べる場合や、応援している人や対戦相手がとり得る次の攻撃を想定し、起こり得る未来の戦況について感じたことを述べる場合にも用いることが可能である。 For example, it can be used when several people playing a game as a team describe how they feel about the current state of the game, or when they predict moves their own team or a rival team may make in the future and describe how they feel about the predicted future state. Likewise, when several people are watching a sports match, it can be used to describe how they feel about the current state of play, or to anticipate the next attack that the side they support or the opponent might make and describe how they feel about that possible future state.
 ここで、感性を表す表現(以下、感性表現とも記す)とは、入力された状況やきっかけを元にして発話者が抱いた感性を言語で表現したものである。例えば、試合の状況と相手の攻撃とを見て表現した「やばい、負けそう」等の表現が該当する。同じ感性を抱いたとしても、人によって言語化した結果の感性表現は異なると考えられる。例えば、弱気な人は、少しでも劣勢になると「負けそう」という言葉を多用するが、強気な人は、少し劣勢なくらいでは「大丈夫でしょう」と表現し、負ける一歩手前まで「負けそう」とは言わないものとする。この場合に、相手の性格を知らずに対話すると、弱気な人が「負けそう」と言っても、強気な人はなぜそこまで劣勢と感じているのかを理解できず、ミスコミュニケーションのきっかけとなる恐れがある。 Here, an expression representing a sensibility (hereinafter also referred to as a sensibility expression) is a verbal expression of the sensibility the speaker felt based on the input situation or trigger. For example, an expression such as "this is bad, we're going to lose", uttered after seeing the state of the match and the opponent's attack, falls into this category. Even when people hold the same sensibility, the sensibility expression that results from verbalizing it is thought to differ from person to person. For example, a timid person tends to say "we're going to lose" as soon as the team falls even slightly behind, whereas a confident person would say "we'll be fine" in a slightly unfavorable position and would not say "we're going to lose" until the team is on the verge of defeat. In this case, if the two converse without knowing each other's personalities, the confident person cannot understand why the timid person feels so far behind when the timid person says "we're going to lose", which may trigger miscommunication.
 そこで、感性変換装置は、このようなミスコミュニケーションを減らすために、例えば、強気な人に『今、相手が言った「負けそう」は、あなたにとっての「大丈夫でしょう」というレベルの意味です』と教示する。これにより、自分との状況の捉え方や言語表現の幅の違いや相手の性格を理解することができる等、他者理解や多様性理解につながり、ミスコミュニケーションを減らすことが可能となる。 Therefore, in order to reduce such miscommunication, the sensibility conversion device tells the confident person, for example, "the phrase 'we're going to lose' that the other person just said means roughly what 'we'll be fine' means to you." This leads to understanding of others and of diversity, such as understanding differences in how situations are perceived, differences in the range of linguistic expression, and the other person's personality, and makes it possible to reduce miscommunication.
 なお、以下に説明する感性変換装置は、対話の内容を将棋とし、二人一組のチームのプレイヤーa、bが味方同士で対話しながら次の手を考える場面に適用される。なお、例えば、コンピュータを用いた遠隔将棋やAIを対戦相手とする等、チーム内の対話は対戦相手に聞こえないものとする。 The sensibility conversion device described below is applied to a situation in which the content of the dialogue is shogi, and players a and b of a team of two are thinking about their next move while having a dialogue with each other. Note that, for example, it is assumed that the dialogue within the team cannot be heard by the opponent, such as in remote shogi using a computer or with an AI as the opponent.
[感性変換装置の構成]
 図1は、感性変換装置の概略構成を例示する模式図である。図1に例示するように、本実施形態の感性変換装置10は、パソコン等の汎用コンピュータで実現され、入力部11、出力部12、通信制御部13、記憶部14、および制御部15を備える。
[Configuration of sensitivity conversion device]
FIG. 1 is a schematic diagram illustrating a schematic configuration of a sensitivity conversion device. As illustrated in FIG. 1, the sensitivity conversion device 10 of this embodiment is realized by a general-purpose computer such as a personal computer, and includes an input section 11, an output section 12, a communication control section 13, a storage section 14, and a control section 15. .
 入力部11は、キーボードやマウス等の入力デバイスを用いて実現され、ユーザによる入力操作に対応して、制御部15に対して処理開始などの各種指示情報を入力する。出力部12は、液晶ディスプレイなどの表示装置、プリンター等の印刷装置、情報通信装置等によって実現される。通信制御部13は、NIC(Network Interface Card)等で実現され、サーバや、各種データを管理する管理装置等の外部の装置と制御部15とのネットワークを介した通信を制御する。 The input unit 11 is realized using an input device such as a keyboard or a mouse, and inputs various instruction information such as starting processing to the control unit 15 in response to input operations by the user. The output unit 12 is realized by a display device such as a liquid crystal display, a printing device such as a printer, an information communication device, and the like. The communication control unit 13 is realized by a NIC (Network Interface Card) or the like, and controls communication via a network between the control unit 15 and external devices such as a server or a management device that manages various data.
 記憶部14は、RAM(Random Access Memory)、フラッシュメモリ(Flash Memory)等の半導体メモリ素子、または、ハードディスク、光ディスク等の記憶装置によって実現される。なお、記憶部14は、通信制御部13を介して制御部15と通信する構成でもよい。本実施形態において、記憶部14には、例えば、後述する感性変換処理に用いられる履歴データ14aや、感性変換処理で生成・更新される第1のモデル14b、第2のモデル14c等が記憶される。 The storage unit 14 is realized by a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk. Note that the storage unit 14 may be configured to communicate with the control unit 15 via the communication control unit 13. In the present embodiment, the storage unit 14 stores, for example, history data 14a used in the sensibility conversion process described later, and a first model 14b and a second model 14c that are generated and updated in the sensibility conversion process.
 ここで、図2は、履歴データのデータ構成を例示する図である。図2に示すように、履歴データ14aは、発話者および対話者の過去の状況を表す情報と該状況における感性を表す表現とを対応付けたものである。図2には、将棋の過去の状況として、盤面(勝率)、盤面(手数)、盤面(持ち駒)等の情報が記録されている。また、これらの状況に対応するプレイヤーa、bの「怖い」「やばい」等の感性を表す感性表現が対応付けられている。 Here, FIG. 2 is a diagram illustrating the data structure of history data. As shown in FIG. 2, the history data 14a associates information representing past situations of the speaker and the interlocutor with expressions representing sensibilities in the situations. In FIG. 2, information such as the board (winning rate), the board (number of moves), the board (pieces held), etc. is recorded as the past situation of shogi. In addition, emotional expressions representing the sensibilities of players a and b, such as "scary" and "dangerous," corresponding to these situations are also associated.
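As an illustrative sketch of what one record of the history data 14a might look like in code, the following pairs board-state fields with each player's sensibility expression. The field names and the sample values are assumptions made for illustration, not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class HistoryRecord:
    win_rate: float      # board: AI-evaluated winning rate (0-1)
    move_count: int      # board: number of moves played so far
    pieces_in_hand: int  # board: number of captured pieces held
    expr_a: str          # player a's sensibility expression in this situation
    expr_b: str          # player b's sensibility expression in this situation

# Hypothetical history data for past games of players a and b.
history = [
    HistoryRecord(0.32, 41, 3, "怖い", "かなり危険"),
    HistoryRecord(0.71, 18, 1, "いい感じ", "悪くない"),
]
```

A real system would hold many such records per player, indexed so that situations similar to the current board can be retrieved.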
 また、履歴データ14aにおいて、状況を表す情報は、例えば、時系列に変化する情報であってもよい。あるいは、状況を表す情報は、該状況を変化させるきっかけにより変化する情報であってもよい。また、状況を表す情報は、AIによる評価値が用いられてもよい。 Furthermore, in the history data 14a, the information representing the situation may be information that changes over time, for example. Alternatively, the information representing the situation may be information that changes depending on the trigger for changing the situation. Further, as the information representing the situation, an evaluation value by AI may be used.
 例えば、ゲームやスポーツにおいては、状況とは得失点、状況を変化させるきっかけとは、対戦相手の攻撃パターンが例示される。状況と状況を変化させるきっかけとは、発話者が脳内で何かを感じる元となった環境を表すものであり、発話者はこれを元に感性を抱き、これを言語化して発話すると考えられる。話者の発話は、テキストチャットのように予めテキストで入力されたものでもよいし、音声での発話を音声認識したものや人手で書き起こされたものでもよい。 For example, in games and sports, the situation is exemplified by points gained or lost, and the trigger for changing the situation is exemplified by the opponent's attack pattern. The situation and the trigger for changing it represent the environment that caused the speaker to feel something, and the speaker is thought to form a sensibility based on them and to verbalize it as an utterance. The speaker's utterance may be text entered in advance, as in a text chat, or it may be a spoken utterance converted by speech recognition or transcribed manually.
 ここで、図3は、感性変換装置の処理を説明するための図である。本実施形態の感性変換装置10は、将棋の現在の状況xにおいて、この状況xを変え得るきっかけとして、対戦相手が「3六歩」等の次の手yを指した直後にプレイヤーaが抱いた感性を、プレイヤーbの感性に変換してプレイヤーbに伝える。図3に示す例では、プレイヤーaの感性表現「怖い」がプレイヤーbの感性表現「かなり危険」に変換されている。 Here, FIG. 3 is a diagram for explaining the processing of the sensibility conversion device. In the current shogi situation x, the sensibility conversion device 10 of this embodiment converts the sensibility that player a felt immediately after the opponent played a next move y, such as "3六歩", as a trigger that can change the situation x, into player b's sensibility and conveys it to player b. In the example shown in FIG. 3, player a's sensibility expression "怖い" (scary) is converted into player b's sensibility expression "かなり危険" (quite dangerous).
 なお、将棋の現在の状況xとは、客観的に観測可能な現在の状況を表し、現在の盤面についてAIで評価された勝率の値、駒の位置、持ち駒の種類や数、これまでの棋譜等の各値や、複数の値を組み合わせたべクトルで表される。 Note that the current shogi situation x represents the objectively observable current state, and is expressed as individual values such as the winning rate evaluated by an AI for the current board, the positions of the pieces, the types and numbers of pieces in hand, and the game record so far, or as a vector combining several of these values.
 図1の説明に戻る。制御部15は、CPU(Central Processing Unit)やNP(Network Processor)やFPGA(Field Programmable Gate Array)等を用いて実現され、メモリに記憶された処理プログラムを実行する。これにより、制御部15は、図1に例示するように、取得部15a、抽出部15b、推定部15c、特定部15dおよび変換部15eとして機能する。なお、これらの機能部は、それぞれが異なるハードウェアに実装されてもよい。例えば取得部15aは他の機能部とは異なるハードウェアに実装されてもよい。また、制御部15は、その他の機能部を備えてもよい。 Returning to the explanation of FIG. 1. The control unit 15 is implemented using a CPU (Central Processing Unit), an NP (Network Processor), an FPGA (Field Programmable Gate Array), etc., and executes a processing program stored in a memory. Thereby, the control unit 15 functions as an acquisition unit 15a, an extraction unit 15b, an estimation unit 15c, a specification unit 15d, and a conversion unit 15e, as illustrated in FIG. Note that these functional units may be implemented in different hardware. For example, the acquisition unit 15a may be implemented in hardware different from other functional units. Further, the control unit 15 may include other functional units.
 取得部15aは、発話者の感性を言語化した発話文を取得する。例えば、取得部15aは、プレイヤーaが抱いた感性を言語化した発話文tを取得する。 The acquisition unit 15a acquires an uttered sentence that verbalizes the speaker's sensibilities. For example, the acquisition unit 15a acquires an utterance t a that verbalizes the feelings felt by the player a.
 抽出部15bは、発話者による発話文から感性を表す表現(感性表現)を抽出する。具体的には、抽出部15bは、発話文を形態素解析で単語に分割し、その中から感性表現を抽出する。抽出部15bは、例えば、予め構築された感性表現とする単語の辞書を参照し、辞書に該当する単語を感性表現として抽出してもよい。あるいは、抽出部15bは、物事の程度や感じ方を表現する場合が多い形容詞を感性表現として抽出してもよい。 The extracting unit 15b extracts expressions representing sensibilities (sensitivity expressions) from sentences uttered by the speaker. Specifically, the extraction unit 15b divides the uttered sentence into words by morphological analysis, and extracts emotional expressions from the words. For example, the extraction unit 15b may refer to a pre-built dictionary of words as emotional expressions and extract words corresponding to the dictionary as emotional expressions. Alternatively, the extraction unit 15b may extract adjectives that often express the degree or feeling of something as emotional expressions.
 具体的には、抽出部15bは、入力された発話文taから、プレイヤーaによる状況xにおける手yの感性表現wa,x,yを出力する。 Specifically, from the input utterance ta, the extraction unit 15b outputs the sensibility expression wa,x,y of player a for move y in situation x.
 なお、抽出部15bが感性表現を抽出せずに、形態素解析で分割したすべての単語を、以下に説明する推定部15cに転送してもよい。その場合には、推定部15cが感性表現のみを処理対象とすればよい。 Note that the extraction unit 15b may transfer all the words divided by morphological analysis to the estimation unit 15c, which will be described below, without extracting emotional expressions. In that case, the estimation unit 15c may process only the emotional expression.
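The dictionary-based extraction described above can be sketched as follows. In practice, a Japanese utterance would first be segmented into words by a morphological analyzer (e.g., MeCab); here the tokens are assumed to be pre-split, and the dictionary entries are invented for illustration.

```python
# Hypothetical sensibility-word dictionary; a real system would use a
# curated dictionary (and/or treat adjectives as sensibility expressions).
KANSEI_DICT = {"怖い", "やばい", "いい感じ", "危険", "大丈夫"}

def extract_kansei(tokens):
    """Return the tokens that match the sensibility dictionary.

    `tokens` is the word sequence produced by morphological analysis
    of the uttered sentence.
    """
    return [t for t in tokens if t in KANSEI_DICT]

print(extract_kansei(["この", "手", "は", "怖い", "ね"]))  # → ['怖い']
```

Alternatively, all tokens could be forwarded and the filtering done downstream, as the paragraph above notes.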
 推定部15cは、履歴データ14aを用いて、抽出された感性表現に対応する状況を表す情報を推定する。 The estimation unit 15c uses the history data 14a to estimate information representing the situation corresponding to the extracted emotional expression.
 例えば、推定部15cは、発話者の感性表現が入力された場合に、対応する状況を表す情報を出力するように、履歴データ14aを用いて学習された第1のモデル14bを用いて、抽出された感性表現に対応する状況を表す情報を推定する。 For example, the estimation unit 15c estimates the information representing the situation corresponding to the extracted sensibility expression by using the first model 14b, which has been trained using the history data 14a so as to output information representing the corresponding situation when the speaker's sensibility expression is input.
 つまり、推定部15cは、発話者が過去にどのような状況でどのような発話をしたのかを示す履歴データ14aを用いて学習された第1のモデル14bを用いて、入力された感性表現の単語または単語群について、話者がどのような状況について述べているかを推定する。 In other words, the estimation unit 15c uses the first model 14b, trained using the history data 14a that indicates what the speaker uttered in what situations in the past, to estimate what kind of situation the speaker is describing from the input word or word group of the sensibility expression.
 特定部15dは、履歴データ14aを用いて、推定された状況を表す情報に対応する対話者の感性表現を特定する。 The identification unit 15d uses the history data 14a to identify the emotional expression of the interlocutor that corresponds to the information representing the estimated situation.
 例えば、特定部15dは、状況を表す情報が入力された場合に、該状況における対話者の感性表現を出力するように、履歴データ14aを用いて学習された第2のモデル14cを用いて、推定された状況を表す情報に対応する対話者の感性表現を特定する。 For example, the specification unit 15d specifies the interlocutor's sensibility expression corresponding to the information representing the estimated situation by using the second model 14c, which has been trained using the history data 14a so as to output the interlocutor's sensibility expression in a situation when information representing that situation is input.
 つまり、特定部15dは、受け手(対話者)が過去にどのような状況でどのような発話をしたのかを示す履歴データ14aを用いて学習された第2のモデル14cを用いて、推定された状況について、受け手の対話者ならどのように言語化するかを単語または単語群で予測する。 In other words, the specification unit 15d uses the second model 14c, trained using the history data 14a that indicates what the recipient (interlocutor) uttered in what situations in the past, to predict, as a word or word group, how the recipient would verbalize the estimated situation.
 本実施形態の感性変換装置10において、履歴データ14aでは、過去のプレイヤーa、bの対局について、盤面を表す状況と、次の手に対するプレイヤーの感性表現がセットになっている。図2に示した例では、状況を表す値は、現在の盤面でのAIにより評価され0-1の数値で表される勝率等の評価値である。本実施形態では、AIを対戦相手としているが、評価値を計算するAIと対戦相手のAIとは、同一であっても異なっていてもよい。 In the sensibility conversion device 10 of the present embodiment, the history data 14a includes a set of board conditions and the player's sensibility expression for the next move for past games between players a and b. In the example shown in FIG. 2, the value representing the situation is an evaluation value such as a winning rate evaluated by the AI on the current board and expressed as a numerical value of 0-1. In this embodiment, the opponent is AI, but the AI that calculates the evaluation value and the opponent AI may be the same or different.
 ある状況xにおける次の手yの評価値p(y|x)は、プレイヤーに依らず、同じ状況では同じ値となる。しかし、実際には、プレイヤーの性格、価値観、好みのプレイスタイル、将棋の流派、文化的な背景等により、好む手、嫌いな手、次の候補として考えられない手等が存在する。そのため、AIで計算された評価値に、プレイヤーごとに重み付けした評価値P(y|x,a)を用いてもよい。例えば、攻め重視の戦法を好むプレイヤーaについて、攻めやすい盤面の場合にはp(y|x)に一定数を加算し、攻めにくい盤面の場合にはp(y|x)に1未満の小数を乗算する等の方法が例示される。 The evaluation value p(y|x) of the next move y in a given situation x takes the same value in the same situation regardless of the player. In reality, however, depending on the player's personality, values, preferred playing style, school of shogi, cultural background, and so on, there are moves the player likes, moves the player dislikes, and moves the player would never consider as a next candidate. Therefore, an evaluation value P(y|x, a), obtained by weighting the AI-calculated evaluation value for each player, may be used. For example, for a player a who prefers offense-oriented tactics, one method is to add a fixed amount to p(y|x) when the board favors attacking, and to multiply p(y|x) by a fraction less than 1 when the board is hard to attack.
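The per-player weighting just described might be sketched as follows; the bonus and penalty constants are assumptions chosen only to illustrate the add-a-constant / multiply-by-a-fraction idea, not values from the specification.

```python
def weighted_eval(p_xy: float, attack_friendly: bool,
                  bonus: float = 0.1, penalty: float = 0.8) -> float:
    """Player-specific evaluation P(y|x, a) for an offense-oriented player:
    start from the AI evaluation p(y|x), add a constant on boards that
    favor attacking, and scale down on boards that do not."""
    value = p_xy + bonus if attack_friendly else p_xy * penalty
    return min(max(value, 0.0), 1.0)  # keep within the 0-1 winning-rate range

print(weighted_eval(0.5, True))   # → 0.6
print(weighted_eval(0.5, False))  # → 0.4
```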
 プレイヤーaが経験した過去の対局の各盤面について、プレイヤー自身がアノテーションした評価値や最終的な対局結果も加味して評価値を計算してもよい。盤面xおよび手yを入力として、評価値を出力する関数Mshogi,a(x、y)を定義して、これを評価値P(y|x,a)として計算してもよい。または、プレイヤーごとに計算するのではなく、対局データがあるすべてのプレイヤーを対象とした関数Mshogi,all(x、y)を用いて評価値を計算してもよい。ただし、プレイヤーごとの経験からMshogi,player(x、y)を評価値として計算することにより、プレイヤーごとのより細かい感性の違いを異なる言語表現で表すことが可能となる。 The evaluation value may be calculated for each board in a past game played by player a, taking into account the evaluation value annotated by the player himself or the final game result. A function M shogi,a (x, y) that outputs an evaluation value by inputting the board x and the hand y may be defined, and this may be calculated as the evaluation value P(y|x, a). Alternatively, instead of calculating for each player, the evaluation value may be calculated using a function M shogi,all (x, y) for all players for which game data exists. However, by calculating M shogi,player (x, y) as an evaluation value from the experience of each player, it becomes possible to express more detailed differences in sensibilities of each player using different linguistic expressions.
 また、プレイヤーa,bそれぞれの空間は、それぞれのプレイヤーがこれまでの対局の経験から形成されるために疎なデータとなり、同一の状況xが再度出現する可能性は低く、評価値を計算することができない場合がある。そこで、例えば、流派、段、経験年数、年齢等のユーザ属性等でペルソナを定義して、ペルソナごとにまとめたプレイヤーについて評価値Mshogi,a~(x、y)を定義してもよい。これにより、プレイヤーaと感性が似た人物の過去の対局の履歴データ14aを用いて、評価値を計算することが可能となる。また、さらに合致する状況xが過去にない場合には、数値がとり得る値の中央値等の影響が小さい値を評価値としてもよい。 Furthermore, since the space of each of players a and b is formed from that player's own game experience, the data are sparse: the same situation x is unlikely to appear again, and the evaluation value may not be computable. Therefore, for example, personas may be defined from user attributes such as school, rank, years of experience, and age, and an evaluation value Mshogi,a~(x, y) may be defined for the players grouped into each persona. This makes it possible to calculate the evaluation value using the history data 14a of past games of people whose sensibilities are similar to player a's. If no matching situation x exists even in the past, a value with little influence, such as the median of the possible values, may be used as the evaluation value.
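The fallback for unseen situations can be sketched as a lookup with a median default; the situation keys and values below are placeholders for illustration.

```python
from statistics import median

def eval_with_fallback(x, history_values):
    """Look up the evaluation value for situation x in sparse history data;
    for an unseen situation, fall back to the median of the observed values
    so that the estimate has little influence."""
    if x in history_values:
        return history_values[x]
    return median(history_values.values()) if history_values else 0.5

h = {"situation_1": 0.2, "situation_2": 0.8}
print(eval_with_fallback("situation_1", h))  # → 0.2
print(eval_with_fallback("unseen", h))       # → 0.5
```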
 そして、感性変換処理では、状況xにおける手yに対するプレイヤーaの感性表現を、プレイヤーbがプレイヤーaと同じあるいは似た感性を持つことができるようなテキストに変換する。 Then, in the sensitivity conversion process, the emotional expression of player a regarding move y in situation x is converted into text that allows player b to have the same or similar sensitivity as player a.
 ここで、プレイヤーaによる感性表現は、プレイヤーごとに異なる写像関数Fを用いて、次式(1)で表される。 Here, the emotional expression by player a is expressed by the following equation (1) using a different mapping function F for each player.
  wa,x,y = Fa(P(y|x, a))  …(1)
 関数Faは、状況xにおける手yのプレイヤーaの勝率Pを入力とし、プレイヤーaによる言語表現faを得る。つまり、Faは、プレイヤーaが脳内で現在の状況についての情報をインプットして、感じたことを言語化する過程を関数として定義したことと同義である。 The function Fa takes as input player a's winning rate P for move y in situation x, and yields player a's linguistic expression fa. In other words, Fa is equivalent to defining, as a function, the process by which player a takes in information about the current situation and verbalizes what he or she feels.
 また、プレイヤーbによる感性表現は、次式(2)で表される。 Furthermore, the emotional expression by player b is expressed by the following equation (2).
  wb,x,y = Fb(P'(y|x, b))  …(2)
 関数Fbは、状況xにおける手yのプレイヤーbの勝率P'を入力とし、プレイヤーbによる言語表現fbを得る。つまり、Fbは、プレイヤーbの脳内で感じたことを言語化する過程を関数として定義したことと同義である。 The function Fb takes as input player b's winning rate P' for move y in situation x, and yields player b's linguistic expression fb. In other words, Fb is equivalent to defining, as a function, the process by which player b verbalizes what he or she feels.
 ここで、Fの逆関数F -1は、言語化された感性表現を、プレイヤーaがその感性をもつきっかけとなった元の状況に戻す関数と捉えることができる。よって、この関数F -1に、プレイヤーaの感性表現を入力することは、F -1(wa,x,y)と表せる。これは、プレイヤーaの感性表現の元となる状況を表したものといえる。 Here, the inverse function F a -1 of F a can be regarded as a function that returns the verbalized emotional expression to the original situation that caused player a to have that emotion. Therefore, inputting the emotional expression of player a to this function F a -1 can be expressed as F a -1 (wa , x, y ). This can be said to represent the situation that is the source of player a's emotional expression.
 この状況をプレイヤーbの脳内の過程を表す関数Fに入力することは、F(F -1(wa,x,y))と表せる。これは、プレイヤーaが捉えている状況をプレイヤーbの脳内にインプットした場合に、プレイヤーbがその感性を言語化した感性表現と言える。よって、プレイヤーbの感性表現に変換されたテキストについて、次式(3)が成立する。 Inputting this situation into a function F b representing the process in the brain of player b can be expressed as F b (F a -1 (wa , x, y )). This can be said to be an emotional expression in which when the situation perceived by player a is input into player b's brain, player b verbalizes that emotion. Therefore, the following equation (3) holds for the text converted into the emotional expression of player b.
  wb,x,y = Fb(Fa^-1(wa,x,y))  …(3)
 このように、状況xと手yについてプレイヤーaが発話した感性表現を入力し、プレイヤーbが同じ感性を抱くことができる感性表現に変換できる。 In this way, the emotional expression uttered by player a regarding situation x and move y can be input and converted into an emotional expression that allows player b to feel the same emotion.
 ここで、F -1、Fは履歴データ14aを用いた機械学習によりモデル化される。つまりF -1は、プレイヤーaの感性表現を入力とし、状況を表す評価値P(y|x,a)を出力するように学習された第1のモデル14bである。また、Fは、過去にプレイヤーbが経験した状況の評価値P’(y|x,b)を入力とし、その状況において発話した感性表現wb,x,yを出力するように学習された第2のモデル14cである。 Here, F a −1 and F b are modeled by machine learning using the historical data 14a. In other words, F a -1 is the first model 14b that has been trained to receive the emotional expression of player a as input and output an evaluation value P(y|x, a) representing the situation. In addition, F b is trained to receive the evaluation value P'(y|x, b) of a situation experienced by player b in the past and output the emotional expression w b, x, y uttered in that situation. This is the second model 14c.
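A minimal sketch of the conversion wb,x,y = Fb(Fa^-1(wa,x,y)) described above, with nearest-neighbour lookup tables standing in for the trained first and second models; every table entry is invented for illustration.

```python
# Fa^-1 (first model 14b): player a's expression -> situation value P(y|x, a)
F_A_INV = {"怖い": 0.30, "やばい": 0.15, "大丈夫": 0.60}

# Fb (second model 14c): situation value P'(y|x, b) -> player b's expression
F_B = [(0.15, "もう駄目"), (0.30, "かなり危険"), (0.60, "悪くない")]

def convert(expr_a: str) -> str:
    value = F_A_INV[expr_a]  # Fa^-1: recover the situation behind the expression
    # Fb: choose player b's expression for the closest known situation value
    return min(F_B, key=lambda entry: abs(entry[0] - value))[1]

print(convert("怖い"))  # → かなり危険
```

In the actual device, the two tables would be replaced by models trained on each player's history data 14a.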
 機械学習の手法は特に限定されない。また、Fa^-1に入力する感性表現は、Bag-of-Wordsの表現だけでもよいし、さらに単語の意味を表現する数値やベクトル、ポジティブ/ネガティブを表すような数値等、感性表現として用いた単語の意味や使われ方を説明する変数が加えられたものでもよい。 The machine learning method is not particularly limited. The sensibility expression input to Fa^-1 may be just a Bag-of-Words representation, or it may additionally include variables describing the meaning and usage of the words used as the sensibility expression, such as numerical values or vectors representing word meaning, or values indicating positive/negative polarity.
 変換部15eは、特定された対話者の感性を表す表現を用いて発話文を変換する。例えば、話者の発話文のうち、感性表現として変換した部分のみを書き換えて、文法が成立するように整形する。すなわち、変換部15eは、特定部15dから出力される感性表現を、元の文脈を損なわないように整形し、プレイヤーbに提示する。 The conversion unit 15e converts the uttered sentence using the expression representing the specified interlocutor's sensibility. For example, it rewrites only the part of the speaker's utterance that was converted as a sensibility expression, and reshapes the sentence so that it remains grammatical. That is, the conversion unit 15e shapes the sensibility expression output from the specification unit 15d so as not to impair the original context, and presents it to player b.
 その際に、変換部15eは、例えば、自然な文法になるように、元の発話文の一部分を変換された感性表現に置き換えて提示してもよい。あるいは、変換部15eは、元の発話文のうち、感性表現の部分に変換後の感性表現を並列に表示してもよい。また、変換部15eは、リアルタイムに変換後の感性表現を提示してもよいし、予め感性変換処理を実行し、後に振り返りを行う際にフィードバックとして提示してもよい。 At that time, the conversion unit 15e may, for example, replace a part of the original uttered sentence with the converted emotional expression and present it so that the grammar becomes natural. Alternatively, the conversion unit 15e may display the converted emotional expression in parallel to the emotional expression portion of the original utterance. Further, the conversion unit 15e may present the converted emotional expression in real time, or may execute the emotional conversion process in advance and present it as feedback when reviewing later.
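The reshaping step can be sketched as a substring replacement over the original utterance; a production system would additionally adjust the grammar around the replaced span. The parallel-display option mentioned above is modeled with a flag.

```python
def rewrite_utterance(sentence: str, expr_a: str, expr_b: str,
                      parallel: bool = False) -> str:
    """Rewrite only the sensibility-expression part of the utterance.
    With parallel=True, the converted expression is shown alongside the
    original instead of replacing it."""
    if parallel:
        return sentence.replace(expr_a, f"{expr_a}({expr_b})")
    return sentence.replace(expr_a, expr_b)

print(rewrite_utterance("この手は怖いね", "怖い", "かなり危険"))
# → この手はかなり危険ね
```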
[感性変換処理]
 次に、感性変換装置10による感性変換処理について説明する。図4は、感性変換処理手順を示すフローチャートである。図4のフローチャートは、例えば、処理の開始を指示する入力があったタイミングで開始される。
[Emotional conversion processing]
Next, the sensibility conversion process by the sensibility conversion device 10 will be explained. FIG. 4 is a flowchart showing the sensibility conversion processing procedure. The flowchart in FIG. 4 is started, for example, at the timing when there is an input instructing the start of processing.
 まず、取得部15aが、発話者の感性を言語化した発話文を取得する。また、抽出部15bが、発話文から感性を表す表現(感性表現)を抽出する(ステップS1)。具体的には、抽出部15bは、発話文を形態素解析で単語に分割し、その中から感性表現を抽出する。 First, the acquisition unit 15a acquires an uttered sentence that verbalizes the speaker's sensibilities. Further, the extracting unit 15b extracts an expression representing sensitivity (sensitivity expression) from the utterance (step S1). Specifically, the extraction unit 15b divides the utterance into words by morphological analysis, and extracts emotional expressions from the words.
 次に、推定部15cが、履歴データ14aを用いて、抽出された感性表現に対応する状況を表す情報を推定する(ステップS2)。例えば、推定部15cは、発話者の感性表現が入力された場合に、対応する状況を表す情報を出力するように、履歴データ14aを用いて学習された第1のモデル14bを用いて、抽出された感性表現に対応する状況を表す情報を推定する。 Next, the estimation unit 15c uses the history data 14a to estimate information representing the situation corresponding to the extracted sensibility expression (step S2). For example, the estimation unit 15c estimates this information by using the first model 14b, which has been trained using the history data 14a so as to output information representing the corresponding situation when the speaker's sensibility expression is input.
 つまり、推定部15cは、発話者が過去にどのような状況でどのような発話をしたのかを示す履歴データ14aを用いて学習された第1のモデル14bを用いて、入力された感性表現の単語または単語群について、話者がどのような状況について述べているかを推定する。 In other words, the estimation unit 15c uses the first model 14b, trained using the history data 14a that indicates what the speaker uttered in what situations in the past, to estimate what kind of situation the speaker is describing from the input word or word group of the sensibility expression.
 また、特定部15dは、履歴データ14aを用いて、推定された状況を表す情報に対応する対話者の感性表現を特定する(ステップS3)。例えば、特定部15dは、状況を表す情報が入力された場合に、該状況における対話者の感性表現を出力するように、履歴データ14aを用いて学習された第2のモデル14cを用いて、推定された状況を表す情報に対応する対話者の感性表現を特定する。 The specification unit 15d then uses the history data 14a to specify the interlocutor's sensibility expression corresponding to the information representing the estimated situation (step S3). For example, the specification unit 15d specifies this expression by using the second model 14c, which has been trained using the history data 14a so as to output the interlocutor's sensibility expression in a situation when information representing that situation is input.
 つまり、特定部15dは、受け手(対話者)が過去にどのような状況でどのような発話をしたのかを示す履歴データ14aを用いて学習された第2のモデル14cを用いて、推定された状況について、受け手の対話者ならどのように言語化するかを単語または単語群で予測する。 In other words, the specification unit 15d uses the second model 14c, trained using the history data 14a that indicates what the recipient (interlocutor) uttered in what situations in the past, to predict, as a word or word group, how the recipient would verbalize the estimated situation.
 そして、変換部15eが、特定された対話者の感性を表す表現を用いて発話文を変換する(ステップS4)。例えば、話者の発話文のうち、感性表現として変換した部分のみを書き換えて、文法が成立するように整形する。すなわち、変換部15eは、特定部15dから出力される感性表現を、元の文脈を損なわないように整形し、対話者に提示する。これにより、一連の感性変換処理が終了する。 Then, the conversion unit 15e converts the uttered sentence using the expression representing the specified interlocutor's sensibility (step S4). For example, it rewrites only the part of the speaker's utterance that was converted as a sensibility expression, and reshapes the sentence so that it remains grammatical. That is, the conversion unit 15e shapes the sensibility expression output from the specification unit 15d so as not to impair the original context, and presents it to the interlocutor. This completes the series of sensibility conversion processes.
[効果]
 以上、説明したように、本実施形態の感性変換装置10において、記憶部14は、発話者および対話者の過去の状況を表す情報と該状況における感性を表す表現とを対応付けた履歴データを記憶する。また、抽出部15bが、発話者による発話文から感性を表す表現を抽出する。また、推定部15cが、履歴データ14aを用いて、抽出された感性を表す表現に対応する状況を表す情報を推定する。また、特定部15dが、履歴データ14aを用いて、推定された状況を表す情報に対応する対話者の感性を表す表現を特定する。
[effect]
As described above, in the sensibility conversion device 10 of the present embodiment, the storage unit 14 stores history data in which information representing past situations of the speaker and the interlocutor is associated with expressions representing sensibilities in the situation. Remember. Further, the extracting unit 15b extracts expressions representing sensibilities from the utterances by the speaker. Furthermore, the estimation unit 15c estimates information representing a situation corresponding to the extracted expression representing the sensitivity, using the history data 14a. Further, the identification unit 15d uses the history data 14a to identify an expression representing the sensibility of the interlocutor that corresponds to the information representing the estimated situation.
 具体的には、推定部15cは、発話者の感性を表す表現が入力された場合に、対応する状況を表す情報を出力するように、履歴データ14aを用いて学習された第1のモデル14bを用いて、抽出された感性を表す表現に対応する状況を表す情報を推定する。また、特定部15dは、状況を表す情報が入力された場合に、該状況における対話者の感性を表す表現を出力するように、履歴データ14aを用いて学習された第2のモデル14cを用いて、推定された状況を表す情報に対応する対話者の感性を表す表現を特定する。 Specifically, the estimation unit 15c estimates information representing the situation corresponding to the extracted expression representing the sensibility by using the first model 14b, which has been trained using the history data 14a so as to output information representing the corresponding situation when an expression representing the speaker's sensibility is input. The specification unit 15d specifies the expression representing the interlocutor's sensibility that corresponds to the information representing the estimated situation by using the second model 14c, which has been trained using the history data 14a so as to output an expression representing the interlocutor's sensibility in a situation when information representing that situation is input.
 このように、感性変換装置10は、対話の発話者・対話者の経験や、似た感性を持つペルソナの経験のデータと、将棋のような勝負事におけるAIを用いた評価値等の現在の状況を表す情報を用いて、発話者が表現した感性について、対話者が同じように感じられるような表現に変換する。 In this way, the sensibility conversion device 10 collects data on the experience of the speaker/interlocutor in a dialogue, the experience of personas with similar sensibilities, and the current situation such as evaluation values using AI in competitive games such as shogi. The sensibility expressed by the speaker is converted into an expression that allows the interlocutor to feel the same way.
This allows the interlocutor to grasp more accurately what the speaker feels about the same state, making communication possible without gaps in understanding or recognition. For example, when devising a strategy to win a match, aligning each side's perception of the game state puts everyone on the same page and makes communication more efficient. Likewise, if the person responsible for team strategy accurately captures the player's verbal expressions and faithfully reflects the player's sensibilities in the strategy, the team's winning rate improves. Thus, according to the sensibility conversion device 10, it is possible to eliminate gaps in understanding and recognition between the speaker and the interlocutor in conversational communication.
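As a concrete illustration (not part of the patent disclosure itself), the two-stage mapping described above can be sketched in a few lines. Here a plain dictionary lookup stands in for the trained models 14b and 14c, and all expressions and evaluation values are hypothetical:

```python
# Hypothetical sketch of the extract -> estimate -> identify pipeline:
# history data pairs each person's expressions with situation information
# (here, a single evaluation value). A dictionary lookup stands in for
# the trained first model 14b and second model 14c.

speaker_history = {          # expression -> situation (evaluation value)
    "hopeless": -800,
    "tough": -300,
    "even": 0,
}
partner_history = {          # situation (evaluation value) -> expression
    -800: "still playable",
    -300: "slightly bad",
    0: "balanced",
}

def estimate_situation(expression):
    """Estimation step: speaker's expression -> situation information."""
    return speaker_history[expression]

def identify_expression(situation):
    """Identification step: situation information -> interlocutor's expression."""
    # Use the history entry whose situation is closest to the estimate.
    key = min(partner_history, key=lambda s: abs(s - situation))
    return partner_history[key]

def convert(expression):
    return identify_expression(estimate_situation(expression))

print(convert("hopeless"))   # -> still playable
```

The same situation (an evaluation value of -800) is thus re-expressed in the interlocutor's own terms, which is the core idea of the conversion.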
The information representing the situation may be information that changes over time, or information that changes in response to a trigger that alters the situation. This allows the sensibility conversion device 10 to convert sensibility expressions even as the situation changes.
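One hypothetical way to hold such changing situation information is a time-series log in which each trigger (for example, a move in a game) appends a new evaluation value; the names and values below are illustrative, not taken from the patent:

```python
# Hypothetical sketch: situation information kept as a time series, where
# each trigger (e.g. a move) appends a new evaluation value.
from dataclasses import dataclass, field

@dataclass
class SituationLog:
    entries: list = field(default_factory=list)  # (trigger, value) pairs

    def update(self, trigger, value):
        """Record a new situation value caused by the given trigger."""
        self.entries.append((trigger, value))

    def current(self):
        """Return the most recent situation value, or None if empty."""
        return self.entries[-1][1] if self.entries else None

log = SituationLog()
log.update("move 24: bishop exchange", -120)
log.update("move 31: rook drop", -450)
print(log.current())  # -> -450
```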
An evaluation value obtained by AI may also be used as the information representing the situation. Using an objective evaluation value, such as one produced by AI, expresses the situation concisely. Moreover, wherever an AI evaluation value is available, sensibilities can be converted even across different scenes: if AI evaluation values are available for both shogi and a sport, for example, words spoken during a shogi game can be converted into words spoken during a sports match.
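The cross-scene idea can be sketched as follows (a hypothetical illustration, not from the patent): if both domains expose an AI evaluation value, mapping each onto a shared scale such as win probability lets an expression grounded in one domain index history data from another. The logistic slope constant and the history entries here are invented for the example:

```python
# Hypothetical sketch: a shogi engine score and a sports game state are
# both reduced to a win probability, so shogi situations can look up
# expressions from sports history. The 1/600 slope is an illustrative
# constant, not a value from the patent.
import math

def shogi_eval_to_winrate(cp):
    """Map a shogi engine score to a 0..1 win probability (logistic fit)."""
    return 1.0 / (1.0 + math.exp(-cp / 600.0))

# Sports history keyed by win-probability bands (illustrative).
sports_history = [
    (0.2, "we're on the ropes"),
    (0.5, "anyone's game"),
    (0.8, "we've got this"),
]

def expression_for(winrate):
    """Return the expression whose band is closest to the given win rate."""
    return min(sports_history, key=lambda e: abs(e[0] - winrate))[1]

print(expression_for(shogi_eval_to_winrate(-700)))  # -> we're on the ropes
```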
The conversion unit 15e converts the uttered sentence using the identified expression representing the interlocutor's sensibility. This lets the interlocutor accurately grasp the speaker's sensibility expression and eliminates gaps in understanding and recognition with the speaker.
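In its simplest hypothetical form, this conversion step substitutes the extracted expression in the original utterance with the interlocutor's expression; more elaborate rewriting is of course possible:

```python
# Hypothetical sketch of the conversion step: the extracted sensibility
# expression is replaced in the utterance by the interlocutor's expression.
def convert_utterance(utterance, extracted, partner_expression):
    """Replace the speaker's sensibility expression with the partner's."""
    return utterance.replace(extracted, partner_expression)

print(convert_utterance("This position is hopeless.",
                        "hopeless", "still playable"))
# -> This position is still playable.
```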
[Program]
A program can also be created in which the processing executed by the sensibility conversion device 10 according to the above embodiment is written in a computer-executable language. In one embodiment, the sensibility conversion device 10 can be implemented by installing a sensibility conversion program that executes the above sensibility conversion processing on a desired computer, as packaged or online software. For example, by causing an information processing device to execute the sensibility conversion program, the information processing device can be made to function as the sensibility conversion device 10. Such information processing devices include smartphones, mobile communication terminals such as mobile phones and PHS (Personal Handyphone System) devices, and slate terminals such as PDAs (Personal Digital Assistants). The functions of the sensibility conversion device 10 may also be implemented on a cloud server.
FIG. 5 is a diagram showing an example of a computer that executes the sensibility conversion program. The computer 1000 includes, for example, a memory 1010, a CPU 1020, a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. These components are connected by a bus 1080.
The memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM 1012. The ROM 1011 stores a boot program such as a BIOS (Basic Input Output System). The hard disk drive interface 1030 is connected to a hard disk drive 1031. The disk drive interface 1040 is connected to a disk drive 1041, into which a removable storage medium such as a magnetic disk or an optical disk is inserted. A mouse 1051 and a keyboard 1052, for example, are connected to the serial port interface 1050, and a display 1061, for example, is connected to the video adapter 1060.
The hard disk drive 1031 stores, for example, an OS 1091, application programs 1092, program modules 1093, and program data 1094. The pieces of information described in the above embodiment are stored in, for example, the hard disk drive 1031 or the memory 1010.
The sensibility conversion program is stored in the hard disk drive 1031 as, for example, a program module 1093 in which instructions to be executed by the computer 1000 are written. Specifically, a program module 1093 describing each process executed by the sensibility conversion device 10 of the above embodiment is stored in the hard disk drive 1031.
Data used for information processing by the sensibility conversion program is stored as program data 1094 in, for example, the hard disk drive 1031. The CPU 1020 reads the program module 1093 and the program data 1094 from the hard disk drive 1031 into the RAM 1012 as needed and executes each of the procedures described above.
The program module 1093 and the program data 1094 relating to the sensibility conversion program need not be stored in the hard disk drive 1031; for example, they may be stored on a removable storage medium and read by the CPU 1020 via the disk drive 1041 or the like. Alternatively, they may be stored in another computer connected via a network such as a LAN (Local Area Network) or a WAN (Wide Area Network) and read by the CPU 1020 via the network interface 1070.
Although embodiments applying the invention made by the present inventors have been described above, the present invention is not limited by the description and drawings forming part of this disclosure. All other embodiments, examples, operational techniques, and the like devised by those skilled in the art on the basis of these embodiments fall within the scope of the present invention.
For example, although the above embodiment used a game as an example to clarify the problem, any communication with a history that involves consensus building can be modeled in the same framework. Instead of AI, the degree of agreement with a company's decision-making policy, the selection tendencies of people in general, and the like can be used.
10 Sensibility conversion device
11 Input unit
12 Output unit
13 Communication control unit
14 Storage unit
14a History data
14b First model
14c Second model
15 Control unit
15a Acquisition unit
15b Extraction unit
15c Estimation unit
15d Identification unit
15e Conversion unit

Claims (8)

  1.  A sensibility conversion method executed by a sensibility conversion device, wherein
     the sensibility conversion device has a storage unit that stores history data in which information representing past situations of a speaker and an interlocutor is associated with expressions representing sensibilities in those situations, the method comprising:
     an extraction step of extracting an expression representing a sensibility from a sentence uttered by the speaker;
     an estimation step of estimating, using the history data, information representing a situation corresponding to the extracted expression representing the sensibility; and
     a specifying step of specifying, using the history data, an expression representing the interlocutor's sensibility corresponding to the estimated information representing the situation.
  2.  The sensibility conversion method according to claim 1, wherein
     the estimation step estimates the information representing the situation corresponding to the extracted expression by using a first model trained on the history data to output information representing the corresponding situation when an expression representing the speaker's sensibility is input, and
     the specifying step specifies the expression representing the interlocutor's sensibility corresponding to the estimated information by using a second model trained on the history data to output an expression representing the interlocutor's sensibility in a situation when information representing that situation is input.
  3.  The sensibility conversion method according to claim 1, wherein the information representing the situation is information that changes over time.
  4.  The sensibility conversion method according to claim 1, wherein the information representing the situation is information that changes in response to a trigger that alters the situation.
  5.  The sensibility conversion method according to claim 1, wherein an evaluation value obtained by AI is used as the information representing the situation.
  6.  The sensibility conversion method according to claim 1, further comprising a conversion step of converting the uttered sentence using the specified expression representing the interlocutor's sensibility.
  7.  A sensibility conversion device comprising:
     a storage unit that stores history data in which information representing past situations of a speaker and an interlocutor is associated with expressions representing sensibilities in those situations;
     an extraction unit that extracts an expression representing a sensibility from a sentence uttered by the speaker;
     an estimation unit that estimates, using the history data, information representing a situation corresponding to the extracted expression representing the sensibility; and
     a specifying unit that specifies, using the history data, an expression representing the interlocutor's sensibility corresponding to the estimated information representing the situation.
  8.  A sensibility conversion program for causing a computer to execute the sensibility conversion method according to any one of claims 1 to 6.
PCT/JP2022/029208 2022-07-28 2022-07-28 Sensitivity transformation method, sensitivity transformation device, and sensitivity transformation program WO2024024065A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/029208 WO2024024065A1 (en) 2022-07-28 2022-07-28 Sensitivity transformation method, sensitivity transformation device, and sensitivity transformation program

Publications (1)

Publication Number Publication Date
WO2024024065A1 true WO2024024065A1 (en) 2024-02-01

Family

ID=89705859


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06309355A (en) * 1993-04-21 1994-11-04 Olympus Optical Co Ltd Sensibility expression translation system
JP2021196462A (en) * 2020-06-12 2021-12-27 パーソルワークスデザイン株式会社 Telephone answering device



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22953154

Country of ref document: EP

Kind code of ref document: A1