WO2024089858A1 - Presentation device, presentation method, and presentation program - Google Patents

Presentation device, presentation method, and presentation program

Info

Publication number
WO2024089858A1
WO2024089858A1 (PCT/JP2022/040253)
Authority
WO
WIPO (PCT)
Prior art keywords
risk
presentation
information
word
value
Prior art date
Application number
PCT/JP2022/040253
Other languages
French (fr)
Japanese (ja)
Inventor
涼平 西條
巌樹 戸嶋
翔平 松尾
愛 中根
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to PCT/JP2022/040253
Publication of WO2024089858A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/253 Grammatical analysis; Style critique

Definitions

  • the present invention relates to a presentation device, a presentation method, and a presentation program.
  • the present invention has been made in consideration of the above, and aims to quantitatively analyze the risks inherent in expressions used in language-mediated communication.
  • the presentation device is characterized by having a memory unit that stores information regarding the risk of language trouble for each word that constitutes the dialogue, and a calculation unit that refers to the memory unit and calculates a risk value that represents the magnitude of the risk for each word that constitutes the input dialogue.
  • the present invention makes it possible to quantitatively analyze the risks inherent in expressions used in language-mediated communication.
  • FIG. 1 is a schematic diagram illustrating a schematic configuration of a presentation device according to the present embodiment.
  • FIG. 2 is a diagram illustrating an example of the data structure of the risk expression dictionary.
  • FIG. 3 is a diagram for explaining the risk expression dictionary.
  • FIG. 4 is a diagram illustrating an example of the data structure of the relationship DB.
  • FIG. 5 is a diagram illustrating an example of the data configuration of the social attribute DB.
  • FIG. 6 is a diagram illustrating an example of the data configuration of the individual characteristic DB.
  • FIG. 7 is a diagram illustrating an example of the data configuration of the communication history DB.
  • FIG. 8 is a diagram for explaining the process of the calculation unit.
  • FIG. 9 is a diagram showing an example of a screen display of the presentation process result.
  • FIG. 10 is a flowchart showing the procedure of the presentation process.
  • FIG. 11 is a diagram illustrating an example of a computer that executes a presentation program.
  • FIG. 1 is a schematic diagram illustrating a schematic configuration of a presentation device according to the present embodiment.
  • a presentation device 10 according to the present embodiment is realized by a general-purpose computer such as a personal computer, and includes an input unit 11, an output unit 12, a communication control unit 13, a storage unit 14, and a control unit 15.
  • the input unit 11 is realized using input devices such as a keyboard, mouse, camera, microphone, etc., and inputs various instruction information such as starting processing to the control unit 15 in response to input operations by an operator.
  • the output unit 12 is realized by a display device such as a liquid crystal display, a printing device such as a printer, etc. For example, the output unit 12 displays the results of the presentation processing described below.
  • the communication control unit 13 is realized by a NIC (Network Interface Card) or the like, and controls communication between the control unit 15 and external devices via telecommunication lines such as a LAN (Local Area Network) or the Internet.
  • the communication control unit 13 controls communication between the control unit 15 and an external management device that manages various types of information.
  • the storage unit 14 is realized by a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk.
  • the storage unit 14 stores in advance the processing program that operates the presentation device 10 and data used during execution of the processing program, or stores it temporarily each time processing is performed.
  • the storage unit 14 may be configured to communicate with the control unit 15 via the communication control unit 13.
  • the storage unit 14 stores a risk expression dictionary 14a.
  • the risk expression dictionary 14a includes information about the risk of language trouble for each word that constitutes the dialogue.
  • the risk expression dictionary 14a includes information about one or more risks, such as the part of speech of each word, the number of meanings indicated by each word, or the ambiguity of the meaning indicated by each word. This information is collected prior to or during the presentation process described below, via the input unit 11 or from a management device that manages various information, and is stored in the storage unit 14.
  • FIG. 2 is a diagram illustrating an example of the data structure of the risk expression dictionary.
  • FIG. 3 is a diagram for explaining the risk expression dictionary.
  • the risk expression dictionary 14a includes information items for each word, such as part of speech, meaning, number of meanings, degree of danger, risk type, etc.
  • the number of meanings is the number of different meanings for the same word.
  • the number of homonyms may be added.
  • the risk level is a value set by the user for each word, taking into account, for example, the part of speech, risk type, relevance to business, etc.
  • the risk level may be a value based on a predefined classification table of ambiguous words, etc.
  • the risk type is, for example, a value ranging from 1 to 7 indicating each class of ambiguity classification.
  • FIG. 3 shows an example of 7 classes of ambiguity classification.
  • a risk type of 1 is set for a word with multiple meanings.
  • the risk expression dictionary 14a may also include information items that change dynamically during use, such as the importance of each word in a sentence. For example, for words with high risk values calculated in the presentation process described below, if the importance is not high, it may be possible to set the risk value to be calculated as low.
  • the risk expression dictionary 14a is not limited to information items for each word, but may be information items for each group of expressions consisting of multiple words.
  • expressions may include verbal expressions of facial expressions, etc.
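  • purely as an illustration of the dictionary entries described above, one way such an entry could be represented is sketched below; the field names and the sample entry are assumptions, not taken from the publication.

```python
from dataclasses import dataclass

@dataclass
class RiskDictionaryEntry:
    """One entry of the risk expression dictionary (14a); field names are illustrative."""
    expression: str          # a word or a group of words forming one expression
    part_of_speech: str      # e.g. "adjective", "adverb", "demonstrative"
    meanings: list[str]      # distinct meanings of the expression
    num_meanings: int        # number of different meanings (homonyms may be added)
    risk_level: int          # user-set degree of danger, e.g. 1 to 3
    risk_type: int           # ambiguity class, e.g. 1 to 7 (1 = multiple meanings)
    importance: float = 1.0  # optional item that may change dynamically during use

# Hypothetical example entry
entry = RiskDictionaryEntry(
    expression="soon", part_of_speech="adverb",
    meanings=["within minutes", "within days"],
    num_meanings=2, risk_level=3, risk_type=1,
)
```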
  • the storage unit 14 also stores a relationship DB (Data Base) 14b, a social attribute DB 14c, an individual characteristic DB 14d, a communication history DB 14e, etc.
  • the relationship DB (Data Base) 14b includes information about the relationship between speakers.
  • the social attribute DB 14c includes information that represents the social attributes of the speaker and the interlocutor.
  • the individual characteristic DB 14d includes information that represents the speaker's characteristics related to utterances.
  • the communication history DB 14e also includes dialogue history. This information is collected prior to or during the presentation process described below via the input unit 11 or from a management device that manages various information, and is stored in the storage unit 14.
  • FIG. 4 is a diagram illustrating an example of the data structure of the relationship DB.
  • the relationship DB 14b includes, for each user, information items that represent the relationship with a specific other user, such as a relationship score, relationship information, and non-work-related interactions.
  • FIG. 4 illustrates the relationship DB 14b of a user with a user ID of 002.
  • the relationship score ranges from 0.0 to 1.0, with the closer the score is to 1, the closer the relationship is, and the closer the score is to 0, the more distant the relationship is.
  • the relationship score may be calculated using the frequency of interactions in the communication history DB 14e, which will be described later.
  • the relationship score may be set with reference to the job titles in the social attribute DB 14c, which will be described later, to 0.5 for the same job title, and 0.1 for jobs with greater distance between them, such as between the president and an ordinary employee.
  • the relationship score may be calculated comprehensively with reference to the information items in the communication history DB 14e, social attribute DB 14c, and personal characteristic DB 14d, which will be described later, or may be calculated from compatibility using the company's corporate culture, the person's personality evaluation score, such as the Big 5, etc.
  • Relationship information indicates the business relationship between a user and a specific other user, such as a relationship with a superior in the same department. Non-work interactions indicate whether or not there is interaction outside of work.
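  • purely as an illustration of the relationship-score heuristics mentioned above (0.5 for the same job title, 0.1 for distant job titles), a sketch under assumed job-title ranks could look like this:

```python
# Illustrative only: job-title distance heuristic for the relationship score (0.0 to 1.0).
TITLE_RANK = {"president": 4, "department head": 3, "manager": 2, "ordinary employee": 0}  # assumed ranks

def relationship_score(title_a: str, title_b: str) -> float:
    """Closer to 1.0 means a closer relationship; closer to 0.0 means a more distant one."""
    distance = abs(TITLE_RANK.get(title_a, 0) - TITLE_RANK.get(title_b, 0))
    if distance == 0:
        return 0.5   # same job title
    if distance >= 3:
        return 0.1   # e.g. president vs. ordinary employee
    return 0.3       # assumed intermediate value for moderately distant titles

print(relationship_score("president", "ordinary employee"))  # 0.1
```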
  • FIG. 5 is a diagram illustrating an example of the data structure of the social attribute DB.
  • the social attribute DB 14c includes affiliation, job title, years of employment, etc. as information items that represent the social attributes, i.e., social status, of the speaker and interlocutor.
  • the social attribute DB 14c is set for each conversation topic or situation.
  • FIG. 5 illustrates the social attribute DB 14c for business topics.
  • FIG. 6 is a diagram illustrating an example of the data structure of the personal characteristic DB.
  • the personal characteristic DB 14d includes, for each user, information items that represent the speaker's characteristics regarding each utterance, such as the frequency, the degree of agreement of the usage situation, frequently occurring topics, and an evaluation of risk values.
  • the frequency is a value between 0.0 and 1.0 that indicates how often the statement occurs.
  • the degree of agreement of the usage situation is a value between 0.0 and 1.0 that indicates the consistency of the topics in which the statement occurs.
  • the risk value evaluation is a value between 0.0 and 1.0 that indicates the user's self-evaluation of the history of risk values calculated for the word, with a higher value indicating a higher evaluation that the risk value is appropriate.
  • FIG. 7 is a diagram illustrating an example of the data structure of the communication history DB.
  • the communication history DB 14e represents a dialogue history, and for each utterance, includes information items such as the utterance ID, user ID, time, utterance content, risk value, topic, dialogue partner ID, and miscommunication record.
  • the risk value is a risk value previously calculated by the presentation process described below.
  • the topic is a label that indicates an overview of the conversation in which the comment was made, and an appropriate social attribute DB 14c is selected based on this.
  • the miscommunication record indicates whether or not there is a record of any trouble resulting from the comment.
  • the communication history DB 14e may also include non-verbal communication expressions such as facial expressions based on video data, etc.
  • control unit 15 is realized using a CPU (Central Processing Unit) or the like, and executes a processing program stored in memory.
  • the control unit 15 functions as an acquisition unit 15a, a calculation unit 15b, and a presentation unit 15c, as exemplified in FIG. 1, and executes the presentation process.
  • each of these functional units, or some of them, may be implemented in different hardware.
  • the control unit 15 may also have other functional units.
  • the acquisition unit 15a acquires the words that make up the dialogue to be processed. Specifically, the acquisition unit 15a accepts input of the dialogue to be processed via the input unit 11 or the communication control unit 13, and performs morphological analysis to extract the words that make up the dialogue.
  • the acquisition unit 15a may store the words constituting the acquired dialogue in the storage unit 14 prior to the presentation process described below. Alternatively, the acquisition unit 15a may immediately transfer this information to the calculation unit 15b.
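  • a minimal sketch of this acquisition step is shown below; the whitespace/punctuation split is only a stand-in for a real morphological analyzer (for Japanese, a tool such as MeCab or Janome would typically be used), and the function name is an assumption.

```python
import re

def acquire_words(dialogue_text: str) -> list[str]:
    """Extract candidate words from the input dialogue (acquisition unit 15a, step S1).

    A real implementation would run morphological analysis; this simple split on
    whitespace and common punctuation is only an illustrative stand-in.
    """
    return [w for w in re.split(r"[\s、。,.!?]+", dialogue_text) if w]

words = acquire_words("The deadline is soon, so please respond appropriately.")
```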
  • the calculation unit 15b refers to the risk expression dictionary 14a of the storage unit 14 and calculates a risk value that indicates the magnitude of risk of each word that constitutes the input dialogue.
  • the risk value indicates an estimate of the magnitude of risk of language trouble, i.e., miscommunication, occurring for each word that appears in the sentence, or for the entire sentence.
  • the calculation unit 15b calculates the risk value using risk variables derived from risk-related information contained in the risk expression dictionary 14a.
  • the risk variable is derived based on, for example, the part of speech or the risk level of the word. Alternatively, the risk variable is derived based on the number of meanings of the word. Alternatively, the risk variable is derived based on the risk type. Note that risk variables related to context are not used.
  • the calculation unit 15b may also calculate a risk value for each group of expressions consisting of multiple words, which is a unit of information in the risk expression dictionary 14a.
  • the expressions may include verbal expressions of facial expressions, etc.
  • the calculation unit 15b calculates a risk value using the product of multiple pieces of risk-related information that have been weighted in a predetermined manner. Specifically, the calculation unit 15b calculates a risk value r_i of the i-th word in the sentence using the following formula (1).
  • the risk variable p_i1 based on the part of speech of the word and the risk level takes three values from 1 to 3, with a larger value indicating a higher risk.
  • if the part of speech is a modifier such as an adjective or adverb, the word is ambiguous and risky, so the variable is set to 3, while a demonstrative is set to 2, and the rest are set to 1.
  • alternatively, the risk level may be applied as is, or a value based on the part of speech and the value of the risk level may be used in combination.
  • the risk variable p_i2 based on the number of meanings of a word may be, for example, the value registered in the risk expression dictionary 14a as is, or may be a value classified into about three levels, similar to the risk variable p_i1.
  • the risk variable p_i3 based on the risk type of the word may be set to a value by a user who is familiar with the business situation, for example.
  • the weighting variables w_1 to w_3 and the risk variables p_i1 to p_i3 are adjusted so that the risk value r_i takes a value greater than 0.0.
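  • formula (1) itself is published as an image and is not reproduced here; the sketch below implements the weighted product described above, with each weighted term offset by 1 (as in the example of FIG. 8(3)) so that a zero weight does not zero out the result. The exact functional form is an assumption.

```python
def risk_value_product(p: list[float], w: list[float]) -> float:
    """Product-form risk value from risk variables p_i1..p_i3 and weights w_1..w_3 (sketch of formula (1))."""
    r = 1.0
    for p_k, w_k in zip(p, w):
        r *= 1.0 + w_k * p_k   # offset by 1 so that w_k = 0 suppresses, but does not zero, the term
    return r

# Hypothetical values: part-of-speech variable suppressed (w_1 = 0), as in FIG. 8(3)
r_i = risk_value_product(p=[3.0, 2.0, 3.0], w=[0.0, 0.5, 0.5])
```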
  • FIG. 8 is a diagram for explaining the processing of the calculation unit.
  • FIG. 8(1) illustrates risk levels defined in three stages, levels 1 to 3, as the risk variable p_i3 based on the risk type of a word.
  • FIG. 8(2) illustrates the risk variable p_i2 based on the number of meanings of a word.
  • FIG. 8(3) shows an example of calculating the risk value r_i using the part of speech p_i1, the number of meanings p_i2, and the risk level p_i3 of the i-th word.
  • the weight variable 1 (w_1) of the part of speech p_i1 is set to 0, and the risk value r_i is calculated using the number of meanings p_i2 and the risk level p_i3 while suppressing the influence of the part of speech p_i1.
  • the calculation unit 15b may calculate the risk value by using the sum of multiple pieces of risk-related information that have been weighted in a predetermined manner. For example, the calculation unit 15b calculates the risk value r_i of the i-th word in the sentence by the following formula (2).
  • each of the risk variables p_i1 to p_i3 is normalized to a value between 0.0 and 1.0, and the weight variables w_1 to w_3 are set to values in the range of 0.0 to 1.0.
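  • for the sum form of formula (2), a corresponding sketch is shown below; the equation itself is an image in the publication, so the exact form is an assumption, and the weights are assumed to be chosen so that the result stays within 0.0 to 1.0 as stated above.

```python
def risk_value_sum(p: list[float], w: list[float]) -> float:
    """Sum-form risk value from normalized risk variables and weights (sketch of formula (2))."""
    assert all(0.0 <= x <= 1.0 for x in p + w), "variables and weights are normalized to [0.0, 1.0]"
    return sum(w_k * p_k for p_k, w_k in zip(p, w))

r_i = risk_value_sum(p=[0.8, 0.5, 0.9], w=[0.1, 0.3, 0.4])  # hypothetical normalized values
```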
  • the calculation unit 15b may further calculate the risk value using one or more of information regarding the relationship between the speakers, information representing the social attributes of the speaker and the interlocutor, information representing the characteristics of the speaker, or the dialogue history. For example, the calculation unit 15b calculates the risk value by referring to the relationship DB 14b, the social attribute DB 14c, the personal characteristic DB 14d, or the communication history DB 14e. This makes it possible to calculate a risk value taking into account the context.
  • the calculation unit 15b calculates the risk value r_i of the i-th word in the sentence, for example, by the following formula (3).
  • the risk variables p_i1 to p_i3 are the same as those in formula (1) above.
  • the weight variables w_1 to w_7 are set to values in the range of 0.0 to 1.0.
  • the weight variables w_1 to w_7 and the risk variables p_i1 to p_i7 are adjusted so that the calculated risk value r_i is a value greater than 0.0.
  • the risk variable p_i4 based on the relationship between users may be, for example, a value obtained by subtracting the relationship score in the relationship DB 14b from 1, or may be the product of that value and a factor that is 2 when there is non-work-related interaction and 1 when there is not.
  • the risk variable p_i4 becomes larger the weaker the relationship is, and the risk value becomes larger accordingly.
  • the risk variable p_i5 based on social attributes may be, for example, the difference between users in years of employment in the social attribute DB 14c, or job titles may be quantified with larger values for higher-ranking positions and the quantified difference in job titles between the users in the conversation may be used. Calculated in this way, the risk variable p_i5 becomes larger the greater the difference in positions within the organization, such as years of employment or job title, and the risk value becomes larger accordingly.
  • the product or the sum of the risk variable p_i4 and the risk variable p_i5 calculated as described above may be used in place of the individual risk variables p_i4 and p_i5.
  • the risk variable p_i6 based on the personal characteristics may be, for example, a value obtained by subtracting the degree of agreement of the usage situation in the personal characteristic DB 14d from 1, or may be the reciprocal of the degree of agreement of the usage situation.
  • the risk variable p_i6 may also be the frequency in the personal characteristic DB 14d, or the product of the frequency and the value obtained by subtracting the degree of agreement of the usage situation from 1.
  • the risk variable p_i7 based on the dialogue history may be, for example, a risk value in the communication history DB 14e, or may be the product of that risk value and a value obtained by quantifying the miscommunication record (2 if there is a record, 1 if there is not).
  • the risk variable p_i7 may also be set as a value obtained by quantifying the relationship between the topic and the business, taking a larger value the more closely the topic relates to the business. For example, confidential information related to corporate strategy is set to 1, general internal information to 0.5, and public information related to recreational events after work to 0.1.
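  • the contextual risk variables p_i4 to p_i7 described above could be derived from the DBs roughly as follows; this is a sketch under assumed field names, following the derivations given in the text.

```python
def p4_relationship(relationship_score: float, non_work_interaction: bool) -> float:
    """p_i4: weaker relationships give a larger value (1 minus the relationship score),
    optionally multiplied by 2 when non-work-related interaction is recorded."""
    return (1.0 - relationship_score) * (2.0 if non_work_interaction else 1.0)

def p5_social(years_a: int, years_b: int) -> float:
    """p_i5: larger differences in years of employment (or quantified job title) give a larger value."""
    return float(abs(years_a - years_b))

def p6_personal(usage_agreement: float) -> float:
    """p_i6: 1 minus the degree of agreement of the usage situation (its reciprocal is another option)."""
    return 1.0 - usage_agreement

def p7_history(past_risk_value: float, miscommunication_recorded: bool) -> float:
    """p_i7: a past risk value, doubled when a miscommunication record exists."""
    return past_risk_value * (2.0 if miscommunication_recorded else 1.0)
```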
  • the calculation unit 15b may calculate the risk value r_i of the i-th word in the sentence using the following formula (4).
  • the risk variables p_i1 to p_i7 are values normalized to 0.0 to 1.0, similar to the above formula (3), and the weight variables w_1 to w_7 are values in the range of 0.0 to 1.0.
  • the values of the risk variables p_i1 to p_i7 are not limited to those mentioned above.
  • the following values may be applied by additionally registering predetermined information items in each piece of information (14a to 14e) in the storage unit 14.
  • the risk variables p_i1 to p_i3 using the risk expression dictionary 14a may be the degree of occurrence of miscommunication caused by the word, the degree of danger when miscommunication occurs due to the word, the frequency of occurrence of the word in general conversation, and the like.
  • the risk variable p_i4 using the relationship DB 14b or the risk variable p_i5 using the social attribute DB 14c may be applied.
  • as the risk variable p_i6 using the individual characteristic DB 14d, the speaker's background, attributes, evaluation of risk values calculated for past statements, and the like may be applied.
  • the method of calculating the risk value is not limited to using each piece of information (14a to 14e) in the storage unit 14 described above.
  • the risk value may be calculated by using the following pieces of information that can be collected during a conversation in combination:
  • the risk value may be calculated by combining the analysis results of biological signals, such as facial expressions, electroencephalograms, and gaze, with the content of statements in the communication history DB 14e.
  • an evaluation value of medically or objectively observed changes in the physical condition of the persons involved may be additionally applied as a risk variable.
  • the seriousness of a problem that occurs after a conversation may be fed back and reflected in the risk level or importance in the risk expression dictionary 14a, or in the risk value in the communication history DB 14e.
  • the seriousness of such a problem may also be subjectively evaluated by the user and reflected in the risk level or importance in the risk expression dictionary 14a.
  • the weight variables w_1 to w_7 are variables that adjust the influence of each risk variable, and are set according to the topic and participants of the conversation that is the subject of the risk value calculation process. For example, if there is a participant who is not good at guessing the meaning of ambiguous statements, the values of the weight variables w_1 to w_3 are increased so that the values in the risk expression dictionary 14a are strongly reflected in the risk value.
  • similarly, to strongly reflect frequently appearing expressions or expressions closely related to the business in the risk value, the weighting variable w_7 for the value in the communication history DB 14e is increased.
  • likewise, the values of the weighting variables w_4 and w_6 may be increased so that the risk variable p_i4 based on the relationship DB 14b and the risk variable p_i6 based on the personal characteristic DB 14d are more strongly reflected in the risk value.
  • the weighting variables w_1 to w_7 may be dynamically adjusted based on a judgment of the quality of consensus reached in the most recent dialogue or discussion, a measurement of satisfaction with past dialogues, a measurement of proficiency with the topic that is the subject of the conversation, and the like.
  • the risk value of an entire sentence or a group of expressions consisting of multiple words can also be calculated using the risk value of each word calculated by the above formulas (1) to (4).
  • the average value of the risk values r_i calculated for the words contained in each sentence may be used as the risk value of the sentence.
  • the risk value of the sentence may be determined by calculating a weighted average of the risk values r_1 to r_N, weighted based on the risk level or importance in the risk expression dictionary 14a or on values set in advance.
  • the sum or product of the risk values r_i may be used.
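  • a sketch of aggregating the per-word risk values r_1 to r_N into a sentence-level risk value, covering the average, weighted-average, sum, and product options mentioned above (function and parameter names are assumptions):

```python
from math import prod
from typing import Optional

def sentence_risk(word_risks: list[float], weights: Optional[list[float]] = None,
                  method: str = "mean") -> float:
    """Aggregate per-word risk values r_1..r_N into a single sentence-level risk value."""
    if method == "mean":
        return sum(word_risks) / len(word_risks)
    if method == "weighted_mean":               # weights from risk level, importance, or preset values
        assert weights is not None and len(weights) == len(word_risks)
        return sum(r * w for r, w in zip(word_risks, weights)) / sum(weights)
    if method == "sum":
        return sum(word_risks)
    if method == "product":
        return prod(word_risks)
    raise ValueError(f"unknown aggregation method: {method}")
```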
  • the presentation unit 15c presents the presentation information generated based on the calculated risk value to the user.
  • the presentation unit 15c presents the risk value calculated by the calculation unit 15b, or presentation information such as a message, a notification sound, or a vibration generated based on the risk value, to the user via the output unit 12.
  • FIG. 9 is a diagram showing an example of a screen display of the results of the presentation process.
  • information on the calculated risk value is presented on an online conference tool.
  • the presentation unit 15c displays the text of words with high risk values by highlighting them.
  • the presentation unit 15c changes the color of the text of words with high risk values, underlines them, makes the font larger, makes them bold, or highlights them.
  • the presentation unit 15c displays text with a high risk value in a format that attracts the user's attention, such as by displaying the text in a pop-up.
  • the presentation unit 15c may also display words with a high risk value in a position that is easily noticeable to the user.
  • the presentation unit 15c may present words with a high risk value using audio information or tactile information.
  • for example, the volume of the notification sound for words with a high risk value may be increased; a sound that is likely to make the user feel the need to take action, such as a wrong-answer sound, may be used; vibration may be strengthened; or a vibration frequency or pattern that is likely to attract attention may be used.
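  • purely as an illustration of the highlighting described above, words whose risk value exceeds a threshold could be marked up in a transcript view as follows (the threshold and the HTML markup are assumptions):

```python
def highlight_high_risk(words: list[str], risks: list[float], threshold: float = 0.7) -> str:
    """Wrap words whose risk value is at or above the threshold in emphasis markup."""
    marked = [f"<mark>{w}</mark>" if r >= threshold else w for w, r in zip(words, risks)]
    return " ".join(marked)

print(highlight_high_risk(["respond", "appropriately"], [0.2, 0.9]))
# respond <mark>appropriately</mark>
```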
  • the presentation unit 15c may also present words with high risk values in the form of advice from a CG or image agent placed on the screen.
  • the presentation unit 15c may present the same screen to user 1 and user 2, or different screens.
  • if the presentation unit 15c presents the same screen, both users are more likely to feel that they are on an equal footing, and mutual understanding is promoted.
  • if the presentation unit 15c presents different screens, the promotion of mutual understanding is lessened, but the efficiency of communication is improved because users with better communication skills can engage in the dialogue more flexibly.
  • the presentation unit 15c may adjust the presentation method or cancel the presentation itself.
  • the presentation unit 15c may present in a different format, such as reducing the degree of highlighting or stopping the highlighting for a user who is bothered by the highlighting, or changing the visual information to sound information, in response to a user's operation or in response to the user's characteristics.
  • the way in which the user characteristics are reflected may differ depending on the composition of the conference participants.
  • Fig. 10 is a flowchart showing a procedure of the presentation process.
  • the flowchart in Fig. 10 is started, for example, when a user performs an operation input to instruct the start of the process.
  • the acquisition unit 15a acquires the words that make up the dialogue to be processed (step S1). For example, the acquisition unit 15a accepts input of the dialogue to be processed via the input unit 11 or the communication control unit 13, and performs morphological analysis to extract the words that make up the dialogue.
  • the calculation unit 15b refers to the risk expression dictionary 14a and calculates a risk value that indicates the degree of risk of language trouble for the words that make up the input dialogue (step S2). For example, the calculation unit 15b calculates the risk value using the product of multiple pieces of information that have been weighted in a predetermined manner in the risk expression dictionary 14a. Alternatively, the calculation unit 15b calculates the risk value using the sum of multiple pieces of information that have been weighted in a predetermined manner in the risk expression dictionary 14a.
  • the calculation unit 15b calculates the risk value using one or more pieces of information from the relationship DB 14b, the social attribute DB 14c, the personal characteristic DB 14d, or the communication history DB 14e.
  • the presentation unit 15c also presents the presentation information generated based on the calculated risk value to the user (step S3). For example, the presentation unit 15c presents the risk value calculated by the calculation unit 15b to the user via the output unit 12. This completes the series of presentation processes.
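  • putting steps S1 to S3 together, the presentation process could be sketched as follows; the helper functions reuse the sketches above, the dictionary lookup and the weights are assumptions, and unknown words are simply given a risk value of 0.

```python
def presentation_process(dialogue_text: str, dictionary: dict) -> str:
    """Sketch of the overall flow: S1 acquire words, S2 calculate risk values, S3 present them."""
    words = acquire_words(dialogue_text)                        # S1: acquisition unit 15a
    risks = []
    for word in words:
        entry = dictionary.get(word)                            # RiskDictionaryEntry or None
        if entry is None:
            risks.append(0.0)                                   # assumed: unlisted words carry no risk
            continue
        p = [entry.risk_level, entry.num_meanings, entry.risk_type]   # p_i1..p_i3 (risk level applied as is)
        risks.append(risk_value_product(p, w=[0.3, 0.3, 0.3]))  # S2: calculation unit 15b, assumed weights
    return highlight_high_risk(words, risks, threshold=2.0)     # S3: presentation unit 15c (threshold assumed)

print(presentation_process("please respond soon", {"soon": entry}))
```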
  • in the above embodiment, the risk value based on ambiguous expressions is calculated, but the present invention is not limited to this.
  • the presentation device 10 can calculate a risk value of an expression based on a stereotype instead of or in addition to the risk value based on the ambiguous expression.
  • the calculation unit 15b calculates the risk value using information on the risk based on the stereotype.
  • risk variables p_i1 to p_i3 based on the risk expression dictionary 14a may be risk levels set from the perspective of stereotypes, ambiguous expressions, or risk types set from the perspective of stereotypes.
  • the risk variable p_i4 using the relationship DB 14b or the risk variable p_i5 using the social attribute DB 14c may be applied.
  • the background, attributes, and the like of the speaker may be applied as the risk variable p_i6 using the individual characteristic DB 14d.
  • the calculation unit 15b calculates a risk value based on a stereotype.
  • the storage unit 14 stores the risk expression dictionary 14a, which is information on the risk of language trouble for each word constituting the dialogue.
  • the calculation unit 15b refers to the risk expression dictionary 14a and calculates a risk value indicating the magnitude of risk of each word constituting the input dialogue.
  • the risk expression dictionary 14a stores information related to one or more risks, such as the part of speech of each word, the number of meanings indicated by each word, or the ambiguity of the meaning indicated by each word.
  • the calculation unit 15b also calculates the risk value using the product of information about multiple risks that have been weighted in a predetermined manner. Alternatively, the calculation unit 15b calculates the risk value using the sum of information about multiple risks that have been weighted in a predetermined manner.
  • the calculation unit 15b further calculates the risk value using one or more of the relationship DB 14b including information on the relationship between speakers, the social attribute DB 14c including information representing the social attributes of the speaker and interlocutor, the personal characteristic DB 14d including information representing the speaker's characteristics related to utterances, and the communication history DB 14e including the dialogue history. This enables the presentation device 10 to calculate the risk value with higher accuracy.
  • the calculation unit 15b also calculates the risk value using information about risks based on stereotypes. This also makes it possible to calculate a risk value with higher accuracy.
  • a program in which the process executed by the presentation device 10 according to the above embodiment is written in a language executable by a computer can also be created.
  • the presentation device 10 can be implemented by installing a presentation program that executes the above presentation process as package software or online software on a desired computer.
  • an information processing device can function as the presentation device 10 by executing the above presentation program.
  • the information processing device referred to here includes desktop or notebook personal computers.
  • the information processing device also includes mobile communication terminals such as smartphones, mobile phones, and PHS (Personal Handyphone System), and even slate terminals such as PDA (Personal Digital Assistant).
  • the functions of the presentation device 10 may be implemented on a cloud server.
  • FIG. 11 is a diagram showing an example of a computer that executes a presentation program.
  • the computer 1000 has, for example, a memory 1010, a CPU 1020, a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. These components are connected by a bus 1080.
  • the memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM 1012.
  • the ROM 1011 stores a boot program such as a BIOS (Basic Input Output System).
  • the hard disk drive interface 1030 is connected to a hard disk drive 1031.
  • the disk drive interface 1040 is connected to a disk drive 1041.
  • a removable storage medium such as a magnetic disk or optical disk is inserted into the disk drive 1041.
  • the serial port interface 1050 is connected to a mouse 1051 and a keyboard 1052, for example.
  • the video adapter 1060 is connected to a display 1061, for example.
  • the hard disk drive 1031 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094. Each piece of information described in the above embodiment is stored, for example, in the hard disk drive 1031 or memory 1010.
  • the presentation program is stored in the hard disk drive 1031, for example, as a program module 1093 in which instructions to be executed by the computer 1000 are written.
  • the program module 1093 in which each process executed by the presentation device 10 described in the above embodiment is written is stored in the hard disk drive 1031.
  • data used for information processing by the presentation program is stored as program data 1094, for example, in the hard disk drive 1031.
  • the CPU 1020 reads the program module 1093 and program data 1094 stored in the hard disk drive 1031 into the RAM 1012 as necessary, and executes each of the above-mentioned procedures.
  • the program module 1093 and program data 1094 related to the presentation program are not limited to being stored in the hard disk drive 1031, but may be stored in a removable storage medium, for example, and read by the CPU 1020 via the disk drive 1041 or the like.
  • the program module 1093 and program data 1094 related to the presentation program may be stored in another computer connected via a network, such as a LAN or WAN (Wide Area Network), and read by the CPU 1020 via the network interface 1070.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Provided is a presentation device (10) wherein a storage unit (14) stores a risky expression dictionary (14a) that is information pertaining to the risk of language trouble for each word constituting an interaction text. A calculation unit (15b) refers to the risky expression dictionary (14a) and calculates a risk value representing the degree of risk for each word constituting an interaction text that was input.

Description

Presentation device, presentation method, and presentation program
The present invention relates to a presentation device, a presentation method, and a presentation program.
In verbal communication, ambiguous expressions, overbearing expressions, expressions based on stereotypes, and other expressions that may hinder mutual respect and understanding may be used. Previously, a system has been proposed that instantly transcribes what was said so that users can easily review the expressions used after communication (see Patent Document 1).
Patent Document 1: JP 2020-140629 A
However, with conventional technology, it is difficult to quantitatively analyze the risks inherent in expressions used in language-mediated communication.
The present invention has been made in consideration of the above, and aims to quantitatively analyze the risks inherent in expressions used in language-mediated communication.
In order to solve the above-mentioned problems and achieve the objective, the presentation device according to the present invention is characterized by having a memory unit that stores information regarding the risk of language trouble for each word that constitutes the dialogue, and a calculation unit that refers to the memory unit and calculates a risk value that represents the magnitude of the risk for each word that constitutes the input dialogue.
The present invention makes it possible to quantitatively analyze the risks inherent in expressions used in language-mediated communication.
FIG. 1 is a schematic diagram illustrating a schematic configuration of a presentation device according to the present embodiment. FIG. 2 is a diagram illustrating an example of the data structure of the risk expression dictionary. FIG. 3 is a diagram for explaining the risk expression dictionary. FIG. 4 is a diagram illustrating an example of the data structure of the relationship DB. FIG. 5 is a diagram illustrating an example of the data configuration of the social attribute DB. FIG. 6 is a diagram illustrating an example of the data configuration of the individual characteristic DB. FIG. 7 is a diagram illustrating an example of the data configuration of the communication history DB. FIG. 8 is a diagram for explaining the process of the calculation unit. FIG. 9 is a diagram showing an example of a screen display of the presentation process result. FIG. 10 is a flowchart showing the procedure of the presentation process. FIG. 11 is a diagram illustrating an example of a computer that executes a presentation program.
Below, one embodiment of the present invention will be described in detail with reference to the drawings. Note that the present invention is not limited to this embodiment. In addition, in the drawings, the same parts are denoted by the same reference numerals.
[Configuration of presentation device]
FIG. 1 is a schematic diagram illustrating a schematic configuration of a presentation device according to the present embodiment. As illustrated in FIG. 1, a presentation device 10 according to the present embodiment is realized by a general-purpose computer such as a personal computer, and includes an input unit 11, an output unit 12, a communication control unit 13, a storage unit 14, and a control unit 15.
The input unit 11 is realized using input devices such as a keyboard, mouse, camera, and microphone, and inputs various instruction information such as starting processing to the control unit 15 in response to input operations by an operator. The output unit 12 is realized by a display device such as a liquid crystal display, a printing device such as a printer, or the like. For example, the output unit 12 displays the results of the presentation processing described below.
The communication control unit 13 is realized by a NIC (Network Interface Card) or the like, and controls communication between the control unit 15 and external devices via telecommunication lines such as a LAN (Local Area Network) or the Internet. For example, the communication control unit 13 controls communication between the control unit 15 and an external management device that manages various types of information.
The storage unit 14 is realized by a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk. The storage unit 14 stores in advance the processing program that operates the presentation device 10 and data used during execution of the processing program, or stores them temporarily each time processing is performed. The storage unit 14 may be configured to communicate with the control unit 15 via the communication control unit 13.
In this embodiment, the storage unit 14 stores a risk expression dictionary 14a. The risk expression dictionary 14a includes information about the risk of language trouble for each word that constitutes the dialogue. Specifically, the risk expression dictionary 14a includes information about one or more risks, such as the part of speech of each word, the number of meanings indicated by each word, or the ambiguity of the meaning indicated by each word. This information is collected prior to or during the presentation process described below, via the input unit 11 or from a management device that manages various information, and is stored in the storage unit 14.
Here, FIG. 2 is a diagram illustrating an example of the data structure of the risk expression dictionary, and FIG. 3 is a diagram for explaining the risk expression dictionary. First, as illustrated in FIG. 2, the risk expression dictionary 14a includes information items for each word, such as part of speech, meaning, number of meanings, degree of danger, risk type, etc.
The number of meanings is the number of different meanings for the same word. In addition, the number of homonyms may be added. The risk level is a value set by the user for each word, taking into account, for example, the part of speech, risk type, relevance to business, etc. Alternatively, the risk level may be a value based on a predefined classification table of ambiguous words, etc.
The risk type is, for example, a value ranging from 1 to 7 indicating each class of ambiguity classification. FIG. 3 shows an example of 7 classes of ambiguity classification. For example, a risk type of 1 is set for a word with multiple meanings.
The risk expression dictionary 14a may also include information items that change dynamically during use, such as the importance of each word in a sentence. For example, for words with high risk values calculated in the presentation process described below, if the importance is not high, it may be possible to set the risk value to be calculated as low.
In addition, the risk expression dictionary 14a is not limited to information items for each word, but may be information items for each group of expressions consisting of multiple words. In this case, expressions may include verbal expressions of facial expressions, etc.
The storage unit 14 also stores a relationship DB (Data Base) 14b, a social attribute DB 14c, an individual characteristic DB 14d, a communication history DB 14e, etc. The relationship DB 14b includes information about the relationship between speakers. The social attribute DB 14c includes information that represents the social attributes of the speaker and the interlocutor. The individual characteristic DB 14d includes information that represents the speaker's characteristics related to utterances. The communication history DB 14e includes the dialogue history. This information is collected prior to or during the presentation process described below via the input unit 11 or from a management device that manages various information, and is stored in the storage unit 14.
Here, FIG. 4 is a diagram illustrating an example of the data structure of the relationship DB. As illustrated in FIG. 4, the relationship DB 14b includes, for each user, information items that represent the relationship with a specific other user, such as a relationship score, relationship information, and non-work-related interactions. FIG. 4 illustrates the relationship DB 14b of a user with a user ID of 002.
The relationship score ranges from 0.0 to 1.0; the closer the score is to 1, the closer the relationship, and the closer the score is to 0, the more distant the relationship. For example, since a user with whom one interacts frequently has a closer relationship, the relationship score may be calculated using the frequency of interactions in the communication history DB 14e, which will be described later. Alternatively, the relationship score may be set with reference to the job titles in the social attribute DB 14c, which will be described later, to 0.5 for the same job title, and to 0.1 for job titles with greater distance between them, such as between the president and an ordinary employee. Alternatively, the relationship score may be calculated comprehensively with reference to the information items in the communication history DB 14e, social attribute DB 14c, and personal characteristic DB 14d, which will be described later, or may be calculated from compatibility using the company's corporate culture, the person's personality evaluation score, such as the Big 5, etc.
Relationship information indicates the business relationship between a user and a specific other user, such as a relationship with a superior in the same department. Non-work interactions indicate whether or not there is interaction outside of work.
FIG. 5 is a diagram illustrating an example of the data structure of the social attribute DB. As illustrated in FIG. 5, the social attribute DB 14c includes affiliation, job title, years of employment, etc. as information items that represent the social attributes, i.e., social status, of the speaker and interlocutor. The social attribute DB 14c is set for each conversation topic or situation. FIG. 5 illustrates the social attribute DB 14c for business topics.
FIG. 6 is a diagram illustrating an example of the data structure of the personal characteristic DB. As illustrated in FIG. 6, the personal characteristic DB 14d includes, for each user, information items that represent the speaker's characteristics regarding each utterance, such as frequency, degree of agreement of the usage situation, frequently occurring topics, and risk value evaluation.
The frequency is a value between 0.0 and 1.0 that indicates how often the statement occurs. The degree of agreement of the usage situation is a value between 0.0 and 1.0 that indicates the consistency of the topics in which the statement occurs. The risk value evaluation is a value between 0.0 and 1.0 that indicates the user's self-evaluation of the history of risk values calculated for the word, with a higher value indicating a higher evaluation that the risk value is appropriate.
FIG. 7 is a diagram illustrating an example of the data structure of the communication history DB. As illustrated in FIG. 7, the communication history DB 14e represents a dialogue history, and for each utterance, includes information items such as the utterance ID, user ID, time, utterance content, risk value, topic, dialogue partner ID, and miscommunication record.
The risk value is a risk value previously calculated by the presentation process described below. The topic is a label that indicates an overview of the conversation in which the comment was made, and an appropriate social attribute DB 14c is selected based on this. The miscommunication record indicates whether or not there is a record of any trouble resulting from the comment.
The communication history DB 14e may also include non-verbal communication expressions such as facial expressions based on video data, etc.
Returning to the explanation of FIG. 1, the control unit 15 is realized using a CPU (Central Processing Unit) or the like, and executes a processing program stored in memory. As a result, the control unit 15 functions as an acquisition unit 15a, a calculation unit 15b, and a presentation unit 15c, as illustrated in FIG. 1, and executes the presentation process. Note that each of these functional units, or some of them, may be implemented in different hardware. The control unit 15 may also have other functional units.
The acquisition unit 15a acquires the words that make up the dialogue to be processed. Specifically, the acquisition unit 15a accepts input of the dialogue to be processed via the input unit 11 or the communication control unit 13, and performs morphological analysis to extract the words that make up the dialogue.
The acquisition unit 15a may store the words constituting the acquired dialogue in the storage unit 14 prior to the presentation process described below. Alternatively, the acquisition unit 15a may immediately transfer this information to the calculation unit 15b.
The calculation unit 15b refers to the risk expression dictionary 14a of the storage unit 14 and calculates a risk value that indicates the magnitude of risk of each word that constitutes the input dialogue. Here, the risk value indicates an estimate of the magnitude of risk of language trouble, i.e., miscommunication, occurring for each word that appears in the sentence, or for the entire sentence. The calculation unit 15b calculates the risk value using risk variables derived from risk-related information contained in the risk expression dictionary 14a.
Here, the risk variable is derived based on, for example, the part of speech or the risk level of the word. Alternatively, the risk variable is derived based on the number of meanings of the word. Alternatively, the risk variable is derived based on the risk type. Note that risk variables related to context are not used.
The calculation unit 15b may also calculate a risk value for each group of expressions consisting of multiple words, which is a unit of information in the risk expression dictionary 14a. In this case, the expressions may include verbal expressions of facial expressions, etc.
The calculation unit 15b calculates a risk value using the product of multiple pieces of risk-related information that have been weighted in a predetermined manner. Specifically, the calculation unit 15b calculates the risk value r_i of the i-th word in the sentence using the following formula (1).
[Formula (1): product-form combination of the weighted risk variables w_1 p_i1, w_2 p_i2, and w_3 p_i3; the equation image is not reproduced here]
 ここで、単語の品詞や危険度に基づくリスク変数pi1は、例えば、大きいほどリスクが高いことを示す1~3の3段階の値をとる。例えば、品詞が形容詞や副詞等の修飾語である場合には、曖昧でリスクが高いため3とし、指示語では2、その他は1とする。または、危険度をそのまま適用したり、品詞に基づく値と危険度の値とを組み合わせて用いたりしてもよい。 Here, the risk variable p i1 based on the part of speech of the word and the risk level takes three values from 1 to 3, with the larger value indicating a higher risk. For example, if the part of speech is a modifier such as an adjective or adverb, it is ambiguous and risky, so it is set to 3, while a demonstrative is set to 2, and the rest are set to 1. Alternatively, the risk level may be applied as is, or a value based on the part of speech and a value of the risk level may be used in combination.
 また、単語の意味数に基づくリスク変数pi2は、例えば、リスク表現辞書14aに登録されている値をそのまま適用してもよいし、リスク変数pi1と同様に、3段階程度の値に分類された値でもよい。 In addition, the risk variable p i2 based on the number of meanings of a word may be, for example, a value registered in the risk expression dictionary 14 a as is, or may be a value classified into about three levels, similar to the risk variable p i1 .
 また、単語のリスク種別に基づくリスク変数pi3は、例えば、業務場面に精通したユーザによる値が設定されてもよい。 Furthermore, the risk variable p i3 based on the risk type of the word may be set to a value by a user who is familiar with the business situation, for example.
 また、リスク値rは、0.0より大きい値をとるように、重み変数w~wおよびリスク変数pi1~pi3が調整される。 Furthermore, the weighting variables w 1 to w 3 and the risk variables p i1 to p i3 are adjusted so that the risk value r i takes a value greater than 0.0.
 ここで、図8は、算出部の処理を説明するための図である。図8(1)には、単語のリスク種別に基づくリスク変数pi3として、レベル1~3の3段階で定義された危険度が例示されている。また、図8(2)には、単語の意味数に基づくリスク変数pi2が例示されている。 Here, Fig. 8 is a diagram for explaining the processing of the calculation unit. Fig. 8(1) illustrates risk levels defined in three stages, levels 1 to 3, as a risk variable p i3 based on the risk type of a word. Fig. 8(2) illustrates a risk variable p i2 based on the number of meanings of a word.
 そして、図8(3)には、i番目の単語の品詞pi1、意味数pi2、危険度pi3を用いたリスク値rの算出例が示されている。ここで、図8(3)に示す例では、品詞pi1の重み変数1(w)を0として、品詞pi1の影響を抑えて意味数pi2および危険度pi3によりリスク値rが算出されている。 Fig. 8(3) shows an example of calculating the risk value ri using the part of speech p i1 , the number of meanings p i2 , and the risk level p i3 of the i-th word. In the example shown in Fig. 8(3), the weight variable 1 (w 1 ) of the part of speech p i1 is set to 0, and the risk value ri is calculated using the number of meanings p i2 and the risk level p i3 while suppressing the influence of the part of speech p i1 .
 なお、図8(3)に示す例では、リスク値rが0となることを防ぐため、品詞pi1×重み変数1(w)、意味数pi2×重み変数2(w)、危険度pi3×重み変数3(w)のそれぞれに1が加算された上で乗算されている。 In the example shown in FIG. 8 (3), in order to prevent the risk value r i from becoming 0, 1 is added to each of the part of speech p i1 × weight variable 1 (w 1 ), the number of meanings p i2 × weight variable 2 (w 2 ), and the risk level p i3 × weight variable 3 (w 3 ), and then multiplied.
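As a concrete illustration of formula (1) and the worked example in FIG. 8(3), the following Python sketch computes a word-level risk value as the product of weighted risk variables, with 1 added to each weighted factor so that setting a weight to 0 merely suppresses that variable instead of zeroing the whole product. The function names, the three-level scales, and the sample numbers are assumptions for illustration only, not the implementation of the embodiment.

```python
# Minimal sketch of formula (1) with the adjustment described for FIG. 8(3):
# each weighted risk variable gets +1 before the factors are multiplied.
# All names, scales, and sample values are illustrative assumptions.

def pos_risk(pos: str) -> int:
    """p_i1: 3 for modifiers (adjective/adverb), 2 for demonstratives, 1 otherwise."""
    if pos in ("adjective", "adverb"):
        return 3
    if pos == "demonstrative":
        return 2
    return 1

def word_risk_product(p1: float, p2: float, p3: float,
                      w1: float, w2: float, w3: float) -> float:
    """Risk value r_i as a product of weighted risk variables."""
    return (w1 * p1 + 1) * (w2 * p2 + 1) * (w3 * p3 + 1)

# FIG. 8(3)-style example: w1 = 0 suppresses the part-of-speech variable,
# so only the number of meanings (p2) and the risk level (p3) contribute.
r_i = word_risk_product(p1=pos_risk("adjective"), p2=2, p3=3,
                        w1=0.0, w2=1.0, w3=1.0)
print(r_i)  # (0*3 + 1) * (1*2 + 1) * (1*3 + 1) = 12.0
```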
 図1の説明に戻る。算出部15bは、所定の重み付けを行った複数のリスクに関する情報の和を用いてリスク値を算出してもよい。例えば、算出部15bは、次式(2)により、文章中のi番目の単語のリスク値rを算出する。 Returning to the description of Fig. 1, the calculation unit 15b may calculate the risk value by using the sum of multiple pieces of risk-related information that have been weighted in a predetermined manner. For example, the calculation unit 15b calculates the risk value r i of the i-th word in the sentence by the following formula (2).
r_i = w_1·p_i1 + w_2·p_i2 + w_3·p_i3   …(2)
 この場合には、各リスク変数pi1~pi3は0.0~1.0に正規化された値とし、重み変数w~wは0.0~1.0の範囲の値とする。これにより、算出されるリスク値rも0.0~1.0の範囲の値となり、リスク値間の比較が容易に可能となる。 In this case, each of the risk variables p i1 to p i3 is normalized to a value between 0.0 and 1.0, and the weight variables w 1 to w 3 are set to values in the range of 0.0 to 1.0. This causes the calculated risk value r i to also be a value in the range of 0.0 to 1.0, making it easy to compare risk values.
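The weighted-sum form of formula (2) can be sketched in the same way. Here the risk variables are assumed to be pre-normalized to 0.0–1.0; dividing by the total weight is one way to keep the result in that range and is an assumption, not something stated in the embodiment.

```python
def word_risk_sum(p: list[float], w: list[float]) -> float:
    """Weighted-sum risk value in the spirit of formula (2).
    p and w are assumed to lie in [0.0, 1.0]; dividing by the total weight
    keeps the result in [0.0, 1.0] so that risk values stay comparable."""
    total_w = sum(w)
    if total_w == 0.0:
        return 0.0
    return sum(wk * pk for wk, pk in zip(w, p)) / total_w

r_i = word_risk_sum(p=[0.3, 0.6, 1.0], w=[0.0, 1.0, 0.5])
print(round(r_i, 2))  # (0.0 + 0.6 + 0.5) / 1.5 ≈ 0.73
```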
 算出部15bは、さらに話者同士の関係性に関する情報、発話者および対話者の社会属性を表す情報、話者の特性を表す情報、または対話履歴のいずれか1つ以上を用いて、リスク値を算出してもよい。例えば、算出部15bは、関係性DB14b、社会属性DB14c、個人特性DB14d、またはコミュニケーション履歴DB14eのいずれかを参照してリスク値を算出する。これにより、文脈を考慮したリスク値を算出することが可能となる。 The calculation unit 15b may further calculate the risk value using one or more of information regarding the relationship between the speakers, information representing the social attributes of the speaker and the interlocutor, information representing the characteristics of the speaker, or the dialogue history. For example, the calculation unit 15b calculates the risk value by referring to the relationship DB 14b, the social attribute DB 14c, the personal characteristic DB 14d, or the communication history DB 14e. This makes it possible to calculate a risk value taking into account the context.
 この場合に、算出部15bは、例えば、次式(3)により、文章中のi番目の単語のリスク値rを算出する。 In this case, the calculation unit 15b calculates the risk value r i of the i-th word in the sentence, for example, by the following formula (3).
r_i = (w_1·p_i1) × (w_2·p_i2) × (w_3·p_i3) × (w_4·p_i4) × (w_5·p_i5) × (w_6·p_i6) × (w_7·p_i7)   …(3)
 ここで、リスク変数pi1~pi3は、上記式(1)と同様である。また、重み変数w~wは0.0~1.0の範囲の値とする。また、算出されるリスク値rは0.0より大きい値をとるように、重み変数w~wおよびリスク変数pi1~pi7が調整される。 Here, the risk variables p i1 to p i3 are the same as those in formula (1) above. The weight variables w 1 to w 7 are set to values in the range of 0.0 to 1.0. The weight variables w 1 to w 7 and the risk variables p i1 to p i7 are adjusted so that the calculated risk value r i is a value greater than 0.0.
 ユーザ間の関係性に基づくリスク変数pi4は、例えば、関係性DB14bの関係性スコアを1から引いた値としてもよいし、業務外交流がありの場合に2、なしの場合に1として、1から関係性スコアを引いた値との積としてもよい。このようにして算出することにより、リスク変数pi4は、関係性が希薄であるほど大きい値となり、リスク値が大きくなる。 The risk variable p i4 based on the relationship between users may be, for example, the relationship score in the relationship DB 14b subtracted from 1, or the product of that value (1 minus the relationship score) and a factor set to 2 when there is non-work-related interaction and 1 when there is none. Calculated in this way, p i4 grows as the relationship becomes weaker, which in turn increases the risk value.
 社会属性に基づくリスク変数pi5は、例えば、社会属性DB14cの在籍年数のユーザ間での差としてもよいし、役職が上位職であるほど大きい値として数値化し、対話中のユーザ間の数値化された役職の差としてもよい。このようにして算出することにより、リスク変数pi5は、在籍年数や職位等の組織内の立場の差が大きいほど大きい値となり、リスク値が大きくなる。 The risk variable p i5 based on social attributes may be, for example, the difference in years of service between the users in the social attribute DB 14c, or job titles may be quantified so that higher positions receive larger values and the difference between the users' quantified titles may be used. Calculated in this way, p i5 grows with the gap in organizational standing, such as years of service or job rank, which in turn increases the risk value.
 あるいは、上記のようにして算出したリスク変数pi4とリスク変数pi5との積または和を、リスク変数pi4およびリスク変数pi5として代替してもよい。 Alternatively, the product or sum of the risk variable p i4 and the risk variable p i5 calculated as described above may be substituted for the risk variable p i4 and the risk variable p i5 .
 個人特性に基づくリスク変数pi6は、例えば、個人特性DB14dの利用状況の一致度を1から引いた値としてもよいし、利用状況の一致度の逆数としてもよい。あるいは、リスク変数pi6は、個人特性DB14dの頻度としてもよいし、1から利用状況の一致度を引いた値と頻度との積としてもよい。 The risk variable p i6 based on the personal characteristics may be, for example, a value obtained by subtracting the degree of agreement of the use situation in the personal characteristics DB 14d from 1, or may be the reciprocal of the degree of agreement of the use situation. Alternatively, the risk variable p i6 may be the frequency in the personal characteristics DB 14d, or may be the product of the value obtained by subtracting the degree of agreement of the use situation from 1 and the frequency.
 対話履歴に基づくリスク変数pi7は、例えば、コミュニケーション履歴DB14eのリスク値としてもよいし、リスク値とミスコミュニケーション記録を数値化した値(ありの場合は2、なしの場合は1)との積としてもよい。あるいは、業務への関係が深いほどコミュニケーションの齟齬のリスクが大きいと仮定して、リスク変数pi7は、話題の業務との関係性等を数値化した値として、業務との関係が深いほど大きくなるように設定してもよい。例えば、企業戦略に関わる極秘情報は1、一般的な社内情報は0.5、業務後のリクリエーションイベントに関する公開情報は0.1等とする。 The risk variable p i7 based on the dialogue history may be, for example, the risk value in the communication history DB 14e, or the product of that risk value and a quantified miscommunication record (2 if a miscommunication was recorded, 1 if not). Alternatively, on the assumption that the more closely a topic relates to the business, the greater the risk of communication discrepancies, p i7 may be a value quantifying how closely the topic relates to the business, set larger the deeper that relation is. For example, highly confidential information concerning corporate strategy may be set to 1, general internal information to 0.5, and public information about after-work recreational events to 0.1.
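The derivations of the context-dependent risk variables p i4 to p i7 sketched above can be written down roughly as follows. The record fields (relationship score, off-work interaction flag, years of service, quantified job rank, usage-situation agreement, past risk value, miscommunication flag) are assumptions modeled on the DB columns described earlier, not a definitive schema.

```python
# Illustrative derivation of the context-dependent risk variables p_i4..p_i7.
# Field names and scales are assumptions based on the DBs described above.

def p4_relationship(score: float, has_offwork_interaction: bool) -> float:
    """p_i4: weaker relationship -> larger value; optionally multiplied by a
    factor of 2 (off-work interaction present) or 1 (absent), as in the text."""
    factor = 2.0 if has_offwork_interaction else 1.0
    return (1.0 - score) * factor

def p5_social(years_a: int, years_b: int, rank_a: int, rank_b: int) -> float:
    """p_i5: larger gaps in tenure or quantified job rank -> larger value."""
    return abs(years_a - years_b) + abs(rank_a - rank_b)

def p6_personal(usage_agreement: float, frequency: float) -> float:
    """p_i6: low agreement on how the word is used, weighted by usage frequency."""
    return (1.0 - usage_agreement) * frequency

def p7_history(past_risk: float, had_miscommunication: bool) -> float:
    """p_i7: past risk value, doubled when a miscommunication was recorded."""
    return past_risk * (2.0 if had_miscommunication else 1.0)
```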
 または、算出部15bは、次式(4)により、文章中のi番目の単語のリスク値rを算出してもよい。 Alternatively, the calculation unit 15b may calculate the risk value r i of the i-th word in the sentence using the following formula (4).
r_i = w_1·p_i1 + w_2·p_i2 + w_3·p_i3 + w_4·p_i4 + w_5·p_i5 + w_6·p_i6 + w_7·p_i7   …(4)
 ここで、リスク変数pi1~pi7は、上記式(3)と同様の値が0.0~1.0に正規化された値とし、重み変数w~wは0.0~1.0の範囲の値とする。 Here, the risk variables p i1 to p i7 are values normalized to 0.0 to 1.0 similar to the above formula (3), and the weight variables w 1 to w 7 are values in the range of 0.0 to 1.0.
 なお、リスク変数pi1~pi7の値は上記に限定されない。例えば、上記の記憶部14の各情報(14a~14e)に所定の情報項目を追加登録することにより、以下に例示するような値が適用されてもよい。 The values of the risk variables p i1 to p i7 are not limited to those mentioned above. For example, the following values may be applied by additionally registering a predetermined information item in each piece of information (14a to 14e) in the storage unit 14.
 例えば、リスク表現辞書14aを用いるリスク変数pi1~pi3として、その単語によるミスコミュニケーションの発生度合い、その単語でミスコミュニケーションが発生した際の危険度、一般的な会話におけるその単語の出現頻度等が適用されてもよい。 For example, the risk variables p i1 to p i3 using the risk expression dictionary 14 a may be the degree of occurrence of miscommunication caused by the word, the degree of danger when miscommunication occurs due to the word, the frequency of occurrence of the word in general conversation, etc.
 また、関係性DB14bを用いるリスク変数pi4または社会属性DB14cを用いるリスク変数pi5として、話者同士の共通する背景、属性の濃さの度合いや重なり度合い、話者同士の共通認識の割合、話者同士の人間関係等が適用されてもよい。 In addition, as the risk variable p i4 using the relationship DB 14 b or the risk variable p i5 using the social attribute DB 14 c, the common background between the speakers, the degree of intensity or overlap of the attributes, the proportion of common understanding between the speakers, the human relationships between the speakers, etc. may be applied.
 また、個人特性DB14dを用いるリスク変数pi6として、話者のもつ背景、属性、過去の発言に対して算出されたリスク値に対する評価等が適用されてもよい。 Furthermore, as the risk variable p i6 using the individual characteristic DB 14d, the speaker's background, attributes, evaluation of risk values calculated for past statements, and the like may be applied.
 また、コミュニケーション履歴DB14eを用いるリスク変数pi7として、現在の対話で出現回数が多い単語、過去の対話で出現回数が多い単語、過去の対話でミスコミュニケーションが発生した単語や共通認識が得られた単語等が適用されもよい。 In addition, as the risk variable p i7 using the communication history DB14e, words that appear frequently in the current dialogue, words that appear frequently in past dialogues, words that have caused miscommunication in past dialogues, words that have gained common understanding, etc. may be applied.
 さらには、リスク値の算出方法は、上記の記憶部14の各情報(14a~14e)を用いる場合に限定されない。例えば、対話中に収集可能な以下に例示する情報を併用してリスク値が算出されてもよい。 Furthermore, the method of calculating the risk value is not limited to using each piece of information (14a to 14e) in the storage unit 14 described above. For example, the risk value may be calculated by using the following pieces of information that can be collected during a conversation in combination:
 例えば、リスク値は、表情、脳波、視線等の生体信号の分析結果と、コミュニケーション履歴DB14eの発言内容等とを併用して算出されてもよい。また、当事者の医学的もしくは客観的な体調変化の評価値がリスク変数として追加で適用されてもよい。 For example, the risk value may be calculated by combining the analysis results of biological signals such as facial expressions, electroencephalograms, and gaze with the content of statements in the communication history DB 14e. In addition, an evaluation value of medically assessed or otherwise objective changes in the participants' physical condition may be applied as an additional risk variable.
 あるいは、対話後に発生したトラブルの深刻度をフィードバックして、リスク表現辞書14aの危険度または重要度、あるいはコミュニケーション履歴DB14eのリスク値に反映させてもよい。このトラブルの深刻度をユーザの主観により評価して、リスク表現辞書14aの危険度または重要度に反映させてもよい。また、これらの情報が併用されてもよい。 Alternatively, the seriousness of a problem that occurs after a conversation may be fed back and reflected in the risk or importance in the risk expression dictionary 14a, or in the risk value in the communication history DB 14e. The seriousness of this problem may be subjectively evaluated by the user and reflected in the risk or importance in the risk expression dictionary 14a. These pieces of information may also be used in combination.
 また、上記の重み変数w~wは、各リスク変数の影響を調整する変数であって、リスク値算出の処理対象の対話の話題や参加者に応じて設定される。例えば、曖昧な発言の意味の推察が苦手な参加者がいる場合に、重み変数w~wの値を大きくすることにより、リスク表現辞書14aの値がリスク値に強く反映されるようになる。 The weight variables w1 to w7 are variables that adjust the influence of each risk variable, and are set according to the topic and participants of the conversation that is the subject of the risk value calculation process. For example, if there is a participant who is not good at guessing the meaning of ambiguous statements, the values of the risk expression dictionary 14a will be strongly reflected in the risk value by increasing the values of the weight variables w1 to w3 .
 また、頻繁に開催される会議で参加者が互いに発言の背景の理解が深い場合には、コミュニケーション履歴DB14eの値に対する重み変数wを大きくする。これにより、出現頻度の高い表現や業務との関連が深い表現に基づくリスク変数の値がリスク値に強く反映されるようになる。 In addition, in the case of frequently held conferences where participants have a deep understanding of the background of each other's comments, the weighting variable w7 for the value in the communication history DB 14e is increased. This allows the value of the risk variable based on frequently appearing expressions or expressions closely related to the business to be strongly reflected in the risk value.
 また、上司と部下との1対1の対話等の特定の関係性がある場合には、重み変数w、wの値を大きくすることにより、関係性DB14bに基づくリスク変数pi4、社会属性DB14cに基づくリスク変数pi5の値がリスク値に強く反映されるようになる。 In addition, when there is a specific relationship such as one-to-one dialogue between a superior and a subordinate, by increasing the values of the weighting variables w4 and w5 , the values of the risk variable p i4 based on the relationship DB 14b and the risk variable p i5 based on the social attribute DB 14c will be strongly reflected in the risk value.
 また、特定の優れた技能をもつ話者同士の対話等、その個人に固有の発言や表現が含まれる場合には、重み変数w4、w6の値を大きくすることにより、関係性DB14bに基づくリスク変数pi4、個人特性DB14dに基づくリスク変数pi6の値がリスク値に強く反映されるようになる。 In addition, when the dialogue includes statements or expressions unique to an individual, such as a conversation between speakers with specific advanced skills, increasing the values of the weighting variables w 4 and w 6 causes the risk variable p i4 based on the relationship DB 14b and the risk variable p i6 based on the personal characteristic DB 14d to be more strongly reflected in the risk value.
 また、重み変数w~wは、直前の対話や議論における合意形成の質の判断や、過去の対話に対する満足度の測定結果、会話のテーマとなっている話題についての熟練度の測定結果等により、動的に調整されてもよい。 In addition, the weighting variables w1 to w7 may be dynamically adjusted based on a judgment of the quality of consensus reached in the most recent dialogue or discussion, a measurement of satisfaction with past dialogues, a measurement of proficiency with the topic that is the subject of the conversation, etc.
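One hedged way to organize the weight adjustments described above is a small table of scenario profiles from which w 1 to w 7 are picked; the profile names and numeric values below are assumptions chosen only to mirror the policies in the text.

```python
# Hypothetical weight profiles (w1..w7) reflecting the adjustment policies above.
WEIGHT_PROFILES: dict[str, list[float]] = {
    # A participant struggles with ambiguous wording: emphasize the
    # dictionary-based variables (w1..w3).
    "ambiguity_sensitive":  [0.9, 0.9, 0.9, 0.3, 0.3, 0.3, 0.3],
    # Frequently held meeting with shared background: emphasize history (w7).
    "recurring_meeting":    [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.9],
    # One-on-one between superior and subordinate: emphasize relationship (w4)
    # and social attributes (w5).
    "superior_subordinate": [0.3, 0.3, 0.3, 0.9, 0.9, 0.3, 0.3],
    # Dialogue between highly skilled specialists with idiosyncratic wording:
    # emphasize relationship (w4) and personal characteristics (w6).
    "specialist_dialogue":  [0.3, 0.3, 0.3, 0.9, 0.3, 0.9, 0.3],
}

def pick_weights(scenario: str) -> list[float]:
    """Return w1..w7 for a known scenario, or a flat default otherwise."""
    return WEIGHT_PROFILES.get(scenario, [0.5] * 7)
```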
 上記式(1)~式(4)で算出された各単語のリスク値を用いて、文全体あるいは複数単語による一まとまりの表現(例えば、句や節など)のリスクを算出することもできる。その際には、各文に含まれる単語に対して算出されたリスク値rの平均値を文のリスク値としてもよい。また、リスク値r~rの各値にリスク表現辞書14aの危険度や重要度あるいは事前に設定した値に基づく重み付けを行った値の荷重平均を算出し、文のリスク値としてもよい。その他、リスク値rの総和、積などを用いることもできる。 The risk value of an entire sentence or a group of expressions consisting of multiple words (for example, a phrase or clause) can also be calculated using the risk value of each word calculated by the above formulas (1) to (4). In this case, the average value of the risk values r i calculated for the words contained in each sentence may be used as the risk value of the sentence. In addition, the risk value of the sentence may be determined by calculating a weighted average of the risk values r 1 to r N weighted based on the risk level or importance of the risk expression dictionary 14a or a value set in advance. Alternatively, the sum or product of the risk values r i may be used.
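The sentence-level aggregation options just mentioned (mean, weighted mean, sum, or product of the word-level values) could be implemented along the following lines; the aggregation mode and the weight source are parameters rather than fixed choices.

```python
import math

def sentence_risk(word_risks: list[float],
                  weights: list[float] | None = None,
                  mode: str = "mean") -> float:
    """Aggregate word-level risk values r_1..r_N into a sentence-level value."""
    if not word_risks:
        return 0.0
    if mode == "mean":
        return sum(word_risks) / len(word_risks)
    if mode == "weighted_mean":
        # Weights might come from the risk level or importance columns of the
        # risk expression dictionary, or from preset values.
        w = weights or [1.0] * len(word_risks)
        return sum(wi * ri for wi, ri in zip(w, word_risks)) / sum(w)
    if mode == "sum":
        return sum(word_risks)
    if mode == "product":
        return math.prod(word_risks)
    raise ValueError(f"unknown aggregation mode: {mode}")
```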
 提示部15cは、算出されたリスク値に基づいて生成した提示情報をユーザに提示する。例えば、提示部15cは、算出部15bが算出したリスク値、あるいはリスク値に基づいて生成したメッセージ、通知音、振動等の提示情報を、出力部12を介してユーザに提示する。 The presentation unit 15c presents the presentation information generated based on the calculated risk value to the user. For example, the presentation unit 15c presents the risk value calculated by the calculation unit 15b, or presentation information such as a message, a notification sound, or a vibration generated based on the risk value, to the user via the output unit 12.
 ここで、図9は、提示処理結果の画面表示例を示す図である。図9に示す例では、オンライン会議ツール上に、算出されたリスク値の情報が提示されている。例えば、提示部15cは、リスク値が高い単語のテキストを、強調表示により表示する。例えば、提示部15cは、リスク値が高い単語のテキストの色を変えたり下線を引いたりフォントを大きくしたり太字にしたりハイライト表示したりする。 Here, FIG. 9 is a diagram showing an example screen display of the presentation processing result. In the example shown in FIG. 9, information on the calculated risk values is presented on an online conference tool. For example, the presentation unit 15c displays the text of high-risk words with visual emphasis: it may change the text color, underline the words, enlarge the font, render them in bold, or highlight them.
 あるいは、提示部15cは、リスク値が高いテキストをポップアップ表示する等、ユーザの注意をひく形式で表示する。また、提示部15cは、リスク値が高い単語を、ユーザの目につきやすい位置に配置して表示してもよい。 Alternatively, the presentation unit 15c displays text with a high risk value in a format that attracts the user's attention, such as by displaying the text in a pop-up. The presentation unit 15c may also display words with a high risk value in a position that is easily noticeable to the user.
 また、提示部15cは、テキストを表示する代わりに、あるいはテキスト表示に加えて、音声情報や触覚情報を利用して、リスク値が高い単語を提示してもよい。例えば、リスク値が高い単語の通知音の音量を大きくしたり、不正解音等の対処の必要を感じさせ易い音を使用したり、振動を大きくしたり、注意を引き易い振動の周波数やパターンを使用したりする。 In addition, instead of or in addition to displaying text, the presentation unit 15c may present words with a high risk value using audio information or tactile information. For example, the volume of the notification sound for words with a high risk value may be increased, a sound that is likely to make the user feel the need to take action, such as an incorrect answer sound, may be used, vibrations may be increased, or a vibration frequency or pattern that is likely to attract attention may be used.
 また、提示部15cは、画面上にCGや画像のエージェントを配置して、エージェントからの助言といった形式でリスク値の高い単語を提示してもよい。 The presentation unit 15c may also present words with high risk values in the form of advice from a CG or image agent placed on the screen.
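As a minimal, text-only sketch of the presentation step, the snippet below wraps words whose risk value exceeds a threshold in markers; in an actual UI the same decision would instead drive color, font, pop-ups, sound volume, vibration, or an on-screen agent as described above. The threshold and marker style are assumptions.

```python
def mark_risky_words(words: list[str], risks: list[float],
                     threshold: float = 0.7) -> str:
    """Render the dialogue with high-risk words wrapped in brackets.
    Brackets stand in for the visual/auditory/tactile emphasis described
    in the embodiment; the threshold is an illustrative assumption."""
    rendered = [f"[{w}]" if r >= threshold else w for w, r in zip(words, risks)]
    return " ".join(rendered)

print(mark_risky_words(["please", "fix", "it", "soon"],
                       [0.10, 0.20, 0.85, 0.90]))
# -> please fix [it] [soon]
```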
 なお、図9に例示するオンライン会議ツールにおいて、提示部15cは、ユーザ1およびユーザ2に同一の画面を提示してもよいし、異なる画面を提示してもよい。例えば、提示部15cが同一の画面を提示することにより、両者が対等と感じ取り易く、相互理解が促進される。一方、提示部15cが異なる画面を提示することにより、相互理解促進の程度は薄れるものの、コミュニケーション能力がより高いユーザが機動的に対話に臨むことにより、コミュニケーションの効率が向上する。 In the online conference tool illustrated in FIG. 9, the presentation unit 15c may present the same screen to user 1 and user 2, or different screens. For example, when the presentation unit 15c presents the same screen, both users are more likely to feel that they are on an equal footing, and mutual understanding is promoted. On the other hand, when the presentation unit 15c presents different screens, the promotion of mutual understanding is lessened, but the efficiency of communication is improved by allowing users with better communication skills to more flexibly engage in dialogue.
 さらに、提示部15cは、提示方法の調整や提示自体のキャンセル等を行ってもよい。例えば、提示部15cは、ユーザの操作により、あるいはユーザの特性に応じて、強調表示が気になるユーザに対しては程度を弱めて表示したり強調表示を止めたり、視覚情報を音情報に変更する等、異なる形式で提示したりしてもよい。また、ユーザ特性は、会議参加者のメンバ構成によって、反映の仕方が異なっても良い。 Furthermore, the presentation unit 15c may adjust the presentation method or cancel the presentation itself. For example, the presentation unit 15c may present in a different format, such as reducing the degree of highlighting or stopping the highlighting for a user who is bothered by the highlighting, or changing the visual information to sound information, in response to a user's operation or in response to the user's characteristics. Furthermore, the way in which the user characteristics are reflected may differ depending on the composition of the conference participants.
[提示処理]
 次に、図10を参照して、本実施形態に係る提示装置10による提示処理について説明する。図10は、提示処理手順を示すフローチャートである。図10のフローチャートは、例えば、ユーザが開始を指示する操作入力を行ったタイミングで開始される。
[Presentation Processing]
Next, a presentation process by the presentation device 10 according to the present embodiment will be described with reference to Fig. 10. Fig. 10 is a flowchart showing a procedure of the presentation process. The flowchart in Fig. 10 is started, for example, when a user performs an operation input to instruct the start of the process.
 まず、取得部15aが、処理対象の対話文を構成する単語を取得する(ステップS1)。例えば、取得部15aは、入力部11あるいは通信制御部13を介して、処理対象の対話文の入力を受け付けて、形態素解析を行って対話文を構成する単語を抽出する。 First, the acquisition unit 15a acquires the words that make up the dialogue to be processed (step S1). For example, the acquisition unit 15a accepts input of the dialogue to be processed via the input unit 11 or the communication control unit 13, and performs morphological analysis to extract the words that make up the dialogue.
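A hedged sketch of this acquisition step is shown below. A production system would use a morphological analyzer (for Japanese, a tool such as MeCab) to segment the dialogue; a simple regular-expression split stands in here only so the sketch remains self-contained.

```python
import re

def extract_words(dialogue_text: str) -> list[str]:
    """Stand-in for the morphological-analysis step: split the input dialogue
    into word tokens. A real deployment would use a morphological analyzer
    (e.g., MeCab for Japanese) rather than this simple regex split."""
    return [tok for tok in re.split(r"[\s,.!?、。]+", dialogue_text) if tok]

print(extract_words("Could you fix it soon, please?"))
# -> ['Could', 'you', 'fix', 'it', 'soon', 'please']
```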
 次に、算出部15bが、リスク表現辞書14aを参照し、入力された対話文を構成する単語の言語トラブルのリスクの大きさを表すリスク値を算出する(ステップS2)。例えば、算出部15bは、リスク表現辞書14aの所定の重み付けを行った複数の情報の積を用いてリスク値を算出する。あるいは、算出部15bは、リスク表現辞書14aの所定の重み付けを行った複数の情報の和を用いてリスク値を算出する。 Next, the calculation unit 15b refers to the risk expression dictionary 14a and calculates a risk value that indicates the degree of risk of language trouble for the words that make up the input dialogue (step S2). For example, the calculation unit 15b calculates the risk value using the product of multiple pieces of information that have been weighted in a predetermined manner in the risk expression dictionary 14a. Alternatively, the calculation unit 15b calculates the risk value using the sum of multiple pieces of information that have been weighted in a predetermined manner in the risk expression dictionary 14a.
 さらに、算出部15bは、関係性DB14b、社会属性DB14c、個人特性DB14d、またはコミュニケーション履歴DB14eの情報のいずれか1つ以上を用いて、リスク値を算出する。 Furthermore, the calculation unit 15b calculates the risk value using one or more pieces of information from the relationship DB 14b, the social attribute DB 14c, the personal characteristic DB 14d, or the communication history DB 14e.
 また、提示部15cが、算出されたリスク値に基づいて生成した提示情報をユーザに提示する(ステップS3)。例えば、提示部15cは、算出部15bが算出したリスク値を、出力部12を介してユーザに提示する。これにより、一連の提示処理が終了する。 The presentation unit 15c also presents the presentation information generated based on the calculated risk value to the user (step S3). For example, the presentation unit 15c presents the risk value calculated by the calculation unit 15b to the user via the output unit 12. This completes the series of presentation processes.
[他の実施形態]
 上記実施形態の提示装置10では、曖昧表現に基づくリスク値が算出されているが、これに限定されない。例えば、提示装置10は、曖昧表現に基づくリスク値に代えて、あるいは曖昧表現に基づくリスク値に加えて、固定観念に基づく表現のリスク値を算出することも可能である。この場合には、算出部15bは、固定観念に基づくリスクに関する情報を用いて、リスク値を算出する。
[Other embodiments]
In the presentation device 10 of the above embodiment, the risk value based on the ambiguous expression is calculated, but the present invention is not limited to this. For example, the presentation device 10 can calculate a risk value of an expression based on a stereotype instead of or in addition to the risk value based on the ambiguous expression. In this case, the calculation unit 15b calculates the risk value using information on the risk based on the stereotype.
 例えば、リスク表現辞書14aに基づくリスク変数pi1~pi3として、固定観念の観点で設定された危険度、曖昧表現のほかに固定観念の観点で設定されたリスク種別等が適用されてもよい。 For example, risk variables p i1 to p i3 based on the risk expression dictionary 14a may be risk levels set from the perspective of stereotypes, ambiguous expressions, or risk types set from the perspective of stereotypes.
 また、関係性DB14bを用いるリスク変数pi4または社会属性DB14cを用いるリスク変数pi5として、話者同士に共通する背景、属性の濃さの度合いや重なり度合い等が適用されてもよい。 Furthermore, as the risk variable p i4 using the relationship DB 14b or the risk variable p i5 using the social attribute DB 14c, a background common to the speakers, the degree of intensity of attributes, the degree of overlap, or the like may be applied.
 また、個人特性DB14dを用いるリスク変数pi6として、話者の持つ背景、属性等が適用されてもよい。 Furthermore, the background, attributes, and the like of the speaker may be applied as the risk variable p i6 using the individual characteristic DB 14d.
 具体的には、例えば、話者同士に共通する背景の重なり度合いが少ない場合には、話者間で言語の解釈がすれ違う可能性が高いものとして、リスク変数pi4、リスク変数pi6の値を大きく設定する。また、性別、年齢等の話者の持つ背景と、過去のトラブル事例における当事者のもつ背景とを比較して、過去のトラブル事例におけるリスク値をもとにリスク変数pi6を設定する。これにより、算出部15bは、固定観念に基づくリスク値を算出することが可能となる。 Specifically, for example, when the degree of overlap of the common backgrounds of the speakers is low, it is assumed that there is a high possibility that the speakers will misinterpret the language, and the values of the risk variables p i4 and p i6 are set to be large. In addition, the background of the speaker, such as gender and age, is compared with the background of the parties in past trouble cases, and the risk variable p i6 is set based on the risk values in the past trouble cases. This enables the calculation unit 15b to calculate a risk value based on a stereotype.
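A sketch of the stereotype-oriented adjustment described above is given below; reducing "shared background" to a set-overlap ratio and doubling the variables at zero overlap are both assumptions made only for illustration.

```python
def background_overlap(attrs_a: set[str], attrs_b: set[str]) -> float:
    """Share of background attributes (e.g., field, generation, region) that
    the two speakers have in common: 0.0 = nothing shared, 1.0 = identical."""
    if not attrs_a or not attrs_b:
        return 0.0
    return len(attrs_a & attrs_b) / len(attrs_a | attrs_b)

def stereotype_adjusted(p4: float, p6: float, overlap: float) -> tuple[float, float]:
    """Raise p_i4 and p_i6 when the speakers share little background, on the
    assumption that their interpretations are more likely to diverge."""
    boost = 1.0 + (1.0 - overlap)   # up to 2x when nothing is shared
    return p4 * boost, p6 * boost
```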
[効果]
 以上、説明したように、本実施形態の提示装置10において、記憶部14が、対話文を構成する各単語について、言語トラブルのリスクに関する情報であるリスク表現辞書14aを記憶する。算出部15bが、リスク表現辞書14aを参照し、入力された対話文を構成する各単語のリスクの大きさを表すリスク値を算出する。
[Effects]
As described above, in the presentation device 10 of this embodiment, the storage unit 14 stores the risk expression dictionary 14a, which is information on the risk of language trouble for each word constituting the dialogue. The calculation unit 15b refers to the risk expression dictionary 14a and calculates a risk value indicating the magnitude of risk of each word constituting the input dialogue.
 具体的には、リスク表現辞書14aは、各単語の品詞、各単語が示す意味数、または各単語の示す意味の曖昧さのいずれか1つ以上のリスクに関する情報を記憶する。 Specifically, the risk expression dictionary 14a stores information related to one or more risks, such as the part of speech of each word, the number of meanings indicated by each word, or the ambiguity of the meaning indicated by each word.
 また、算出部15bは、所定の重み付けを行った複数のリスクに関する情報の積を用いてリスク値を算出する。あるいは、算出部15bは、所定の重み付けを行った複数のリスクに関する情報の和を用いてリスク値を算出する。 The calculation unit 15b also calculates the risk value using the product of information about multiple risks that have been weighted in a predetermined manner. Alternatively, the calculation unit 15b calculates the risk value using the sum of information about multiple risks that have been weighted in a predetermined manner.
 これにより、提示装置10は、言語を介したコミュニケーションにおいて、使われた表現が内包するリスクを定量的に分析することが可能となる。 This enables the presentation device 10 to quantitatively analyze the risks inherent in expressions used in language-mediated communication.
 また、算出部15bは、さらに話者同士の関係性に関する情報を含む関係性DB14b、発話者および対話者の社会属性を表す情報を含む社会属性DB14c、発言に関する話者の特性を表す情報を含む個人特性DB14d、または対話履歴を含むコミュニケーション履歴DB14eのいずれか1つ以上を用いて、リスク値を算出する。これにより、提示装置10はより高精度なリスク値を算出することが可能となる。 The calculation unit 15b further calculates the risk value using one or more of the relationship DB 14b including information on the relationship between speakers, the social attribute DB 14c including information representing the social attributes of the speaker and interlocutor, the personal characteristic DB 14d including information representing the speaker's characteristics related to utterances, and the communication history DB 14e including the dialogue history. This enables the presentation device 10 to calculate the risk value with higher accuracy.
 また、算出部15bは、固定観念に基づくリスクに関する情報を用いて、リスク値を算出する。これによっても、より高精度なリスク値を算出することが可能となる。 The calculation unit 15b also calculates the risk value using information about risks based on stereotypes. This also makes it possible to calculate a risk value with higher accuracy.
[プログラム]
 上記実施形態に係る提示装置10が実行する処理をコンピュータが実行可能な言語で記述したプログラムを作成することもできる。一実施形態として、提示装置10は、パッケージソフトウェアやオンラインソフトウェアとして上記の提示処理を実行する提示プログラムを所望のコンピュータにインストールさせることによって実装できる。例えば、上記の提示プログラムを情報処理装置に実行させることにより、情報処理装置を提示装置10として機能させることができる。ここで言う情報処理装置には、デスクトップ型またはノート型のパーソナルコンピュータが含まれる。また、その他にも、情報処理装置にはスマートフォン、携帯電話機やPHS(Personal Handyphone System)などの移動体通信端末、さらには、PDA(Personal Digital Assistant)などのスレート端末などがその範疇に含まれる。また、提示装置10の機能を、クラウドサーバに実装してもよい。
[Program]
A program in which the process executed by the presentation device 10 according to the above embodiment is written in a language executable by a computer can also be created. As an embodiment, the presentation device 10 can be implemented by installing a presentation program that executes the above presentation process as package software or online software on a desired computer. For example, the information processing device can function as the presentation device 10 by executing the above presentation program on an information processing device. The information processing device referred to here includes desktop or notebook personal computers. In addition, the information processing device also includes mobile communication terminals such as smartphones, mobile phones, and PHS (Personal Handyphone System), and even slate terminals such as PDA (Personal Digital Assistant). The functions of the presentation device 10 may be implemented on a cloud server.
 図11は、提示プログラムを実行するコンピュータの一例を示す図である。コンピュータ1000は、例えば、メモリ1010と、CPU1020と、ハードディスクドライブインタフェース1030と、ディスクドライブインタフェース1040と、シリアルポートインタフェース1050と、ビデオアダプタ1060と、ネットワークインタフェース1070とを有する。これらの各部は、バス1080によって接続される。 FIG. 11 is a diagram showing an example of a computer that executes a presentation program. The computer 1000 has, for example, a memory 1010, a CPU 1020, a hard disk drive interface 1030, a disk drive interface 1040, a serial port interface 1050, a video adapter 1060, and a network interface 1070. These components are connected by a bus 1080.
 メモリ1010は、ROM(Read Only Memory)1011およびRAM1012を含む。ROM1011は、例えば、BIOS(Basic Input Output System)等のブートプログラムを記憶する。ハードディスクドライブインタフェース1030は、ハードディスクドライブ1031に接続される。ディスクドライブインタフェース1040は、ディスクドライブ1041に接続される。ディスクドライブ1041には、例えば、磁気ディスクや光ディスク等の着脱可能な記憶媒体が挿入される。シリアルポートインタフェース1050には、例えば、マウス1051およびキーボード1052が接続される。ビデオアダプタ1060には、例えば、ディスプレイ1061が接続される。 The memory 1010 includes a ROM (Read Only Memory) 1011 and a RAM 1012. The ROM 1011 stores a boot program such as a BIOS (Basic Input Output System). The hard disk drive interface 1030 is connected to a hard disk drive 1031. The disk drive interface 1040 is connected to a disk drive 1041. A removable storage medium such as a magnetic disk or optical disk is inserted into the disk drive 1041. The serial port interface 1050 is connected to a mouse 1051 and a keyboard 1052, for example. The video adapter 1060 is connected to a display 1061, for example.
 ここで、ハードディスクドライブ1031は、例えば、OS1091、アプリケーションプログラム1092、プログラムモジュール1093およびプログラムデータ1094を記憶する。上記実施形態で説明した各情報は、例えばハードディスクドライブ1031やメモリ1010に記憶される。 Here, the hard disk drive 1031 stores, for example, an OS 1091, an application program 1092, a program module 1093, and program data 1094. Each piece of information described in the above embodiment is stored, for example, in the hard disk drive 1031 or memory 1010.
 また、提示プログラムは、例えば、コンピュータ1000によって実行される指令が記述されたプログラムモジュール1093として、ハードディスクドライブ1031に記憶される。具体的には、上記実施形態で説明した提示装置10が実行する各処理が記述されたプログラムモジュール1093が、ハードディスクドライブ1031に記憶される。 The presentation program is stored in the hard disk drive 1031, for example, as a program module 1093 in which instructions to be executed by the computer 1000 are written. Specifically, the program module 1093 in which each process executed by the presentation device 10 described in the above embodiment is written is stored in the hard disk drive 1031.
 また、提示プログラムによる情報処理に用いられるデータは、プログラムデータ1094として、例えば、ハードディスクドライブ1031に記憶される。そして、CPU1020が、ハードディスクドライブ1031に記憶されたプログラムモジュール1093やプログラムデータ1094を必要に応じてRAM1012に読み出して、上述した各手順を実行する。 In addition, data used for information processing by the presentation program is stored as program data 1094, for example, in the hard disk drive 1031. Then, the CPU 1020 reads the program module 1093 and program data 1094 stored in the hard disk drive 1031 into the RAM 1012 as necessary, and executes each of the above-mentioned procedures.
 なお、提示プログラムに係るプログラムモジュール1093やプログラムデータ1094は、ハードディスクドライブ1031に記憶される場合に限られず、例えば、着脱可能な記憶媒体に記憶されて、ディスクドライブ1041等を介してCPU1020によって読み出されてもよい。あるいは、提示プログラムに係るプログラムモジュール1093やプログラムデータ1094は、LANやWAN(Wide Area Network)等のネットワークを介して接続された他のコンピュータに記憶され、ネットワークインタフェース1070を介してCPU1020によって読み出されてもよい。 The program module 1093 and program data 1094 related to the presentation program are not limited to being stored in the hard disk drive 1031, but may be stored in a removable storage medium, for example, and read by the CPU 1020 via the disk drive 1041 or the like. Alternatively, the program module 1093 and program data 1094 related to the presentation program may be stored in another computer connected via a network, such as a LAN or WAN (Wide Area Network), and read by the CPU 1020 via the network interface 1070.
 以上、本発明者によってなされた発明を適用した実施形態について説明したが、本実施形態による本発明の開示の一部をなす記述および図面により本発明は限定されることはない。すなわち、本実施形態に基づいて当業者等によりなされる他の実施形態、実施例および運用技術等は全て本発明の範疇に含まれる。 The above describes an embodiment of the invention made by the inventor, but the present invention is not limited to the description and drawings that form part of the disclosure of the present invention according to this embodiment. In other words, other embodiments, examples, operational techniques, etc. made by those skilled in the art based on this embodiment are all included in the scope of the present invention.
 10 提示装置
 11 入力部
 12 出力部
 13 通信制御部
 14 記憶部
 14a リスク表現辞書
 14b 関係性DB
 14c 社会属性DB
 14d 個人特性DB
 14e コミュニケーション履歴DB
 15 制御部
 15a 取得部
 15b 算出部
 15c 提示部
REFERENCE SIGNS LIST 10 Presentation device 11 Input unit 12 Output unit 13 Communication control unit 14 Storage unit 14a Risk expression dictionary 14b Relationship DB
14c Social attribute DB
14d Personal characteristics DB
14e Communication history DB
15 Control unit 15a Acquisition unit 15b Calculation unit 15c Presentation unit

Claims (8)

  1.  対話文を構成する各単語について、言語トラブルのリスクに関する情報を記憶する記憶部と、
     前記記憶部を参照し、入力された対話文を構成する各単語の前記リスクの大きさを表すリスク値を算出する算出部と、
     を有することを特徴とする提示装置。
    A storage unit that stores information regarding a risk of language trouble for each word that constitutes a dialogue;
    a calculation unit that refers to the storage unit and calculates a risk value representing the magnitude of the risk of each word constituting the input dialogue;
    A presentation device comprising:
  2.  前記記憶部は、各単語の品詞、各単語が示す意味数、または各単語の示す意味の曖昧さのいずれか1つ以上の前記リスクに関する情報を記憶することを特徴とする請求項1に記載の提示装置。 The presentation device according to claim 1, characterized in that the storage unit stores information related to the risk, which is one or more of the part of speech of each word, the number of meanings indicated by each word, and the ambiguity of the meaning indicated by each word.
  3.  前記算出部は、所定の重み付けを行った複数の前記リスクに関する情報の積を用いて前記リスク値を算出することを特徴とする請求項1に記載の提示装置。 The presentation device according to claim 1, characterized in that the calculation unit calculates the risk value using a product of multiple pieces of information related to the risks that have been weighted in a predetermined manner.
  4.  前記算出部は、所定の重み付けを行った複数の前記リスクに関する情報の和を用いて前記リスク値を算出することを特徴とする請求項1に記載の提示装置。 The presentation device according to claim 1, characterized in that the calculation unit calculates the risk value using the sum of information related to multiple risks that have been weighted in a predetermined manner.
  5.  前記算出部は、さらに話者同士の関係性に関する情報、発話者および対話者の社会属性を表す情報、発言に関する話者の特性を表す情報、または対話履歴のいずれか1つ以上を用いて、前記リスク値を算出することを特徴とする請求項1に記載の提示装置。 The presentation device according to claim 1, characterized in that the calculation unit further calculates the risk value using one or more of the following: information on the relationship between speakers, information representing the social attributes of the speaker and interlocutor, information representing the speaker's characteristics related to the utterance, or dialogue history.
  6.  前記算出部は、固定観念に基づく前記リスクに関する情報を用いて、前記リスク値を算出することを特徴とする請求項1に記載の提示装置。 The presentation device according to claim 1, characterized in that the calculation unit calculates the risk value using information about the risk based on stereotypes.
  7.  提示装置が実行する提示方法であって、
     前記提示装置は、対話文を構成する各単語について、言語トラブルのリスクに関する情報を記憶する記憶部を有し、
     前記記憶部を参照し、入力された対話文を構成する各単語の前記リスクの大きさを表すリスク値を算出する算出工程を含んだことを特徴とする提示方法。
    A presentation method executed by a presentation device, comprising:
    The presentation device has a storage unit that stores information regarding a risk of language trouble for each word constituting a dialogue,
    A presentation method comprising the step of: referring to the storage unit and calculating a risk value representing the magnitude of risk of each word constituting the input dialogue.
  8.  コンピュータを請求項1~6のいずれか1項に記載の提示装置として機能させるための提示プログラム。 A presentation program for causing a computer to function as a presentation device according to any one of claims 1 to 6.
PCT/JP2022/040253 2022-10-27 2022-10-27 Presentation device, presentation method, and presentation program WO2024089858A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/040253 WO2024089858A1 (en) 2022-10-27 2022-10-27 Presentation device, presentation method, and presentation program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/040253 WO2024089858A1 (en) 2022-10-27 2022-10-27 Presentation device, presentation method, and presentation program

Publications (1)

Publication Number Publication Date
WO2024089858A1 true WO2024089858A1 (en) 2024-05-02

Family

ID=90830282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/040253 WO2024089858A1 (en) 2022-10-27 2022-10-27 Presentation device, presentation method, and presentation program

Country Status (1)

Country Link
WO (1) WO2024089858A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014235584A (en) * 2013-06-03 2014-12-15 日本電気株式会社 Document analysis system, document analysis method, and program
JP2020170310A (en) * 2019-04-02 2020-10-15 富士通株式会社 Conversation analysis device, conversation analysis method and conversation analysis program
JP2021140240A (en) * 2020-03-02 2021-09-16 コニカミノルタ株式会社 Interaction support system, interaction support method, and interaction support program


Similar Documents

Publication Publication Date Title
US11960543B2 (en) Providing suggestions for interaction with an automated assistant in a multi-user message exchange thread
US20240078276A1 (en) Display Device Displaying a Keyword for Selecting a Next Slide During Presentation
US10554590B2 (en) Personalized automated agent
KR102249437B1 (en) Automatically augmenting message exchange threads based on message classfication
US20150287043A1 (en) Network-based identification of device usage patterns that can indicate that the user has a qualifying disability
JP2022079458A (en) Automated assistant having conferencing ability
US11321675B2 (en) Cognitive scribe and meeting moderator assistant
US11558334B2 (en) Multi-message conversation summaries and annotations
US20140330566A1 (en) Providing social-graph content based on a voice print
US20110061004A1 (en) Use of Communicator Application to Establish Communication with Experts
US11880663B2 (en) Assistant for providing information on unknown topics
CN111444729B (en) Information processing method, device, equipment and readable storage medium
WO2022265743A1 (en) Retrospection assistant for virtual meetings
TWM555527U (en) Smart agent robot system
US20240103893A1 (en) Generating content endorsements using machine learning nominator(s)
US11960847B2 (en) Systems and methods for generating responses for an intelligent virtual
WO2024089858A1 (en) Presentation device, presentation method, and presentation program
US20240305711A1 (en) Methods and systems to bookmark moments in conversation calls
US7873516B2 (en) Virtual vocal dynamics in written exchange
US11232363B2 (en) System and method of providing news analysis using artificial intelligence
US10963823B1 (en) Systems and methods for chatbot applications performing tasks based on user stress levels
WO2024176469A1 (en) Presentation device, presentation method, and presentation program
JP2020077054A (en) Selection device and selection method
JP5685014B2 (en) Discussion soundness calculation device
US20210050118A1 (en) Systems And Methods For Facilitating Expert Communications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22963505

Country of ref document: EP

Kind code of ref document: A1