WO2021130953A1 - Conversation assistance device, conversation assistance system, conversation assistance method, and recording medium - Google Patents


Info

Publication number
WO2021130953A1
Authority
WO
WIPO (PCT)
Prior art keywords
conversation
patient
text
agreement
category
Prior art date
Application number
PCT/JP2019/051090
Other languages
French (fr)
Japanese (ja)
Inventor
孝之 近藤
利憲 細井
長谷川 武史
秀章 三澤
潤一郎 日下
有希 草野
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 filed Critical 日本電気株式会社
Priority to JP2021566679A priority Critical patent/JP7388450B2/en
Priority to PCT/JP2019/051090 priority patent/WO2021130953A1/en
Publication of WO2021130953A1 publication Critical patent/WO2021130953A1/en


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Definitions

  • The present invention relates to the technical field of a conversation support device, a conversation support system, a conversation support method, and a recording medium for supporting conversation between a medical worker and a patient.
  • In medical settings such as hospitals, various conversations take place between medical staff and patients. For example, a conversation for informed consent takes place between a healthcare professional and a patient. Devices for supporting such informed consent are described in Patent Documents 1 and 2. Other prior art documents related to the present invention include Patent Documents 3 to 6.
  • Patent Document 1: Japanese Unexamined Patent Publication No. 2005-063162
  • Patent Document 2: Japanese Unexamined Patent Publication No. 2015-170120
  • Patent Document 3: Japanese Unexamined Patent Publication No. 2017-11755
  • Patent Document 4: International Publication No. 2019/038807 Pamphlet
  • Patent Document 5: Japanese Unexamined Patent Publication No. 2017-049710
  • Patent Document 6: Japanese Unexamined Patent Publication No. 2017-11756
  • When informed consent is obtained, the medical staff is required to explain to the patient, without excess or omission, the actions (for example, medical actions) that the medical staff will perform on the patient.
  • However, the devices described in Patent Documents 1 to 6 are not intended to help the medical staff explain to the patient, without excess or omission, the actions to be performed on the patient. The devices described in Patent Documents 1 to 6 therefore have a technical problem in that the possibility of an omission in the explanation from the medical staff to the patient is relatively high.
  • An object of the present invention is to provide a conversation support device, a conversation support system, a conversation support method, and a computer program capable of solving the above-mentioned technical problems.
  • An example of a conversation support device for solving the problem comprises: a classification unit that classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of a conversation between a medical worker and a patient in an agreement acquisition process of explaining to the patient an action to be performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the action, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and a display control unit that displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
  • An example of a conversation support system for solving the problem is a conversation support system comprising: a conversation recording device that records the content of a conversation between a medical worker and a patient in an agreement acquisition process of explaining to the patient an action to be performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the action; an example of the conversation support device described above; and a display device.
  • An example of a conversation support method for solving the problem is a conversation support method that classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of a conversation between a medical worker and a patient in an agreement acquisition process of explaining to the patient an action to be performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the action, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
  • An example of a recording medium for solving the problem is a recording medium in which a computer program is recorded that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient, wherein the conversation support method classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of the conversation between the medical worker and the patient in an agreement acquisition process of explaining to the patient an action to be performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the action, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
  • According to the conversation support device, conversation support system, conversation support method, and recording medium described above, it is possible to reduce the possibility of an omission in the explanation from the medical staff to the patient.
  • FIG. 1 is a block diagram showing an overall configuration of the conversation support system of the present embodiment.
  • FIG. 2 is a block diagram showing a configuration of the conversation support device of the present embodiment.
  • FIG. 3 is a data structure diagram showing an example of the data structure of the IC management DB.
  • FIG. 4 is a flowchart showing the flow of the conversation support operation performed by the conversation support device.
  • FIG. 5 is a plan view showing an example of a GUI for accepting input of initial information.
  • FIG. 6 is a plan view showing an example of the IC support screen.
  • FIG. 7 is a block diagram showing the configuration of the conversation support device of the first modification.
  • FIG. 8 is a plan view showing an example of a warning screen warning that the conversation required for informed consent is insufficient.
  • FIG. 9 is a plan view showing an example of an IC support screen including an index showing the number of conversation texts classified into one category.
  • FIG. 10 is a block diagram showing a configuration of a conversation support device of the second modification.
  • FIG. 11 is an explanatory diagram showing an example of summary information.
  • FIG. 12 is a plan view showing an example of an IC support screen including a GUI for designating at least a part of a plurality of conversation texts to be included in the summary information.
  • FIG. 13 is a block diagram showing the configuration of the conversation support device of the third modification.
  • FIG. 14 is a block diagram showing the overall configuration of the conversation support system of the fourth modification.
  • FIG. 15 is a block diagram showing a configuration of a conversation support device of the fourth modification.
  • FIG. 16 is a block diagram showing a configuration of a conversation support device of the fifth modification.
  • FIG. 17 is a plan view showing an example of an IC support screen including a GUI for correcting the classification result of the conversation text.
  • FIG. 18 is a plan view showing an example of an IC support screen including a GUI for searching conversation text.
  • FIG. 1 is a block diagram showing a configuration of the conversation support system SYS of the present embodiment.
  • the conversation support system SYS supports conversations between medical staff and patients.
  • For example, the conversation support system SYS may support a conversation in a situation where the medical staff explains to the patient the actions (for example, medical actions) to be performed by the medical staff on the patient.
  • the conversation support system SYS may support conversation for the purpose of confirming whether or not sufficient explanation has been given to the patient by the medical staff.
  • For example, the conversation support system SYS may support the conversation between the medical worker and the patient in an agreement acquisition process of explaining to the patient an action (for example, a medical action) to be performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the action.
  • An example of a conversation between a healthcare professional and a patient in the process of obtaining such an agreement is a conversation for obtaining informed consent (IC).
  • Informed consent in the medical field means an agreement reached between the medical staff and the patient after the patient has received sufficient information.
  • In the following, the conversation support system SYS that supports the conversation between the medical staff and the patient in the agreement acquisition process for obtaining informed consent will be described.
  • The “medical worker” in the present embodiment may include any person engaged in medical care, which is an activity aimed at at least one of treatment of illness, prevention of illness, maintenance of health, recovery of health, and promotion of health.
  • a healthcare professional may include a person who is capable of performing medical practice independently. At least one of a doctor, dentist and midwife is an example of a person who can perform medical practice independently.
  • a healthcare professional may include a person who is capable of performing medical practice under the direction of a superior person (eg, a doctor or dentist).
  • At least one of a nurse, pharmacist, clinical laboratory technician, radiological technologist, and physiotherapist is an example of a person who can perform medical treatment under the instruction of a superior person.
  • a healthcare professional may include a person performing procedures at a practitioner's office (for example, at least one of an acupuncture and moxibustion clinic, an osteopathic clinic, and a bonesetting clinic).
  • At least one of a masseur, acupuncturist, moxibustionist, and judo therapist is an example of a person performing such procedures.
  • a health care worker may include a person engaged in health services.
  • a public health nurse is an example of a person engaged in health work.
  • a health care worker may include a person engaged in welfare work.
  • An example of a person engaged in welfare work is at least one of a social worker, a child welfare worker, a mental health worker, a clinical psychologist and a clinical development psychologist.
  • a health care worker may include a person engaged in long-term care work.
  • An example of a person engaged in long-term care work is at least one of a long-term care worker, a home-visit caregiver, a long-term care support specialist, and a home helper.
  • The "patient" in the present embodiment may include any person who receives medical care, which is an activity aimed at at least one of treatment of illness, prevention of illness, maintenance of health, recovery of health, and promotion of health. Depending on the patient's condition, the patient may not be able to communicate. In this case, the patient's agent (for example, a relative, guardian, or assistant) usually speaks with the healthcare professional on behalf of the patient. Therefore, the "patient" in the present embodiment may also include the patient's agent.
  • the conversation support system SYS may support conversations between one healthcare professional and one patient.
  • the conversation support system SYS may support conversations between multiple healthcare professionals and a single patient.
  • the conversation support system SYS may support conversations between one healthcare professional and multiple patients.
  • the conversation support system SYS may support conversations between a plurality of healthcare professionals and a plurality of patients.
  • the conversation support system SYS includes a recording device 1, a conversation support device 2, a display device 3, and an input device 4, as shown in FIG. 1.
  • the recording device 1 is a device that records a conversation between a medical worker and a patient. By recording the conversation between the medical staff and the patient, the recording device 1 generates voice data indicating the content of the conversation between the medical staff and the patient by voice. Therefore, the recording device 1 may include, for example, a microphone and a data processing device that converts a conversation recorded by the microphone as an analog electronic signal into digital audio data. As an example, the recording device 1 may be an information terminal (for example, a smartphone) having a built-in microphone. The recording device 1 outputs the generated voice data to the conversation support device 2.
  • the conversation support device 2 uses the voice data generated by the recording device 1 to perform a conversation support operation for supporting the conversation between the medical staff and the patient.
  • the conversation support operation may include an operation that supports the conversation between the medical staff and the patient so that the explanation given by the medical staff to the patient when obtaining informed consent is not omitted. The conversation support operation may also include, for example, an operation that supports the conversation between the medical staff and the patient so that the medical staff gives a sufficient explanation to the patient when obtaining informed consent.
  • FIG. 2 is a block diagram showing the configuration of the conversation support device 2.
  • the conversation support device 2 includes a CPU (Central Processing Unit) 21, a storage device 22, and an input / output IF (Interface) 23.
  • the CPU 21 reads a computer program.
  • the CPU 21 may read a computer program stored in the storage device 22.
  • the CPU 21 may read a computer program stored in a computer-readable recording medium using a recording medium reading device (not shown).
  • the CPU 21 may acquire a computer program from a device (not shown) arranged outside the conversation support device 2 via a communication device (not shown) (that is, the CPU 21 may download or read the computer program).
  • the CPU 21 executes the read computer program.
  • a logical functional block for executing an operation to be performed by the conversation support device 2 (for example, the conversation support operation described above) is realized in the CPU 21. That is, the CPU 21 can function as a controller for realizing a logical functional block for executing the operation to be performed by the conversation support device 2.
  • FIG. 2 shows an example of a logical functional block realized in the CPU 21 to execute a conversation support operation.
  • a text conversion unit 211, a classification unit 212, and a display control unit 213 are realized in the CPU 21.
  • the details of the operations of the text conversion unit 211, the classification unit 212, and the display control unit 213 will be described in detail later with reference to FIG. 3 and the like, but the outline thereof will be briefly described here.
  • the text conversion unit 211 converts the voice data transmitted from the recording device 1 into text data.
  • the classification unit 212 classifies each of a plurality of conversation texts, obtained by subdividing the sentences indicated by the text data, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process for obtaining informed consent.
  • the display control unit 213 controls the display device 3 so as to display the IC support screen 31 (see FIG. 6 described later) in order to support the conversation between the medical staff and the patient based on the classification result of the classification unit 212.
  • the recording device 1 itself may include a text conversion unit that converts the voice data recorded by the recording device 1 into text data.
  • the recording device 1 may transmit text data to the conversation support device 2 in addition to or instead of the voice data.
  • the conversation support device 2 may include, in addition to or instead of the text conversion unit 211, a data acquisition unit that acquires the text data transmitted by the recording device 1 as a logical functional block realized in the CPU 21.
  • the conversation support device 2 does not have to include the text conversion unit 211.
  • the conversation support device 2 may be an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical worker.
  • the conversation support device 2 may be a server installed in the facility where the medical staff is working.
  • the conversation support device 2 may be a server (so-called cloud server) installed outside the facility where the medical staff is working.
  • the storage device 22 can store desired data.
  • the storage device 22 may temporarily store a computer program executed by the CPU 21.
  • the storage device 22 may temporarily store data temporarily used by the CPU 21 when the CPU 21 is executing a computer program.
  • the storage device 22 may store data stored by the conversation support device 2 for a long period of time.
  • the storage device 22 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device.
  • the storage device 22 stores the IC management DB (DataBase) 221 for managing the informed consent that is the support target of the conversation support operation.
  • the IC management DB 221 contains one record per acquired informed consent, each record including information regarding the contents of that informed consent.
  • FIG. 3 shows an example of the data structure of the IC management DB 221.
  • a record including information regarding the contents of an informed consent includes, for example, information indicating an identification number (ID) for identifying the record and information regarding the informed consent.
  • the IC management DB 221 may be referred to as agreement-related data.
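  • As a minimal sketch, a record of the IC management DB 221 might be modeled as follows; apart from the identification number, the field names here are illustrative assumptions, since the source only specifies an ID and information regarding the informed consent.

```python
from dataclasses import dataclass, field

@dataclass
class ICRecord:
    """One record of the IC management DB 221 (field names are illustrative)."""
    record_id: int            # identification number (ID) identifying the record
    ic_title: str             # title of the informed consent (hypothetical field)
    categories: list = field(default_factory=list)  # designated classification categories

# the IC management DB holds one record per acquired informed consent
ic_management_db: dict = {}

def register_record(record: ICRecord) -> None:
    """Register a record in the IC management DB, keyed by its ID."""
    ic_management_db[record.record_id] = record
```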
  • the input / output IF 23 is a device that transmits and receives data between the conversation support device 2 and an external device of the conversation support device 2 (for example, at least one of the recording device 1, the display device 3, and the input device 4). The conversation support device 2 transmits data to an external device via the input / output IF 23, and receives data transmitted from an external device via the input / output IF 23.
  • the display device 3 is an output device (that is, a display) capable of displaying desired information.
  • the display device 3 displays the IC support screen 31 under the control of the display control unit 213.
  • the display device 3 may be a display provided in an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical professional.
  • the display device 3 may be a display that can be visually recognized by both the medical staff and the patient.
  • the conversation support system SYS may separately include a display device 3 that can be visually recognized by the medical staff and a display device 3 that can be visually recognized by the patient. That is, the conversation support system SYS may include a plurality of display devices 3. In this case, the information displayed on one display device 3 may be the same as or different from the information displayed on the other display devices 3.
  • the input device 4 is a device that receives an input operation from a user of the conversation support device 2 (for example, at least one of a medical worker and a patient).
  • the input device 4 may include, for example, a user-operable operating device.
  • the input device 4 may include, for example, at least one of a keyboard, a mouse, and a touch panel as an example of the operating device.
  • the input device 4 may be an operating device included in an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical professional.
  • the input device 4 may be an operating device that can be operated by both the medical staff and the patient.
  • the conversation support system SYS may separately include an input device 4 that can be operated by the medical staff and an input device 4 that can be operated by the patient. That is, the conversation support system SYS may include a plurality of input devices 4.
  • FIG. 4 is a flowchart showing the flow of the conversation support operation performed by the conversation support device 2.
  • the conversation support device 2 accepts the input of the initial information regarding the informed consent, and registers the received initial information in the IC management DB 221 (step S11). However, when the initial information has already been input (for example, the initial information has been registered in the IC management DB 221), the conversation support device 2 does not have to perform the operation of step S11.
  • the display control unit 213 of the conversation support device 2 may control the display device 3 so as to display a GUI (Graphical User Interface) 32 for receiving input of the initial information regarding the informed consent.
  • the user of the conversation support device 2 (for example, at least one of the medical staff and the patient) may input the initial information using the input device 4 while referring to the GUI 32 displayed on the display device 3.
  • the GUI 32 may include a GUI for accepting input of information included in the IC management DB 221.
  • as the GUI for receiving input of the information included in the IC management DB 221, the GUI 32 may include, for example, a text box 321 for inputting the title (IC title) of the informed consent.
  • the GUI 32 may include a GUI for accepting input for designating a category (in other words, a tag) for classifying conversation text.
  • the classification unit 212 classifies each of the plurality of conversation texts into at least one category designated by using the GUI 32.
  • the classification unit 212 does not have to use at least one category not specified by using the GUI 32 as a classification destination of the conversation text.
  • the GUI 32 includes a plurality of check boxes 326 corresponding to the plurality of categories as a GUI that accepts input for designating the categories into which the conversation texts are classified. For example, as shown in FIG. 5, the GUI 32 may include at least one of: a check box 326-1 that designates the category into which conversation text referring to the "purpose of the informed consent (that is, the purpose of the agreement acquisition process)" is classified; a check box 326-2 that designates the category into which conversation text referring to "patient symptoms or medical condition" is classified; a check box 326-3 that designates the category into which conversation text referring to "tests or treatments performed on the patient" is classified; a check box 326-4 that designates the category into which conversation text referring to a "clinical trial or study involving the patient" is classified; a check box 326-5 that designates the category into which conversation text referring to the "patient's opinion" is classified; a check box 326-6 that designates the category into which conversation text referring to the "agreement between the medical worker and the patient (the agreement here may include an indication of at least one of consent and refusal)" is classified; and a check box 326-7 that designates the category into which conversation text referring to the "future medical policy for the patient" is classified. Further, as shown in FIG. 5, the GUI 32 may include at least one of: a check box 326 that designates the category into which conversation text referring to the "disease name or diagnosis name" is classified; a check box 326 that designates the category into which conversation text referring to the merits of the "act to be performed this time (for example, a medical act)" (for example, merits related to life prognosis and merits related to QOL (Quality Of Life)) is classified; a check box 326 that designates the category into which conversation text referring to the demerits of the "act to be performed this time (for example, a medical act)" (for example, at least one of danger, distress, side effects, and complications) is classified; a check box 326 that designates the category into which conversation text referring to the "patient burden (for example, cost burden and time burden including leave)" is classified; and a check box 326 that designates the category into which conversation text referring to a "response or confirmation from the healthcare professional side" is classified.
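  • The category designation above can be sketched as follows; the tag strings are paraphrases of the categories named in this embodiment, not identifiers from the source.

```python
# Categories corresponding to check boxes 326-1 through 326-7 (paraphrased labels)
CATEGORIES = [
    "purpose of the informed consent",
    "patient symptoms or medical condition",
    "tests or treatments performed on the patient",
    "clinical trial or study",
    "patient opinion",
    "agreement between the medical worker and the patient",
    "future medical policy for the patient",
]

def designated_categories(checked_boxes):
    """Return only the categories whose check boxes 326 were checked in the
    GUI 32; unchecked categories are not used as classification destinations."""
    return [c for c, checked in zip(CATEGORIES, checked_boxes) if checked]
```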
  • the text conversion unit 211 acquires the voice data generated by the recording device 1 via the input / output IF 23 (step S12).
  • the acquired voice data may be stored in the storage device 22.
  • the information regarding the voice data stored by the storage device 22 may be registered in the IC management DB 221.
  • Information for preventing falsification of the voice data (for example, at least one of a time stamp and an electronic signature) may be added to the voice data.
  • after that, the text conversion unit 211 generates, from the voice data acquired in step S12, text data indicating the content of the conversation between the medical staff and the patient as sentences (that is, text) (step S13). That is, the text conversion unit 211 converts the voice data acquired in step S12 into text data (step S13).
  • the generated text data may be stored by the storage device 22.
  • the information about the text data stored by the storage device 22 may be registered in the IC management DB 221.
  • Information for preventing falsification of the text data (for example, at least one of a time stamp and an electronic signature) may be added to the text data.
  • the text conversion unit 211 (or an arbitrary functional block included in the CPU 21) may generate the text data so that text indicating the utterance content of the medical worker and text indicating the utterance content of the patient can be distinguished.
  • the classification unit 212 classifies each of the plurality of conversation texts, obtained by subdividing the sentences indicated by the text data generated in step S13, into at least one of the plurality of categories designated in step S11 (step S14). Classifying a conversation text into a category may be regarded as equivalent to assigning to the conversation text at least one of a plurality of tags corresponding to the plurality of categories designated in step S11 (that is, tagging the conversation text).
  • the classification unit 212 may classify each of the plurality of conversation texts into at least one of a plurality of categories by using a predetermined classification model.
  • the classification model may include, for example, master data relating to categories, master data relating to example sentences classified into each category, and dictionary data relating to word vectorization.
  • the classification unit 212 may classify the conversation texts constituting the text into a desired category by comparing the text indicated by the text data with the master data.
  • the classification unit 212 may vectorize the words that make up the sentences indicated by the text data (that is, calculate a feature vector of each word or sentence) and classify the conversation text into a desired category based on the vectorized words (that is, based on the feature vectors).
  • the classification unit 212 may, in addition to or instead of using a classification model, classify each of the plurality of conversation texts into at least one of the plurality of categories using a rule-based method. That is, the classification unit 212 may classify each of the plurality of conversation texts into at least one of the plurality of categories by classifying each conversation text according to a predetermined rule.
  • the classification unit 212 may, in addition to or instead of using at least one of a classification model and a rule-based method, classify each conversation text into at least one of the plurality of categories using cosine similarity (that is, cosine similarity with respect to feature vectors of the conversation texts).
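  • As a minimal sketch of the cosine-similarity approach, each conversation text can be compared against one example sentence per category; a real implementation would use trained word embeddings rather than the bag-of-words vectors assumed here, and the threshold value is an arbitrary assumption.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Bag-of-words feature vector (a trained embedding would be used in practice)
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(conversation_text: str, examples: dict, threshold: float = 0.2) -> str:
    """Assign the category whose example sentence is most similar to the
    conversation text; below the threshold, tag it 'no corresponding category'."""
    v = vectorize(conversation_text)
    best_category, best_sim = "no corresponding category", threshold
    for category, example in examples.items():
        sim = cosine_similarity(v, vectorize(example))
        if sim > best_sim:
            best_category, best_sim = category, sim
    return best_category
```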
  • the classification unit 212 may, in addition to or instead of using at least one of a classification model, a rule-based method, and cosine similarity, classify each of the plurality of conversation texts into at least one of the plurality of categories using a clustering-based method.
  • the classification unit 212 may, in addition to or instead of using at least one of a classification model, a rule-based method, cosine similarity, and a clustering-based method, classify each of the plurality of conversation texts into at least one of the plurality of categories using a learning model.
  • the learning model is a learning model (for example, a learning model using a neural network) that outputs a category of conversation texts constituting the text data when text data is input.
  • the classification unit 212 does not have to classify the conversation text, which is difficult to classify into any of the plurality of categories, into any of the plurality of categories.
  • the classification unit 212 may add a tag of "no corresponding category” or "unknown corresponding category” to conversation text that is difficult to classify into any of a plurality of categories.
  • the sentences indicated by the text data may be subdivided into arbitrary units. That is, the size of the conversation text obtained by subdividing the sentences indicated by the text data may be arbitrary. For example, at least a part of the sentences indicated by the text data may be subdivided into word units. For example, at least a part may be subdivided into bunsetsu (phrase) units. For example, at least a part may be subdivided into a plurality of conversation texts with punctuation marks as boundaries. For example, at least a part may be subdivided into sentence units (for example, at least one of a simple sentence, a compound sentence, and a complex sentence). For example, at least a part may be subdivided into morpheme units; in this case, morphological analysis may be performed on the text data.
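  • The punctuation-based subdivision mentioned above can be sketched as follows; the exact delimiter set is an assumption, since the source only says that punctuation marks serve as boundaries.

```python
import re

def subdivide(text: str):
    """Subdivide text into conversation texts, treating sentence-ending
    punctuation (ASCII and Japanese) and the Japanese comma as boundaries."""
    parts = re.split(r"(?<=[.!?。！？])\s*|、", text)
    return [p.strip() for p in parts if p.strip()]
```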
  • the classification data indicating the classification result by the classification unit 212 may be stored by the storage device 22.
  • the storage device 22 may add information (for example, at least one of a time stamp and an electronic signature) to the classification data to prevent the classification data from being tampered with.
• After that, the display control unit 213 generates an IC support screen 31 to support the conversation between the medical staff and the patient, based on the classification result of the classification unit 212 (step S15). Further, the display control unit 213 controls the display device 3 to display the generated IC support screen 31 (step S15). As a result, the display device 3 displays the IC support screen 31.
  • the IC support screen 31 may include, for example, a conversation display screen 311 and a category display screen 312. That is, the display control unit 213 controls the display device 3 so that the conversation display screen 311 and the category display screen 312 are displayed in parallel. However, the IC support screen 31 does not have to include at least one of the conversation display screen 311 and the category display screen 312.
  • the conversation display screen 311 displays the content of the conversation text along the flow of conversation between the medical staff and the patient. That is, the conversation display screen 311 displays texts indicating the contents of the conversation between the medical staff and the patient during a certain period in the order of the conversation flow.
• the content of each conversation text may be displayed together with information indicating the time at which the conversation indicated by the conversation text took place, information indicating the person who spoke the words indicated by the conversation text, and information indicating the category into which the conversation text is classified.
  • the conversation display screen 311 may display texts indicating the contents of the conversation between the current medical staff and the patient in the order of the conversation flow.
• In practice, the conversation display screen 311 displays, as text indicating the content of the current conversation between the medical staff and the patient, text indicating the content of a conversation delayed from the current time only by the time required to complete the processes from step S12 to step S14.
  • the conversation display screen 311 displays texts indicating the contents of the conversation between the medical staff and the patient a certain time ago (for example, a few seconds ago, a few tens of seconds ago, or a few minutes ago) in the order of the conversation flow. It may be displayed.
  • the conversation display screen 311 may display texts indicating the contents of the already completed conversation between the medical staff and the patient in the order of the conversation flow.
  • the conversation display screen 311 may display the contents of the conversation in units of subdivided conversation texts.
  • the conversation display screen 311 may display the content of the conversation in a unit different from the conversation text in addition to or instead of displaying the content of the conversation in the unit of the conversation text.
  • the conversation display screen 311 may display the content of the conversation in units of a group of conversations including a plurality of conversation texts (that is, in units of conversations that make sense).
  • the conversation display screen 311 may display the text indicated by the text data before being subdivided into the conversation text.
  • the category display screen 312 displays at least a part of the plurality of conversation texts by classified categories. That is, the category display screen 312 displays conversation text classified into one of a plurality of categories. On the other hand, the category display screen 312 does not have to display conversation texts classified into other categories different from one category among the plurality of categories. In the example shown in FIG. 6, the category display screen 312 displays conversation texts classified into categories related to "patient's symptom or medical condition".
• On the category display screen 312 as well, the content of each conversation text is displayed together with information indicating the time at which the conversation indicated by the conversation text took place, the person who spoke the words indicated by the conversation text, and the category into which the conversation text is classified.
  • the category display screen 312 may include a GUI 3120 for designating the category of the conversation text to be displayed on the category display screen 312.
  • the GUI 3120 includes a plurality of buttons 3121 corresponding to a plurality of categories, respectively.
  • the plurality of buttons 3121 correspond to the plurality of categories specified in step S11 of FIG. 4, respectively.
• the GUI 3120 may include: a button 3121-1 pressed to display conversation texts of all categories; a button 3121-2 pressed to display conversation texts referring to "the purpose of informed consent"; a button 3121-3 pressed to display conversation texts referring to "the patient's symptoms or medical condition"; a button 3121-4 pressed to display conversation texts referring to "tests or treatments performed on the patient"; a button 3121-5 pressed to display conversation texts referring to "trials or studies involving the patient"; a button 3121-6 pressed to display conversation texts referring to "the patient's opinion"; a button 3121-7 pressed to display conversation texts referring to "agreement between the medical staff and the patient"; and a button 3121-8 pressed to display conversation texts referring to "the future medical policy for the patient".
• the user of the conversation support device 2 may specify the category of the conversation texts to be displayed on the category display screen 312 by using the input device 4 while referring to the GUI 3120 displayed on the display device 3. As a result, the conversation texts classified into the category specified by the user are displayed on the category display screen 312.
• The operation described above (particularly, the operations from step S12 to step S15) is repeated until it is determined that the conversation support operation should end (step S16).
• When the conversation support system SYS separately includes a display device 3 visible to the medical staff and a display device 3 visible to the patient, the content of the IC support screen 31 displayed on the display device 3 visible to the patient may be different from the content of the IC support screen 31 displayed on the display device 3 visible to the medical staff.
• For example, the IC support screen 31 displayed on the display device 3 visible to the patient may display information useful for the patient to understand the explanation given by the medical staff (for example, at least one of information explaining medical terms used by the medical staff and information regarding the patient's diagnosis results).
• As described above, the conversation support system SYS of the present embodiment can display the IC support screen 31 on which the conversation texts indicating the content of the conversation between the medical staff and the patient for obtaining informed consent are displayed together with their categories. Therefore, the medical staff can determine whether the explanation about a certain category is insufficient by checking the categories of the conversation texts displayed on the IC support screen 31.
• the state of "insufficient explanation about a certain category" mentioned here may mean a state in which the explanation from the medical staff regarding the category is omitted. In other words, "insufficient explanation about a certain category" may mean that at least some of the information that the medical staff should convey to the patient about the category has not been explained to the patient.
  • the patient can also determine whether or not the explanation about a certain category is insufficient by checking the category of the conversation text displayed on the IC support screen 31. As a result, if it is determined that the explanation about a certain category is insufficient, the medical staff can give an additional explanation about a certain category. Therefore, the conversation support system SYS can reduce the possibility of omission of explanation from the medical staff to the patient.
  • the conversation support system SYS can display a category display screen 312 for displaying at least a part of a plurality of conversation texts by classified categories.
• Since the conversation texts classified into a certain category are displayed together, the medical staff and the patient can more appropriately determine whether the explanation about that category is insufficient. Therefore, the conversation support system SYS can more appropriately reduce the possibility of omission of explanation from the medical staff to the patient.
  • FIG. 7 is a block diagram showing a configuration of the conversation support device 2a of the first modification.
• Components that have already been described are denoted by the same reference numerals, and detailed description thereof is omitted.
• the conversation support device 2a of the first modification differs from the conversation support device 2 described above in that a warning unit 214a is realized in the CPU 21 as a logical functional block for executing the conversation support operation.
  • Other features of the conversation support device 2a may be the same as other features of the conversation support device 2.
  • the warning unit 214a determines whether or not the conversation required for informed consent is insufficient based on the classification result of the classification unit 212. For example, the warning unit 214a may determine whether or not the conversation required for informed consent is insufficient for each category based on the classification result of the classification unit 212.
• In general, the greater the volume of conversation between the medical staff and the patient about a certain category, the more likely it is that the medical staff has given the patient a sufficient explanation about that category. Conversely, the smaller the volume of conversation, the less likely this is. Therefore, the operation of determining whether the conversation required for informed consent about a certain category is insufficient may be regarded as substantially equivalent to the operation of determining whether the explanation about that category is insufficient.
• In other words, the operation of determining whether the conversation required for informed consent about a certain category is insufficient may be regarded as a specific example of the operation of determining whether the explanation about that category is insufficient.
• For example, the warning unit 214a may determine whether the number of conversation texts classified into one category (that is, the number of subdivided blocks) is greater than a threshold value specific to that category. When the number of conversation texts is greater than the threshold value, it is highly likely that there was enough conversation about the category required for informed consent. On the other hand, when the number of conversation texts is less than the threshold value, it is likely that there was not.
• In the former case, the warning unit 214a may determine that the conversation required for informed consent is not insufficient. In the latter case, the warning unit 214a may determine that the conversation required for informed consent is insufficient.
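A minimal sketch of this per-category count check (category names and threshold values are hypothetical):

```python
# Hypothetical per-category thresholds for the number of conversation texts.
THRESHOLDS = {"patient's symptom or medical condition": 5, "patient's opinion": 2}

def conversation_insufficient(counts, thresholds):
    """Return the categories whose conversation-text count is below threshold,
    i.e. the categories for which a warning may be warranted."""
    return [cat for cat, th in thresholds.items() if counts.get(cat, 0) < th]

counts = {"patient's symptom or medical condition": 7, "patient's opinion": 1}
print(conversation_insufficient(counts, THRESHOLDS))  # → ["patient's opinion"]
```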
• For example, the warning unit 214a may determine whether at least one of the number of utterances (that is, the number of remarks) and the utterance time (that is, the speaking time) of the medical staff and the patient regarding one category is greater than a threshold value specific to that category.
  • the number of utterances may mean, for example, the number of times a group of conversations that have a common meaning has been uttered.
• When at least one of the number of utterances and the utterance time is greater than the threshold value, it is more likely that there was enough conversation about the category required for informed consent than when at least one of them is less than the threshold value.
  • the warning unit 214a may determine that the conversation required for informed consent is not sufficient when at least one of the number of utterances and the utterance time is greater than the threshold value. On the other hand, the warning unit 214a may determine that the conversation required for informed consent is insufficient when at least one of the number of utterances and the utterance time is less than the threshold value.
• It is preferable that the threshold value be set to a value that can appropriately distinguish, based on at least one of the number of conversation texts, the number of utterances, and the utterance time, between the state in which the conversation required for informed consent is insufficient and the state in which it is not.
  • a threshold value may be a fixed value predetermined by the conversation support system SYS or the user.
  • the threshold value may be a variable value that can be appropriately set by the conversation support system SYS or the user.
• the threshold value may be zero. In this case, the operation of determining whether at least one of the number of conversation texts, the number of utterances, and the utterance time for one category is greater than the threshold value (that is, greater than zero) is equivalent to the operation of determining whether any utterance related to that category has been made. If it is determined that none of these values is greater than the threshold value, it is presumed that no utterance related to that category has been made. Therefore, in this case, the warning unit 214a may determine that the conversation required for informed consent is insufficient.
  • the warning unit 214a may set the threshold value corresponding to one category based on the content of the conversation conducted regarding one category when the same type of informed consent was acquired in the past.
• An informed consent of the same type as a given informed consent may mean an informed consent in which at least one (or all) of the purpose, symptoms, tests, treatments, and disease name is the same as or similar to that of the given informed consent.
• For example, the warning unit 214a may set the threshold value corresponding to one category to at least one of the number of conversation texts, the number of utterances, and the utterance time observed for that category when the same type of informed consent was obtained in the past, or to a value obtained by adding a predetermined margin to, or subtracting it from, at least one of these. In this case, the warning unit 214a can set an appropriate threshold value. It is preferable that the threshold value for the number of conversation texts, the threshold value for the number of utterances, and the threshold value for the utterance time be set individually.
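Deriving thresholds from a past informed consent of the same type, with a subtracted margin, might look like this (the data shapes and margin value are assumptions):

```python
def threshold_from_history(past_counts, margin=2):
    """Derive a per-category threshold from the conversation-text counts
    observed when the same type of informed consent was obtained before,
    minus a fixed margin (margin value is an assumption)."""
    return {cat: max(0, n - margin) for cat, n in past_counts.items()}

past = {"test or treatment": 12, "patient's opinion": 3}
print(threshold_from_history(past))  # → {"test or treatment": 10, "patient's opinion": 1}
```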
• For example, the warning unit 214a may determine whether a specific keyword for one category has been uttered by at least one of the medical staff and the patient.
• the specific keyword may be, for example, a keyword that should appear in a conversation explaining the category. In this case, if the specific keyword has not been uttered by at least one of the medical staff and the patient, it is likely that there was not enough conversation about the category required for informed consent. For this reason, the warning unit 214a may determine that the conversation required for informed consent is insufficient if the specific keyword has not been uttered by at least one of the medical staff and the patient.
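The keyword check could be sketched as follows (the keyword lists and example utterances are hypothetical illustrations):

```python
# Hypothetical keywords that should appear when explaining each category.
REQUIRED_KEYWORDS = {"test or treatment": {"anesthesia", "risk"}}

def missing_keywords(utterances, required):
    """Return, per category, the required keywords uttered by neither party."""
    spoken = set()
    for u in utterances:
        spoken.update(u.lower().split())
    return {cat: kws - spoken for cat, kws in required.items() if kws - spoken}

talk = ["We will use anesthesia during the surgery"]
print(missing_keywords(talk, REQUIRED_KEYWORDS))  # → {"test or treatment": {"risk"}}
```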
• For example, after a certain period of time has passed since the conversation between the medical staff and the patient started, the warning unit 214a may determine whether at least one of the number of conversation texts, the number of utterances, and the utterance time is greater than the threshold value and/or whether the specific keyword has been uttered by at least one of the medical staff and the patient. This is because, until that period of time has passed, the medical staff is relatively likely to be still in the middle of explaining the category (that is, the explanation of the category has not yet been completed).
• For example, the warning unit 214a can estimate the flow of conversation expected to take place in the newly obtained informed consent based on the flow of conversation conducted in informed consents obtained in the past. In this case, the warning unit 214a may estimate, from this estimated flow, the time at which the conversation about one category ends, and may perform the above determination for that category after the estimated end time. As a result, the warning unit 214a can appropriately determine whether the conversation required for informed consent about that category is insufficient.
• If the classification result of the classification unit 212 changes from a result in which at least one of the number of conversation texts classified into one category, the number of utterances related to that category, and the utterance time related to that category is relatively large to a result in which at least one of those values for another category is relatively large, it is highly probable that the conversation about the one category has ended (in other words, that the medical staff has fully explained the matters related to that category).
• Therefore, when the classification result of the classification unit 212 changes from a result in which at least one of the number of conversation texts classified into one category, the number of utterances related to that category, and the utterance time related to that category is relatively large to a result in which at least one of those values for another category is relatively large, the warning unit 214a may determine whether at least one of the number of conversation texts, the number of utterances, and the utterance time for the one category is greater than the threshold value. As a result, the warning unit 214a can appropriately determine whether the conversation required for informed consent about the one category is insufficient.
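One way to detect such a change in the classification result is to compare the dominant category of recent classification counts (an illustrative sketch; the windowed counts are assumptions):

```python
def dominant_category(counts):
    """Return the category with the largest count in a classification result."""
    return max(counts, key=counts.get)

def category_shift(recent_before, recent_after):
    """Detect that the dominant category of recent conversation texts changed,
    suggesting the conversation about the previous category has ended."""
    before, after = dominant_category(recent_before), dominant_category(recent_after)
    return after if after != before else None

recent_before = {"symptom": 5, "treatment": 0}
recent_after = {"symptom": 1, "treatment": 4}
print(category_shift(recent_before, recent_after))  # → treatment
```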
• When it is determined that the conversation required for informed consent about one category is insufficient, the warning unit 214a may warn the user of the conversation support system SYSa to that effect.
  • the warning unit 214a may control the display device 3 to display a warning screen 33a for warning that the conversation required for informed consent is insufficient in one category.
  • An example of the warning screen 33a is shown in FIG.
• As described above, the conversation support system SYSa of the first modification can determine whether the conversation required for informed consent is insufficient. Further, when that conversation is insufficient, the conversation support system SYSa can warn to that effect. As a result, the medical staff who sees the warning can give an additional explanation about the category in question. Therefore, the conversation support system SYSa can more appropriately reduce the possibility of omission of explanation from the medical staff to the patient.
• the conversation support system SYSa may give a voice warning that the conversation required for informed consent about one category is insufficient.
• In this case, the conversation support system SYSa may include, in addition to or instead of the display device 3, a speaker that outputs the warning by voice.
• the conversation support system SYSa may display the IC support screen 31 including an index indicating at least one of the number of conversation texts classified into one category, the number of utterances, and the utterance time. For example, the conversation support system SYSa (particularly, the display control unit 213) may display the IC support screen 31 including a bar graph 3122a that quantitatively shows the number of conversation texts classified into one category.
  • the conversation support system SYSa can more appropriately reduce the possibility of omission of explanation from the medical staff to the patient.
• the conversation support system SYSa need not include the warning unit 214a when it displays the IC support screen 31 including an index indicating at least one of the number of conversation texts classified into one category, the number of utterances, and the utterance time.
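Tallying conversation texts per category, e.g. as input for the bar graph 3122a, could be sketched as follows (the example data are hypothetical):

```python
from collections import Counter

def category_counts(classified_texts):
    """Tally conversation texts per category, e.g. to drive the bar graph 3122a.
    classified_texts is a list of (conversation_text, category) pairs."""
    return Counter(cat for _, cat in classified_texts)

texts = [("I feel pain", "symptom"), ("Since last week", "symptom"), ("I agree", "agreement")]
print(category_counts(texts))  # Counter({'symptom': 2, 'agreement': 1})
```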
  • FIG. 10 is a block diagram showing a configuration of the conversation support device 2b of the second modification.
• the conversation support device 2b of the second modification differs from the conversation support device 2 described above in that a summary output unit 215b is realized in the CPU 21 as a logical functional block for executing the conversation support operation.
  • Other features of the conversation support device 2b may be the same as other features of the conversation support device 2.
  • the conversation support device 2a of the first modification described above may include a summary output unit 215b.
• the summary output unit 215b generates summary information (that is, a summary) of the content of the conversation between the medical staff and the patient when informed consent is obtained. Further, the summary output unit 215b outputs the generated summary information. For example, the summary output unit 215b may control the display device 3, which is a specific example of the output device, to display the summary information.
  • the summary information may include, for example, at least a part of the initial information registered in step S11 of FIG.
• For example, the summary information may include at least one of: information indicating the title of the informed consent; information indicating the name of the medical worker who obtained the informed consent; information indicating the name of the patient who gave the informed consent; information indicating the date and time when the informed consent was obtained; and information indicating comments of at least one of the medical staff and the patient regarding the informed consent.
  • the summary information may include, for example, at least a part of a plurality of conversation texts obtained by subdividing the text indicated by the text data converted by the text conversion unit 211.
  • the user of the conversation support system SYSb may specify at least a part of the plurality of conversation texts to be included in the summary information.
• the display control unit 213 may control the display device 3 to display the IC support screen 31 including a GUI 3110b for designating the conversation texts to be included in the summary information.
• An example of the IC support screen 31 including the GUI 3110b for designating the conversation texts to be included in the summary information is shown in FIG. 12. In the example shown in FIG. 12, the GUI 3110b includes a plurality of check boxes 3111b corresponding to the plurality of conversation texts displayed on the IC support screen 31, each of which is selected when the corresponding conversation text is to be included in the summary information.
• the summary output unit 215b includes, in the summary information, the conversation texts corresponding to the check boxes 3111b selected by the user. On the other hand, the summary output unit 215b need not include, in the summary information, the conversation texts corresponding to the check boxes 3111b not selected by the user.
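Assembling summary information from the registered initial information and the user-selected check boxes could be sketched as follows (the field names and data shapes are assumptions):

```python
def build_summary(initial_info, conversation_texts, selected):
    """Assemble summary information from the registered initial information
    and the conversation texts whose check boxes were selected by the user.
    selected is the set of indices of selected check boxes."""
    chosen = [t for i, t in enumerate(conversation_texts) if i in selected]
    return {**initial_info, "conversation_texts": chosen}

info = {"title": "Surgery consent", "patient": "Taro"}
texts = ["I feel pain", "We recommend surgery", "I agree"]
print(build_summary(info, texts, selected={1, 2}))
```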
  • FIG. 12 shows an example in which the GUI 3110b is included in the conversation display screen 311 constituting the IC support screen 31.
  • the GUI 3110b may be included in the category display screen 312 that constitutes the IC support screen 31.
  • the summary information may include, for example, information about a category in which the conversation text included in the summary information is classified.
  • the summary output unit 215b may output summary information indicating the conversation texts by category.
  • the conversation support system SYSb of the second modification can output summary information. Therefore, the user can appropriately grasp the contents of informed consent by confirming the summary information.
  • the summary output unit 215b may learn the user's instruction to specify the conversation text to be included in the summary information.
• the summary output unit 215b may automatically select the conversation texts to be included in the summary information based on the learning result of the user's instructions. That is, based on this learning result, the summary output unit 215b may automatically select, as conversation texts to be included in the summary information, those that the user is presumed to want included.
• As a result, the summary output unit 215b can appropriately select, without requiring the user's instruction, conversation texts that are relatively likely to be selected by the user as texts to be included in the summary information.
• In this case, the summary output unit 215b may include a learning model (for example, a learning model using a neural network) that can be trained using, as teacher data, the user's instructions specifying the conversation texts to be included in the summary information.
  • the summary output unit 215b may recommend the conversation text that is preferably included in the summary information to the user based on the learning result of the instruction of the user who specifies the conversation text to be included in the summary information.
• the summary output unit 215b may control the display device 3 so that the conversation texts that are preferable to include in the summary information are displayed on the IC support screen 31 in a manner distinguishable from other conversation texts.
  • the user can relatively easily select the conversation text to be included in the summary information.
  • the conversation support system SYSc of the third modification is different from the conversation support system SYS described above in that it includes a conversation support device 2c instead of the conversation support device 2.
  • Other features of the conversation support system SYSc may be the same as other features of the conversation support system SYS. Therefore, the conversation support device 2c of the third modification will be described below with reference to FIG.
  • FIG. 13 is a block diagram showing a configuration of the conversation support device 2c of the third modification.
• the conversation support device 2c of the third modification differs from the conversation support device 2 described above in that a schedule presentation unit 216c is realized in the CPU 21 as a logical functional block for executing the conversation support operation.
  • Other features of the conversation support device 2c may be the same as other features of the conversation support device 2.
  • At least one of the conversation support device 2a of the first modification and the conversation support device 2b of the second modification described above may include the schedule presentation unit 216c.
• the schedule presentation unit 216c presents the schedule of conversations to be conducted in order to obtain informed consent to the medical staff who obtains it. Specifically, when informed consent of the same type as one obtained in the past is to be obtained, the schedule presentation unit 216c presents the schedule of conversations to be conducted for the current informed consent to the medical staff, based on the content of the conversations in the informed consent obtained in the past. This is because, as mentioned above, when informed consent of the same type as one obtained in the past is obtained, the flow of conversation is likely to be the same as the flow of conversation in the past informed consent.
• the schedule presenting unit 216c can present the schedule of the conversations to be conducted in the newly obtained informed consent based on the schedule of the conversations conducted in informed consents obtained in the past.
• For example, the schedule presentation unit 216c may present, as the schedule of conversations to be conducted in the newly obtained informed consent, a schedule that is the same as or similar to the schedule of conversations conducted in informed consents obtained in the past.
• For example, the schedule presentation unit 216c may learn the schedules of conversations conducted in informed consents obtained in the past, and may present the schedule of conversations to be conducted in the newly obtained informed consent based on the learning result.
• In this case, the schedule presentation unit 216c may include a learning model (for example, a learning model using a neural network) that can be trained using, as teacher data, the schedules of conversations conducted in informed consents obtained in the past.
  • the healthcare professional can proceed with the explanation to be given in order to obtain informed consent on an appropriate schedule based on the presented schedule.
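One simple way to derive a presented schedule from past informed consents of the same type is to average per-category conversation durations (an illustrative sketch; the data shapes and units are assumptions):

```python
def presented_schedule(past_schedules):
    """Average the per-category conversation durations (minutes) observed in
    past informed consents of the same type into a presented schedule."""
    totals, counts = {}, {}
    for sched in past_schedules:
        for cat, minutes in sched.items():
            totals[cat] = totals.get(cat, 0) + minutes
            counts[cat] = counts.get(cat, 0) + 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

past = [{"purpose": 5, "treatment": 20}, {"purpose": 7, "treatment": 24}]
print(presented_schedule(past))  # → {"purpose": 6.0, "treatment": 22.0}
```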
  • FIG. 14 is a block diagram showing a configuration of the conversation support system SYSd of the fourth modification.
• The conversation support system SYSd of the fourth modification differs from the conversation support system SYS described above in that it further includes an electronic medical record system 5d, and in that it includes a conversation support device 2d instead of the conversation support device 2. Other features of the conversation support system SYSd may be the same as those of the conversation support system SYS. Note that at least one of the conversation support system SYSa of the first modification to the conversation support system SYSc of the third modification may also be provided with the electronic medical record system 5d.
  • the electronic medical record system 5d is a system for managing the patient's electronic medical record. Specifically, the electronic medical record system 5d stores electronic medical record data 51d indicating the patient's electronic medical record. Therefore, the electronic medical record system 5d may be provided with a storage device for storing the electronic medical record data 51d.
• The conversation support device 2d differs from the conversation support device 2 described above in that a medical record cooperation unit 217d is realized in the CPU 21 as a logical functional block for executing the conversation support operation. Other features of the conversation support device 2d may be the same as those of the conversation support device 2.
• At least one of the conversation support device 2a of the first modification to the conversation support device 2c of the third modification described above may also include a medical record cooperation unit 217d.
• The medical record cooperation unit 217d performs a cooperation operation that links the electronic medical record data 51d stored in the electronic medical record system 5d with the data managed by the IC management DB 221 (that is, the data for managing informed consent).
• The medical record cooperation unit 217d may determine, based on the electronic medical record data 51d and the IC management DB 221, whether an act (for example, a medical act) for which agreement has not been obtained through informed consent has already been performed on the patient or is scheduled to be performed. Specifically, the medical record cooperation unit 217d can identify, based on the electronic medical record data 51d, the medical acts and the like that have already been performed or are being performed on the patient, and can determine, based on the IC management DB 221, whether an agreement has been reached between the medical worker and the patient regarding those acts.
• In this way, the medical record cooperation unit 217d can determine, based on the electronic medical record data 51d and the IC management DB 221, whether a medical act or the like for which agreement has not been obtained through informed consent has already been performed or is scheduled to be performed. Furthermore, when it determines that such an act has already been performed on the patient or is scheduled to be performed, the medical record cooperation unit 217d may warn the user of the conversation support system SYSd to that effect. As a result, the user can recognize that informed consent needs to be obtained before the medical act is performed, so the medical worker can perform the act only after informed consent has been properly obtained. In other words, the risk that a medical worker mistakenly performs a medical act for which informed consent has not been properly obtained is reduced.
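The cross-check performed by the medical record cooperation unit 217d can be pictured as a set difference between the acts recorded in the electronic medical record and the acts for which an agreement is on file. The sketch below is a hypothetical illustration only; the plain-string act names and the warning message are assumptions, not part of the described system.

```python
def acts_without_consent(emr_acts, agreed_acts):
    """Return medical acts recorded (or planned) in the electronic medical
    record for which no informed-consent agreement is on file."""
    return sorted(set(emr_acts) - set(agreed_acts))

# Hypothetical data: acts from the electronic medical record data 51d,
# and acts with a recorded agreement in the IC management DB 221.
emr = ["blood test", "ct scan", "surgery"]
agreed = ["blood test", "ct scan"]

missing = acts_without_consent(emr, agreed)
if missing:
    print(f"WARNING: no informed consent on file for: {missing}")
# → WARNING: no informed consent on file for: ['surgery']
```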
• At least a part of the IC management DB 221 may be included in the electronic medical record data 51d, and may be stored in the electronic medical record system 5d as a part of that data. Likewise, at least a part of the data managed by the IC management DB 221 (for example, at least one of the above-mentioned voice data, text data, and classification data indicating the classification results of the classification unit 212) may be included in the electronic medical record data 51d and stored in the electronic medical record system 5d as a part of the electronic medical record data 51d.
  • FIG. 16 is a block diagram showing a configuration of the conversation support device 2e of the fifth modification.
• The conversation support device 2e of the fifth modification differs from the conversation support device 2 described above in that it includes a classification unit 212e instead of the classification unit 212, and in that it includes a learning unit 214e. Other features of the conversation support device 2e may be the same as those of the conversation support device 2. In addition, at least one of the conversation support device 2a of the first modification to the conversation support device 2d of the fourth modification described above may include the classification unit 212e instead of the classification unit 212.
• The classification unit 212e differs from the classification unit 212 in that it includes at least two classifiers 2121e, each capable of classifying a conversation text into at least one of the plurality of categories.
  • the classification unit 212e includes a classification unit 2121e-1 and a classification unit 2121e-2.
  • Other features of the classification unit 212e may be the same as the other features of the classification unit 212.
  • the classification unit 2121e-1 may classify conversation texts by using any method.
  • the classification unit 2121e-2 may classify the conversation text by a method different from the method used by the classification unit 2121e-1.
  • the classification unit 2121e-1 may classify the conversation text by using a method based on the above-mentioned rule base.
  • the classification unit 2121e-2 may classify the conversation text using the above-mentioned classification model (for example, a learning model using a neural network). In this case, the classification unit 2121e-2 may classify conversation texts that the classification unit 2121e-1 could not classify.
  • the learning unit 214e learns the classification result of the classification unit 2121e-1.
• The learning result of the learning unit 214e is reflected in the classification unit 2121e-2. Therefore, the classification unit 2121e-2 preferably classifies conversation texts using a learning model created through learning by the learning unit 214e (for example, a learning model using a neural network).
• Because the classification unit 2121e-2 can use a learning model that reflects learning from the classification results of the classification unit 2121e-1, it can classify conversation texts with relatively higher accuracy than the classification unit 2121e-1.
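The two-stage arrangement of the classifiers 2121e-1 and 2121e-2 (a rule base backed by a learned model) can be sketched as follows. The keyword rules, the category names, and the stub standing in for the learned model are assumptions made for illustration; a real second stage would be a trained classifier such as a neural network.

```python
# Hypothetical rule base for the first stage (corresponding to 2121e-1).
KEYWORD_RULES = {
    "symptom": "symptoms/medical condition",
    "operation": "examination/treatment",
    "agree": "existence of agreement",
}

def classify_rule_based(text):
    """First-stage classifier: returns a category, or None when no rule fires."""
    lowered = text.lower()
    for keyword, category in KEYWORD_RULES.items():
        if keyword in lowered:
            return category
    return None

def classify_learned(text):
    """Stand-in for the second-stage learned model (corresponding to 2121e-2).
    A real system would run a classifier trained on the first stage's
    classification results as teacher data."""
    return "other"

def classify(text):
    # Texts the rule base cannot classify fall through to the learned model.
    return classify_rule_based(text) or classify_learned(text)

print(classify("We all agree on the treatment plan"))  # → existence of agreement
print(classify("How are you today?"))                  # → other
```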
  • the learning unit 214e may learn the classification result of the classification unit 212.
• The learning result of the learning unit 214e is reflected in the classification unit 212. In this case, learning by the learning unit 214e is expected to improve the classification accuracy of the classification unit 212.
  • the classification unit 212 may modify the classification result of the conversation text.
  • the classification unit 212 may modify the classification result of the conversation text based on the user's instruction for modifying the classification result of the conversation text.
  • the display control unit 213 may control the display device 3 so as to display the IC support screen 31 including the GUI 313 for correcting the classification result of the conversation text.
  • An example of the IC support screen 31 including the GUI 313 for correcting the classification result of the conversation text is shown in FIG.
• The GUI 313 is displayed so as to correspond to a category displayed on the IC support screen 31 (for example, a category displayed in association with a conversation text on the conversation display screen 311) and to that conversation text.
  • the classification unit 212 may change the category for classifying the conversation text to the category selected by the user.
• The classification unit 212 may learn the correction content of the classification results of conversation texts, and may classify conversation texts based on the learning result of that correction content. In this case, the classification unit 212 can classify conversation texts with relatively higher accuracy than when the correction content of the classification results is not learned.
• The classification unit 212 may include a learning model (for example, a learning model using a neural network) that can learn by using the correction content of the classification results of conversation texts (for example, the instructions of the user who corrects the classification results) as teacher data.
  • the display control unit 213 may control the display device 3 so as to display the IC support screen 31 including the GUI for searching the conversation text.
• FIG. 18 shows an example of the IC support screen 31 including a GUI for searching the conversation text.
• The IC support screen 31 (the conversation display screen 311 in FIG. 18) may include, as a GUI for searching the conversation text, a text box 3112 for entering the wording to be searched for.
  • the IC support screen 31 may display conversation text including the wording input in the text box 3112.
• FIG. 18 shows the IC support screen 31 (the conversation display screen 311 in FIG. 18) displaying conversation texts containing the word “complication” when the word “complication” has been entered in the text box 3112.
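The text-box search can be pictured as a simple case-insensitive substring filter over the conversation texts, as in this hypothetical sketch (the sample sentences are invented for illustration):

```python
def search_conversation_texts(conversation_texts, query):
    """Return the conversation texts containing the queried wording
    (case-insensitive), as with the text box 3112 on the IC support screen."""
    q = query.lower()
    return [t for t in conversation_texts if q in t.lower()]

texts = ["Possible complications include infection.",
         "The operation takes about two hours.",
         "Complication rates are below one percent."]
print(search_conversation_texts(texts, "complication"))
# → ['Possible complications include infection.',
#    'Complication rates are below one percent.']
```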
  • the conversation support system SYS may be provided with a video capturing device (for example, a video camera) capable of capturing a video of at least one of a medical worker and a patient having conversation in order to obtain informed consent.
  • the moving image data showing the moving image taken by the moving image-taking device may be used as evidence of informed consent.
  • the moving image data taken by the moving image capturing device may be stored in the storage device 22.
  • the information about the moving image data stored by the storage device 22 may be registered in the IC management DB 221.
  • Information for preventing falsification of the moving image data (for example, at least one of a time stamp and an electronic signature) may be added to the moving image data.
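As one possible illustration of tamper-evidence for the moving image data, a digest and a time stamp could be stored alongside the recording. This is a simplified sketch under stated assumptions: a real deployment would obtain the time stamp from a trusted authority and attach a proper electronic signature rather than relying on a bare hash.

```python
import hashlib
import time

def seal_recording(video_bytes):
    """Attach a timestamped digest to a recording so later modification
    of the data can be detected."""
    return {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "timestamp": time.time(),
    }

def verify_recording(video_bytes, seal):
    """Check that the recording still matches its sealed digest."""
    return hashlib.sha256(video_bytes).hexdigest() == seal["sha256"]

seal = seal_recording(b"...video data...")
print(verify_recording(b"...video data...", seal))  # → True
print(verify_recording(b"...altered...", seal))     # → False
```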
  • at least one of the medical staff and the patient may specify whether or not to perform imaging with the moving image imaging device when registering the above-mentioned initial information (step S11 in FIG. 4).
• In the embodiment described above, the conversation support system SYS supports a conversation between a medical worker and a patient in the process of fully explaining to the patient an act (for example, a medical act) that the medical worker performs on the patient and obtaining an agreement between the medical worker and the patient about that act.
• However, the conversation support system SYS may, in the same manner as when supporting a conversation between a medical worker and a patient, support a conversation between any first person and any second person in the process of obtaining an agreement between the two on a desired matter.
• Alternatively, the conversation support system SYS may support any conversation between a first person and a second person. That is, the conversation support system SYS may be used not only in the process of obtaining an agreement but also in any situation where a conversation takes place. For example, the conversation support system SYS may support a conversation between a first person and a second person in the process in which the first person explains a desired matter to the second person. In that case, the conversation support system SYS may convert voice data representing the conversation between the first person and the second person into text data, classify each of the plurality of conversation texts obtained by subdividing the sentences indicated by the text data into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the process of explaining the desired matter, and, based on the classification results of the classification unit 212, display a support screen (for example, a screen similar to the IC support screen 31 shown in FIG. 6) to support the conversation between the first person and the second person.
• In this way, the first person can determine whether his or her explanation of the desired matter to the second person is insufficient; that is, whether he or she has sufficiently fulfilled the obligation to explain the desired matter to the second person.
• As a result, the conversation support system SYS can reduce the possibility that an explanation from one of the first and second persons to the other will be omitted.
• One example of a situation in which a conversation between a first person and a second person is supported is the conclusion of a contract between the first person and the second person (for example, a contract related to real estate or finance). In this case, the conversation support system SYS may support the conversation between the first person and the second person in the process in which the first person explains the contract contents to the second person.
• Another example of a situation in which a conversation between a first person and a second person is supported is when a police officer questions a suspect at a police station. In this case, the conversation support system SYS may support the conversation between the police officer and the suspect in the process of the police officer questioning the suspect.
  • Another example of a situation in which a conversation between a first person and a second person is supported is a court hearing.
• In this case, the conversation support system SYS may support conversations (for example, statements) between at least two of a judge, a prosecutor, a lawyer, a plaintiff, a defendant, and a witness.
• a classification means for classifying each of the plurality of conversation texts, obtained by subdividing the text indicating the content of the conversation, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and a conversation support device including the classification means and a display control means for controlling a display device so that at least a part of the plurality of conversation texts is displayed together with the category into which each conversation text is classified.
• Appendix 2: The conversation support device according to Appendix 1, wherein the display control means controls the display device so that at least a part of the plurality of conversation texts is displayed according to the classified categories.
• Appendix 3: The conversation support device according to Appendix 1 or 2, wherein the display control means controls the display device so as to display conversation texts classified into one category designated by at least one of the medical worker and the patient among the plurality of categories.
• Appendix 4: The conversation support device according to any one of Appendix 1 to 3, further comprising a conversation warning means for warning, based on the classification results of the plurality of conversation texts by the classification means, that the conversation required in the agreement acquisition process is insufficient.
• Appendix 5: The conversation support device according to Appendix 4, wherein the conversation warning means warns that the conversation regarding one category among the plurality of categories is insufficient when the number of conversation texts classified into the one category is less than a predetermined threshold set for the one category.
• Appendix 6: The conversation support device according to Appendix 5, wherein the predetermined threshold is set, in relation to the one category, based on the content of conversations conducted in the past between the medical worker and the patient in the agreement acquisition process.
• Appendix 7: The conversation support device according to any one of Appendix 4 to 6, wherein the conversation warning means warns that the conversation regarding one category among the plurality of categories is insufficient when no conversation text is classified into the one category.
• Appendix 8: The conversation support device according to any one of Appendix 1 to 7, wherein the display control means further displays an index indicating the number of conversation texts classified into one of the plurality of categories.
• Appendix 9: The conversation support device according to any one of Appendix 1 to 8, wherein the plurality of categories include at least one of: a category of utterance content that refers to the purpose of the agreement acquisition process; a category of utterance content that refers to the patient's symptoms or medical condition; a category of utterance content that refers to an examination or treatment performed on the patient; a category of utterance content that refers to a clinical trial or study involving the patient; a category of utterance content that refers to the patient's opinions; a category of utterance content that refers to the existence of an agreement between the medical worker and the patient; and a category of utterance content that refers to the future medical policy for the patient.
• Appendix 10: The conversation support device according to any one of Appendix 1 to 9, further comprising a generation means that (i) learns an instruction of the medical worker specifying that at least one of the plurality of conversation texts should be included in summary information summarizing the content of the conversation in the agreement acquisition process, and (ii) generates the summary information based on the learning result of the instruction of the medical worker.
• Appendix 11: The conversation support device according to Appendix 10, wherein the generation means learns the instruction of the medical worker and, based on the learning result of the instruction, recommends to the medical worker at least one of the plurality of conversation texts as a conversation text to be included in the summary information.
• Appendix 12: The conversation support device according to any one of Appendix 1 to 11, further comprising a schedule presentation means for presenting, in the process of obtaining the agreement regarding one type of act, a schedule of conversations to be conducted, based on the content of conversations conducted in the past between the medical worker and the patient in the process of obtaining the agreement regarding that type of act.
• Appendix 13: The conversation support device according to any one of Appendix 1 to 12, further comprising a medical record cooperation means for warning that an act for which the agreement has not been obtained has been performed on the patient or is scheduled to be performed.
• Appendix 14: The conversation support device according to any one of Appendix 1 to 13, wherein the classification means includes a first classification unit that classifies each of the plurality of conversation texts into at least one of the plurality of categories, and a second classification unit that classifies the conversation texts using a learning model trained with teacher data including the classification results of the first classification unit.
• A conversation support method in which each of the plurality of conversation texts obtained by subdividing the text indicating the content of the conversation is classified into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and at least a part of the plurality of conversation texts is displayed together with the category into which each conversation text is classified.
• Appendix 17: A recording medium on which is recorded a computer program that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient, wherein the conversation support method classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of a conversation between the medical worker and the patient in an agreement acquisition process for explaining to the patient an act performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the act, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
• Appendix 18: A computer program that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient, wherein the conversation support method classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of a conversation between the medical worker and the patient in an agreement acquisition process for explaining to the patient an act performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the act, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
• The present invention can be modified as appropriate within the scope of the claims and within a scope not contrary to the gist or idea of the invention that can be read from the entire specification; conversation support devices, conversation support systems, conversation support methods, computer programs, and recording media accompanied by such modifications are also included in the technical idea of the present invention.

Abstract

A conversation assistance device (1) comprises: a conversion means (211) for converting voice data indicating, through voice, the content of a conversation between a medical worker and a patient to text data that shows the content of the conversation by text, the conversation being between the medical worker and the patient in a consent acquisition process for obtaining consent for a medical act; a classification means (212) for classifying each of a plurality of conversation text pieces obtained by subdividing the text shown by the text data into at least one among a plurality of categories differentiated in accordance with the type of speech content to be spoken in the consent acquisition process; and a display control means (213) for controlling a display device such that at least some of the plurality of conversation text pieces are displayed together with the category into which each conversation text piece was classified.

Description

Conversation support device, conversation support system, conversation support method and recording medium
The present invention relates to the technical field of a conversation support device, a conversation support system, a conversation support method, and a recording medium for supporting a conversation between a medical worker and a patient.
In medical settings such as hospitals, various conversations take place between medical staff and patients. For example, a conversation for informed consent takes place between a healthcare professional and a patient. Devices for supporting such informed consent are described in Patent Documents 1 and 2. Other prior art documents related to the present invention include Patent Documents 3 to 6.
Japanese Unexamined Patent Publication No. 2005-063162; Japanese Unexamined Patent Publication No. 2015-170120; Japanese Unexamined Patent Publication No. 2017-111755; International Publication No. 2019/038807 Pamphlet; Japanese Unexamined Patent Publication No. 2017-049710; Japanese Unexamined Patent Publication No. 2017-111756
When informed consent is obtained, the medical staff is required to explain to the patient, without excess or deficiency, the acts (for example, medical acts) the medical staff will perform on the patient. However, the devices described in Patent Documents 1 to 6 are not designed to help medical staff provide such complete explanations to patients. The devices described in Patent Documents 1 to 6 therefore have a technical problem in that the possibility of omissions in the explanation given by medical staff to patients is relatively high.
An object of the present invention is to provide a conversation support device, a conversation support system, a conversation support method, and a computer program capable of solving the above-mentioned technical problems. As an example, an object of the present invention is to provide a conversation support device, a conversation support system, a conversation support method, and a recording medium capable of reducing the possibility of omissions in the explanation given by a medical worker to a patient.
An example of a conversation support device for solving the problem comprises: a classification means for classifying each of a plurality of conversation texts, obtained by subdividing text indicating the content of a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an act performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the act, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and a display control means for controlling a display device so that at least a part of the plurality of conversation texts is displayed together with the category into which each conversation text is classified.
An example of a conversation support system for solving the problem comprises: a conversation recording device that records the content of a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an act performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the act; the example of the conversation support device described above; and the display device. An example of a conversation support method for solving the problem classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of a conversation between the medical worker and the patient in the agreement acquisition process, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
An example of a recording medium for solving the problem is a recording medium on which is recorded a computer program that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient, wherein the conversation support method classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of a conversation between the medical worker and the patient in an agreement acquisition process for explaining to the patient an act performed by the medical worker on the patient and obtaining an agreement between the medical worker and the patient about the act, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
According to the conversation support device, conversation support system, conversation support method, and recording medium described above, it is possible to reduce the possibility of omissions in the explanation given by a medical worker to a patient.
FIG. 1 is a block diagram showing the overall configuration of the conversation support system of the present embodiment.
FIG. 2 is a block diagram showing the configuration of the conversation support device of the present embodiment.
FIG. 3 is a data structure diagram showing an example of the data structure of the IC management DB.
FIG. 4 is a flowchart showing the flow of the conversation support operation performed by the conversation support device.
FIG. 5 is a plan view showing an example of a GUI for accepting input of initial information.
FIG. 6 is a plan view showing an example of the IC support screen.
FIG. 7 is a block diagram showing the configuration of the conversation support device of the first modification.
FIG. 8 is a plan view showing an example of a warning screen warning that the conversation required for informed consent is insufficient.
FIG. 9 is a plan view showing an example of an IC support screen including an index showing the number of conversation texts classified into one category.
FIG. 10 is a block diagram showing the configuration of the conversation support device of the second modification.
FIG. 11 is an explanatory diagram showing an example of summary information.
FIG. 12 is a plan view showing an example of an IC support screen including a GUI for designating at least a part of the plurality of conversation texts to be included in the summary information.
FIG. 13 is a block diagram showing the configuration of the conversation support device of the third modification.
FIG. 14 is a block diagram showing the overall configuration of the conversation support system of the fourth modification.
FIG. 15 is a block diagram showing the configuration of the conversation support device of the fourth modification.
FIG. 16 is a block diagram showing the configuration of the conversation support device of the fifth modification.
FIG. 17 is a plan view showing an example of an IC support screen including a GUI for correcting the classification result of the conversation text.
FIG. 18 is a plan view showing an example of an IC support screen including a GUI for searching conversation text.
　以下、図面を参照しながら、会話支援装置、会話支援システム、会話支援方法及びコンピュータプログラムの実施形態について説明する。以下では、会話支援装置、会話支援システム、会話支援方法及びコンピュータプログラムの実施形態が適用された会話支援システムSYSについて説明する。 Hereinafter, embodiments of a conversation support device, a conversation support system, a conversation support method, and a computer program will be described with reference to the drawings. The following describes a conversation support system SYS to which the embodiments of the conversation support device, the conversation support system, the conversation support method, and the computer program are applied.
 (1)会話支援システムSYSの構成
 はじめに、図1を参照しながら、本実施形態の会話支援システムSYSの構成について説明する。図1は、本実施形態の会話支援システムSYSの構成を示すブロック図である。
(1) Configuration of Conversation Support System SYS First, the configuration of the conversation support system SYS of the present embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing a configuration of the conversation support system SYS of the present embodiment.
　会話支援システムSYSは、医療従事者と患者との会話を支援する。特に、本実施形態では、会話支援システムSYSは、医療従事者が患者に対して行う行為（例えば、医療行為）について医療従事者から患者に対して説明が行われる場面において、会話を支援してもよい。この場合、会話支援システムSYSは、医療従事者から患者に十分な説明が行われたか否かを確認する目的で、会話を支援してもよい。その中でも特に、本実施形態では、会話支援システムSYSは、医療従事者が患者に対して行う行為（例えば、医療行為）について患者に説明し且つその行為について医療従事者と患者との間で合意を得るための合意取得過程における医療従事者と患者との会話を支援してもよい。このような合意取得過程における医療従事者と患者との会話の一例として、インフォームドコンセント（IC）を取得するための会話があげられる。尚、本実施形態では、医療分野におけるインフォームドコンセントは、医療従事者と患者との十分な情報を得た上での合意を意味する。以下では、説明の便宜上、インフォームドコンセントを取得するための合意取得過程における医療従事者と患者との会話を支援する会話支援システムSYSについて説明する。 The conversation support system SYS supports conversations between medical workers and patients. In particular, in the present embodiment, the conversation support system SYS may support conversation in situations where a medical worker explains to a patient an act (for example, a medical act) that the medical worker will perform on the patient. In this case, the conversation support system SYS may support the conversation for the purpose of confirming whether a sufficient explanation has been given to the patient by the medical worker. In particular, in the present embodiment, the conversation support system SYS may support the conversation between the medical worker and the patient in an agreement acquisition process, in which an act (for example, a medical act) to be performed on the patient is explained to the patient and an agreement on that act is reached between the medical worker and the patient. One example of such a conversation between a medical worker and a patient in an agreement acquisition process is a conversation for obtaining informed consent (IC). In the present embodiment, informed consent in the medical field means an agreement reached between the medical worker and the patient after sufficient information has been provided. In the following, for convenience of explanation, the conversation support system SYS is described as supporting the conversation between the medical worker and the patient in the agreement acquisition process for obtaining informed consent.
　尚、本実施形態における「医療従事者」は、病気の治療、病気の予防、健康の維持、健康の回復及び健康の増進の少なくとも一つを目的とした活動である医療に従事する人物全般を含んでいてもよい。例えば、医療従事者は、独立して医療行為を行うことが可能な人物を含んでいてもよい。独立して医療行為を行うことが可能な人物の一例として、医師、歯科医師及び助産師の少なくとも一人があげられる。例えば、医療従事者は、上位の者（例えば、医師又は歯科医師）からの指示の下に医療行為を行うことが可能な人物を含んでいてもよい。上位の者からの指示の下に医療行為を行うことが可能な人物の一例として、看護師、薬剤師、臨床検査技師、放射線技師及び理学療法士等の少なくとも一人（いわゆる、コ・メディカル）があげられる。例えば、医療従事者は、施術所（例えば、鍼灸院、接骨院及び整骨院の少なくとも一つ）で施術を行う人物を含んでいてもよい。施術を行う人物の一例として、あん摩マッサージ指圧師、はり師、きゅう師及び柔道整復師の少なくとも一人があげられる。例えば、医療従事者は、保健業務に従事する人物を含んでいてもよい。保健業務に従事する人物の一例として、保健師があげられる。例えば、医療従事者は、福祉業務に従事する人物を含んでいてもよい。福祉業務に従事する人物の一例として、社会福祉士、児童福祉士、精神保健福祉士、臨床心理士及び臨床発達心理士の少なくとも一人があげられる。例えば、医療従事者は、介護業務に従事する人物を含んでいてもよい。介護業務に従事する人物の一例として、介護福祉士、訪問介護員、介護支援専門員及びホームヘルパーの少なくとも一人があげられる。 The "medical worker" in the present embodiment may include any person engaged in medical care, that is, activities aimed at at least one of treating illness, preventing illness, maintaining health, restoring health, and promoting health. For example, a medical worker may include a person who can perform medical acts independently; examples include a doctor, a dentist, and a midwife. A medical worker may include a person who can perform medical acts under the direction of a superior (for example, a doctor or a dentist); examples include a nurse, a pharmacist, a clinical laboratory technician, a radiological technician, and a physical therapist (so-called co-medical staff). A medical worker may include a person who performs treatment at a treatment facility (for example, at least one of an acupuncture and moxibustion clinic, a judo therapy clinic, and an osteopathic clinic); examples include a massage and shiatsu practitioner, an acupuncturist, a moxibustion practitioner, and a judo therapist. A medical worker may include a person engaged in health services; an example is a public health nurse. A medical worker may include a person engaged in welfare services; examples include a social worker, a child welfare worker, a psychiatric social worker, a clinical psychologist, and a clinical developmental psychologist. A medical worker may include a person engaged in long-term care services; examples include a certified care worker, a home-visit care worker, a care manager, and a home helper.
　また、本実施形態における「患者」は、病気の治療、病気の予防、健康の維持、健康の回復及び健康の増進の少なくとも一つを目的とした活動である医療が施される人物全般を含んでいてもよい。尚、患者の様態によっては、患者が意思疎通を行うことができない可能性がある。この場合、通常は、患者の代理人（例えば、親族、後見人又は補佐人）が患者に代わって医療従事者と会話をする。このため、本実施形態における「患者」は、患者の代理人も含んでいてもよい。 The "patient" in the present embodiment may include any person who receives medical care, that is, activities aimed at at least one of treating illness, preventing illness, maintaining health, restoring health, and promoting health. Depending on the patient's condition, the patient may be unable to communicate. In this case, an agent of the patient (for example, a relative, a guardian, or an assistant) usually converses with the medical worker on behalf of the patient. Therefore, the "patient" in the present embodiment may also include an agent of the patient.
 会話支援システムSYSは、一人の医療従事者と一人の患者との間の会話を支援してもよい。会話支援システムSYSは、複数の医療従事者と一人の患者との間の会話を支援してもよい。会話支援システムSYSは、一人の医療従事者と複数の患者との間の会話を支援してもよい。会話支援システムSYSは、複数の医療従事者と複数の患者との間の会話を支援してもよい。 The conversation support system SYS may support conversations between one healthcare professional and one patient. The conversation support system SYS may support conversations between multiple healthcare professionals and a single patient. The conversation support system SYS may support conversations between one healthcare professional and multiple patients. The conversation support system SYS may support conversations between a plurality of healthcare professionals and a plurality of patients.
　インフォームドコンセントを取得するための会話を支援するために、会話支援システムSYSは、図1に示すように、録音装置1と、会話支援装置2と、表示装置3と、入力装置4とを備えている。 In order to support the conversation for obtaining informed consent, the conversation support system SYS includes a recording device 1, a conversation support device 2, a display device 3, and an input device 4, as shown in FIG. 1.
 録音装置1は、医療従事者と患者との会話を録音する装置である。医療従事者と患者との会話を録音することで、録音装置1は、医療従事者と患者との会話の内容を音声で示す音声データを生成する。このため、録音装置1は、例えば、マイクと、マイクがアナログの電子信号として録音した会話を、デジタルの音声データに変換するデータ処理装置とを備えていてもよい。一例として、録音装置1は、マイクを内蔵した情報端末(例えば、スマートフォン)であってもよい。録音装置1は、生成した音声データを会話支援装置2に出力する。 The recording device 1 is a device that records a conversation between a medical worker and a patient. By recording the conversation between the medical staff and the patient, the recording device 1 generates voice data indicating the content of the conversation between the medical staff and the patient by voice. Therefore, the recording device 1 may include, for example, a microphone and a data processing device that converts a conversation recorded by the microphone as an analog electronic signal into digital audio data. As an example, the recording device 1 may be an information terminal (for example, a smartphone) having a built-in microphone. The recording device 1 outputs the generated voice data to the conversation support device 2.
　会話支援装置2は、録音装置1が生成した音声データを用いて、医療従事者と患者との会話を支援するための会話支援動作を行う。本実施形態では、会話支援動作は、例えば、インフォームドコンセントを取得する際に医療従事者から患者に対して行われる説明に漏れが生じないように、医療従事者と患者との会話を支援する動作を含んでいてもよい。言い換えれば、会話支援動作は、例えば、インフォームドコンセントを取得する際に医療従事者から患者に対して十分な説明が行われるように、医療従事者と患者との会話を支援する動作を含んでいてもよい。 The conversation support device 2 uses the voice data generated by the recording device 1 to perform a conversation support operation for supporting the conversation between the medical worker and the patient. In the present embodiment, the conversation support operation may include, for example, an operation of supporting the conversation so that no omission occurs in the explanation given by the medical worker to the patient when obtaining informed consent. In other words, the conversation support operation may include, for example, an operation of supporting the conversation so that the medical worker gives a sufficient explanation to the patient when obtaining informed consent.
 ここで、図2を参照しながら、会話支援装置2の構成について更に詳細に説明する。図2は、会話支援装置2の構成を示すブロック図である。図2に示すように、会話支援装置2は、CPU(Central Processing Unit)21と、記憶装置22と、入出力IF(Interface:インタフェース)23とを備えている。 Here, the configuration of the conversation support device 2 will be described in more detail with reference to FIG. FIG. 2 is a block diagram showing the configuration of the conversation support device 2. As shown in FIG. 2, the conversation support device 2 includes a CPU (Central Processing Unit) 21, a storage device 22, and an input / output IF (Interface) 23.
　CPU21は、コンピュータプログラムを読み込む。例えば、CPU21は、記憶装置22が記憶しているコンピュータプログラムを読み込んでもよい。例えば、CPU21は、コンピュータで読み取り可能な記録媒体が記憶しているコンピュータプログラムを、図示しない記録媒体読み取り装置を用いて読み込んでもよい。CPU21は、不図示の通信装置を介して、会話支援装置2の外部に配置される不図示の装置からコンピュータプログラムを取得してもよい（つまり、ダウンロードしてもよい又は読み込んでもよい）。CPU21は、読み込んだコンピュータプログラムを実行する。その結果、CPU21内には、会話支援装置2が行うべき動作（例えば、上述した会話支援動作）を実行するための論理的な機能ブロックが実現される。つまり、CPU21は、会話支援装置2が行うべき動作を実行するための論理的な機能ブロックを実現するためのコントローラとして機能可能である。 The CPU 21 reads a computer program. For example, the CPU 21 may read a computer program stored in the storage device 22. For example, the CPU 21 may read a computer program stored in a computer-readable recording medium using a recording medium reading device (not shown). The CPU 21 may acquire (that is, download or read) a computer program from a device (not shown) arranged outside the conversation support device 2 via a communication device (not shown). The CPU 21 executes the read computer program. As a result, logical functional blocks for executing the operations to be performed by the conversation support device 2 (for example, the conversation support operation described above) are realized in the CPU 21. That is, the CPU 21 can function as a controller for realizing the logical functional blocks that execute the operations to be performed by the conversation support device 2.
　図2には、会話支援動作を実行するためにCPU21内に実現される論理的な機能ブロックの一例が示されている。図2に示すように、CPU21内には、テキスト変換部211と、分類部212と、表示制御部213とが実現される。尚、テキスト変換部211、分類部212及び表示制御部213の動作の詳細については、後に図3等を参照しながら詳述するが、ここでその概要について簡単に説明する。テキスト変換部211は、録音装置1から送信される音声データを、テキストデータに変換する。分類部212は、テキストデータが示す文章を細分化することで得られる複数の会話テキストの夫々を、インフォームドコンセントを取得するための合意取得過程で発話されるべき発話内容の種類に応じて区別される複数のカテゴリのうちの少なくとも一つに分類する。表示制御部213は、分類部212の分類結果に基づいて、医療従事者と患者との会話を支援するためIC支援画面31（後述する図6参照）を表示するように表示装置3を制御する。 FIG. 2 shows an example of the logical functional blocks realized in the CPU 21 to execute the conversation support operation. As shown in FIG. 2, a text conversion unit 211, a classification unit 212, and a display control unit 213 are realized in the CPU 21. The operations of the text conversion unit 211, the classification unit 212, and the display control unit 213 will be described in detail later with reference to FIG. 3 and subsequent figures, but their outline is briefly described here. The text conversion unit 211 converts the voice data transmitted from the recording device 1 into text data. The classification unit 212 classifies each of a plurality of conversation texts, obtained by subdividing the sentences indicated by the text data, into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the agreement acquisition process for obtaining informed consent. The display control unit 213 controls the display device 3 to display an IC support screen 31 (see FIG. 6, described later) for supporting the conversation between the medical worker and the patient, based on the classification result of the classification unit 212.
　尚、録音装置1自身が、録音装置1が録音した音声データをテキストデータに変換するテキスト変換部を備えていてもよい。この場合、録音装置1は、音声データに加えて又は代えて、テキストデータを会話支援装置2に送信してもよい。会話支援装置2は、CPU21内に実現される論理的な機能ブロックとして、テキスト変換部211に加えて又は代えて、録音装置1が送信するテキストデータを取得するデータ取得部を備えていてもよい。会話支援装置2は、テキスト変換部211を備えていなくてもよい。 Note that the recording device 1 itself may include a text conversion unit that converts the voice data recorded by the recording device 1 into text data. In this case, the recording device 1 may transmit the text data to the conversation support device 2 in addition to or instead of the voice data. The conversation support device 2 may include, as a logical functional block realized in the CPU 21, a data acquisition unit that acquires the text data transmitted by the recording device 1, in addition to or instead of the text conversion unit 211. In that case, the conversation support device 2 does not have to include the text conversion unit 211.
 会話支援装置2は、医療従事者が使用する情報端末(例えば、パーソナルコンピュータ及びタブレットコンピュータの少なくとも一つ)であってもよい。会話支援装置2は、医療従事者が働いている施設内に設置されたサーバであってもよい。会話支援装置2は、医療従事者が働いている施設の外部に設置されたサーバ(いわゆる、クラウドサーバ)であってもよい。 The conversation support device 2 may be an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical worker. The conversation support device 2 may be a server installed in the facility where the medical staff is working. The conversation support device 2 may be a server (so-called cloud server) installed outside the facility where the medical staff is working.
　記憶装置22は、所望のデータを記憶可能である。例えば、記憶装置22は、CPU21が実行するコンピュータプログラムを一時的に記憶していてもよい。記憶装置22は、CPU21がコンピュータプログラムを実行している際にCPU21が一時的に使用するデータを一時的に記憶してもよい。記憶装置22は、会話支援装置2が長期的に保存するデータを記憶してもよい。尚、記憶装置22は、RAM（Random Access Memory）、ROM（Read Only Memory）、ハードディスク装置、光磁気ディスク装置、SSD（Solid State Drive）及びディスクアレイ装置のうちの少なくとも一つを含んでいてもよい。 The storage device 22 can store desired data. For example, the storage device 22 may temporarily store a computer program executed by the CPU 21. The storage device 22 may temporarily store data temporarily used by the CPU 21 while the CPU 21 is executing a computer program. The storage device 22 may store data that the conversation support device 2 keeps for a long period of time. The storage device 22 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device.
　本実施形態では特に、記憶装置22は、会話支援動作の支援対象となるインフォームドコンセントを管理するためのIC管理DB（DataBase）221を記憶する。IC管理DB221は、インフォームドコンセントの内容に関する情報を含むレコードを、インフォームドコンセントが取得された回数と同じ数だけ含んでいる。インフォームドコンセントの内容に関する情報を含むレコードは、IC管理DB221のデータ構造の一例を示す図3に示すように、例えば、レコードを識別するための識別番号（ID）を示す情報と、インフォームドコンセントのタイトルを示す情報と、インフォームドコンセントが取得された日時（或いは、インフォームドコンセントが取得される予定の日時）を示す情報と、インフォームドコンセントを取得する際に会話を行った医療従事者及び患者の夫々の氏名を示す情報と、インフォームドコンセントに関するコメント（例えば、医療従事者及び患者の少なくとも一方からのコメント）を示す情報と、インフォームドコンセントに関連して得られたデータ（例えば、上述した音声データ、テキストデータ及び会話テキストの分類結果を示すデータの少なくとも一つ）を示す情報とを含んでいてもよい。尚、IC管理DB221は、合意関連データと称されてもよい。 In the present embodiment, the storage device 22 stores, in particular, an IC management DB (DataBase) 221 for managing the informed consents to be supported by the conversation support operation. The IC management DB 221 contains as many records, each including information on the contents of an informed consent, as the number of times informed consent has been obtained. As shown in FIG. 3, which shows an example of the data structure of the IC management DB 221, a record including information on the contents of an informed consent may include, for example, information indicating an identification number (ID) for identifying the record, information indicating the title of the informed consent, information indicating the date and time when the informed consent was obtained (or is scheduled to be obtained), information indicating the names of the medical workers and patients who conversed when the informed consent was obtained, information indicating comments on the informed consent (for example, comments from at least one of the medical workers and the patient), and information indicating data obtained in connection with the informed consent (for example, at least one of the above-described voice data, text data, and data indicating the classification result of the conversation texts). The IC management DB 221 may be referred to as agreement-related data.
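As one hedged illustration of the record structure described above, a single IC management DB record could be modeled as follows. This is a minimal sketch only; the field names and types are assumptions chosen for this example and are not part of the disclosed data structure.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of one record in the IC management DB.
# Field names are illustrative assumptions, not the actual schema.
@dataclass
class ICRecord:
    record_id: int                  # identification number (ID) of the record
    title: str                      # title of the informed consent
    datetime_acquired: str          # date and time the IC was (or will be) obtained
    staff_names: List[str]          # names of the medical workers who conversed
    patient_names: List[str]        # names of the patient (and any agent)
    comments: str = ""              # comments from staff and/or patient
    related_data: List[str] = field(default_factory=list)  # e.g. references to voice/text/classification data

# Example record (illustrative values only)
record = ICRecord(
    record_id=1,
    title="手術に関するインフォームドコンセント",
    datetime_acquired="2019-12-26 10:00",
    staff_names=["山田太郎"],
    patient_names=["鈴木花子"],
)
```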
　入出力IF23は、会話支援装置2と会話支援装置2の外部の装置（例えば、録音装置1、表示装置3及び入力装置4の少なくとも一つ）との間でのデータの送受信を行う装置である。従って、会話支援装置2は、入出力IF23を介して、会話支援装置2の外部の装置にデータを送信する。更に、会話支援装置2は、入出力IF23を介して、会話支援装置2の外部の装置から送信されるデータを受信する。 The input / output IF 23 is a device that transmits and receives data between the conversation support device 2 and devices external to the conversation support device 2 (for example, at least one of the recording device 1, the display device 3, and the input device 4). The conversation support device 2 therefore transmits data to external devices via the input / output IF 23, and receives data transmitted from external devices via the input / output IF 23.
　再び図1において、表示装置3は、所望の情報を表示可能な出力装置（つまり、ディスプレイ）である。本実施形態では特に、表示装置3は、表示制御部213の制御下で、IC支援画面31を表示する。表示装置3は、医療従事者が使用する情報端末（例えば、パーソナルコンピュータ及びタブレットコンピュータの少なくとも一つ）が備えるディスプレイであってもよい。表示装置3は、医療従事者及び患者の双方が視認可能なディスプレイであってもよい。或いは、会話支援システムSYSは、医療従事者が視認可能な表示装置3と、患者が視認可能な表示装置3とを別々に備えていてもよい。つまり、会話支援システムSYSは、複数の表示装置3を備えていてもよい。この場合、一の表示装置3に表示される情報は、他の表示装置3に表示される情報と同一であってもよいし、異なっていてもよい。 Returning to FIG. 1, the display device 3 is an output device (that is, a display) capable of displaying desired information. In the present embodiment, in particular, the display device 3 displays the IC support screen 31 under the control of the display control unit 213. The display device 3 may be a display provided in an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical worker. The display device 3 may be a display that can be viewed by both the medical worker and the patient. Alternatively, the conversation support system SYS may separately include a display device 3 viewable by the medical worker and a display device 3 viewable by the patient. That is, the conversation support system SYS may include a plurality of display devices 3. In this case, the information displayed on one display device 3 may be the same as or different from the information displayed on another display device 3.
 入力装置4は、会話支援装置2のユーザ(例えば、医療従事者及び患者の少なくとも一方)からの入力操作を受け付ける装置である。入力装置4は、例えば、ユーザが操作可能な操作装置を含んでいてもよい。入力装置4は、操作装置の一例として、例えば、キーボード、マウス及びタッチパネルのうちの少なくとも一つを備えていてもよい。入力装置4は、医療従事者が使用する情報端末(例えば、パーソナルコンピュータ及びタブレットコンピュータの少なくとも一つ)が備える操作装置であってもよい。入力装置4は、医療従事者及び患者の双方が操作可能な操作装置であってもよい。或いは、会話支援システムSYSは、医療従事者が操作可能な入力装置4と、患者が操作可能な入力装置4とを別々に備えていてもよい。つまり、会話支援システムSYSは、複数の入力装置4を備えていてもよい。 The input device 4 is a device that receives an input operation from a user of the conversation support device 2 (for example, at least one of a medical worker and a patient). The input device 4 may include, for example, a user-operable operating device. The input device 4 may include, for example, at least one of a keyboard, a mouse, and a touch panel as an example of the operating device. The input device 4 may be an operating device included in an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical professional. The input device 4 may be an operating device that can be operated by both the medical staff and the patient. Alternatively, the conversation support system SYS may separately include an input device 4 that can be operated by the medical staff and an input device 4 that can be operated by the patient. That is, the conversation support system SYS may include a plurality of input devices 4.
 (2)会話支援装置2が行う会話支援動作
 続いて、図4を参照しながら、会話支援装置2が行う会話支援動作について説明する。図4は、会話支援装置2が行う会話支援動作の流れを示すフローチャートである。
(2) Conversation Support Operation Performed by Conversation Support Device 2 Subsequently, the conversation support operation performed by the conversation support device 2 will be described with reference to FIG. 4. FIG. 4 is a flowchart showing the flow of the conversation support operation performed by the conversation support device 2.
 図4に示すように、会話支援装置2は、インフォームドコンセントに関する初期情報の入力を受け付け、受け付けた初期情報をIC管理DB221に登録する(ステップS11)。但し、既に初期情報が入力済みである(例えば、初期情報がIC管理DB221に登録済みである)場合には、会話支援装置2は、ステップS11の動作を行わなくてもよい。 As shown in FIG. 4, the conversation support device 2 accepts the input of the initial information regarding the informed consent, and registers the received initial information in the IC management DB 221 (step S11). However, when the initial information has already been input (for example, the initial information has been registered in the IC management DB 221), the conversation support device 2 does not have to perform the operation of step S11.
　会話支援装置2の表示制御部213は、インフォームドコンセントに関する初期情報の入力を受け付けるためのGUI（Graphical User Interface）32を表示するように、表示装置3を制御してもよい。この場合、会話支援装置2のユーザ（例えば、医療従事者及び患者の少なくとも一方）は、表示装置3に表示されたGUI32を参照しながら、入力装置4を用いて初期情報を入力してもよい。 The display control unit 213 of the conversation support device 2 may control the display device 3 to display a GUI (Graphical User Interface) 32 for accepting input of the initial information regarding the informed consent. In this case, the user of the conversation support device 2 (for example, at least one of the medical worker and the patient) may input the initial information using the input device 4 while referring to the GUI 32 displayed on the display device 3.
　初期情報の入力を受け付けるためのGUI32の一例が、図5に示されている。図5に示すように、GUI32は、IC管理DB221に含まれる情報の入力を受け付けるためのGUIを含んでいてもよい。図5に示す例では、GUI32は、IC管理DB221に含まれる情報の入力を受け付けるためのGUIとして、インフォームドコンセントのタイトル（ICタイトル）を入力するためのテキストボックス321と、インフォームドコンセントを取得する医療従事者の氏名（IC取得者氏名及び医師側出席者氏名）を入力するためのテキストボックス322と、インフォームドコンセントを取得する患者の氏名（患者氏名及び患者代理人氏名）を入力するためのテキストボックス323と、インフォームドコンセントが取得された日時（或いは、インフォームドコンセントが取得される予定の日時）を入力するためのテキストボックス324と、インフォームドコンセントに関するコメントを入力するためのテキストボックス325とを含んでいる。 An example of the GUI 32 for accepting input of the initial information is shown in FIG. 5. As shown in FIG. 5, the GUI 32 may include a GUI for accepting input of the information included in the IC management DB 221. In the example shown in FIG. 5, the GUI 32 includes, as GUIs for accepting input of the information included in the IC management DB 221, a text box 321 for inputting the title of the informed consent (IC title), a text box 322 for inputting the names of the medical workers obtaining the informed consent (the name of the IC acquirer and the names of the attendees on the doctor side), a text box 323 for inputting the names of the patient giving the informed consent (the patient name and the patient agent name), a text box 324 for inputting the date and time when the informed consent was obtained (or is scheduled to be obtained), and a text box 325 for inputting comments on the informed consent.
　更に、図5に示すように、GUI32は、会話テキストを分類するためのカテゴリ（言い換えれば、タグ）を指定するための入力を受け付けるためのGUIを含んでいてもよい。この場合、分類部212は、複数の会話テキストの夫々を、GUI32を用いて指定された少なくとも一つのカテゴリに分類する。一方で、分類部212は、GUI32を用いて指定されなかった少なくとも一つのカテゴリは、会話テキストの分類先として使用しなくてもよい。図5に示す例では、GUI32は、会話テキストを分類するためのカテゴリを指定するための入力を受け付けるGUIとして、複数のカテゴリに夫々対応する複数のチェックボックス326を含む。例えば、図5に示すように、GUI32は、「インフォームドコンセントの目的（つまり、合意取得過程の目的）」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326-1と、「患者の症状又は病状」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326-2と、「患者に対して行われる検査又は治療」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326-3と、「患者を対象とする治験又は研究」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326-4と、「患者の意見」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326-5と、「医療従事者と患者との間での合意（尚、ここで言う合意は、同意及び拒否の少なくとも一方の意思表示を含んでいてもよい）」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326-6と、「患者に対する今後の医療方針」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326-7との少なくとも一つを含んでいてもよい。また、図5に示すように、GUI32は、「病名又は診断名」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326と、「今回行う予定の行為（例えば、医療行為）によるメリット（例えば、生命予後に関するメリット及びQOL（Quality Of Life）に関するメリット）」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326と、「今回行う予定の行為（例えば、医療行為）によるデメリット（例えば、危険性、苦痛、副作用及び合併症の少なくとも一つに関するデメリット）」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326と、「患者の負担（例えば、費用的な負担、及び、休業を含む時間的な負担の少なくとも一方）」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326と、「医療従事者側からの返答又は確認」に言及している会話テキストが分類されるカテゴリを指定するチェックボックス326との少なくとも一つを含んでいてもよい。 Further, as shown in FIG. 5, the GUI 32 may include a GUI for accepting input for designating the categories (in other words, tags) into which the conversation texts are classified. In this case, the classification unit 212 classifies each of the plurality of conversation texts into at least one of the categories designated using the GUI 32. On the other hand, the classification unit 212 does not have to use a category not designated using the GUI 32 as a classification destination of the conversation texts. In the example shown in FIG. 5, the GUI 32 includes, as the GUI for accepting input designating the categories, a plurality of check boxes 326 corresponding to the respective categories. For example, as shown in FIG. 5, the GUI 32 may include at least one of: a check box 326-1 designating the category into which conversation texts referring to the "purpose of the informed consent (that is, the purpose of the agreement acquisition process)" are classified; a check box 326-2 designating the category for conversation texts referring to the "patient's symptoms or medical condition"; a check box 326-3 designating the category for conversation texts referring to "examinations or treatments performed on the patient"; a check box 326-4 designating the category for conversation texts referring to "clinical trials or research involving the patient"; a check box 326-5 designating the category for conversation texts referring to the "patient's opinion"; a check box 326-6 designating the category for conversation texts referring to an "agreement between the medical worker and the patient (the agreement here may include an expression of at least one of consent and refusal)"; and a check box 326-7 designating the category for conversation texts referring to the "future medical policy for the patient". Further, as shown in FIG. 5, the GUI 32 may include at least one of: a check box 326 designating the category for conversation texts referring to the "disease name or diagnosis name"; a check box 326 designating the category for conversation texts referring to the "merits of the act (for example, a medical act) to be performed this time (for example, merits relating to life prognosis and merits relating to QOL (Quality Of Life))"; a check box 326 designating the category for conversation texts referring to the "demerits of the act (for example, a medical act) to be performed this time (for example, demerits relating to at least one of danger, pain, side effects, and complications)"; a check box 326 designating the category for conversation texts referring to the "burden on the patient (for example, at least one of the financial burden and the time burden, including absence from work)"; and a check box 326 designating the category for conversation texts referring to a "response or confirmation from the medical worker side".
　再び図4において、その後、テキスト変換部211は、入出力IF23を介して、録音装置1が生成した音声データを取得する（ステップS12）。取得された音声データは、記憶装置22によって記憶されてもよい。この際、記憶装置22によって記憶された音声データに関する情報は、IC管理DB221に登録されてもよい。音声データには、音声データの改ざんを防ぐための情報（例えば、タイムスタンプ及び電子署名の少なくとも一方）が付与されてもよい。 Returning to FIG. 4, the text conversion unit 211 then acquires the voice data generated by the recording device 1 via the input / output IF 23 (step S12). The acquired voice data may be stored in the storage device 22. At this time, information on the voice data stored in the storage device 22 may be registered in the IC management DB 221. Information for preventing falsification of the voice data (for example, at least one of a time stamp and an electronic signature) may be attached to the voice data.
　その後、テキスト変換部211は、ステップS12で取得した音声データから、医療従事者と患者との会話の内容を文章（つまり、テキスト）で示すテキストデータを生成する（ステップS13）。つまり、テキスト変換部211は、ステップS12で取得した音声データを、テキストデータに変換する（ステップS13）。生成されたテキストデータは、記憶装置22によって記憶されてもよい。この際、記憶装置22によって記憶されたテキストデータに関する情報は、IC管理DB221に登録されてもよい。テキストデータには、テキストデータの改ざんを防ぐための情報（例えば、タイムスタンプ及び電子署名の少なくとも一方）が付与されてもよい。尚、この際、テキスト変換部211（或いは、CPU21が備える任意の機能ブロック）は、医療従事者の発話内容を示すテキストと患者の発話内容を示すテキストとが区別できるように、テキストデータを生成してもよい。 After that, the text conversion unit 211 generates, from the voice data acquired in step S12, text data indicating the content of the conversation between the medical worker and the patient in sentences (that is, text) (step S13). That is, the text conversion unit 211 converts the voice data acquired in step S12 into text data (step S13). The generated text data may be stored in the storage device 22. At this time, information on the text data stored in the storage device 22 may be registered in the IC management DB 221. Information for preventing falsification of the text data (for example, at least one of a time stamp and an electronic signature) may be attached to the text data. At this time, the text conversion unit 211 (or any functional block included in the CPU 21) may generate the text data so that text indicating the utterance content of the medical worker and text indicating the utterance content of the patient can be distinguished from each other.
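As a hedged sketch of the tamper-prevention information mentioned above, a timestamp and a digest could be attached to the text data as follows. This is only a minimal tamper-evidence illustration; a real system would instead use a trusted timestamping authority and a genuine electronic signature, and the function names here are hypothetical.

```python
import hashlib
from datetime import datetime, timezone

def seal_text_data(text: str) -> dict:
    """Attach a timestamp and a SHA-256 digest to text data (tamper-evidence sketch only)."""
    return {
        "text": text,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "digest": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

def is_unmodified(sealed: dict) -> bool:
    """Recompute the digest of the stored text and compare it with the stored digest."""
    return hashlib.sha256(sealed["text"].encode("utf-8")).hexdigest() == sealed["digest"]

# Usage: seal a converted conversation text, then verify it later.
sealed = seal_text_data("医師: 手術の目的について説明します。")
```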
　その後、分類部212は、ステップS13で生成されたテキストデータが示す文章を細分化することで得られる複数の会話テキストの夫々を、ステップS11で指定された複数のカテゴリのうちの少なくとも一つに分類する（ステップS14）。尚、会話テキストをカテゴリに分類することは、会話テキストに対して、ステップS11で指定された複数のカテゴリに夫々対応する複数のタグのうちの少なくとも一つを割り当てる（つまり、会話テキストをタグ付けする）ことと等価であるとみなしてもよい。 After that, the classification unit 212 classifies each of the plurality of conversation texts, obtained by subdividing the sentences indicated by the text data generated in step S13, into at least one of the plurality of categories designated in step S11 (step S14). Note that classifying a conversation text into a category may be regarded as equivalent to assigning to the conversation text at least one of a plurality of tags respectively corresponding to the plurality of categories designated in step S11 (that is, tagging the conversation text).
 例えば、分類部212は、所定の分類モデルを用いて、複数の会話テキストの夫々を、複数のカテゴリのうちの少なくとも一つに分類してもよい。分類モデルは、例えば、カテゴリに関するマスタデータと、各カテゴリに分類される例文に関するマスタデータと、単語のベクトル化に関する辞書データとを含んでいてもよい。この場合、分類部212は、テキストデータが示す文章をマスタデータと比較することで、当該文章を構成する会話テキストを所望のカテゴリに分類してもよい。分類部212は、テキストデータが示す文章を構成する単語をベクトル化する(つまり、単語又は文章の特徴ベクトルを算出する)と共に、ベクトル化した単語に基づいて(つまり、特徴ベクトルに基づいて)会話テキストを所望のカテゴリに分類してもよい。 For example, the classification unit 212 may classify each of the plurality of conversation texts into at least one of the plurality of categories by using a predetermined classification model. The classification model may include, for example, master data on the categories, master data on example sentences classified into each category, and dictionary data for vectorizing words. In this case, the classification unit 212 may classify the conversation texts constituting the sentences indicated by the text data into the desired categories by comparing those sentences with the master data. The classification unit 212 may also vectorize the words constituting the sentences indicated by the text data (that is, calculate feature vectors of the words or sentences) and classify the conversation texts into the desired categories based on the vectorized words (that is, based on the feature vectors).
 例えば、分類部212は、分類モデルを用いることに加えて又は代えて、ルールベースに準拠した方法を用いて、複数の会話テキストの夫々を複数のカテゴリのうちの少なくとも一つに分類してもよい。つまり、分類部212は、複数の会話テキストの夫々を所定のルールに従って分類することで、複数の会話テキストの夫々を複数のカテゴリのうちの少なくとも一つに分類してもよい。 For example, in addition to or instead of using a classification model, the classification unit 212 may classify each of the plurality of conversation texts into at least one of the plurality of categories by using a rule-based method. That is, the classification unit 212 may classify each of the plurality of conversation texts into at least one of the plurality of categories by classifying each conversation text according to predetermined rules.
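The rule-based classification described above can be sketched as follows. The keyword rules and category names are hypothetical examples; the disclosure does not specify the concrete rules used by the classification unit 212. A text matching no rule receives the "no corresponding category" tag mentioned later in this description.

```python
# Hypothetical keyword rules: (category, keywords that trigger it).
RULES = [
    ("patient's symptoms or medical condition", ["pain", "symptom", "fever"]),
    ("tests or treatment performed on the patient", ["test", "surgery", "medication"]),
    ("agreement between the medical worker and the patient", ["agree", "consent"]),
]

def classify_by_rules(conversation_text: str) -> list[str]:
    """Return every category whose rule matches the conversation text;
    a text that matches no rule gets the 'no corresponding category' tag."""
    text = conversation_text.lower()
    matched = [cat for cat, keywords in RULES if any(k in text for k in keywords)]
    return matched or ["no corresponding category"]
```

Because a single conversation text may trigger several rules, this sketch naturally supports classification into "at least one" of the categories, as the text requires.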
 例えば、分類部212は、分類モデル及びルールベースに準拠した方法の少なくとも一方を用いることに加えて又は代えて、コサイン類似度(つまり、会話テキストのベクトルに関するコサイン類似度)を用いて、複数の会話テキストの夫々を複数のカテゴリのうちの少なくとも一つに分類してもよい。 For example, in addition to or instead of using at least one of a classification model and a rule-based method, the classification unit 212 may classify each of the plurality of conversation texts into at least one of the plurality of categories by using cosine similarity (that is, the cosine similarity between vectors of the conversation texts).
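The combination of word vectorization (feature vectors) and cosine similarity against per-category example sentences can be sketched as below. The bag-of-words vectorizer, the example-sentence master data, and the similarity threshold are all simplifying assumptions; a real system would use the word-vectorization dictionary data described above rather than raw token counts.

```python
import math
from collections import Counter

# Hypothetical master data: example sentences per category (cf. step S11).
EXAMPLES = {
    "patient's symptoms": ["my head hurts every morning", "the pain started last week"],
    "future medical policy": ["we will continue this medication", "surgery is scheduled next month"],
}

def bow_vector(text: str) -> Counter:
    """Toy bag-of-words feature vector built from whitespace tokens."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def classify_by_similarity(conversation_text: str, threshold: float = 0.1) -> str:
    """Assign the category of the most similar example sentence, or the
    'no corresponding category' tag when nothing exceeds the threshold."""
    vec = bow_vector(conversation_text)
    best_cat, best_score = "no corresponding category", threshold
    for category, sentences in EXAMPLES.items():
        for s in sentences:
            score = cosine_similarity(vec, bow_vector(s))
            if score > best_score:
                best_cat, best_score = category, score
    return best_cat
```

The threshold doubles as the mechanism for leaving hard-to-classify texts untagged, matching the "no corresponding category" handling described below.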
 例えば、分類部212は、分類モデル、ルールベースに準拠した方法及びコサイン類似度の少なくとも一つを用いることに加えて又は代えて、クラスタリングに準拠した手法を用いて、複数の会話テキストの夫々を複数のカテゴリのうちの少なくとも一つに分類してもよい。 For example, in addition to or instead of using at least one of a classification model, a rule-based method, and cosine similarity, the classification unit 212 may classify each of the plurality of conversation texts into at least one of the plurality of categories by using a clustering-based method.
 例えば、分類部212は、分類モデル、ルールベースに準拠した方法、コサイン類似度及びクラスタリングに準拠した手法の少なくとも一つを用いることに加えて又は代えて、学習モデルを用いて、複数の会話テキストの夫々を複数のカテゴリのうちの少なくとも一つに分類してもよい。学習モデルは、テキストデータが入力された場合に、当該テキストデータを構成する会話テキストのカテゴリを出力する学習モデル(例えば、ニューラルネットワークを用いた学習モデル)である。 For example, in addition to or instead of using at least one of a classification model, a rule-based method, cosine similarity, and a clustering-based method, the classification unit 212 may classify each of the plurality of conversation texts into at least one of the plurality of categories by using a learning model. The learning model is a model (for example, a learning model using a neural network) that, when text data is input, outputs the categories of the conversation texts constituting that text data.
 尚、会話テキストの文意によっては、複数のカテゴリのいずれにも分類し難い会話テキストが存在する可能性がある。この場合、分類部212は、複数のカテゴリのいずれにも分類し難い会話テキストを、複数のカテゴリのいずれにも分類しなくてもよい。この場合、例えば、分類部212は、複数のカテゴリのいずれにも分類し難い会話テキストに対して、「対応カテゴリなし」又は「対応カテゴリ不明」というタグを付与してもよい。 Depending on the meaning of the conversation text, there may be conversation text that is difficult to classify into any of multiple categories. In this case, the classification unit 212 does not have to classify the conversation text, which is difficult to classify into any of the plurality of categories, into any of the plurality of categories. In this case, for example, the classification unit 212 may add a tag of "no corresponding category" or "unknown corresponding category" to conversation text that is difficult to classify into any of a plurality of categories.
 テキストデータが示す文章は、任意の単位で細分化されてもよい。つまり、テキストデータが示す文章を細分化することで得られる会話テキストのサイズは、任意であってもよい。例えば、テキストデータが示す文章の少なくとも一部は、単語の単位で細分化されてもよい。例えば、テキストデータが示す文章の少なくとも一部は、文節の単位で細分化されてもよい。例えば、テキストデータが示す文章の少なくとも一部は、句読点を境界とする複数の会話テキストに細分化されてもよい。例えば、テキストデータが示す文章の少なくとも一部は、文(例えば、単文、複文及び重文の少なくとも一つ)の単位で細分化されてもよい。例えば、テキストデータが示す文章の少なくとも一部は、形態素の単位で細分化されてもよい。この場合、テキストデータに対して形態素解析が行われてもよい。 The text indicated by the text data may be subdivided into arbitrary units. That is, the size of the conversation text obtained by subdividing the sentence indicated by the text data may be arbitrary. For example, at least a part of the sentence indicated by the text data may be subdivided into word units. For example, at least a part of the sentence indicated by the text data may be subdivided into bunsetsu units. For example, at least a part of the sentence indicated by the text data may be subdivided into a plurality of conversational texts with punctuation marks as boundaries. For example, at least a part of the sentence indicated by the text data may be subdivided into units of sentences (for example, at least one of a single sentence, a compound sentence, and a compound sentence). For example, at least a part of the sentence indicated by the text data may be subdivided into units of morphemes. In this case, morphological analysis may be performed on the text data.
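A minimal sketch of one of the subdivision choices above: splitting the transcribed sentences into conversation texts at punctuation boundaries. The handled punctuation set (Japanese 「、」「。」 and ASCII comma/period) is an assumption; units such as words, bunsetsu, or morphemes would instead require a morphological analyzer as noted above.

```python
import re

def split_into_conversation_texts(sentence: str) -> list[str]:
    """Subdivide transcribed text into conversation texts at punctuation
    boundaries (Japanese '、' '。' and ASCII ',' '.'), dropping empty pieces."""
    parts = re.split(r"[、。,.]", sentence)
    return [p.strip() for p in parts if p.strip()]
```

Each returned piece corresponds to one conversation text handed to the classification unit 212 in step S14.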
 分類部212による分類結果を示す分類データ(例えば、会話テキストに関する情報及び会話テキストが分類されたカテゴリに関する情報の少なくとも一方)は、記憶装置22によって記憶されてもよい。この際、記憶装置22によって分類データには、分類データの改ざんを防ぐための情報(例えば、タイムスタンプ及び電子署名の少なくとも一方)が付与されてもよい。 The classification data indicating the classification result by the classification unit 212 (for example, at least one of the information regarding the conversation text and the information regarding the category in which the conversation text is classified) may be stored by the storage device 22. At this time, the storage device 22 may add information (for example, at least one of a time stamp and an electronic signature) to the classification data to prevent the classification data from being tampered with.
 その後、表示制御部213は、分類部212の分類結果に基づいて、医療従事者と患者との会話を支援するためのIC支援画面31を生成する(ステップS15)。更に、表示制御部213は、生成したIC支援画面31を表示するように表示装置3を制御する(ステップS15)。その結果、表示装置3は、IC支援画面31を表示する。 After that, the display control unit 213 generates an IC support screen 31 for supporting the conversation between the medical worker and the patient based on the classification result of the classification unit 212 (step S15). Further, the display control unit 213 controls the display device 3 to display the generated IC support screen 31 (step S15). As a result, the display device 3 displays the IC support screen 31.
 IC支援画面31の一例が図6に示されている。図6に示すように、IC支援画面31は、例えば、会話表示画面311と、カテゴリ表示画面312とを含んでいてもよい。つまり、表示制御部213は、会話表示画面311とカテゴリ表示画面312とを並列表示するように、表示装置3を制御する。但し、IC支援画面31は、会話表示画面311及びカテゴリ表示画面312の少なくとも一方を含んでいなくてもよい。 An example of the IC support screen 31 is shown in FIG. As shown in FIG. 6, the IC support screen 31 may include, for example, a conversation display screen 311 and a category display screen 312. That is, the display control unit 213 controls the display device 3 so that the conversation display screen 311 and the category display screen 312 are displayed in parallel. However, the IC support screen 31 does not have to include at least one of the conversation display screen 311 and the category display screen 312.
 会話表示画面311は、会話テキストの内容を、医療従事者と患者との会話の流れに沿って表示する。つまり、会話表示画面311は、ある期間中における医療従事者と患者との会話の内容を示すテキストを、会話の流れの順に表示する。この際、会話テキストの内容は、その会話テキストが示す会話が行われていた時間を示す情報と、その会話テキストが示す言葉を発話した人物を示す情報と、その会話テキストが分類されたカテゴリを示す情報と共に表示されてもよい。会話表示画面311は、現在の医療従事者と患者との会話の内容を示すテキストを、会話の流れの順に表示してもよい。但し、会話の内容を示すテキストを表示するためには、上述したステップS12からステップS14までの処理が完了する必要がある。このため、会話表示画面311は、現在の医療従事者と患者との会話の内容を示すテキストとして、実質的には、ステップS12からステップS14までの処理が完了するために必要な時間だけ現在時刻から遅延した会話の内容を示すテキストを表示している。或いは、会話表示画面311は、一定時間前の(例えば、数秒前の、数十秒前の又は数分前の)医療従事者と患者との会話の内容を示すテキストを、会話の流れの順に表示してもよい。或いは、会話表示画面311は、医療従事者と患者との既に終了した会話の内容を示すテキストを、会話の流れの順に表示してもよい。 The conversation display screen 311 displays the content of the conversation texts along the flow of the conversation between the medical worker and the patient. That is, the conversation display screen 311 displays text indicating the content of the conversation between the medical worker and the patient during a certain period, in the order of the conversation flow. At this time, the content of each conversation text may be displayed together with information indicating the time at which the conversation indicated by that conversation text took place, information indicating the person who spoke the words indicated by that conversation text, and information indicating the category into which that conversation text was classified. The conversation display screen 311 may display text indicating the content of the current conversation between the medical worker and the patient in the order of the conversation flow. However, in order to display text indicating the content of the conversation, the processing from step S12 to step S14 described above must be completed. Therefore, as text indicating the content of the current conversation between the medical worker and the patient, the conversation display screen 311 in practice displays text indicating the content of the conversation delayed from the current time by the time required to complete the processing from step S12 to step S14. Alternatively, the conversation display screen 311 may display text indicating the content of the conversation between the medical worker and the patient from a certain time earlier (for example, several seconds, several tens of seconds, or several minutes earlier) in the order of the conversation flow. Alternatively, the conversation display screen 311 may display text indicating the content of an already completed conversation between the medical worker and the patient in the order of the conversation flow.
 尚、会話表示画面311は、細分化された会話テキストの単位で会話の内容を表示してもよい。或いは、会話表示画面311は、会話テキストの単位で会話の内容を表示することに加えて又は代えて、会話テキストとは異なる単位で会話の内容を表示してもよい。例えば、会話表示画面311は、複数の会話テキストを含むひとまとまりの会話の単位で(つまり、意味の通じる会話の単位で)、会話の内容を表示してもよい。例えば、会話表示画面311は、会話テキストに細分化される前のテキストデータが示すテキストを表示してもよい。 The conversation display screen 311 may display the contents of the conversation in units of subdivided conversation texts. Alternatively, the conversation display screen 311 may display the content of the conversation in a unit different from the conversation text in addition to or instead of displaying the content of the conversation in the unit of the conversation text. For example, the conversation display screen 311 may display the content of the conversation in units of a group of conversations including a plurality of conversation texts (that is, in units of conversations that make sense). For example, the conversation display screen 311 may display the text indicated by the text data before being subdivided into the conversation text.
 一方で、カテゴリ表示画面312は、複数の会話テキストのうちの少なくとも一部を、分類されたカテゴリ別に表示する。つまり、カテゴリ表示画面312は、複数のカテゴリのうちの一のカテゴリに分類された会話テキストを表示する。一方で、カテゴリ表示画面312は、複数のカテゴリのうちの一のカテゴリとは異なる他のカテゴリに分類された会話テキストを表示しなくてもよい。図6に示す例では、カテゴリ表示画面312には、「患者の症状又は病状」に関するカテゴリに分類された会話テキストが表示されている。尚、カテゴリ表示画面312においても、会話表示画面311と同様に、会話テキストの内容は、その会話テキストが示す会話が行われていた時間を示す情報と、その会話テキストが示す言葉を発話した人物を示す情報と、その会話テキストが分類されたカテゴリを示す情報と共に表示される。 On the other hand, the category display screen 312 displays at least some of the plurality of conversation texts sorted by the categories into which they were classified. That is, the category display screen 312 displays the conversation texts classified into one of the plurality of categories. Meanwhile, the category display screen 312 does not have to display conversation texts classified into categories other than that one category. In the example shown in FIG. 6, the category display screen 312 displays the conversation texts classified into the category related to the "patient's symptoms or medical condition". Note that on the category display screen 312, as on the conversation display screen 311, the content of each conversation text is displayed together with information indicating the time at which the conversation indicated by that conversation text took place, information indicating the person who spoke the words indicated by that conversation text, and information indicating the category into which that conversation text was classified.
 カテゴリ表示画面312は、カテゴリ表示画面312に表示する会話テキストのカテゴリを指定するためのGUI3120を含んでいてもよい。図6に示す例では、GUI3120は、複数のカテゴリに夫々対応する複数のボタン3121を含む。複数のボタン3121は、図4のステップS11で指定された複数のカテゴリに夫々対応する。例えば、図6に示すように、GUI3120は、全てのカテゴリの会話テキストを表示することを希望する場合に押下されるボタン3121-1と、「インフォームドコンセントの目的」に言及している会話テキストを表示することを希望する場合に押下されるボタン3121-2と、「患者の症状又は病状」に言及している会話テキストを表示することを希望する場合に押下されるボタン3121-3と、「患者に対して行われる検査又は治療」に言及している会話テキストを表示することを希望する場合に押下されるボタン3121-4と、「患者を対象とする治験又は研究」に言及している会話テキストを表示することを希望する場合に押下されるボタン3121-5と、「患者の意見」に言及している会話テキストを表示することを希望する場合に押下されるボタン3121-6と、「医療従事者と患者との間での合意」に言及している会話テキストを表示することを希望する場合に押下されるボタン3121-7と、「患者に対する今後の医療方針」に言及している会話テキストを表示することを希望する場合に押下されるボタン3121-8とを含んでいてもよい。 The category display screen 312 may include a GUI 3120 for designating the category of the conversation texts to be displayed on the category display screen 312. In the example shown in FIG. 6, the GUI 3120 includes a plurality of buttons 3121 respectively corresponding to the plurality of categories. The plurality of buttons 3121 respectively correspond to the plurality of categories designated in step S11 of FIG. 4. For example, as shown in FIG. 6, the GUI 3120 may include a button 3121-1 pressed to display the conversation texts of all categories, a button 3121-2 pressed to display the conversation texts mentioning the "purpose of the informed consent", a button 3121-3 pressed to display the conversation texts mentioning the "patient's symptoms or medical condition", a button 3121-4 pressed to display the conversation texts mentioning the "tests or treatment performed on the patient", a button 3121-5 pressed to display the conversation texts mentioning "clinical trials or research involving the patient", a button 3121-6 pressed to display the conversation texts mentioning the "patient's opinion", a button 3121-7 pressed to display the conversation texts mentioning the "agreement between the medical worker and the patient", and a button 3121-8 pressed to display the conversation texts mentioning the "future medical policy for the patient".
 この場合、会話支援装置2のユーザ(例えば、医療従事者及び患者の少なくとも一方)は、表示装置3に表示されたGUI3120を参照しながら、入力装置4を用いて、カテゴリ表示画面312に表示する会話テキストのカテゴリを指定してもよい。その結果、カテゴリ表示画面312には、ユーザが指定したカテゴリに分類された会話テキストが表示される。 In this case, the user of the conversation support device 2 (for example, at least one of the medical worker and the patient) may use the input device 4 to designate the category of the conversation texts to be displayed on the category display screen 312 while referring to the GUI 3120 displayed on the display device 3. As a result, the conversation texts classified into the category designated by the user are displayed on the category display screen 312.
 以上説明した動作(特に、ステップS12からステップS15までの動作)が、会話支援動作を終了すると判定されるまで繰り返される(ステップS16)。 The operation described above (particularly, the operation from step S12 to step S15) is repeated until it is determined that the conversation support operation is completed (step S16).
 尚、上述したように、医療従事者が視認可能な表示装置3と患者が視認可能な表示装置3とを会話支援システムSYSが別々に備えている場合には、患者が視認可能な表示装置3に表示されるIC支援画面31の内容は、医療従事者が視認可能な表示装置3に表示されるIC支援画面31の内容とは異なっていてもよい。患者が視認可能な表示装置3に表示されるIC支援画面31には、医療従事者の説明を患者が理解するために有益な情報(例えば、医療従事者が発した医療用語の解説に関する情報及び患者の診断結果に関する情報の少なくとも一方)が表示されていてもよい。 As described above, when the conversation support system SYS includes separate display devices 3, one visible to the medical worker and one visible to the patient, the content of the IC support screen 31 displayed on the display device 3 visible to the patient may differ from the content of the IC support screen 31 displayed on the display device 3 visible to the medical worker. The IC support screen 31 displayed on the display device 3 visible to the patient may display information useful for the patient to understand the medical worker's explanation (for example, at least one of information explaining medical terms used by the medical worker and information about the patient's diagnosis results).
 (3)会話支援システムSYSの技術的効果
 以上説明したように、本実施形態の会話支援システムSYSは、インフォームドコンセントを取得するために行われた医療従事者と患者との会話の内容を示す会話テキストが、当該会話テキストのカテゴリと共に表示されるIC支援画面31を表示することができる。このため、医療従事者は、IC支援画面31に表示された会話テキストのカテゴリを確認することで、あるカテゴリに関する説明が不足しているか否かを判定することができる。尚、ここで言う「あるカテゴリに関する説明が不足している」状態は、あるカテゴリに関する医療従事者からの説明に漏れが生じている状態を意味していてもよい。つまり、「あるカテゴリに関する説明が不足している」状態は、あるカテゴリに関して医療従事者から患者に対して伝えるべき情報の少なくとも一部が患者に対して説明されていない状態を意味していてもよい。患者もまた、IC支援画面31に表示された会話テキストのカテゴリを確認することで、あるカテゴリに関する説明が不足しているか否かを判定することができる。その結果、あるカテゴリに関する説明が不足していると判定された場合には、医療従事者は、あるカテゴリに関する説明を更に追加で行うことができる。このため、会話支援システムSYSは、医療従事者から患者への説明に漏れが生ずる可能性を低減することができる。
(3) Technical Effects of the Conversation Support System SYS As described above, the conversation support system SYS of the present embodiment can display the IC support screen 31, on which the conversation texts indicating the content of the conversation conducted between the medical worker and the patient to obtain informed consent are displayed together with the categories of those conversation texts. Therefore, by checking the categories of the conversation texts displayed on the IC support screen 31, the medical worker can determine whether the explanation of a certain category is insufficient. The state in which "the explanation of a certain category is insufficient" may mean a state in which the medical worker's explanation of that category has an omission. In other words, it may mean a state in which at least part of the information that the medical worker should convey to the patient about that category has not been explained to the patient. The patient can likewise determine whether the explanation of a certain category is insufficient by checking the categories of the conversation texts displayed on the IC support screen 31. As a result, when it is determined that the explanation of a certain category is insufficient, the medical worker can give an additional explanation of that category. Therefore, the conversation support system SYS can reduce the possibility of an omission in the explanation given by the medical worker to the patient.
 また、本実施形態では、会話支援システムSYSは、複数の会話テキストのうちの少なくとも一部を、分類されたカテゴリ別に表示するためのカテゴリ表示画面312を表示することができる。この場合、あるカテゴリに分類された会話テキストがまとめて表示されるがゆえに、医療従事者及び患者は、あるカテゴリに関する説明が不足しているか否かをより適切に判定することができる。このため、会話支援システムSYSは、医療従事者から患者への説明に漏れが生ずる可能性をより適切に低減することができる。 Further, in the present embodiment, the conversation support system SYS can display a category display screen 312 for displaying at least a part of a plurality of conversation texts by classified categories. In this case, since the conversation texts classified into a certain category are displayed together, the medical staff and the patient can more appropriately determine whether or not the explanation about the certain category is insufficient. Therefore, the conversation support system SYS can more appropriately reduce the possibility of omission of explanation from the medical staff to the patient.
 (4)変形例
 続いて、会話支援システムSYSの変形例について説明する。
(4) Modification Example Next, a modification of the conversation support system SYS will be described.
 (4-1)第1変形例
 はじめに、第1変形例の会話支援システムSYSaについて説明する。第1変形例の会話支援システムSYSaは、上述した会話支援システムSYSと比較して、会話支援装置2に代えて会話支援装置2aを備えているという点で異なる。会話支援システムSYSaのその他の特徴は、会話支援システムSYSのその他の特徴と同一であってもよい。このため、以下、図7を参照しながら、第1変形例の会話支援装置2aについて説明する。図7は、第1変形例の会話支援装置2aの構成を示すブロック図である。尚、既に説明済みの構成要件については、同一の参照符号を付することでその詳細な説明を省略する。
(4-1) First Modified Example First, the conversation support system SYSa of the first modified example will be described. The conversation support system SYSa of the first modified example differs from the conversation support system SYS described above in that it includes a conversation support device 2a instead of the conversation support device 2. The other features of the conversation support system SYSa may be the same as the other features of the conversation support system SYS. Therefore, the conversation support device 2a of the first modified example will be described below with reference to FIG. 7. FIG. 7 is a block diagram showing the configuration of the conversation support device 2a of the first modified example. Components that have already been described are given the same reference numerals, and their detailed description is omitted.
 図7に示すように、第1変形例の会話支援装置2aは、上述した会話支援装置2と比較して、会話支援動作を実行するためにCPU21内に実現される論理的な機能ブロックとして、CPU21内に警告部214aが実現されているという点で異なる。会話支援装置2aのその他の特徴は、会話支援装置2のその他の特徴と同一であってもよい。 As shown in FIG. 7, the conversation support device 2a of the first modified example differs from the conversation support device 2 described above in that a warning unit 214a is realized in the CPU 21 as one of the logical functional blocks realized in the CPU 21 for executing the conversation support operation. The other features of the conversation support device 2a may be the same as the other features of the conversation support device 2.
 警告部214aは、分類部212の分類結果に基づいて、インフォームドコンセントで必要とされる会話が不十分であるか否かを判定する。例えば、警告部214aは、分類部212の分類結果に基づいて、カテゴリ毎に、インフォームドコンセントで必要とされる会話が不十分であるか否かを判定してもよい。尚、医療従事者と患者との間の会話の分量が多くなればなるほど、医療従事者から患者に対して十分な説明がなされた可能性が高くなる。逆に、医療従事者と患者との間の会話の分量が少なくなればなるほど、医療従事者から患者に対して十分な説明がなされた可能性が低くなる。このため、あるカテゴリにおいてインフォームドコンセントで必要とされる会話が不十分であるか否かを判定する動作は、実質的には、あるカテゴリに関する説明が不足しているか否かを判定する動作と等価であるとみなしてもよい。言い換えれば、あるカテゴリにおいてインフォームドコンセントで必要とされる会話が不十分であるか否かを判定する動作は、実質的には、あるカテゴリに関する説明が不足しているか否かを判定する動作の一具体例であるとみなしてもよい。 The warning unit 214a determines, based on the classification result of the classification unit 212, whether the conversation required for the informed consent is insufficient. For example, based on the classification result of the classification unit 212, the warning unit 214a may make this determination for each category. Note that the greater the volume of conversation between the medical worker and the patient, the more likely it is that the medical worker has given the patient a sufficient explanation. Conversely, the smaller the volume of conversation between the medical worker and the patient, the less likely it is that the medical worker has given the patient a sufficient explanation. Therefore, the operation of determining whether the conversation required for the informed consent is insufficient for a certain category may be regarded as substantially equivalent to the operation of determining whether the explanation of that category is insufficient. In other words, the operation of determining whether the conversation required for the informed consent is insufficient for a certain category may be regarded as one concrete example of the operation of determining whether the explanation of that category is insufficient.
 一のカテゴリにおいてインフォームドコンセントで必要とされる会話が不十分であるか否かを判定するために、警告部214aは、一のカテゴリに分類された会話テキストの数(つまり、細分化された一塊の会話テキストの数)が、当該一のカテゴリに固有の閾値よりも多いか否かを判定してもよい。会話テキストの数が閾値よりも多い場合には、会話テキストの数が閾値よりも少ない場合と比較して、インフォームドコンセントで必要とされる一のカテゴリに関する会話が十分に行われた可能性が高い。一方で、会話テキストの数が閾値よりも少ない場合には、会話テキストの数が閾値よりも多い場合と比較して、インフォームドコンセントで必要とされる一のカテゴリに関する会話が十分に行われていない可能性が高い。このため、警告部214aは、会話テキストの数が閾値よりも多い場合には、インフォームドコンセントで必要とされる会話が不十分でないと判定してもよい。一方で、警告部214aは、会話テキストの数が閾値よりも少ない場合には、インフォームドコンセントで必要とされる会話が不十分であると判定してもよい。 In order to determine whether the conversation required for the informed consent is insufficient for one category, the warning unit 214a may determine whether the number of conversation texts classified into that category (that is, the number of subdivided chunks of conversation text) is greater than a threshold specific to that category. When the number of conversation texts is greater than the threshold, it is more likely that the conversation about that category required for the informed consent has been conducted sufficiently than when the number of conversation texts is less than the threshold. On the other hand, when the number of conversation texts is less than the threshold, it is more likely that the conversation about that category required for the informed consent has not been conducted sufficiently than when the number of conversation texts is greater than the threshold. Therefore, when the number of conversation texts is greater than the threshold, the warning unit 214a may determine that the conversation required for the informed consent is not insufficient. On the other hand, when the number of conversation texts is less than the threshold, the warning unit 214a may determine that the conversation required for the informed consent is insufficient.
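The per-category threshold check described above can be sketched as follows. The threshold values are hypothetical (the disclosure leaves them to the system or the user), and "not greater than the threshold" is used as the insufficiency condition, matching the determination described above.

```python
# Hypothetical per-category thresholds for the number of conversation texts.
CATEGORY_THRESHOLDS = {
    "patient's symptoms": 5,
    "agreement": 2,
}

def is_conversation_insufficient(category: str, counts_by_category: dict[str, int]) -> bool:
    """True when the number of conversation texts classified into the
    category does not exceed that category's threshold, i.e. warning
    unit 214a would flag the conversation as insufficient."""
    threshold = CATEGORY_THRESHOLDS.get(category, 0)
    return counts_by_category.get(category, 0) <= threshold
```

With a default threshold of zero for unlisted categories, the same function also covers the "no utterance at all" case discussed further below.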
 一のカテゴリにおいてインフォームドコンセントで必要とされる会話が不十分であるか否かを判定するために、警告部214aは、一のカテゴリに関する医療従事者及び患者の発話回数(つまり、発言回数)及び発話時間(つまり、発言時間)の少なくとも一方が、当該一のカテゴリに固有の閾値よりも多いか否かを判定してもよい。尚、発話回数は、例えば、意味の通じるひとまとまりの会話が発せられた回数を意味していてもよい。発話回数及び発話時間の少なくとも一方が閾値よりも多い場合には、発話回数及び発話時間の少なくとも一方が閾値よりも少ない場合と比較して、インフォームドコンセントで必要とされる一のカテゴリに関する会話が十分に行われた可能性が高い。一方で、発話回数及び発話時間の少なくとも一方が閾値よりも少ない場合には、発話回数及び発話時間の少なくとも一方が閾値よりも多い場合と比較して、インフォームドコンセントで必要とされる一のカテゴリに関する会話が十分に行われていない可能性が高い。このため、警告部214aは、発話回数及び発話時間の少なくとも一方が閾値よりも多い場合には、インフォームドコンセントで必要とされる会話が不十分でないと判定してもよい。一方で、警告部214aは、発話回数及び発話時間の少なくとも一方が閾値よりも少ない場合には、インフォームドコンセントで必要とされる会話が不十分であると判定してもよい。 In order to determine whether the conversation required for the informed consent is insufficient for one category, the warning unit 214a may determine whether at least one of the number of utterances (that is, the number of times of speaking) and the utterance time (that is, the speaking time) of the medical worker and the patient regarding that category is greater than a threshold specific to that category. Note that the number of utterances may mean, for example, the number of times a coherent, meaningful unit of conversation was uttered. When at least one of the number of utterances and the utterance time is greater than the threshold, it is more likely that the conversation about that category required for the informed consent has been conducted sufficiently than when at least one of them is less than the threshold. On the other hand, when at least one of the number of utterances and the utterance time is less than the threshold, it is more likely that the conversation about that category required for the informed consent has not been conducted sufficiently than when at least one of them is greater than the threshold. Therefore, when at least one of the number of utterances and the utterance time is greater than the threshold, the warning unit 214a may determine that the conversation required for the informed consent is not insufficient. On the other hand, when at least one of the number of utterances and the utterance time is less than the threshold, the warning unit 214a may determine that the conversation required for the informed consent is insufficient.
 閾値は、インフォームドコンセントで必要とされる会話が不十分である状態と、インフォームドコンセントで必要とされる会話が不十分でない状態とを、会話テキストの数、発話回数及び発話時間の少なくとも一つから適切に区別可能な適切な値に設定されていることが好ましい。このような閾値は、会話支援システムSYS又はユーザによって予め定められた固定値であってもよい。閾値は、会話支援システムSYS又はユーザによって適宜設定可能な可変値であってもよい。 The threshold is preferably set to an appropriate value that makes it possible to properly distinguish, from at least one of the number of conversation texts, the number of utterances, and the utterance time, a state in which the conversation required for the informed consent is insufficient from a state in which it is not. Such a threshold may be a fixed value predetermined by the conversation support system SYS or the user. The threshold may also be a variable value that can be set as appropriate by the conversation support system SYS or the user.
 閾値はゼロであってもよい。この場合、一のカテゴリに分類された会話テキストの数、一のカテゴリに関する発話回数及び一のカテゴリに関する発話時間の少なくとも一つが閾値よりも多いか否か(つまり、ゼロよりも多いか否か)を判定する動作は、一のカテゴリに関する発話が行われたか否を判定する動作と等価である。一のカテゴリに分類された会話テキストの数、一のカテゴリに関する発話回数及び一のカテゴリに関する発話時間の少なくとも一つが閾値よりも多くないと判定された場合に、一のカテゴリに関する発話が全く行われていないと推定される。このため、この場合には、警告部214aは、インフォームドコンセントで必要とされる会話が不十分であると判定してもよい。 The threshold may be zero. In this case, the operation of determining whether at least one of the number of conversation texts classified into one category, the number of utterances regarding that category, and the utterance time regarding that category is greater than the threshold (that is, greater than zero) is equivalent to the operation of determining whether any utterance regarding that category has been made. When it is determined that at least one of the number of conversation texts classified into the category, the number of utterances regarding the category, and the utterance time regarding the category is not greater than the threshold, it is presumed that no utterance regarding that category has been made at all. Therefore, in this case, the warning unit 214a may determine that the conversation required for the informed consent is insufficient.
 警告部214aは、一のカテゴリに対応する閾値を、過去に同じ種類のインフォームドコンセントが取得された際に一のカテゴリに関して行われた会話の内容に基づいて設定してもよい。尚、一のインフォームドコンセントと同じ種類の他のインフォームドコンセントは、一のインフォームドコンセントと目的、症状、検査、治療及び病名の少なくとも一つ(或いは、全部)が同じ又は類似している他のインフォームドコンセントを意味していてもよい。例えば、過去に取得されたインフォームドコンセントと同じ種類のインフォームドコンセントが取得される場合には、一のカテゴリに関して同じ程度の分量の会話が行われることが望ましい。このため、警告部214aは、一のカテゴリに対応する閾値を、過去に同じ種類のインフォームドコンセントが取得された際に一のカテゴリに関して行われた会話の内容を示す会話テキストの数、発話回数及び発話時間の少なくとも一つに基づいて設定してもよい。例えば、警告部214aは、一のカテゴリに対応する閾値を、同じ種類のインフォームドコンセントが取得された際に一のカテゴリに関して行われた会話の内容を示す会話テキストの数、発話回数及び発話時間の少なくとも一つそのもの又は当該会話テキストの数、発話回数及び発話時間の少なくとも一つに対して所定のマージンを加算又は減算した値に設定してもよい。この場合、警告部214aは、適切な閾値の設定が可能となる。尚、会話テキストの数に関する閾値と、発話回数に関する閾値と、発話時間に関する閾値とは、個別に設定されることが好ましい。 The warning unit 214a may set the threshold corresponding to one category based on the content of the conversation conducted regarding that category when an informed consent of the same kind was obtained in the past. Note that another informed consent of the same kind as a given informed consent may mean another informed consent whose purpose, symptoms, tests, treatment, and disease name are at least partly (or entirely) the same as or similar to those of the given informed consent. For example, when an informed consent of the same kind as one obtained in the past is to be obtained, it is desirable that roughly the same volume of conversation be conducted regarding each category. Therefore, the warning unit 214a may set the threshold corresponding to one category based on at least one of the number of conversation texts, the number of utterances, and the utterance time representing the content of the conversation conducted regarding that category when an informed consent of the same kind was obtained in the past. For example, the warning unit 214a may set the threshold corresponding to one category to at least one of that number of conversation texts, that number of utterances, and that utterance time as it is, or to a value obtained by adding or subtracting a predetermined margin to or from at least one of them. In this case, the warning unit 214a can set an appropriate threshold. Note that the threshold for the number of conversation texts, the threshold for the number of utterances, and the threshold for the utterance time are preferably set individually.
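The margin-based derivation of a threshold from a past informed-consent session of the same kind can be sketched as below; the margin value and the choice of subtracting (rather than adding) it are assumptions for illustration.

```python
def threshold_from_past_session(past_text_count: int, margin: int = 2) -> int:
    """Derive a per-category threshold from the number of conversation
    texts recorded for that category in a past informed-consent session
    of the same kind, subtracting a predetermined margin (floored at zero)."""
    return max(past_text_count - margin, 0)
```

The same derivation would be applied separately to the utterance-count and utterance-time thresholds, since the text notes they are preferably set individually.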
 一のカテゴリにおいてインフォームドコンセントで必要とされる会話が不十分であるか否かを判定するために、警告部214aは、一のカテゴリに関する特定のキーワードが医療従事者及び患者の少なくとも一方によって発せられたか否かを判定してもよい。特定のキーワードは、例えば、一のカテゴリに関する説明を行うための会話に出てくるべきキーワードであってもよい。この場合、特定のキーワードが医療従事者及び患者の少なくとも一方によって発せられていない場合には、インフォームドコンセントで必要とされる一のカテゴリに関する会話が十分に行われていない可能性が高い。このため、警告部214aは、特定のキーワードが医療従事者及び患者の少なくとも一方によって発せられていない場合には、インフォームドコンセントで必要とされる会話が不十分であると判定してもよい。 In order to determine whether the conversation required for the informed consent is insufficient for one category, the warning unit 214a may determine whether a specific keyword related to that category has been uttered by at least one of the medical worker and the patient. The specific keyword may be, for example, a keyword that should appear in a conversation explaining that category. In this case, when the specific keyword has not been uttered by at least one of the medical worker and the patient, it is likely that the conversation about that category required for the informed consent has not been conducted sufficiently. Therefore, when the specific keyword has not been uttered by at least one of the medical worker and the patient, the warning unit 214a may determine that the conversation required for the informed consent is insufficient.
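The keyword check described above can be sketched as follows; the per-category required-keyword list is a hypothetical example, since the disclosure does not enumerate the specific keywords.

```python
# Hypothetical keywords that should appear when explaining each category.
REQUIRED_KEYWORDS = {
    "tests or treatment": ["risk", "side effect", "anesthesia"],
}

def missing_keywords(category: str, conversation_texts: list[str]) -> list[str]:
    """Return the required keywords for the category that were never
    uttered; a non-empty result suggests the conversation is insufficient."""
    spoken = " ".join(conversation_texts).lower()
    return [k for k in REQUIRED_KEYWORDS.get(category, []) if k not in spoken]
```

Returning the missing keywords themselves (rather than just a boolean) would also let the warning unit indicate which points of the explanation were skipped.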
　When determining whether the conversation required for the informed consent is insufficient, the warning unit 214a may determine whether at least one of the number of conversation texts, the number of utterances, and the utterance time exceeds the threshold, and/or whether the specific keyword has been uttered by at least one of the healthcare professional and the patient, only after a predetermined time has elapsed since the conversation between the healthcare professional and the patient started. This is because, before the predetermined time has elapsed, it is relatively likely that the healthcare professional is still in the middle of explaining a category (that is, the explanation of that category has not yet been completed).
　Here, when an informed consent of the same type as one obtained in the past is to be obtained, the flow of the conversation is relatively likely to be the same as the flow of the conversation of the past informed consent. The warning unit 214a can therefore estimate, based on the flow of the conversation conducted for the past informed consent, the flow of the conversation expected for the newly obtained informed consent. In this case, the warning unit 214a may estimate, from this estimated conversation flow, the time at which the conversation about one category will end, and, after that estimated time has elapsed, determine whether at least one of the number of conversation texts classified into that category, the number of utterances related to that category, and the utterance time related to that category exceeds the threshold, and/or whether the specific keyword has been uttered by at least one of the healthcare professional and the patient. As a result, the warning unit 214a can appropriately determine whether the conversation required for the informed consent is insufficient for that category.
　Alternatively, when an informed consent is obtained, a conversation about one category is usually followed by a conversation about another category. Accordingly, when the classification result of the classification unit 212 changes from a result in which at least one of the number of conversation texts classified into one category, the number of utterances related to that category, and the utterance time related to that category is relatively large to a result in which at least one of the number of conversation texts classified into another category, the number of utterances related to the other category, and the utterance time related to the other category is relatively large, it can be presumed with relatively high likelihood that the conversation about the first category has ended (that is, that the healthcare professional has fully explained the matters of the first category). The warning unit 214a may therefore determine whether at least one of the number of conversation texts classified into the first category, the number of utterances related to the first category, and the utterance time related to the first category exceeds the threshold only after the classification result has changed in this way. As a result, the warning unit 214a can appropriately determine whether the conversation required for the informed consent is insufficient for that category.
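This transition check could be sketched as below. The category labels, the window of recent texts used to detect the shift, and the threshold value are all illustrative assumptions.

```python
# Minimal sketch: once the dominant category of the classified conversation
# texts shifts from one category to another, the earlier category is assumed
# complete and its text count is compared against the threshold.
from collections import Counter

def insufficient_after_transition(classified, previous, current, threshold):
    """classified: category labels of the conversation texts in spoken order.
    Returns True when the conversation has moved on from `previous` to
    `current` but `previous` received fewer texts than `threshold`."""
    counts = Counter(classified)
    recent = Counter(classified[-3:])  # dominant category of the latest texts
    moved_on = recent[current] > recent[previous]
    return moved_on and counts[previous] < threshold

labels = ["symptoms", "symptoms", "treatment", "treatment", "treatment"]
print(insufficient_after_transition(labels, "symptoms", "treatment", threshold=4))
```

Only after the shift is detected is the earlier category's count compared with its threshold, matching the timing described above.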
　When the warning unit 214a determines by the above determination that the conversation required for the informed consent is insufficient for one category, it may warn the user of the conversation support system SYS to that effect. For example, the warning unit 214a may control the display device 3 to display a warning screen 33a for warning that the conversation required for the informed consent is insufficient for that category. An example of the warning screen 33a is shown in FIG. 8.
　As described above, the conversation support system SYSa of the first modification can determine whether the conversation required for the informed consent is insufficient. Furthermore, when the conversation required for the informed consent is insufficient, the conversation support system SYSa can warn to that effect. As a result, the healthcare professional who checks the warning can give an additional explanation of the category concerned. The conversation support system SYSa can thus more appropriately reduce the possibility that the healthcare professional's explanation to the patient contains omissions.
　In addition to or instead of displaying the warning screen 33a described above, the conversation support system SYSa may warn by voice that the conversation required for the informed consent is insufficient for one category. In this case, the conversation support system SYSa may include, in addition to or instead of the display device 3, a speaker that outputs the warning by voice.
　In addition to or instead of displaying the warning screen 33a described above, the conversation support system SYSa may display an IC support screen 31 including an indicator showing at least one of the number of conversation texts classified into one category, the number of utterances, and the utterance time. For example, as shown in FIG. 9, which shows an example of the IC support screen 31 including an indicator of the number of conversation texts, the conversation support system SYSa (in particular, the display control unit 213) may display an IC support screen 31 including a bar graph 3122a that quantitatively shows the number of conversation texts classified into each category. As a result, the user of the conversation support system SYSa can recognize the number of conversation texts more intuitively and can thus judge whether the explanation of a certain category is insufficient. In the example shown in FIG. 9, the user can intuitively recognize that the explanation of the category "future medical policy for the patient" is insufficient. The conversation support system SYSa can thus more appropriately reduce the possibility that the healthcare professional's explanation to the patient contains omissions. When the conversation support system SYSa displays the IC support screen 31 including such an indicator, it need not include the warning unit 214a.
　(4-2) Second Modification
　Next, a conversation support system SYSb according to a second modification will be described. The conversation support system SYSb of the second modification differs from the conversation support system SYS described above in that it includes a conversation support device 2b in place of the conversation support device 2. The other features of the conversation support system SYSb may be identical to those of the conversation support system SYS. The conversation support device 2b of the second modification is therefore described below with reference to FIG. 10, which is a block diagram showing the configuration of the conversation support device 2b.
　As shown in FIG. 10, the conversation support device 2b of the second modification differs from the conversation support device 2 described above in that a summary output unit 215b is implemented in the CPU 21 as a logical functional block for executing the conversation support operation. The other features of the conversation support device 2b may be identical to those of the conversation support device 2. The conversation support device 2a of the first modification described above may also include the summary output unit 215b.
　The summary output unit 215b generates summary information that summarizes the content of the conversation between the healthcare professional and the patient at the time the informed consent was obtained. The summary output unit 215b then outputs the generated summary information. For example, the summary output unit 215b may control the display device 3, which is one specific example of an output device, to display the summary information.
　The summary information may include, for example, at least part of the initial information registered in step S11 of FIG. 4. For example, as shown in FIG. 11, which is an explanatory diagram showing an example of the summary information, the summary information may include at least one of: information indicating the title of the informed consent; information indicating the name of the healthcare professional who obtained the informed consent; information indicating the name of the patient from whom the informed consent was obtained; information indicating the date and time the informed consent was obtained; and information indicating a comment of at least one of the healthcare professional and the patient regarding the informed consent.
　As shown in FIG. 11, the summary information may also include, for example, at least part of the plurality of conversation texts obtained by subdividing the sentences indicated by the text data converted by the text conversion unit 211. In this case, the user of the conversation support system SYSb may designate at least some of the conversation texts to be included in the summary information. The display control unit 213 may then control the display device 3 to display an IC support screen 31 including a GUI 3110b for designating the conversation texts to be included in the summary information. An example of the IC support screen 31 including the GUI 3110b is shown in FIG. 12. In the example shown in FIG. 12, the GUI 3110b includes a plurality of check boxes 3111b, each corresponding to one of the conversation texts displayed on the IC support screen 31, which are selected when the corresponding conversation text is to be included in the summary information. In this case, the summary output unit 215b includes in the summary information the conversation texts corresponding to the check boxes 3111b selected by the user, and need not include the conversation texts corresponding to the check boxes 3111b not selected by the user.
　FIG. 12 shows an example in which the GUI 3110b is included in the conversation display screen 311 constituting the IC support screen 31. However, the GUI 3110b may instead be included in the category display screen 312 constituting the IC support screen 31.
　The summary information may also include, for example, information on the categories into which the conversation texts included in the summary information are classified. In this case, the summary output unit 215b may output summary information that presents the conversation texts separately for each category.
　As described above, the conversation support system SYSb of the second modification can output the summary information. By checking the summary information, the user can appropriately grasp the content of the informed consent.
　The summary output unit 215b may learn the user's instructions designating the conversation texts to be included in the summary information. Based on the learning result, the summary output unit 215b may automatically select the conversation texts to be included in the summary information; that is, it may automatically select, as the conversation texts to be included, the conversation texts that the user is presumed likely to include. In this case, the summary output unit 215b can appropriately select conversation texts that the user is relatively likely to choose, without requiring the user's instructions designating them. As a result, the user's workload for generating summary information containing conversation texts is reduced. When learning the user's instructions, the summary output unit 215b may include a learning model (for example, a learning model using a neural network) that can be trained using, as teacher data, the user's instructions designating the conversation texts to be included in the summary information.
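Although the disclosure contemplates a neural-network learning model, the idea of learning from past selections can be illustrated with a deliberately simple stand-in: word frequencies of previously selected texts act as the scoring function. All data, names, and the scoring scheme below are illustrative assumptions.

```python
# Minimal sketch: learn from the conversation texts the user selected for
# past summaries (teacher data), then score new candidate texts so the
# likeliest selections can be chosen or recommended automatically.
from collections import Counter

def train_selection_scores(selected_texts):
    """Count how often each word appeared in texts the user chose to keep."""
    words = Counter()
    for text in selected_texts:
        words.update(text.lower().split())
    return words

def score(text, word_counts):
    """Higher score = more similar to previously selected texts."""
    return sum(word_counts[w] for w in text.lower().split())

# Texts the user included in past summaries (teacher data).
past_selected = ["surgery risks explained", "consent to surgery obtained"]
model = train_selection_scores(past_selected)

candidates = ["patient greeted the doctor", "risks of the surgery discussed"]
best = max(candidates, key=lambda t: score(t, model))
print(best)
```

A real implementation would replace the frequency model with the trained neural model, but the selection loop stays the same.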
　Further, based on the learning result of the user's instructions designating the conversation texts to be included in the summary information, the summary output unit 215b may recommend to the user the conversation texts that are preferably included in the summary information. For example, the summary output unit 215b may control the display device 3 so that, on the IC support screen 31, the conversation texts preferably included in the summary information are displayed in a manner distinguishable from the other conversation texts. As a result, the user can select the conversation texts to be included in the summary information relatively easily.
　(4-3) Third Modification
　Next, a conversation support system SYSc according to a third modification will be described. The conversation support system SYSc of the third modification differs from the conversation support system SYS described above in that it includes a conversation support device 2c in place of the conversation support device 2. The other features of the conversation support system SYSc may be identical to those of the conversation support system SYS. The conversation support device 2c of the third modification is therefore described below with reference to FIG. 13, which is a block diagram showing the configuration of the conversation support device 2c.
　As shown in FIG. 13, the conversation support device 2c of the third modification differs from the conversation support device 2 described above in that a schedule presentation unit 216c is implemented in the CPU 21 as a logical functional block for executing the conversation support operation. The other features of the conversation support device 2c may be identical to those of the conversation support device 2. At least one of the conversation support device 2a of the first modification and the conversation support device 2b of the second modification described above may also include the schedule presentation unit 216c.
　The schedule presentation unit 216c presents, to the healthcare professional obtaining the informed consent, a schedule of the conversation to be conducted to obtain the informed consent. Specifically, when an informed consent of the same type as one obtained in the past is to be obtained, the schedule presentation unit 216c presents, based on the content of the conversation of the past informed consent, a schedule of the conversation to be conducted by the healthcare professional for the current informed consent. As described above, when an informed consent of the same type as a past one is obtained, its conversation flow is relatively likely to be the same as the flow of the past conversation. The schedule presentation unit 216c can therefore present, based on the schedule of the conversation conducted for the past informed consent, the schedule of the conversation to be conducted for the newly obtained informed consent. For example, the schedule presentation unit 216c may present a schedule that is the same as or similar to the schedule of the conversation conducted for the past informed consent. Alternatively, the schedule presentation unit 216c may learn the schedules of conversations conducted for past informed consents and, based on the learning result, present the schedule of the conversation to be conducted for the new informed consent. In this case, the schedule presentation unit 216c may include a learning model (for example, a learning model using a neural network) that can be trained using the schedules of past informed-consent conversations as teacher data. As a result, the healthcare professional can, based on the presented schedule, proceed on an appropriate schedule with the explanation required to obtain the informed consent.
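As an illustration only (the data layout and simple averaging are assumptions; the disclosure also contemplates a neural-network learning model), the schedule derivation from past sessions of the same consent type could be sketched as:

```python
# Minimal sketch: the category order of a past informed-consent session of
# the same type, combined with the average duration of each category across
# all past sessions, forms the schedule presented for the new session.

def build_schedule(past_sessions):
    """past_sessions: list of [(category, duration_min), ...] in spoken
    order. Returns the category order of the first past session, with the
    average duration of each category across all sessions."""
    order = [cat for cat, _ in past_sessions[0]]
    totals = {}
    for session in past_sessions:
        for cat, minutes in session:
            totals.setdefault(cat, []).append(minutes)
    return [(cat, sum(totals[cat]) / len(totals[cat])) for cat in order]

past = [
    [("purpose", 5), ("treatment", 15), ("risks", 10)],
    [("purpose", 7), ("treatment", 13), ("risks", 12)],
]
for category, minutes in build_schedule(past):
    print(category, minutes)
```

The resulting list (category, expected minutes) is what the schedule presentation unit would show to the healthcare professional before the conversation starts.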
　(4-4) Fourth Modification
　Next, a conversation support system SYSd according to a fourth modification will be described with reference to FIG. 14, which is a block diagram showing the configuration of the conversation support system SYSd.
　As shown in FIG. 14, the conversation support system SYSd of the fourth modification differs from the conversation support system SYS described above in that it further includes an electronic medical record system 5d, and in that it includes a conversation support device 2d in place of the conversation support device 2. The other features of the conversation support system SYSd may be identical to those of the conversation support system SYS. At least one of the conversation support systems SYSa to SYSc of the first to third modifications described above may also include the electronic medical record system 5d.
　The electronic medical record system 5d is a system for managing patients' electronic medical records. Specifically, the electronic medical record system 5d stores electronic medical record data 51d representing patients' electronic medical records, and may therefore include a storage device for storing the electronic medical record data 51d.
　As shown in FIG. 15, which shows the configuration of the conversation support device 2d, the conversation support device 2d differs from the conversation support device 2 described above in that a medical record cooperation unit 217d is implemented in the CPU 21 as a logical functional block for executing the conversation support operation. The other features of the conversation support device 2d may be identical to those of the conversation support device 2. At least one of the conversation support devices 2a to 2c of the first to third modifications described above may also include the medical record cooperation unit 217d.
　The medical record cooperation unit 217d performs a cooperation operation that links the electronic medical record data 51d stored in the electronic medical record system 5d with the data managed by the IC management DB 221 (that is, the data for managing informed consents).
　For example, based on the electronic medical record data 51d and the IC management DB 221, the medical record cooperation unit 217d may determine whether an act (for example, a medical act) on which agreement has not been reached through an informed consent has already been performed, or is scheduled to be performed, on the patient. Specifically, the medical record cooperation unit 217d can identify, based on the electronic medical record data 51d, the medical acts and the like that have already been performed or are to be performed on the patient, and can determine, based on the IC management DB 221, whether agreement on those medical acts has been reached between the healthcare professional and the patient. The medical record cooperation unit 217d can thus determine, based on the electronic medical record data 51d and the IC management DB 221, whether a medical act on which agreement has not been reached through an informed consent has already been performed or is scheduled to be performed on the patient. When it determines that such a medical act has already been performed or is scheduled to be performed, the medical record cooperation unit 217d may warn the user of the conversation support system SYSd to that effect. As a result, the user can recognize that an informed consent must be obtained before the medical act is performed. The healthcare professional can therefore perform the medical act after the informed consent has been properly obtained; in other words, the healthcare professional will rarely perform by mistake a medical act for which an informed consent has not been properly obtained.
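The cooperation check amounts to comparing the acts recorded or planned in the electronic medical record with the acts covered by informed-consent agreements. A minimal sketch follows; the data layout (plain lists of act names) is an illustrative assumption.

```python
# Minimal sketch of the medical record cooperation unit 217d: medical acts
# recorded or planned in the electronic medical record data are checked
# against the acts for which informed consent was obtained (IC management DB).
# Acts without a matching agreement would trigger a warning to the user.

def unconsented_acts(emr_acts, agreed_acts):
    """Return the acts present in the EMR that have no informed consent."""
    return sorted(set(emr_acts) - set(agreed_acts))

emr = ["blood test", "surgery", "mri scan"]      # from electronic medical record data 51d
agreed = ["blood test", "mri scan"]              # from IC management DB 221
warnings = unconsented_acts(emr, agreed)
print(warnings)
```

A non-empty result corresponds to the warning case described above: a medical act performed or planned without a properly obtained informed consent.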
　At least part of the IC management DB 221 may be included in the electronic medical record data 51d, and may be stored in the electronic medical record system 5d as part of the electronic medical record data 51d. Likewise, at least part of the data managed by the IC management DB 221 (for example, at least one of the voice data, the text data, and the classification data indicating the classification result of the classification unit 212 described above) may be included in the electronic medical record data 51d, and may be stored in the electronic medical record system 5d as part of the electronic medical record data 51d.
　(4-5) Fifth Modification
　Next, a conversation support system SYSe according to a fifth modification will be described. The conversation support system SYSe of the fifth modification differs from the conversation support system SYS described above in that it includes a conversation support device 2e in place of the conversation support device 2. The other features of the conversation support system SYSe may be identical to those of the conversation support system SYS. The conversation support device 2e of the fifth modification is therefore described below with reference to FIG. 16, which is a block diagram showing the configuration of the conversation support device 2e.
　As shown in FIG. 16, the conversation support device 2e of the fifth modification differs from the conversation support device 2 described above in that it includes a classification unit 212e in place of the classification unit 212, and in that it includes a learning unit 214e. The other features of the conversation support device 2e may be identical to those of the conversation support device 2. At least one of the conversation support devices 2a to 2d of the first to fourth modifications described above may include the classification unit 212e in place of the classification unit 212.
 The classification unit 212e differs from the classification unit 212 in that it includes at least two classification units 2121e, each of which can classify a conversation text into at least one of the plurality of categories. In the example shown in FIG. 16, the classification unit 212e includes a classification unit 2121e-1 and a classification unit 2121e-2. The other features of the classification unit 212e may be the same as those of the classification unit 212.
 The classification unit 2121e-1 may classify conversation texts using any method, while the classification unit 2121e-2 may classify conversation texts using a method different from the one used by the classification unit 2121e-1. As one example, the classification unit 2121e-1 may classify conversation texts using the rule-based method described above, and the classification unit 2121e-2 may classify conversation texts using the classification model described above (for example, a learning model using a neural network). In this case, the classification unit 2121e-2 may classify conversation texts that the classification unit 2121e-1 was unable to classify.
 The learning unit 214e learns from the classification results of the classification unit 2121e-1, and its learning results are reflected in the classification unit 2121e-2. For this reason, the classification unit 2121e-2 preferably classifies conversation texts using a learning model (for example, a learning model using a neural network) created through the learning performed by the learning unit 214e. As a result, because the classification unit 2121e-2 can use a learning model that reflects the classification results of the classification unit 2121e-1, it can classify conversation texts with relatively higher accuracy than the classification unit 2121e-1.
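The two-stage arrangement described above (a rule-based first classifier whose outputs serve as teacher data for a learned second classifier, which then covers the texts the rules miss) can be sketched as follows. The keyword rules, category names, and the simple bag-of-words model standing in for the neural-network learning model are all illustrative assumptions, not part of the disclosed embodiment:

```python
from collections import Counter, defaultdict

# Hypothetical keyword rules for the first-stage classifier (2121e-1).
RULES = {
    "symptom": ["pain", "fever"],
    "treatment": ["surgery", "medication"],
}

def rule_classify(text):
    """Rule-based first stage: returns a category, or None if no rule matches."""
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return None

class LearnedClassifier:
    """Stand-in for the learned second stage (2121e-2): a bag-of-words
    nearest-centroid model trained on the first stage's results."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)

    def learn(self, labeled_texts):
        # Role of the learning unit 214e: absorb (text, category) teacher data.
        for text, category in labeled_texts:
            self.word_counts[category].update(text.split())

    def classify(self, text):
        def score(category):
            counts = self.word_counts[category]
            return sum(counts[w] for w in text.split())
        return max(self.word_counts, key=score)

texts = [
    "the patient reports pain in the knee",
    "we propose surgery next month",
    "the knee has been hurting for weeks",   # no rule keyword matches
]

# First stage: rule-based classification; its results become teacher data.
labeled = [(t, rule_classify(t)) for t in texts if rule_classify(t)]

# Learning unit: train the second stage on the first stage's output.
model = LearnedClassifier()
model.learn(labeled)

# Second stage handles whatever the rules could not classify.
for t in texts:
    category = rule_classify(t) or model.classify(t)
    print(t, "->", category)
```

Here the unclassifiable third text is assigned "symptom" because its words overlap the rule-labeled symptom examples, illustrating how the learned stage generalizes beyond the rules.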
 Even when the conversation support system SYS includes only a single classification unit 212, the learning unit 214e may learn from the classification results of the classification unit 212, with its learning results reflected in the classification unit 212. In this case, the learning performed by the learning unit 214e can be expected to progressively improve the classification accuracy of the classification unit 212.
 (4-6) Other Modifications
 The classification unit 212 may correct the classification result of a conversation text. For example, the classification unit 212 may correct the classification result of a conversation text based on a user instruction for correcting that result. In this case, the display control unit 213 may control the display device 3 to display the IC support screen 31 including a GUI 313 for correcting the classification result of a conversation text. An example of the IC support screen 31 including the GUI 313 is shown in FIG. 17. In the example shown in FIG. 17, the GUI 313 includes a pull-down menu 3131 that is displayed in correspondence with a category shown on the IC support screen 31 (for example, a category displayed in association with a conversation text on the conversation display screen 311) and that allows the user to select the category into which the conversation text should be classified. In this case, the user may change the category of a desired conversation text using the input device 4. As a result, the classification unit 212 may change the category into which the conversation text is classified to the category selected by the user.
 When the classification result of a conversation text is corrected, the classification unit 212 may learn the content of that correction. The classification unit 212 may learn the corrections made to classification results and classify conversation texts based on the results of that learning. As a result, the classification unit 212 can classify conversation texts with relatively higher accuracy than when the corrections are not learned. In this case, the classification unit 212 may include a learning model (for example, a learning model using a neural network) that can be trained using the corrections to classification results (for example, the user instructions correcting the classification results) as teacher data.
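The use of user corrections as teacher data can be sketched as follows. The word-overlap model below is an illustrative stand-in for the learning model described above; the category names and example sentences are assumptions:

```python
class CorrectableClassifier:
    """Sketch of a classifier that accumulates user corrections (e.g. choices
    made in the pull-down menu 3131) as teacher data and classifies new texts
    against them. A simple shared-word heuristic stands in for a real model."""
    def __init__(self):
        self.examples = []   # (text, corrected_category) teacher data

    def correct(self, text, category):
        """Record a user's correction of a classification result."""
        self.examples.append((text, category))

    def classify(self, text):
        # Pick the category of the stored example sharing the most words.
        def overlap(example):
            return len(set(text.split()) & set(example[0].split()))
        if not self.examples:
            return "uncategorized"
        best = max(self.examples, key=overlap)
        return best[1]

clf = CorrectableClassifier()
print(clf.classify("any text"))   # no teacher data yet: "uncategorized"

# Two user corrections arrive and are learned as teacher data.
clf.correct("risks of the surgery include infection", "treatment")
clf.correct("the patient reports chest pain", "symptom")

print(clf.classify("infection risks of this surgery"))   # "treatment"
```

Each correction immediately enriches the teacher data, so classification accuracy improves as the corrections accumulate, which is the effect described in the text.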
 The display control unit 213 may control the display device 3 to display the IC support screen 31 including a GUI for searching the conversation texts. An example of the IC support screen 31 including such a GUI is shown in FIG. 18. As shown in FIG. 18, the IC support screen 31 (the conversation display screen 311 in FIG. 18) may include, as the GUI for searching the conversation texts, a text box 3112 for entering a search term. In this case, the IC support screen 31 (the conversation display screen 311 in FIG. 18) may display the conversation texts containing the term entered in the text box 3112. FIG. 18 shows the IC support screen 31 (the conversation display screen 311 in FIG. 18) displaying the conversation texts containing the word "complication" when the word "complication" has been entered in the text box 3112.
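The filtering behaviour of the search text box 3112 amounts to a substring match over the stored conversation texts; a minimal sketch (the example sentences are illustrative):

```python
def search_conversation_texts(conversation_texts, query):
    """Return the conversation texts containing the query term,
    mirroring the text-box 3112 behaviour described above."""
    return [t for t in conversation_texts if query in t]

texts = [
    "possible complications include bleeding",
    "the operation takes two hours",
]
print(search_conversation_texts(texts, "complication"))
# → ['possible complications include bleeding']
```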
 The conversation support system SYS may include a video recording device (for example, a video camera) capable of recording video of at least one of the medical worker and the patient conversing to obtain informed consent. The video data showing the video recorded by the video recording device may be used as evidence of informed consent. The video data recorded by the video recording device may be stored in the storage device 22. At this time, information about the video data stored in the storage device 22 may be registered in the IC management DB 221. Information for preventing tampering with the video data (for example, at least one of a time stamp and an electronic signature) may be attached to the video data. In addition, at least one of the medical worker and the patient may specify whether or not to record video with the video recording device when registering the initial information described above (step S11 in FIG. 4).
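One possible way to attach tamper-detection information to the video data is sketched below. It binds a time stamp to a hash of the video bytes with an HMAC; a production system would more likely use a time-stamping authority and public-key electronic signatures, and the secret key here is purely illustrative:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"illustrative-secret"   # assumption: key management is out of scope

def seal_video_data(video_bytes):
    """Attach a time stamp and an HMAC-based signature to video data so that
    later tampering with the bytes (or the time stamp) can be detected."""
    timestamp = int(time.time())
    digest = hashlib.sha256(video_bytes).hexdigest()
    tag = hmac.new(SECRET_KEY, f"{digest}:{timestamp}".encode(),
                   hashlib.sha256).hexdigest()
    return {"timestamp": timestamp, "digest": digest, "signature": tag}

def verify_video_data(video_bytes, seal):
    """Recompute the signature and compare in constant time."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    expected = hmac.new(SECRET_KEY, f"{digest}:{seal['timestamp']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, seal["signature"])

video = b"...raw video bytes..."
seal = seal_video_data(video)
print(verify_video_data(video, seal))          # unmodified data verifies
print(verify_video_data(video + b"x", seal))   # modified data fails
```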
 In the description above, the conversation support system SYS supports the conversation between a medical worker and a patient in an agreement acquisition process for fully explaining to the patient an action to be performed on the patient by the medical worker (for example, a medical action) and obtaining an agreement on that action between the medical worker and the patient. However, the conversation support system SYS may, in the same manner as when supporting a conversation between a medical worker and a patient, support a conversation between any first person and any second person in an agreement acquisition process for reaching an agreement on a desired matter between them. Alternatively, in addition to or instead of supporting the conversation between the first person and the second person in an agreement acquisition process, the conversation support system SYS may support any conversation between the first person and the second person. That is, the conversation support system SYS is not limited to agreement acquisition processes and may be used in any situation in which a conversation takes place. For example, the conversation support system SYS may support a conversation between a first person and a second person in a process in which the first person explains a desired matter to the second person. For example, the conversation support system SYS may convert voice data representing the conversation between the first person and the second person into text data, classify each of a plurality of conversation texts obtained by subdividing the sentences indicated by the text data into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the process of explaining the desired matter, and, based on the classification result of the classification unit 212, display a support screen (for example, a screen similar to the IC support screen 31 shown in FIG. 6) for supporting the conversation between the first person and the second person. In this case, the first person can determine whether the explanation of the desired matter given to the second person is insufficient. That is, the first person can determine whether he or she has sufficiently fulfilled the obligation to explain the desired matter to the second person. As a result, the conversation support system SYS can reduce the possibility that the explanation given by one of the first and second persons to the other will be incomplete. Another example of a situation in which a conversation between a first person and a second person is supported is one in which a contract (for example, at least one of a real-estate contract and a financial contract) is concluded between the first person and the second person. In this case, the conversation support system SYS may support the conversation between the first person and the second person in a process in which the first person explains the contract contents to the second person. Another example is a situation in which a police officer questions a suspect; in this case, the conversation support system SYS may support the conversation between the police officer and the suspect during the questioning. Yet another example is a court hearing; in this case, the conversation support system SYS may support a conversation (for example, oral argument) conducted between at least two of a judge, a prosecutor, a lawyer, a plaintiff, a defendant, and a witness.
 (5) Additional Notes
 The following additional notes are further disclosed with respect to the embodiments described above.
[Appendix 1]
 A conversation support device comprising:
 a classification means for classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an action to be performed on the patient by the medical worker and obtaining an agreement on the action between the medical worker and the patient, into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and
 a display control means for controlling a display device to display at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
[Appendix 2]
 The conversation support device according to Appendix 1, wherein the display control means controls the display device to display at least some of the plurality of conversation texts grouped by the categories into which they are classified.
[Appendix 3]
 The conversation support device according to Appendix 1 or 2, wherein the display control means controls the display device to display the conversation texts classified into one category, among the plurality of categories, designated by at least one of the medical worker and the patient.
[Appendix 4]
 The conversation support device according to any one of Appendices 1 to 3, further comprising a conversation warning means for warning, based on the result of classifying the plurality of conversation texts by the classification means, that the conversation required in the agreement acquisition process is insufficient.
[Appendix 5]
 The conversation support device according to Appendix 4, wherein the conversation warning means warns that the conversation relating to one category of the plurality of categories is insufficient when the number of conversation texts classified into the one category falls below a predetermined threshold set for the one category.
[Appendix 6]
 The conversation support device according to Appendix 5, wherein the predetermined threshold is set based on the content of conversations conducted in the past between the medical worker and the patient in relation to the one category in the agreement acquisition process.
[Appendix 7]
 The conversation support device according to any one of Appendices 4 to 6, wherein the conversation warning means warns that the conversation relating to one category of the plurality of categories is insufficient when no conversation text is classified into the one category.
[Appendix 8]
 The conversation support device according to any one of Appendices 1 to 7, wherein the display control means further displays an indicator showing the number of conversation texts classified into one category of the plurality of categories.
[Appendix 9]
 The conversation support device according to any one of Appendices 1 to 8, wherein the plurality of categories include at least one of: a category of utterance content referring to the purpose of the agreement acquisition process; a category of utterance content referring to the patient's symptoms or medical condition; a category of utterance content referring to an examination or treatment to be performed on the patient; a category of utterance content referring to a clinical trial or study involving the patient; a category of utterance content referring to the patient's opinion; a category of utterance content referring to the presence or absence of an agreement between the medical worker and the patient; and a category of utterance content referring to the future medical policy for the patient.
[Appendix 10]
 The conversation support device according to any one of Appendices 1 to 9, further comprising a generation means for (i) learning the medical worker's instructions designating that at least one of the plurality of conversation texts be included in summary information summarizing the content of the conversation in the agreement acquisition process, and (ii) generating the summary information based on the result of learning the medical worker's instructions.
[Appendix 11]
 The conversation support device according to Appendix 10, wherein the generation means learns the medical worker's instructions and, based on the result of learning the medical worker's instructions, recommends to the medical worker at least one of the plurality of conversation texts as a conversation text to be included in the summary information.
[Appendix 12]
 The conversation support device according to any one of Appendices 1 to 11, further comprising a presentation means for presenting, to the medical worker present at the agreement acquisition process for one type of action, a schedule of the conversation to be conducted in the agreement acquisition process for the one type of action, based on the content of conversations conducted in the past between the medical worker and the patient in the agreement acquisition process for the one type of action.
[Appendix 13]
 The conversation support device according to any one of Appendices 1 to 12, further comprising:
 a storage means for storing agreement-related data obtained in the agreement acquisition process; and
 a medical-record cooperation means for warning, based on the agreement-related data stored in the storage means and electronic medical record data electronically managing the patient's medical record, that an action on which the medical worker and the patient have not reached an agreement in the agreement acquisition process is being performed or is scheduled to be performed on the patient.
[Appendix 14]
 The conversation support device according to any one of Appendices 1 to 13, wherein the classification means includes a first classification unit that classifies each of the plurality of conversation texts into at least one of the plurality of categories, and a second classification unit that learns teacher data including the classification results of the first classification unit and classifies each of the plurality of conversation texts into at least one of the plurality of categories based on the result of learning the teacher data.
[Appendix 15]
 A conversation support system comprising:
 a conversation recording device that records the content of a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an action to be performed on the patient by the medical worker and obtaining an agreement on the action between the medical worker and the patient;
 the conversation support device according to any one of Appendices 1 to 14; and
 the display device.
[Appendix 16]
 A conversation support method comprising:
 classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an action to be performed on the patient by the medical worker and obtaining an agreement on the action between the medical worker and the patient, into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and
 displaying at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
[Appendix 17]
 A recording medium on which is recorded a computer program that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient, the conversation support method comprising:
 classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of a conversation between the medical worker and the patient in an agreement acquisition process for explaining to the patient an action to be performed on the patient by the medical worker and obtaining an agreement on the action between the medical worker and the patient, into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and
 displaying at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
[Appendix 18]
 A computer program that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient, the conversation support method comprising:
 classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of a conversation between the medical worker and the patient in an agreement acquisition process for explaining to the patient an action to be performed on the patient by the medical worker and obtaining an agreement on the action between the medical worker and the patient, into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and
 displaying at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
[Appendix 19]
 A conversation support device comprising:
 a classification means for classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of a conversation in which a first person explains a predetermined matter to a second person, into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the explanation of the predetermined matter; and
 a display control means for controlling a display device to display at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
 The present invention may be modified as appropriate within the scope of the claims and within a scope not contrary to the gist or concept of the invention that can be read from the entire specification, and conversation support devices, conversation support systems, conversation support methods, computer programs, and recording media accompanied by such modifications are also included in the technical concept of the present invention.
 SYS  Conversation support system
 1  Recording device
 2, 2a, 2b, 2c, 2d, 2e  Conversation support device
 21  CPU
 211  Text conversion unit
 212, 212e, 2121e, 2122e  Classification unit
 213  Display control unit
 214a  Warning unit
 215b  Summary output unit
 216c  Schedule presentation unit
 217d  Medical record cooperation unit
 22  Storage device
 221  IC management DB
 3  Display device
 4  Input device
 5d  Electronic medical record system
 51d  Electronic medical record data

Claims (17)

  1.  A conversation support device comprising:
     a classification means for classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an action to be performed on the patient by the medical worker and obtaining an agreement on the action between the medical worker and the patient, into at least one of a plurality of categories that are distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and
     a display control means for controlling a display device to display at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
  2.  The conversation support device according to claim 1, wherein the display control means controls the display device to display at least some of the plurality of conversation texts grouped by the categories into which they are classified.
  3.  The conversation support device according to claim 1 or 2, wherein the display control means controls the display device to display the conversation texts classified into one category, among the plurality of categories, designated by at least one of the medical worker and the patient.
  4.  The conversation support device according to any one of claims 1 to 3, further comprising a conversation warning means for warning, based on the result of classifying the plurality of conversation texts by the classification means, that the conversation required in the agreement acquisition process is insufficient.
  5.  The conversation support device according to claim 4, wherein the conversation warning means warns that the conversation relating to one category of the plurality of categories is insufficient when the number of conversation texts classified into the one category falls below a predetermined threshold set for the one category.
  6.  The conversation support device according to claim 5, wherein the predetermined threshold is set based on the content of conversations conducted in the past between the medical worker and the patient in relation to the one category in the agreement acquisition process.
  7.  The conversation support device according to any one of claims 4 to 6, wherein the conversation warning means warns that the conversation regarding one category among the plurality of categories is insufficient when no conversation text has been classified into the one category.
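The warning logic of claims 4 to 7 amounts to a per-category count check: warn when a category has no conversation texts at all (claim 7), or fewer than its per-category threshold (claim 5, with claim 6 deriving the thresholds from past conversations). A minimal sketch, with hypothetical category names and thresholds:

```python
from collections import Counter

def conversation_warnings(classified_texts, thresholds):
    """Return warnings for categories whose conversation appears insufficient.

    classified_texts: list of (text, category) pairs from the classification means.
    thresholds: dict mapping each required category to its minimum text count
                (claim 6 suggests deriving these from past agreement conversations).
    """
    counts = Counter(category for _, category in classified_texts)
    warnings = []
    for category, minimum in thresholds.items():
        n = counts.get(category, 0)
        if n == 0:  # claim 7: no text classified into this category at all
            warnings.append(f"No conversation about '{category}' yet")
        elif n < minimum:  # claim 5: below the per-category threshold
            warnings.append(f"Conversation about '{category}' may be insufficient ({n}/{minimum})")
    return warnings

# Hypothetical classified conversation texts
texts = [("We will remove the tumor", "treatment"),
         ("Your symptoms suggest ...", "symptoms"),
         ("Do you agree to the operation?", "agreement")]
print(conversation_warnings(texts, {"treatment": 2, "symptoms": 1, "purpose": 1}))
```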
  8.  The conversation support device according to any one of claims 1 to 7, wherein the display control means further displays an indicator of the number of conversation texts classified into one category among the plurality of categories.
  9.  The conversation support device according to any one of claims 1 to 8, wherein the plurality of categories include at least one of: a category of utterances referring to the purpose of the agreement acquisition process; a category of utterances referring to the patient's symptoms or medical condition; a category of utterances referring to an examination or treatment to be performed on the patient; a category of utterances referring to a clinical trial or study involving the patient; a category of utterances referring to the patient's opinion; a category of utterances referring to the presence or absence of an agreement between the medical worker and the patient; and a category of utterances referring to the future medical policy for the patient.
  10.  The conversation support device according to any one of claims 1 to 9, further comprising generation means that (i) learns past instructions of the medical worker designating that at least one of the plurality of conversation texts be included in summary information summarizing the content of the conversation in the agreement acquisition process, and (ii) generates the summary information based on the result of learning the past instructions of the medical worker.
  11.  The conversation support device according to claim 10, wherein the generation means learns the past instructions of the medical worker and, based on the result of learning the past instructions, recommends at least one of the plurality of conversation texts to the medical worker as a conversation text to be included in the summary information.
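Claims 10 and 11 describe learning from the medical worker's past include/exclude decisions and then recommending texts for the summary. The specification does not disclose a particular learning method; the following is a deliberately naive word-frequency sketch of the idea, with all data hypothetical.

```python
from collections import Counter

class SummaryRecommender:
    """Learns which conversation texts a medical worker tends to include in
    summary information (claims 10-11) from past include/exclude decisions,
    then recommends candidate texts. A frequency-ratio sketch only, not the
    patented method."""

    def __init__(self):
        self.included = Counter()  # word counts from texts the worker included
        self.excluded = Counter()  # word counts from texts the worker excluded

    def learn(self, text, was_included):
        target = self.included if was_included else self.excluded
        target.update(text.lower().split())

    def score(self, text):
        words = text.lower().split()
        # +1 smoothing; higher score = more similar to previously included texts
        return sum((self.included[w] + 1) / (self.excluded[w] + 1) for w in words) / len(words)

    def recommend(self, texts, top_n=1):
        return sorted(texts, key=self.score, reverse=True)[:top_n]

rec = SummaryRecommender()
rec.learn("the surgery carries a small risk of infection", True)   # past: included
rec.learn("please take a seat", False)                             # past: excluded
print(rec.recommend(["the surgery takes two hours", "please close the door"]))
```

A production system would more plausibly use a trained text classifier over utterance embeddings, but the learn-then-recommend loop is the same.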
  12.  The conversation support device according to any one of claims 1 to 11, further comprising presentation means that, based on the content of conversations conducted in the past between the medical worker and the patient in the agreement acquisition process for one type of act, presents to the medical worker attending the agreement acquisition process for the one type of act a schedule of conversations to be conducted in that agreement acquisition process.
  13.  The conversation support device according to any one of claims 1 to 12, further comprising: storage means for storing agreement-related data obtained in the agreement acquisition process; and medical record cooperation means for warning, based on the agreement-related data stored in the storage means and electronic medical record data that electronically manages the patient's medical record, that an act on which the medical worker and the patient have not reached an agreement in the agreement acquisition process has been performed, or is scheduled to be performed, on the patient.
  14.  The conversation support device according to any one of claims 1 to 13, wherein the classification means includes: a first classification unit that classifies each of the plurality of conversation texts into at least one of the plurality of categories; and a second classification unit that learns teacher data including the classification results of the first classification unit, and classifies each of the plurality of conversation texts into at least one of the plurality of categories based on the result of learning the teacher data.
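The two-stage arrangement of claim 14 (a first classification unit whose outputs become teacher data for a learned second unit) can be sketched as a keyword-rule labeler bootstrapping a statistical classifier. The rules, categories, and sample utterances below are hypothetical; the claim does not restrict either unit to these techniques.

```python
import math
from collections import Counter, defaultdict

# First classification unit (claim 14): simple keyword rules.
RULES = {
    "symptoms": ["pain", "symptom", "condition"],
    "treatment": ["surgery", "treatment", "medication"],
    "agreement": ["agree", "consent"],
}

def rule_classify(text):
    words = text.lower().split()
    for category, keywords in RULES.items():
        if any(k in words for k in keywords):
            return category
    return "other"

# Second classification unit: a tiny naive Bayes classifier trained on
# teacher data that includes the first unit's classification results.
class NaiveBayes:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for w in text.lower().split():
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, text):
        def log_prob(label):
            # Laplace-smoothed multinomial likelihood plus a count-based prior
            total = sum(self.word_counts[label].values()) + len(self.vocab)
            score = math.log(self.label_counts[label])
            for w in text.lower().split():
                score += math.log((self.word_counts[label][w] + 1) / total)
            return score
        return max(self.label_counts, key=log_prob)

# Hypothetical conversation texts; unit 1 produces the teacher labels.
texts = ["You may feel pain after surgery",
         "The treatment uses a new medication",
         "Do you agree with this plan",
         "I consent to the operation"]
teacher_labels = [rule_classify(t) for t in texts]
model = NaiveBayes().fit(texts, teacher_labels)  # unit 2 learns the teacher data
```

The second unit can then generalize to utterances the keyword rules miss, which is presumably the point of layering a learned classifier over the rule-based one.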
  15.  A conversation support system comprising: a conversation recording device that records the content of a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an act to be performed on the patient by the medical worker and obtaining an agreement on the act between the medical worker and the patient; the conversation support device according to any one of claims 1 to 14; and the display device.
  16.  A conversation support method comprising: classifying each of a plurality of conversation texts, obtained by subdividing a text representing a conversation between a medical worker and a patient in an agreement acquisition process for explaining to the patient an act to be performed on the patient by the medical worker and obtaining an agreement on the act between the medical worker and the patient, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and displaying at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
  17.  A recording medium on which is recorded a computer program that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient, the conversation support method comprising: classifying each of a plurality of conversation texts, obtained by subdividing a text representing the content of a conversation between the medical worker and the patient in an agreement acquisition process for explaining to the patient an act to be performed on the patient by the medical worker and obtaining an agreement on the act between the medical worker and the patient, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and displaying at least some of the plurality of conversation texts together with the category into which each conversation text is classified.
PCT/JP2019/051090 2019-12-26 2019-12-26 Conversation assistance device, conversation assistance system, conversation assistance method, and recording medium WO2021130953A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021566679A JP7388450B2 (en) 2019-12-26 2019-12-26 Conversation support device, conversation support system, conversation support method, and recording medium
PCT/JP2019/051090 WO2021130953A1 (en) 2019-12-26 2019-12-26 Conversation assistance device, conversation assistance system, conversation assistance method, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/051090 WO2021130953A1 (en) 2019-12-26 2019-12-26 Conversation assistance device, conversation assistance system, conversation assistance method, and recording medium

Publications (1)

Publication Number Publication Date
WO2021130953A1 true WO2021130953A1 (en) 2021-07-01

Family

ID=76575827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/051090 WO2021130953A1 (en) 2019-12-26 2019-12-26 Conversation assistance device, conversation assistance system, conversation assistance method, and recording medium

Country Status (2)

Country Link
JP (1) JP7388450B2 (en)
WO (1) WO2021130953A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049971A (en) * 2021-10-11 2022-02-15 北京左医科技有限公司 Medical teaching method and medical teaching device based on doctor-patient conversation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003178158A (en) * 2001-12-07 2003-06-27 Canon Inc Third party evidential material saving type interrogation record printing service system
JP2005063162A (en) * 2003-08-13 2005-03-10 Takashi Suzuki Informed consent recording and management device
JP2010197643A (en) * 2009-02-25 2010-09-09 Gifu Univ Interactive learning system
JP2015138457A (en) * 2014-01-23 2015-07-30 キヤノン株式会社 Information processing device, information processing method and program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6979819B2 (en) 2017-07-24 2021-12-15 シャープ株式会社 Display control device, display control method and program
KR102365621B1 (en) 2017-10-20 2022-02-21 구글 엘엘씨 Capturing detailed structures in patient-physician conversations for use in clinical documentation


Also Published As

Publication number Publication date
JPWO2021130953A1 (en) 2021-07-01
JP7388450B2 (en) 2023-11-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19957691

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021566679

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19957691

Country of ref document: EP

Kind code of ref document: A1