WO2021130953A1 - Conversation support device, conversation support system, conversation support method, and recording medium


Info

Publication number
WO2021130953A1
Authority
WO
WIPO (PCT)
Prior art keywords
conversation
patient
text
agreement
category
Application number
PCT/JP2019/051090
Other languages
English (en)
Japanese (ja)
Inventor
孝之 近藤
利憲 細井
長谷川 武史
秀章 三澤
潤一郎 日下
有希 草野
Original Assignee
日本電気株式会社
Application filed by 日本電気株式会社
Priority to JP2021566679A (JP7388450B2)
Priority to PCT/JP2019/051090
Publication of WO2021130953A1



Classifications

    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 80/00 — ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Definitions

  • the present invention relates to a technical field of a conversation support device, a conversation support system, a conversation support method, and a recording medium for supporting a conversation between a medical worker and a patient.
  • In medical settings such as hospitals, various conversations take place between medical staff and patients. For example, a conversation for informed consent takes place between a healthcare professional and a patient. Devices for supporting such informed consent are described in Patent Documents 1 and 2. Other prior-art documents related to the present invention include Patent Documents 3 to 6.
  • Patent Document 1: Japanese Unexamined Patent Publication No. 2005-063162; Patent Document 2: Japanese Unexamined Patent Publication No. 2015-170120; Patent Document 3: Japanese Unexamined Patent Publication No. 2017-11755; Patent Document 4: International Publication No. 2019/038807; Patent Document 5: Japanese Unexamined Patent Publication No. 2017-049710; Patent Document 6: Japanese Unexamined Patent Publication No. 2017-11756
  • When obtaining informed consent, the medical staff are required to explain to the patient, without excess or omission, the actions (for example, medical actions) that the medical staff will perform on the patient.
  • However, the devices described in Patent Documents 1 to 6 are not intended to help the medical staff explain to the patient, without omission, the actions to be performed on the patient. Therefore, these devices have the technical problem that the possibility of omissions in the explanation from the medical staff to the patient is relatively high.
  • An object of the present invention is to provide a conversation support device, a conversation support system, a conversation support method, and a computer program capable of solving the above-mentioned technical problems.
  • An example of a conversation support device for solving the problem classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of the conversation between a medical worker and a patient in an agreement acquisition process (a process in which the medical worker explains to the patient an action to be performed on the patient and the medical worker and the patient reach an agreement about the action), into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
  • An example of a conversation support system for solving the problem includes a conversation recording device that records the content of the conversation between the medical worker and the patient in the agreement acquisition process (the process of explaining to the patient the action the healthcare worker will perform on the patient and obtaining an agreement between the healthcare worker and the patient about the action), the conversation support device described above, and a display device.
  • An example of a conversation support method for solving the problem classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of the conversation between the medical staff and the patient in the agreement acquisition process, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
  • An example of a recording medium for solving the problem is a recording medium on which is recorded a computer program that causes a computer to execute a conversation support method for supporting a conversation between a medical worker and a patient. The conversation support method classifies each of a plurality of conversation texts, obtained by subdividing text indicating the content of the conversation between the medical worker and the patient in the process of obtaining an agreement (in which the medical worker explains to the patient the action to be performed and the medical worker and the patient reach an agreement about the action), into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in that process, and displays at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
  • According to the conversation support device, conversation support system, conversation support method, and recording medium described above, it is possible to reduce the possibility of omissions in the explanation from the medical staff to the patient.
  • FIG. 1 is a block diagram showing an overall configuration of the conversation support system of the present embodiment.
  • FIG. 2 is a block diagram showing a configuration of the conversation support device of the present embodiment.
  • FIG. 3 is a data structure diagram showing an example of the data structure of the IC management DB.
  • FIG. 4 is a flowchart showing the flow of the conversation support operation performed by the conversation support device.
  • FIG. 5 is a plan view showing an example of a GUI for accepting input of initial information.
  • FIG. 6 is a plan view showing an example of the IC support screen.
  • FIG. 7 is a block diagram showing the configuration of the conversation support device of the first modification.
  • FIG. 8 is a plan view showing an example of a warning screen warning that the conversation required for informed consent is insufficient.
  • FIG. 9 is a plan view showing an example of an IC support screen including an index showing the number of conversation texts classified into one category.
  • FIG. 10 is a block diagram showing a configuration of a conversation support device of the second modification.
  • FIG. 11 is an explanatory diagram showing an example of summary information.
  • FIG. 12 is a plan view showing an example of an IC support screen including a GUI for designating at least a part of a plurality of conversation texts to be included in the summary information.
  • FIG. 13 is a block diagram showing the configuration of the conversation support device of the third modification.
  • FIG. 14 is a block diagram showing the overall configuration of the conversation support system of the fourth modification.
  • FIG. 15 is a block diagram showing a configuration of a conversation support device of the fourth modification.
  • FIG. 16 is a block diagram showing a configuration of a conversation support device of the fifth modification.
  • FIG. 17 is a plan view showing an example of an IC support screen including a GUI for correcting the classification result of the conversation text.
  • FIG. 18 is a plan view showing an example of an IC support screen including a GUI for searching conversation text.
  • FIG. 1 is a block diagram showing a configuration of the conversation support system SYS of the present embodiment.
  • the conversation support system SYS supports conversations between medical staff and patients.
  • For example, the conversation support system SYS may support the conversation in a situation where the medical staff explains to the patient the actions (for example, medical actions) to be performed on the patient.
  • the conversation support system SYS may support conversation for the purpose of confirming whether or not sufficient explanation has been given to the patient by the medical staff.
  • For example, the conversation support system SYS may support the conversation between the healthcare professional and the patient in the process of obtaining an agreement, in which the medical worker explains to the patient an action (for example, a medical action) to be performed on the patient and the medical worker and the patient reach an agreement about the action.
  • An example of a conversation between a healthcare professional and a patient in the process of obtaining such an agreement is a conversation for obtaining informed consent (IC).
  • Informed consent in the medical field means an agreement reached between the medical staff and the patient after sufficient information has been provided.
  • In the following description, the conversation support system SYS supports the conversation between the medical staff and the patient in the process of obtaining informed consent.
  • The "medical worker" in the present embodiment may include any person engaged in medical care, which is an activity aimed at at least one of treatment of illness, prevention of illness, maintenance of health, recovery of health, and promotion of health.
  • a healthcare professional may include a person who is capable of performing medical practice independently. At least one of a doctor, dentist and midwife is an example of a person who can perform medical practice independently.
  • a healthcare professional may include a person who is capable of performing medical practice under the direction of a superior person (eg, a doctor or dentist).
  • At least one of a nurse, a pharmacist, a clinical laboratory technician, a radiologist, and a physiotherapist is an example of a person who can perform medical practice under the instruction of a superior person.
  • A healthcare professional may include a person who performs procedures at a practitioner's office (for example, at least one of an acupuncture and moxibustion clinic, an osteopathic clinic, and a bonesetting clinic).
  • An example of such a person is at least one of a masseur, an acupuncturist, a moxibustionist, and a judo therapist.
  • a health care worker may include a person engaged in health services.
  • a public health nurse is an example of a person engaged in health work.
  • a health care worker may include a person engaged in welfare work.
  • An example of a person engaged in welfare work is at least one of a social worker, a child welfare worker, a mental health worker, a clinical psychologist and a clinical development psychologist.
  • a health care worker may include a person engaged in long-term care work.
  • An example of a person engaged in long-term care work is at least one of a long-term care worker, a home-visit caregiver, a long-term care support specialist, and a home helper.
  • The "patient" in the present embodiment may include any person who receives medical care, which is an activity aimed at at least one of treatment of illness, prevention of illness, maintenance of health, recovery of health, and promotion of health. Depending on the patient's condition, the patient may not be able to communicate. In this case, the patient's agent (for example, a relative, guardian, or assistant) usually speaks with the healthcare professional on behalf of the patient. Therefore, the "patient" in the present embodiment may also include the patient's agent.
  • the conversation support system SYS may support conversations between one healthcare professional and one patient.
  • the conversation support system SYS may support conversations between multiple healthcare professionals and a single patient.
  • the conversation support system SYS may support conversations between one healthcare professional and multiple patients.
  • the conversation support system SYS may support conversations between a plurality of healthcare professionals and a plurality of patients.
  • As shown in FIG. 1, the conversation support system SYS includes a recording device 1, a conversation support device 2, a display device 3, and an input device 4.
  • the recording device 1 is a device that records a conversation between a medical worker and a patient. By recording the conversation between the medical staff and the patient, the recording device 1 generates voice data indicating the content of the conversation between the medical staff and the patient by voice. Therefore, the recording device 1 may include, for example, a microphone and a data processing device that converts a conversation recorded by the microphone as an analog electronic signal into digital audio data. As an example, the recording device 1 may be an information terminal (for example, a smartphone) having a built-in microphone. The recording device 1 outputs the generated voice data to the conversation support device 2.
  • the conversation support device 2 uses the voice data generated by the recording device 1 to perform a conversation support operation for supporting the conversation between the medical staff and the patient.
  • The conversation support operation may include an operation of supporting the conversation between the medical staff and the patient so that the explanation given by the medical staff to the patient when obtaining informed consent is not omitted.
  • For example, the conversation support operation may include an operation of supporting the conversation between the medical staff and the patient so that the medical staff gives a sufficient explanation to the patient when obtaining informed consent.
  • FIG. 2 is a block diagram showing the configuration of the conversation support device 2.
  • the conversation support device 2 includes a CPU (Central Processing Unit) 21, a storage device 22, and an input / output IF (Interface) 23.
  • the CPU 21 reads a computer program.
  • the CPU 21 may read a computer program stored in the storage device 22.
  • the CPU 21 may read a computer program stored in a computer-readable recording medium using a recording medium reading device (not shown).
  • The CPU 21 may acquire a computer program from a device (not shown) arranged outside the conversation support device 2 via a communication device (not shown) (that is, it may download or read the program).
  • the CPU 21 executes the read computer program.
  • a logical functional block for executing an operation to be performed by the conversation support device 2 (for example, the conversation support operation described above) is realized in the CPU 21. That is, the CPU 21 can function as a controller for realizing a logical functional block for executing the operation to be performed by the conversation support device 2.
  • FIG. 2 shows an example of a logical functional block realized in the CPU 21 to execute a conversation support operation.
  • a text conversion unit 211, a classification unit 212, and a display control unit 213 are realized in the CPU 21.
  • The operations of the text conversion unit 211, the classification unit 212, and the display control unit 213 will be described in detail later with reference to FIG. 3 and the subsequent figures; here, only an outline is given.
  • the text conversion unit 211 converts the voice data transmitted from the recording device 1 into text data.
  • The classification unit 212 classifies each of the plurality of conversation texts, obtained by subdividing the sentences indicated by the text data, into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the process of obtaining informed consent.
  • The display control unit 213 controls the display device 3 to display the IC support screen 31 (see FIG. 6, described later) for supporting the conversation between the medical staff and the patient based on the classification result of the classification unit 212.
  • the recording device 1 itself may include a text conversion unit that converts the voice data recorded by the recording device 1 into text data.
  • the recording device 1 may transmit text data to the conversation support device 2 in addition to or instead of the voice data.
  • The conversation support device 2 may include, in addition to or instead of the text conversion unit 211, a data acquisition unit that acquires the text data transmitted by the recording device 1, realized as a logical functional block in the CPU 21.
  • the conversation support device 2 does not have to include the text conversion unit 211.
  • the conversation support device 2 may be an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical worker.
  • the conversation support device 2 may be a server installed in the facility where the medical staff is working.
  • the conversation support device 2 may be a server (so-called cloud server) installed outside the facility where the medical staff is working.
  • the storage device 22 can store desired data.
  • the storage device 22 may temporarily store a computer program executed by the CPU 21.
  • the storage device 22 may temporarily store data temporarily used by the CPU 21 when the CPU 21 is executing a computer program.
  • the storage device 22 may store data stored by the conversation support device 2 for a long period of time.
  • The storage device 22 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device.
  • the storage device 22 stores the IC management DB (DataBase) 221 for managing the informed consent that is the support target of the conversation support operation.
  • The IC management DB 221 contains one record per acquired informed consent, each record including information regarding the contents of that informed consent.
  • FIG. 3 shows an example of the data structure of the IC management DB 221. As shown in FIG. 3, each record includes, for example, information indicating an identification number (ID) for identifying the record and information regarding the informed consent.
  • the IC management DB 221 may be referred to as agreement-related data.
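  • As a concrete illustration (not specified by the embodiment), one record of the IC management DB 221 could be modeled as in the following sketch; all field names beyond the identification number are assumptions.

```python
from dataclasses import dataclass, field

# A minimal sketch of one IC management DB 221 record. The embodiment only
# specifies an identification number (ID) and information regarding the
# informed consent; the remaining field names are illustrative assumptions.
@dataclass
class ICRecord:
    record_id: str                                        # identification number (ID)
    title: str                                            # IC title (text box 321)
    clinician: str = ""                                   # medical worker obtaining consent
    patient: str = ""                                     # patient giving consent
    categories: list[str] = field(default_factory=list)   # categories chosen in step S11
    conversation_texts: list[dict] = field(default_factory=list)  # classified conversation texts
```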
  • The input/output IF 23 is a device that transmits and receives data between the conversation support device 2 and devices external to the conversation support device 2 (for example, at least one of the recording device 1, the display device 3, and the input device 4). The conversation support device 2 transmits data to such external devices via the input/output IF 23, and receives data transmitted from them via the input/output IF 23.
  • the display device 3 is an output device (that is, a display) capable of displaying desired information.
  • the display device 3 displays the IC support screen 31 under the control of the display control unit 213.
  • the display device 3 may be a display provided in an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical professional.
  • the display device 3 may be a display that can be visually recognized by both the medical staff and the patient.
  • the conversation support system SYS may separately include a display device 3 that can be visually recognized by the medical staff and a display device 3 that can be visually recognized by the patient. That is, the conversation support system SYS may include a plurality of display devices 3. In this case, the information displayed on one display device 3 may be the same as or different from the information displayed on the other display devices 3.
  • the input device 4 is a device that receives an input operation from a user of the conversation support device 2 (for example, at least one of a medical worker and a patient).
  • the input device 4 may include, for example, a user-operable operating device.
  • the input device 4 may include, for example, at least one of a keyboard, a mouse, and a touch panel as an example of the operating device.
  • the input device 4 may be an operating device included in an information terminal (for example, at least one of a personal computer and a tablet computer) used by a medical professional.
  • the input device 4 may be an operating device that can be operated by both the medical staff and the patient.
  • the conversation support system SYS may separately include an input device 4 that can be operated by the medical staff and an input device 4 that can be operated by the patient. That is, the conversation support system SYS may include a plurality of input devices 4.
  • FIG. 4 is a flowchart showing the flow of the conversation support operation performed by the conversation support device 2.
  • the conversation support device 2 accepts the input of the initial information regarding the informed consent, and registers the received initial information in the IC management DB 221 (step S11). However, when the initial information has already been input (for example, the initial information has been registered in the IC management DB 221), the conversation support device 2 does not have to perform the operation of step S11.
  • For example, the display control unit 213 of the conversation support device 2 may control the display device 3 to display a GUI (Graphical User Interface) 32 for receiving input of the initial information regarding the informed consent.
  • The user of the conversation support device 2 (for example, at least one of the medical staff and the patient) may input the initial information using the input device 4 while referring to the GUI 32 displayed on the display device 3.
  • the GUI 32 may include a GUI for accepting input of information included in the IC management DB 221.
  • As the GUI for receiving input of the information included in the IC management DB 221, the GUI 32 may include, for example, a text box 321 for inputting the title (IC title) of the informed consent to be acquired.
  • the GUI 32 may include a GUI for accepting input for designating a category (in other words, a tag) for classifying conversation text.
  • the classification unit 212 classifies each of the plurality of conversation texts into at least one category designated by using the GUI 32.
  • The classification unit 212 need not use a category that was not designated via the GUI 32 as a classification destination for the conversation texts.
  • In the example shown in FIG. 5, the GUI 32 includes, as the GUI that accepts input for designating the categories into which the conversation text is classified, a plurality of check boxes 326 corresponding to the plurality of categories. For example, the GUI 32 may include at least one of:
  • a check box 326-1 designating the category for conversation text that mentions the "purpose of the informed consent (that is, the purpose of the agreement acquisition process)";
  • a check box 326-2 designating the category for conversation text that mentions the "patient's symptoms or medical condition";
  • a check box 326-3 designating the category for conversation text that mentions "tests or treatments performed on the patient";
  • a check box 326-4 designating the category for conversation text that mentions "clinical trials or studies involving the patient";
  • a check box 326-5 designating the category for conversation text that mentions the "patient's opinion";
  • a check box 326-6 designating the category for conversation text that mentions the "agreement between the medical worker and the patient" (where the agreement may indicate at least one of consent and refusal); and
  • a check box 326-7 designating the category for conversation text that mentions the "future medical policy for the patient".
  • Further, as shown in FIG. 5, the GUI 32 may include at least one of: a check box 326 designating the category for conversation text that mentions the "disease name or diagnosis"; a check box 326 designating the category for conversation text that mentions the merits of the action to be performed this time (for example, a medical action), such as merits related to life prognosis and merits related to QOL (Quality of Life); a check box 326 designating the category for conversation text that mentions the disadvantages of that action (for example, at least one of danger, distress, side effects, and complications); a check box 326 designating the category for conversation text that mentions the "patient's burden (for example, the cost burden and the time burden, including leave)"; and a check box 326 designating the category for conversation text that mentions a "response or confirmation from the healthcare professional side".
  • the text conversion unit 211 acquires the voice data generated by the recording device 1 via the input / output IF23 (step S12).
  • the acquired voice data may be stored in the storage device 22.
  • the information regarding the voice data stored by the storage device 22 may be registered in the IC management DB 221.
  • Information for preventing falsification of the voice data (for example, at least one of a time stamp and an electronic signature) may be added to the voice data.
  • After that, the text conversion unit 211 generates, from the voice data acquired in step S12, text data indicating the content of the conversation between the medical staff and the patient as sentences (that is, as text) (step S13). In other words, the text conversion unit 211 converts the voice data acquired in step S12 into text data (step S13).
  • the generated text data may be stored by the storage device 22.
  • the information about the text data stored by the storage device 22 may be registered in the IC management DB 221.
  • Information for preventing falsification of the text data (for example, at least one of a time stamp and an electronic signature) may be added to the text data.
  • The text conversion unit 211 (or any functional block included in the CPU 21) may generate the text data so that the text indicating the utterance content of the medical worker and the text indicating the utterance content of the patient can be distinguished from each other.
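  • As a hedged illustration of steps S12 to S13 (the embodiment does not name a speech-recognition engine), the open-source speech_recognition package can convert recorded voice data into text; the speaker label below is a placeholder, since how utterances are attributed to the medical worker or the patient is left unspecified.

```python
import speech_recognition as sr

# Illustrative sketch only: transcribe one WAV file of recorded conversation
# into text (cf. text conversion unit 211). The engine choice and the
# speaker label are assumptions, not part of the embodiment.
def transcribe(wav_path: str, speaker: str = "unknown") -> dict:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the entire file
    text = recognizer.recognize_google(audio, language="ja-JP")
    return {"speaker": speaker, "text": text}
```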
  • The classification unit 212 classifies each of the plurality of conversation texts, obtained by subdividing the sentence indicated by the text data generated in step S13, into at least one of the plurality of categories designated in step S11 (step S14). Classifying a conversation text into categories may be regarded as equivalent to assigning to the conversation text at least one of a plurality of tags corresponding to the plurality of categories designated in step S11 (that is, tagging the conversation text).
  • the classification unit 212 may classify each of the plurality of conversation texts into at least one of a plurality of categories by using a predetermined classification model.
  • the classification model may include, for example, master data relating to categories, master data relating to example sentences classified into each category, and dictionary data relating to word vectorization.
  • the classification unit 212 may classify the conversation texts constituting the text into a desired category by comparing the text indicated by the text data with the master data.
  • Alternatively, the classification unit 212 may vectorize the words that make up the sentence indicated by the text data (that is, calculate feature vectors of the words or sentences) and classify each conversation text into a desired category based on the resulting feature vectors.
  • The classification unit 212 may, in addition to or instead of using a classification model, classify each of the plurality of conversation texts into at least one of the plurality of categories using a rule-based method; that is, it may classify each of the plurality of conversation texts according to a predetermined rule.
  • The classification unit 212 may also use cosine similarity (that is, the cosine similarity between feature vectors of conversation texts), in addition to or instead of at least one of a classification model and a rule-based method, to classify each conversation text into at least one of the plurality of categories.
  • The classification unit 212 may use a clustering-based method, in addition to or instead of at least one of a classification model, a rule-based method, and cosine similarity, to classify each of the plurality of conversation texts into at least one of the plurality of categories.
  • The classification unit 212 may likewise use a learning model, in addition to or instead of at least one of a classification model, a rule-based method, cosine similarity, and a clustering-based method, to classify each of the plurality of conversation texts into at least one of the plurality of categories.
  • Such a learning model may be, for example, a model using a neural network that, when text data is input, outputs the categories of the conversation texts constituting the text data.
  • The classification unit 212 does not have to classify a conversation text that is difficult to assign to any of the plurality of categories.
  • the classification unit 212 may add a tag of "no corresponding category” or "unknown corresponding category” to conversation text that is difficult to classify into any of a plurality of categories.
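  • As one possible reading of the cosine-similarity variant above (a sketch, not the embodiment's definitive method), each conversation text and each category can be represented by a feature vector (for a category, for instance, the centroid of the vectors of its master-data example sentences), and the text is assigned to the most similar category, falling back to a "no corresponding category" tag below a similarity threshold. The threshold of 0.4 is an arbitrary assumption.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify(text_vec: np.ndarray, category_vecs: dict[str, np.ndarray],
             threshold: float = 0.4) -> str:
    """Assign a conversation-text vector to the most similar category,
    or tag it 'no corresponding category' when every similarity is low."""
    best, score = None, -1.0
    for category, vec in category_vecs.items():
        s = cosine(text_vec, vec)
        if s > score:
            best, score = category, s
    return best if score >= threshold else "no corresponding category"
```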
  • The sentence indicated by the text data may be subdivided into arbitrary units; that is, the size of the conversation texts obtained by the subdivision may be arbitrary. For example, at least a part of the sentence indicated by the text data may be subdivided into word units, into bunsetsu (phrase) units, into a plurality of conversation texts with punctuation marks as boundaries, into sentence units (for example, at least one of a simple sentence, a compound sentence, and a complex sentence), or into morpheme units. In the last case, morphological analysis may be performed on the text data.
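  • A minimal sketch of one such subdivision rule follows (splitting at punctuation boundaries; the delimiter set is an assumption, and morpheme- or bunsetsu-level splitting would instead require a morphological analyzer).

```python
import re

# Subdivide the sentence indicated by the text data into conversation texts
# at punctuation boundaries. The delimiter set here is illustrative.
def subdivide(text: str) -> list[str]:
    parts = re.split(r"[。、.,]", text)
    return [p.strip() for p in parts if p.strip()]
```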
  • the classification data indicating the classification result by the classification unit 212 may be stored by the storage device 22.
  • the storage device 22 may add information (for example, at least one of a time stamp and an electronic signature) to the classification data to prevent the classification data from being tampered with.
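  • A minimal tamper-evidence sketch, assuming a SHA-256 digest plus a UTC timestamp; a real deployment would use the time stamp and electronic signature mechanisms the embodiment mentions without specifying.

```python
import hashlib
import json
from datetime import datetime, timezone

# Attach a timestamp and a content digest to classification data so that
# later modification can be detected. Purely illustrative.
def seal(classification_data: dict) -> dict:
    payload = json.dumps(classification_data, sort_keys=True, ensure_ascii=False)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }
```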
  • After that, the display control unit 213 generates the IC support screen 31 for supporting the conversation between the medical staff and the patient based on the classification result of the classification unit 212, and controls the display device 3 to display the generated IC support screen 31 (step S15). As a result, the display device 3 displays the IC support screen 31.
  • the IC support screen 31 may include, for example, a conversation display screen 311 and a category display screen 312. That is, the display control unit 213 controls the display device 3 so that the conversation display screen 311 and the category display screen 312 are displayed in parallel. However, the IC support screen 31 does not have to include at least one of the conversation display screen 311 and the category display screen 312.
  • the conversation display screen 311 displays the content of the conversation text along the flow of conversation between the medical staff and the patient. That is, the conversation display screen 311 displays texts indicating the contents of the conversation between the medical staff and the patient during a certain period in the order of the conversation flow.
  • On the conversation display screen 311, the content of each conversation text may be displayed together with information indicating the time at which the conversation indicated by the conversation text took place, information indicating the person who spoke the words indicated by the conversation text, and information indicating the category into which the conversation text is classified.
  • the conversation display screen 311 may display texts indicating the contents of the conversation between the current medical staff and the patient in the order of the conversation flow.
  • In practice, the conversation display screen 311 displays, as the text indicating the content of the current conversation between the medical staff and the patient, text indicating the content of a conversation delayed from the current time by the time required to complete the processing from step S12 to step S14.
  • the conversation display screen 311 displays texts indicating the contents of the conversation between the medical staff and the patient a certain time ago (for example, a few seconds ago, a few tens of seconds ago, or a few minutes ago) in the order of the conversation flow. It may be displayed.
  • the conversation display screen 311 may display texts indicating the contents of the already completed conversation between the medical staff and the patient in the order of the conversation flow.
  • the conversation display screen 311 may display the contents of the conversation in units of subdivided conversation texts.
  • the conversation display screen 311 may display the content of the conversation in a unit different from the conversation text in addition to or instead of displaying the content of the conversation in the unit of the conversation text.
  • the conversation display screen 311 may display the content of the conversation in units of a group of conversations including a plurality of conversation texts (that is, in units of conversations that make sense).
  • the conversation display screen 311 may display the text indicated by the text data before being subdivided into the conversation text.
  • the category display screen 312 displays at least a part of the plurality of conversation texts by classified categories. That is, the category display screen 312 displays conversation text classified into one of a plurality of categories. On the other hand, the category display screen 312 does not have to display conversation texts classified into other categories different from one category among the plurality of categories. In the example shown in FIG. 6, the category display screen 312 displays conversation texts classified into categories related to "patient's symptom or medical condition".
  • On the category display screen 312 as well, the content of each conversation text may be displayed together with information indicating the time at which the conversation indicated by the conversation text took place, information indicating the person who spoke the words indicated by the conversation text, and information indicating the category into which the conversation text is classified.
  • the category display screen 312 may include a GUI 3120 for designating the category of the conversation text to be displayed on the category display screen 312.
  • the GUI 3120 includes a plurality of buttons 3121 corresponding to a plurality of categories, respectively.
  • the plurality of buttons 3121 correspond to the plurality of categories specified in step S11 of FIG. 4, respectively.
  • For example, the GUI 3120 may include: a button 3121-1 pressed to display the conversation texts of all categories; a button 3121-2 pressed to display conversation text that mentions the "purpose of the informed consent"; a button 3121-3 pressed to display conversation text that mentions the "patient's symptoms or medical condition"; a button 3121-4 pressed to display conversation text that mentions "tests or treatments performed on the patient"; a button 3121-5 pressed to display conversation text that mentions "clinical trials or studies involving the patient"; a button 3121-6 pressed to display conversation text that mentions the "patient's opinion"; a button 3121-7 pressed to display conversation text that mentions the "agreement between the healthcare professional and the patient"; and a button 3121-8 pressed to display conversation text that mentions the "future medical policy for the patient".
  • The user of the conversation support device 2 may designate the category of the conversation texts to be displayed on the category display screen 312 by using the input device 4 while referring to the GUI 3120 displayed on the display device 3. As a result, the conversation texts classified into the category designated by the user are displayed on the category display screen 312.
  • The operations described above (particularly, the operations from step S12 to step S15) are repeated until it is determined that the conversation support operation is completed (step S16).
  • When the conversation support system SYS separately includes a display device 3 visible to the medical staff and a display device 3 visible to the patient, the content of the IC support screen 31 displayed on the display device 3 visible to the patient may differ from the content of the IC support screen 31 displayed on the display device 3 visible to the medical staff.
  • For example, the IC support screen 31 displayed on the display device 3 visible to the patient may display information useful for the patient in understanding the explanation of the medical staff (for example, at least one of explanations of medical terms used by the medical staff and information regarding the patient's diagnosis results).
  • As described above, the conversation support system SYS of the present embodiment can display the IC support screen 31, on which the conversation texts indicating the content of the conversation held between the medical staff and the patient to obtain informed consent are displayed together with their categories. Therefore, the medical staff can determine whether the explanation about a certain category is insufficient by checking the categories of the conversation texts displayed on the IC support screen 31.
  • The state of "insufficient explanation about a certain category" here may mean a state in which the explanation from the medical staff regarding that category has been omitted; in other words, at least some of the information that the healthcare professional should convey to the patient about that category has not been explained to the patient.
  • the patient can also determine whether or not the explanation about a certain category is insufficient by checking the category of the conversation text displayed on the IC support screen 31. As a result, if it is determined that the explanation about a certain category is insufficient, the medical staff can give an additional explanation about a certain category. Therefore, the conversation support system SYS can reduce the possibility of omission of explanation from the medical staff to the patient.
  • the conversation support system SYS can display a category display screen 312 for displaying at least a part of a plurality of conversation texts by classified categories.
  • Since the conversation texts classified into a certain category are displayed together, the medical staff and the patient can more appropriately determine whether the explanation about that category is insufficient. Therefore, the conversation support system SYS can more appropriately reduce the possibility of omissions in the explanation from the medical staff to the patient.
  • FIG. 7 is a block diagram showing a configuration of the conversation support device 2a of the first modification.
  • Components that have already been described are denoted by the same reference numerals, and detailed description thereof is omitted.
  • The conversation support device 2a of the first modification differs from the conversation support device 2 described above in that a warning unit 214a is additionally realized in the CPU 21 as a logical functional block for executing the conversation support operation.
  • Other features of the conversation support device 2a may be the same as other features of the conversation support device 2.
  • the warning unit 214a determines whether or not the conversation required for informed consent is insufficient based on the classification result of the classification unit 212. For example, the warning unit 214a may determine whether or not the conversation required for informed consent is insufficient for each category based on the classification result of the classification unit 212.
  • The greater the volume of conversation between the healthcare professional and the patient, the more likely it is that the healthcare professional has given the patient a sufficient explanation; conversely, the smaller the volume of conversation, the less likely this is. Therefore, the operation of determining whether the conversation required for informed consent regarding a certain category is insufficient may be regarded as substantially equivalent to, or as a specific example of, the operation of determining whether the explanation regarding that category is insufficient.
  • For example, the warning unit 214a may determine whether the number of conversation texts classified into one category (that is, the number of subdivided text blocks) is greater than a threshold specific to that category. When the number of conversation texts is greater than the threshold, it is highly likely that more of the conversation about that category required for informed consent has taken place than when the number is less than the threshold; conversely, when the number of conversation texts is less than the threshold, it is likely that not enough of that conversation has taken place.
  • Therefore, when the number of conversation texts is greater than the threshold, the warning unit 214a may determine that the conversation required for informed consent is not insufficient; when the number of conversation texts is less than the threshold, the warning unit 214a may determine that the conversation required for informed consent is insufficient.
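  • This per-category check can be sketched as follows; the category names and threshold values are placeholders.

```python
# Categories whose conversation-text count does not exceed the
# category-specific threshold are flagged as possibly insufficient
# (cf. warning unit 214a).
def insufficient_categories(counts: dict[str, int],
                            thresholds: dict[str, int]) -> list[str]:
    return [c for c, n in counts.items() if n <= thresholds.get(c, 0)]

# Example: counts = {"symptoms": 7, "risks": 1},
#          thresholds = {"symptoms": 3, "risks": 4}
# -> ["risks"], so a warning about the "risks" category would be issued.
```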
  • Alternatively, the warning unit 214a may determine whether at least one of the number of utterances (that is, the number of remarks) and the utterance time (that is, the speaking time) of the medical staff and the patient regarding one category is greater than a threshold specific to that category.
  • the number of utterances may mean, for example, the number of times a group of conversations that have a common meaning has been uttered.
  • When at least one of the number of utterances and the utterance time is greater than the threshold, it is likely that more of the conversation about that category required for informed consent has taken place than when it is less than the threshold.
  • Therefore, the warning unit 214a may determine that the conversation required for informed consent is not insufficient when at least one of the number of utterances and the utterance time is greater than the threshold, and may determine that it is insufficient when at least one of them is less than the threshold.
  • Each threshold is preferably set to an appropriate value at which the number of conversation texts, the number of utterances, or the utterance time can properly distinguish the state in which the conversation required for informed consent is insufficient from the state in which it is not.
  • a threshold value may be a fixed value predetermined by the conversation support system SYS or the user.
  • the threshold value may be a variable value that can be appropriately set by the conversation support system SYS or the user.
  • The threshold may also be zero. In this case, determining whether at least one of the number of conversation texts for one category, the number of utterances for that category, and the utterance time for that category is greater than the threshold (that is, greater than zero) is equivalent to determining whether any utterance related to that category has been made at all. If none of them is greater than the threshold, it is presumed that no utterance related to that category has been made, and the warning unit 214a may therefore determine that the conversation required for informed consent is insufficient.
  • the warning unit 214a may set the threshold value corresponding to one category based on the content of the conversation conducted regarding one category when the same type of informed consent was acquired in the past.
  • An informed consent of the same type as a given informed consent may mean an informed consent for which at least one (or all) of the purpose, symptoms, tests, treatment, and disease name is the same as or similar to that of the given informed consent.
  • For example, the warning unit 214a may set the threshold corresponding to one category to at least one of the number of conversation texts, the number of utterances, and the utterance time recorded for that category when informed consent of the same type was acquired in the past, or to such a value with a predetermined margin added or subtracted. In this way, the warning unit 214a can set an appropriate threshold. The threshold for the number of conversation texts, the threshold for the number of utterances, and the threshold for the utterance time are preferably set individually.
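  • Deriving a category threshold from past informed consents of the same type might look like the following sketch; the use of the median and the margin of 2 are assumptions for illustration.

```python
import statistics

# Set the threshold for one category from the conversation-text counts
# recorded for that category in past informed consents of the same type,
# minus a predetermined margin.
def threshold_from_history(past_counts: list[int], margin: int = 2) -> int:
    return max(0, round(statistics.median(past_counts)) - margin)

# Example: past_counts = [9, 11, 10] -> median 10, threshold 8
```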
  • The warning unit 214a may also determine whether a specific keyword for one category has been uttered by at least one of the healthcare professional and the patient.
  • The specific keyword may be, for example, a keyword that should appear in the conversation explaining that category. If the specific keyword is not uttered by at least one of the healthcare professional and the patient, it is likely that there has not been enough conversation about that category as required for informed consent. For this reason, the warning unit 214a may determine that the conversation required for informed consent is insufficient if the specific keyword has not been uttered by at least one of the healthcare professional and the patient.
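  • The keyword check can be sketched as below; the category name and keywords are illustrative examples only.

```python
# A category is flagged when none of its expected keywords appears in the
# conversation texts classified into it.
EXPECTED_KEYWORDS = {"risks": ["side effect", "complication"]}

def keywords_missing(category: str, texts: list[str]) -> bool:
    spoken = " ".join(texts)
    return not any(kw in spoken for kw in EXPECTED_KEYWORDS.get(category, []))
```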
  • The warning unit 214a may make these determinations (whether at least one of the number of conversation texts, the number of utterances, and the utterance time is greater than the threshold, and/or whether a specific keyword has been uttered by at least one of the healthcare professional and the patient) only after a certain period of time has passed since the conversation between the healthcare professional and the patient started. This is because, before that time has passed, it is relatively likely that the healthcare professional is still in the middle of explaining the category (that is, that the explanation of the category has not yet been completed).
  • The warning unit 214a may also estimate the flow of conversation expected in the informed consent newly being acquired, based on the flow of conversation conducted when informed consent was acquired in the past. In this case, the warning unit 214a may estimate, from the estimated conversation flow, the time at which the conversation regarding one category ends, and make the determination for that category at that time. In this way, the warning unit 214a can appropriately determine whether the conversation required for informed consent regarding that category is insufficient.
  • When the classification result of the classification unit 212 changes from a state in which at least one of the number of conversation texts classified into one category, the number of utterances regarding that category, and the utterance time regarding that category is relatively large to a state in which at least one of the number of conversation texts classified into another category, the number of utterances regarding the other category, and the utterance time regarding the other category is relatively large, it is highly probable that the conversation regarding the one category has ended (in other words, that the medical staff has finished explaining the matters related to the one category).
  • Therefore, when the classification result of the classification unit 212 changes in this way, the warning unit 214a may determine whether at least one of the number of conversation texts classified into the one category, the number of utterances regarding the one category, and the utterance time regarding the one category is greater than the threshold. As a result, the warning unit 214a can appropriately determine whether the conversation required for informed consent regarding the one category is insufficient.
  • When it is determined that the conversation required for informed consent regarding one category is insufficient, the warning unit 214a may warn the user of the conversation support system SYSa to that effect.
  • the warning unit 214a may control the display device 3 to display a warning screen 33a for warning that the conversation required for informed consent is insufficient in one category.
  • An example of the warning screen 33a is shown in FIG.
  • As described above, the conversation support system SYSa of the first modification can determine whether the conversation required for informed consent is insufficient, and can issue a warning when it is. As a result, a healthcare professional who sees the warning can provide an additional explanation for the category in question. Therefore, the conversation support system SYSa can more appropriately reduce the possibility of omissions in the explanation from the medical staff to the patient.
  • The conversation support system SYSa may also give a voice warning that the conversation required for informed consent regarding one category is insufficient. In this case, the conversation support system SYSa may include, in addition to or instead of the display device 3, a speaker that outputs the warning by voice.
  • The conversation support system SYSa may also display an IC support screen 31 that includes an index indicating at least one of the number of conversation texts classified into one category, the number of utterances, and the utterance time.
  • For example, as shown in FIG. 9, the conversation support system SYSa (particularly, the display control unit 213) may display an IC support screen 31 including a bar graph 3122a that quantitatively shows the number of conversation texts classified into each category.
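  • A bar graph like the bar graph 3122a of FIG. 9 could be rendered, for instance, with matplotlib; the category names and counts below are invented for the sketch.

```python
import matplotlib.pyplot as plt

# Plot the number of conversation texts classified into each category
# (cf. the index on the IC support screen 31).
counts = {"purpose": 4, "symptoms": 7, "tests/treatment": 5, "agreement": 1}
plt.bar(list(counts), list(counts.values()))
plt.ylabel("number of conversation texts")
plt.title("Conversation texts per category")
plt.show()
```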
  • In this case as well, the conversation support system SYSa can more appropriately reduce the possibility of omissions in the explanation from the medical staff to the patient.
  • Note that the conversation support system SYSa need not include the warning unit 214a when it displays the IC support screen 31 including an index indicating at least one of the number of conversation texts, the number of utterances, and the utterance time classified into one category.
  • FIG. 10 is a block diagram showing a configuration of the conversation support device 2b of the second modification.
  • The conversation support device 2b of the second modification differs from the conversation support device 2 described above in that a summary output unit 215b is additionally realized in the CPU 21 as a logical functional block for executing the conversation support operation.
  • Other features of the conversation support device 2b may be the same as other features of the conversation support device 2.
  • the conversation support device 2a of the first modification described above may include a summary output unit 215b.
  • The summary output unit 215b generates summary information that summarizes the content of the conversation between the medical staff and the patient when informed consent is obtained, and outputs the generated summary information. For example, the summary output unit 215b may control the display device 3, which is a specific example of an output device, to display the summary information.
The summary information may include, for example, at least a part of the initial information registered in step S11 of FIG. 4. For example, the summary information may include at least one of: information indicating the title of the informed consent, information indicating the name of the medical worker who obtained the informed consent, information indicating the name of the patient who gave the informed consent, information indicating the date and time when the informed consent was obtained, and information indicating at least one of the comments of the medical staff and the patient regarding the informed consent.
The summary information may also include, for example, at least a part of the plurality of conversation texts obtained by subdividing the text indicated by the text data converted by the text conversion unit 211. In this case, the user of the conversation support system SYSb may designate at least a part of the plurality of conversation texts to be included in the summary information. For example, the display control unit 213 may control the display device 3 so as to display the IC support screen 31 including a GUI 3110b for designating the conversation texts to be included in the summary information.
An example of the IC support screen 31 including the GUI 3110b for designating the conversation texts to be included in the summary information is shown in FIG. 12. In the example shown in FIG. 12, the GUI 3110b includes a plurality of check boxes 3111b corresponding to the plurality of conversation texts displayed on the IC support screen 31, each check box 3111b being selected when the corresponding conversation text is to be included in the summary information. In this case, the summary output unit 215b includes, in the summary information, the conversation texts corresponding to the check boxes 3111b selected by the user, and does not have to include the conversation texts corresponding to the check boxes 3111b not selected by the user.
Note that FIG. 12 shows an example in which the GUI 3110b is included in the conversation display screen 311 constituting the IC support screen 31. However, the GUI 3110b may instead be included in the category display screen 312 constituting the IC support screen 31.
The summary information may also include, for example, information about the categories into which the conversation texts included in the summary information are classified. In this case, the summary output unit 215b may output summary information indicating the conversation texts by category. As described above, the conversation support system SYSb of the second modification can output summary information. Therefore, the user can appropriately grasp the content of the informed consent by checking the summary information.
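As an illustration of how the summary output unit 215b could assemble such summary information, the following sketch combines the initial information with the user-selected conversation texts, grouped by category. The record layout, the function name `build_summary`, and the use of numeric ids standing in for the check boxes 3111b are assumptions for this example, not the embodiment's actual data format.

```python
from collections import defaultdict

def build_summary(initial_info: dict, conversation: list[dict],
                  selected_ids: set[int]) -> dict:
    """Assemble summary information from user-selected conversation texts.

    `conversation` items are hypothetical records such as
    {"id": 3, "speaker": "doctor", "text": "...", "category": "treatment"};
    only texts whose id was ticked (selected_ids) are included,
    grouped by the category into which they were classified.
    """
    by_category: dict[str, list[str]] = defaultdict(list)
    for item in conversation:
        if item["id"] in selected_ids:
            by_category[item["category"]].append(item["text"])
    return {
        "title": initial_info.get("title"),
        "medical_worker": initial_info.get("medical_worker"),
        "patient": initial_info.get("patient"),
        "datetime": initial_info.get("datetime"),
        "texts_by_category": dict(by_category),
    }

summary = build_summary(
    {"title": "Surgery consent", "medical_worker": "Dr. A",
     "patient": "Patient B", "datetime": "2019-12-26 10:00"},
    [{"id": 1, "speaker": "doctor", "text": "The operation takes two hours.",
      "category": "treatment"},
     {"id": 2, "speaker": "patient", "text": "I understand.",
      "category": "agreement"}],
    selected_ids={1, 2},
)
print(summary["texts_by_category"])
```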
The summary output unit 215b may learn the user's instructions designating the conversation texts to be included in the summary information, and may automatically select the conversation texts to be included in the summary information based on the learning result of these instructions. That is, based on the learning result, the summary output unit 215b may automatically select, as conversation texts to be included in the summary information, the conversation texts that the user is presumed to want to include. In this case, the summary output unit 215b can appropriately select, without requiring the user's instruction, the conversation texts that are relatively likely to be selected by the user as conversation texts to be included in the summary information.
In order to perform such learning, the summary output unit 215b may include a learning model (for example, a learning model using a neural network) that can be trained using, as teacher data, the user's instructions designating the conversation texts to be included in the summary information.
Alternatively, the summary output unit 215b may recommend to the user, based on the learning result of the user's instructions designating the conversation texts to be included in the summary information, the conversation texts that are preferably included in the summary information. For example, the summary output unit 215b may control the display device 3 so that the conversation texts that are preferably included in the summary information are displayed on the IC support screen 31 in a manner distinguishable from the other conversation texts. In this case, the user can select the conversation texts to be included in the summary information relatively easily.
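A minimal sketch of this learning loop is shown below. It deliberately replaces the neural-network learning model mentioned above with a simple per-category inclusion-rate statistic; the class name `SelectionLearner`, the record layout, and the 0.5 cut-off are illustrative assumptions only.

```python
from collections import Counter

class SelectionLearner:
    """Toy learner: records which categories the user tends to include in
    summaries, then recommends texts from those categories next time.
    (The embodiment suggests a neural network could play this role.)"""

    def __init__(self) -> None:
        self.included: Counter = Counter()
        self.seen: Counter = Counter()

    def learn(self, conversation: list[dict], selected_ids: set[int]) -> None:
        # Each check-box selection by the user serves as teacher data.
        for item in conversation:
            self.seen[item["category"]] += 1
            if item["id"] in selected_ids:
                self.included[item["category"]] += 1

    def recommend(self, conversation: list[dict], min_rate: float = 0.5) -> list[int]:
        # Recommend texts whose category was historically included often.
        out = []
        for item in conversation:
            seen = self.seen[item["category"]]
            rate = self.included[item["category"]] / seen if seen else 0.0
            if rate >= min_rate:
                out.append(item["id"])
        return out

learner = SelectionLearner()
learner.learn(
    [{"id": 1, "category": "treatment"}, {"id": 2, "category": "greeting"}],
    selected_ids={1},
)
print(learner.recommend([{"id": 3, "category": "treatment"}]))  # -> [3]
```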
The conversation support system SYSc of the third modification differs from the conversation support system SYS described above in that it includes a conversation support device 2c instead of the conversation support device 2. Other features of the conversation support system SYSc may be the same as those of the conversation support system SYS. The conversation support device 2c of the third modification will therefore be described below with reference to FIG. 13.
FIG. 13 is a block diagram showing a configuration of the conversation support device 2c of the third modification.
As shown in FIG. 13, the conversation support device 2c of the third modification differs from the conversation support device 2 described above in that a schedule presentation unit 216c is realized in the CPU 21 as a logical functional block for executing the conversation support operation. Other features of the conversation support device 2c may be the same as those of the conversation support device 2. Note that at least one of the conversation support device 2a of the first modification and the conversation support device 2b of the second modification described above may include the schedule presentation unit 216c.
The schedule presentation unit 216c presents, to the medical staff who obtains informed consent, a schedule of the conversation to be conducted in order to obtain the informed consent. Specifically, when an informed consent of the same type as an informed consent acquired in the past is to be acquired, the schedule presentation unit 216c presents, based on the content of the conversation in the informed consent acquired in the past, a schedule of the conversation to be conducted in order to obtain the current informed consent to the medical staff who obtains the current informed consent. This is because, as described above, when an informed consent of the same type as one acquired in the past is acquired, the flow of the conversation is likely to be the same as the flow of the conversation in the informed consent acquired in the past.
That is, the schedule presentation unit 216c can present the schedule of the conversation to be conducted in the newly acquired informed consent based on the schedule of the conversation conducted in the informed consent acquired in the past. For example, the schedule presentation unit 216c may present, as the schedule of the conversation to be conducted in the newly acquired informed consent, a schedule that is the same as or similar to the schedule of the conversation conducted in the informed consent acquired in the past.
Alternatively, the schedule presentation unit 216c may learn the schedules of the conversations conducted in informed consents acquired in the past, and may present the schedule of the conversation to be conducted in the newly acquired informed consent based on the learning result of these schedules. In order to perform such learning, the schedule presentation unit 216c may include a learning model (for example, a learning model using a neural network) that can be trained using, as teacher data, the schedules of the conversations conducted in informed consents acquired in the past.
As a result, the medical staff can, based on the presented schedule, proceed with the explanation to be given in order to obtain the informed consent on an appropriate schedule.
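One simple way such a schedule could be derived from past sessions is sketched below, assuming each past informed consent of the same type is stored as an ordered list of (category, minutes) pairs. The function name `derive_schedule` and the data layout are assumptions for illustration; a trainable model such as the neural network mentioned above could replace the averaging step.

```python
from collections import defaultdict

def derive_schedule(past_sessions):
    """Derive a schedule for a new informed consent of the same type:
    keep the category order of the first past session and average the
    time spent per category across all past sessions."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for session in past_sessions:
        for category, minutes in session:
            totals[category] += minutes
            counts[category] += 1
    # Preserve the conversation order observed in the first session.
    order = list(dict.fromkeys(category for category, _ in past_sessions[0]))
    return [(c, totals[c] / counts[c]) for c in order]

past = [
    [("purpose", 5.0), ("treatment", 12.0), ("risks", 8.0), ("agreement", 3.0)],
    [("purpose", 4.0), ("treatment", 15.0), ("risks", 6.0), ("agreement", 2.0)],
]
for category, minutes in derive_schedule(past):
    print(f"{category}: about {minutes:.0f} min")
```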
FIG. 14 is a block diagram showing a configuration of the conversation support system SYSd of the fourth modification.
The conversation support system SYSd of the fourth modification differs from the conversation support system SYS described above in that it further includes an electronic medical record system 5d, and in that it includes a conversation support device 2d instead of the conversation support device 2. Other features of the conversation support system SYSd may be the same as those of the conversation support system SYS. Note that at least one of the conversation support system SYSa of the first modification to the conversation support system SYSc of the third modification described above may also be provided with the electronic medical record system 5d.
The electronic medical record system 5d is a system for managing the patient's electronic medical records. Specifically, the electronic medical record system 5d stores electronic medical record data 51d indicating the patient's electronic medical records. For this purpose, the electronic medical record system 5d may include a storage device for storing the electronic medical record data 51d.
The conversation support device 2d differs from the conversation support device 2 described above in that a medical record cooperation unit 217d is realized in the CPU 21 as a logical functional block for executing the conversation support operation. Other features of the conversation support device 2d may be the same as those of the conversation support device 2. Note that at least one of the conversation support device 2a of the first modification to the conversation support device 2c of the third modification described above may include the medical record cooperation unit 217d.
The medical record cooperation unit 217d performs a cooperation operation that links the electronic medical record data 51d stored in the electronic medical record system 5d with the data managed by the IC management DB 221 (that is, the data for managing informed consent). For example, the medical record cooperation unit 217d may determine, based on the electronic medical record data 51d and the IC management DB 221, whether an act (for example, a medical act) for which an agreement has not been obtained through informed consent has already been performed on the patient or is scheduled to be performed. Specifically, the medical record cooperation unit 217d can specify, based on the electronic medical record data 51d, the medical acts and the like that have already been performed or are scheduled to be performed on the patient. Further, the medical record cooperation unit 217d can determine, based on the IC management DB 221, whether an agreement has been reached between the medical staff and the patient regarding each medical act performed on the patient.
As a result, the medical record cooperation unit 217d can determine, based on the electronic medical record data 51d and the IC management DB 221, whether a medical act or the like for which an agreement has not been obtained through informed consent has already been performed on the patient or is scheduled to be performed. Furthermore, when it is determined that such a medical act has already been performed or is scheduled to be performed, the medical record cooperation unit 217d may warn the user of the conversation support system SYSd to that effect. As a result, the user can recognize that informed consent needs to be obtained before the medical act or the like is performed. Therefore, the medical staff can perform the medical act and the like after the informed consent has been properly obtained. In other words, the medical staff rarely performs, by mistake, a medical act or the like for which informed consent has not been properly obtained.

Note that at least a part of the IC management DB 221 may be included in the electronic medical record data 51d. That is, at least a part of the IC management DB 221 may be stored in the electronic medical record system 5d as a part of the electronic medical record data 51d. Similarly, at least a part of the data managed by the IC management DB 221 (for example, at least one of the above-mentioned voice data, text data, and classification data indicating the classification result of the classification unit 212) may be included in the electronic medical record data 51d, and may be stored in the electronic medical record system 5d as a part of the electronic medical record data 51d.
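The cooperation operation can be sketched as a simple cross-check between the two data sources. The record layout and the function name `find_unagreed_acts` below are assumptions for illustration; the embodiment does not prescribe a data format for the electronic medical record data 51d or the IC management DB 221.

```python
def find_unagreed_acts(emr_acts: list[dict], agreed_acts: set[str]) -> list[dict]:
    """Cross-check the electronic medical record against the IC management DB.

    `emr_acts` is a hypothetical extract of the patient's electronic medical
    record, e.g. {"act": "appendectomy", "status": "scheduled"}; `agreed_acts`
    holds the acts for which an agreement was recorded in the IC management DB.
    Acts performed or scheduled without a recorded agreement are returned so
    that the user can be warned.
    """
    return [a for a in emr_acts
            if a["status"] in ("performed", "scheduled")
            and a["act"] not in agreed_acts]

emr = [{"act": "appendectomy", "status": "scheduled"},
       {"act": "blood test", "status": "performed"}]
for act in find_unagreed_acts(emr, agreed_acts={"blood test"}):
    print(f"Warning: no informed consent recorded for {act['act']} ({act['status']}).")
```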
FIG. 16 is a block diagram showing a configuration of the conversation support device 2e of the fifth modification.
The conversation support device 2e of the fifth modification differs from the conversation support device 2 described above in that it includes a classification unit 212e instead of the classification unit 212, and in that it further includes a learning unit 214e. Other features of the conversation support device 2e may be the same as those of the conversation support device 2. Note that at least one of the conversation support device 2a of the first modification to the conversation support device 2d of the fourth modification described above may include the classification unit 212e instead of the classification unit 212.
The classification unit 212e differs from the classification unit 212 in that it includes at least two classification units 2121e, each capable of classifying a conversation text into at least one of the plurality of categories. In the example shown in FIG. 16, the classification unit 212e includes a classification unit 2121e-1 and a classification unit 2121e-2. Other features of the classification unit 212e may be the same as those of the classification unit 212.
The classification unit 2121e-1 may classify the conversation texts using any method, and the classification unit 2121e-2 may classify the conversation texts using a method different from the method used by the classification unit 2121e-1. For example, the classification unit 2121e-1 may classify the conversation texts using the rule-based method described above, while the classification unit 2121e-2 may classify the conversation texts using the classification model described above (for example, a learning model using a neural network). In this case, the classification unit 2121e-2 may classify the conversation texts that the classification unit 2121e-1 could not classify.
The learning unit 214e learns the classification results of the classification unit 2121e-1, and the learning result of the learning unit 214e is reflected in the classification unit 2121e-2. Therefore, the classification unit 2121e-2 preferably classifies the conversation texts using a learning model created by the learning of the learning unit 214e (for example, a learning model using a neural network). Since the classification unit 2121e-2 can use a learning model that reflects the learning of the classification results of the classification unit 2121e-1, it can classify the conversation texts with relatively high accuracy compared with the classification unit 2121e-1.
Note that the learning unit 214e may instead learn the classification results of the classification unit 212, in which case the learning result of the learning unit 214e is reflected in the classification unit 212. In this case, the learning by the learning unit 214e is expected to improve the classification accuracy of the classification unit 212.
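The two-stage arrangement can be illustrated as a cascade: a rule-based first stage in the spirit of the classification unit 2121e-1, and a fallback stage standing in for the learned classification unit 2121e-2. The keyword rules, function names, and categories below are invented for the sketch; the actual rule base and learning model are not specified at this level of detail.

```python
import re
from typing import Callable, Optional

RULES = {  # invented keyword rules standing in for the rule-based first stage
    "risks": re.compile(r"complication|side effect"),
    "treatment": re.compile(r"surgery|operation|medication"),
    "agreement": re.compile(r"agree|consent"),
}

def rule_based_classify(text: str) -> Optional[str]:
    """First-stage unit: returns a category, or None if no rule matches."""
    lowered = text.lower()
    for category, pattern in RULES.items():
        if pattern.search(lowered):
            return category
    return None  # the rule base could not classify this text

def classify(text: str, model_classify: Callable[[str], str]) -> str:
    """Cascade: try the rule base first, fall back to the learned model."""
    return rule_based_classify(text) or model_classify(text)

# Rule-base outputs can also serve as teacher data for the second stage:
texts = ["The surgery takes two hours.", "Do you agree to the procedure?"]
teacher_data = [(t, c) for t in texts if (c := rule_based_classify(t)) is not None]
print(teacher_data)
print(classify("How do you feel today?", lambda t: "symptoms"))  # falls back
```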
The classification unit 212 may modify the classification result of a conversation text. For example, the classification unit 212 may modify the classification result of a conversation text based on a user's instruction for modifying the classification result. In this case, the display control unit 213 may control the display device 3 so as to display the IC support screen 31 including a GUI 313 for correcting the classification result of the conversation text.
An example of the IC support screen 31 including the GUI 313 for correcting the classification result of the conversation text is shown in FIG. In this example, the GUI 313 is displayed so as to correspond to the conversation text and its category displayed on the IC support screen 31 (for example, the category displayed in association with the conversation text on the conversation display screen 311). When the user uses the GUI 313 to select a different category, the classification unit 212 may change the category into which the conversation text is classified to the category selected by the user.
Further, the classification unit 212 may learn the correction content of the classification result of the conversation text. That is, the classification unit 212 may learn the correction content of the classification result and classify subsequent conversation texts based on the learning result. In this case, the classification unit 212 can classify the conversation texts with relatively high accuracy compared with the case where the correction content of the classification result is not learned.
In order to perform such learning, the classification unit 212 may include a learning model (for example, a learning model using a neural network) that can be trained using, as teacher data, the correction content of the classification result of the conversation text (for example, the instruction of the user correcting the classification result).
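How a correction made through the GUI 313 could be applied and retained as teacher data is sketched below; the record layout and the `correction_log` list are illustrative assumptions, not the embodiment's actual storage scheme.

```python
# Each entry records (text, old_category, new_category) for later retraining.
correction_log: list[tuple[str, str, str]] = []

def apply_correction(item: dict, new_category: str) -> None:
    """Apply a user's correction made through the GUI and keep it as
    teacher data so the classifier can later be retrained on it."""
    correction_log.append((item["text"], item["category"], new_category))
    item["category"] = new_category

item = {"text": "There is a risk of infection.", "category": "treatment"}
apply_correction(item, "risks")
print(correction_log)  # [('There is a risk of infection.', 'treatment', 'risks')]
```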
The display control unit 213 may control the display device 3 so as to display the IC support screen 31 including a GUI for searching the conversation texts.
An example of the IC support screen 31 including a GUI for searching the conversation texts is shown in FIG. 18. As shown in FIG. 18, the IC support screen 31 (the conversation display screen 311 in FIG. 18) may include, as the GUI for searching the conversation texts, a text box 3112 into which the wording to be searched for is input. In this case, the IC support screen 31 may display the conversation texts that include the wording input in the text box 3112. FIG. 18 shows the IC support screen 31 (the conversation display screen 311 in FIG. 18) displaying the conversation texts that include the word "complication" when the word "complication" is input in the text box 3112.
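The search behaviour amounts to filtering the conversation texts by the wording typed into the text box 3112. Here is a minimal sketch, assuming a simple case-insensitive substring match (the embodiment does not specify the matching rule):

```python
def search_conversation(conversation: list[dict], query: str) -> list[dict]:
    """Return the conversation texts containing the wording typed into
    the search text box (case-insensitive substring match)."""
    q = query.lower()
    return [item for item in conversation if q in item["text"].lower()]

conversation = [
    {"speaker": "doctor", "text": "A possible complication is infection."},
    {"speaker": "patient", "text": "How long is the recovery?"},
]
for item in search_conversation(conversation, "complication"):
    print(f'{item["speaker"]}: {item["text"]}')
```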
The conversation support system SYS may be provided with a moving image capturing device (for example, a video camera) capable of capturing a moving image of at least one of the medical worker and the patient having the conversation for obtaining informed consent. In this case, the moving image data indicating the moving image captured by the moving image capturing device may be used as evidence of the informed consent. The moving image data captured by the moving image capturing device may be stored in the storage device 22, and information about the moving image data stored in the storage device 22 may be registered in the IC management DB 221. Information for preventing falsification of the moving image data (for example, at least one of a time stamp and an electronic signature) may be added to the moving image data.
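As an illustration of such tamper-evidence information, the sketch below attaches a SHA-256 digest and a local timestamp to the moving image data. This is only a self-contained stand-in: a real time stamp would be obtained from a trusted time-stamping authority, and a real electronic signature would use a private key rather than a bare hash.

```python
import hashlib
import time

def seal_video(video_bytes: bytes) -> dict:
    """Attach simple tamper-evidence metadata to moving image data:
    a SHA-256 digest plus a capture timestamp."""
    return {"sha256": hashlib.sha256(video_bytes).hexdigest(),
            "timestamp": int(time.time())}

def verify_video(video_bytes: bytes, seal: dict) -> bool:
    """Re-hash the data and compare it with the recorded digest."""
    return hashlib.sha256(video_bytes).hexdigest() == seal["sha256"]

seal = seal_video(b"...video bytes...")
print(verify_video(b"...video bytes...", seal))  # True unless the data changed
```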
Note that at least one of the medical staff and the patient may specify whether or not to perform image capturing with the moving image capturing device when registering the above-described initial information (step S11 in FIG. 4).
In the above description, the conversation support system SYS supports the conversation between the medical staff and the patient in the process in which the medical staff fully explains to the patient the action (for example, a medical action) to be performed on the patient and obtains an agreement between the medical staff and the patient about the action. However, the conversation support system SYS may, in the same manner as when supporting the conversation between the medical staff and the patient, support a conversation between an arbitrary first person and an arbitrary second person in the process of obtaining an agreement between the first person and the second person on a desired matter.
Alternatively, the conversation support system SYS may support any conversation between the first person and the second person. That is, the conversation support system SYS may be used not only in the process of obtaining an agreement but also in any situation where a conversation takes place. For example, the conversation support system SYS may support a conversation between the first person and the second person in the process in which an arbitrary first person explains a desired matter to an arbitrary second person. For example, the conversation support system SYS may convert voice data indicating the conversation between the first person and the second person into text data, classify each of a plurality of conversation texts obtained by subdividing the text indicated by the text data into at least one of a plurality of categories distinguished according to the type of utterance content to be uttered in the process of explaining the desired matter, and display, based on the classification result of the classification unit 212, a support screen (for example, a screen similar to the IC support screen 31 shown in FIG. 6) to support the conversation between the first person and the second person.
As a result, the first person can determine whether or not the explanation of the desired matter to the second person is insufficient. That is, the first person can determine whether or not he or she has sufficiently fulfilled the obligation to explain the desired matter to the second person. Therefore, the conversation support system SYS can reduce the possibility of an omission in the explanation from one of the first and second persons to the other.
An example of a situation in which the conversation between the first person and the second person is supported is the conclusion of a contract between the first person and the second person (for example, a contract related to real estate or finance). In this case, the conversation support system SYS may support the conversation between the first person and the second person in the process in which an arbitrary first person explains the contract contents to an arbitrary second person.
Another example of a situation in which the conversation between the first person and the second person is supported is a police officer's questioning of a suspect at the police. In this case, the conversation support system SYS may support the conversation between the police officer and the suspect in the process in which the police officer questions the suspect.
Another example of a situation in which the conversation between the first person and the second person is supported is a court hearing. In this case, the conversation support system SYS may support conversations (for example, speeches) between at least two of a judge, a prosecutor, a lawyer, a plaintiff, a defendant, and a witness.
Appendix 1 A conversation support device comprising: a classification means for classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of a conversation between a medical worker and a patient in an agreement acquisition process in which the medical worker explains an action to be performed on the patient and obtains an agreement with the patient about the action, into at least one of a plurality of categories distinguished according to the type of the utterance content to be uttered in the agreement acquisition process; and a display control means for controlling a display device so that at least a part of the plurality of conversation texts is displayed together with the category into which each conversation text is classified.
Appendix 2 The conversation support device according to Appendix 1, wherein the display control means controls the display device so that at least a part of the plurality of conversation texts is displayed according to the categories into which they are classified.
Appendix 3 The conversation support device according to Appendix 1 or 2, wherein the display control means controls the display device so as to display the conversation texts classified into one category designated by at least one of the medical staff and the patient among the plurality of categories.
Appendix 4 The conversation support device according to any one of Appendix 1 to 3, further comprising a conversation warning means for warning that the conversation required in the agreement acquisition process is insufficient, based on the classification result of the plurality of conversation texts by the classification means.
Appendix 5 The conversation support device according to Appendix 4, wherein the conversation warning means warns that the conversation regarding one category among the plurality of categories is insufficient when the number of the conversation texts classified into the one category is less than a predetermined threshold value set for the one category.
Appendix 6 The conversation support device according to Appendix 5, wherein the predetermined threshold value is set based on the content of conversations conducted between the medical worker and the patient in past agreement acquisition processes in relation to the one category.
Appendix 7 The conversation support device according to any one of Appendix 4 to 6, wherein the conversation warning means warns that the conversation regarding one category among the plurality of categories is insufficient when no conversation text is classified into the one category.
Appendix 8 The conversation support device according to any one of Appendix 1 to 7, wherein the display control means further displays an index indicating the number of the conversation texts classified into one category among the plurality of categories.
Appendix 9 The conversation support device according to any one of Appendix 1 to 8, wherein the plurality of categories include at least one of: a category of utterance content referring to the purpose of the agreement acquisition process, a category of utterance content referring to the patient's symptoms or medical condition, a category of utterance content referring to an examination or treatment performed on the patient, a category of utterance content referring to a clinical trial or study involving the patient, a category of utterance content referring to the patient's opinions, a category of utterance content referring to the existence of an agreement between the medical worker and the patient, and a category of utterance content referring to the future medical policy for the patient.
Appendix 10 The conversation support device according to any one of Appendix 1 to 9, further comprising a generation means that (i) learns an instruction of the medical worker specifying that at least one of the plurality of conversation texts should be included in summary information summarizing the content of the conversation in the agreement acquisition process, and (ii) generates the summary information based on the learning result of the instruction of the medical worker.
Appendix 11 The conversation support device according to Appendix 10, wherein the generation means learns the instruction of the medical worker and, based on the learning result of the instruction of the medical worker, recommends to the medical worker at least one of the plurality of conversation texts as a conversation text to be included in the summary information.
Appendix 12 The conversation support device according to any one of Appendix 1 to 11, further comprising a presentation means for presenting a schedule of the conversation to be conducted in the agreement acquisition process regarding one type of action, based on the content of conversations conducted between the medical staff and the patient in past agreement acquisition processes regarding the one type of action.
Appendix 13 The conversation support device according to any one of Appendix 1 to 12, further comprising a medical record cooperation means for warning that an act for which an agreement has not been obtained in the agreement acquisition process has been performed or is scheduled to be performed on the patient.
Appendix 14 The conversation support device according to any one of Appendix 1 to 13, wherein the classification means includes a first classification unit that classifies each of the plurality of conversation texts into at least one of the plurality of categories, and a second classification unit that classifies each of the plurality of conversation texts into at least one of the plurality of categories using a learning model trained with teacher data including the classification results of the first classification unit.
Appendix 16 A conversation support method of supporting a conversation between a medical worker and a patient, the method comprising: classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of the conversation, into at least one of a plurality of categories distinguished according to the type of the utterance content to be uttered in the agreement acquisition process; and displaying at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
Appendix 17 A recording medium on which is recorded a computer program that causes a computer to execute a conversation support method of supporting a conversation between a medical staff and a patient, wherein the conversation support method comprises: classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of the conversation between the medical worker and the patient in an agreement acquisition process in which the medical worker explains an action to be performed on the patient and obtains an agreement with the patient about the action, into at least one of a plurality of categories distinguished according to the type of the utterance content to be uttered in the agreement acquisition process; and displaying at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
Appendix 18 A computer program that causes a computer to execute a conversation support method of supporting a conversation between a healthcare professional and a patient, wherein the conversation support method comprises: classifying each of a plurality of conversation texts, obtained by subdividing a text indicating the content of the conversation between the medical worker and the patient in an agreement acquisition process in which the medical worker explains an action to be performed on the patient and obtains an agreement with the patient about the action, into at least one of a plurality of categories distinguished according to the type of the utterance content to be uttered in the agreement acquisition process; and displaying at least a part of the plurality of conversation texts together with the category into which each conversation text is classified.
The present invention can be appropriately modified within the scope of the claims and within a range not contrary to the gist or idea of the invention that can be read from the entire specification, and a conversation support device, a conversation support system, a conversation support method, a computer program, and a recording medium accompanied by such modifications are also included in the technical idea of the present invention.

Abstract

A conversation support device (1) comprises: a conversion means (211) for converting voice data indicating, by voice, the content of a conversation between a medical worker and a patient into text data representing the content of the conversation as text, the conversation taking place between the medical worker and the patient in a consent acquisition process for obtaining an agreement to a medical act; a classification means (212) for classifying each of a plurality of conversation text sections, obtained by subdividing the text represented by the text data, into at least one category among a plurality of categories distinguished according to the type of utterance content to be uttered in the agreement acquisition process; and a display control means (213) for controlling a display device so that at least some of the plurality of conversation text sections are displayed together with the category into which each conversation text section has been classified.
PCT/JP2019/051090 2019-12-26 2019-12-26 Conversation support device, conversation support system, conversation support method, and recording medium WO2021130953A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021566679A JP7388450B2 (ja) 2019-12-26 2019-12-26 Conversation support device, conversation support system, conversation support method, and recording medium
PCT/JP2019/051090 WO2021130953A1 (fr) 2019-12-26 2019-12-26 Conversation support device, conversation support system, conversation support method, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/051090 WO2021130953A1 (fr) 2019-12-26 2019-12-26 Conversation support device, conversation support system, conversation support method, and recording medium

Publications (1)

Publication Number Publication Date
WO2021130953A1 (fr)

Family

ID=76575827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/051090 WO2021130953A1 (fr) 2019-12-26 2019-12-26 Conversation support device, conversation support system, conversation support method, and recording medium

Country Status (2)

Country Link
JP (1) JP7388450B2 (fr)
WO (1) WO2021130953A1 (fr)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6979819B2 (ja) 2017-07-24 2021-12-15 Sharp Corporation Display control device, display control method, and program
US11521722B2 (en) 2017-10-20 2022-12-06 Google Llc Capturing detailed structure from patient-doctor conversations for use in clinical documentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003178158A (ja) * 2001-12-07 2003-06-27 Canon Inc Record print service system that preserves third-party evidence materials
JP2005063162A (ja) * 2003-08-13 2005-03-10 Takashi Suzuki Informed consent record management device
JP2010197643A (ja) * 2009-02-25 2010-09-09 Gifu Univ Interactive learning system
JP2015138457A (ja) * 2014-01-23 2015-07-30 Canon Inc Information processing device, information processing method, and program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049971A (zh) * 2021-10-11 2022-02-15 北京左医科技有限公司 Medical teaching method and medical teaching device based on doctor-patient dialogue
EP4362033A1 (fr) * 2022-10-24 2024-05-01 Koninklijke Philips N.V. Patient consent
WO2024088756A1 (fr) * 2022-10-24 2024-05-02 Koninklijke Philips N.V. Patient consent

Also Published As

Publication number Publication date
JP7388450B2 (ja) 2023-11-29
JPWO2021130953A1 (fr) 2021-07-01

Similar Documents

Publication Publication Date Title
US20220020495A1 (en) Methods and apparatus for providing guidance to medical professionals
US11681356B2 (en) System and method for automated data entry and workflow management
JP6078057B2 Document augmentation in a dictation-based document generation workflow
US9070357B1 (en) Using speech analysis to assess a speaker's physiological health
US8312057B2 (en) Methods and system to generate data associated with a medical report using voice inputs
Pilnick et al. Advice, authority and autonomy in shared decision‐making in antenatal screening: the importance of context
US20130110547A1 (en) Medical software application and medical communication services software application
US20140365239A1 (en) Methods and apparatus for facilitating guideline compliance
US20140019128A1 (en) Voice Based System and Method for Data Input
US20060020493A1 (en) Ontology based method for automatically generating healthcare billing codes from a patient encounter
US11862164B2 (en) Natural language understanding of conversational sources
Pearce et al. Coding and classifying GP data: the POLAR project
US20230154575A1 (en) Systems and Methods for Mental Health Care Delivery Via Artificial Intelligence
CN111133521A System and method for automated retrieval and analysis of medical records
US20240105294A1 (en) De-duplication and contextually-intelligent recommendations based on natural language understanding of conversational sources
Falcetta et al. Automatic documentation of professional health interactions: a systematic review
WO2021130953A1 Conversation support device, conversation support system, conversation support method, and recording medium
Walker et al. Developing an intelligent virtual agent to stratify people with cognitive complaints: A comparison of human–patient and intelligent virtual agent–patient interaction
WO2014197669A1 Methods and apparatus for providing guidance to medical professionals
JP2010055146A Medical term translation display system
Dalmer Unsettling knowledge synthesis methods using institutional ethnography: Reflections on the scoping review as a critical knowledge synthesis tool
Duran et al. The quality of CLP-related information for patients provided by ChatGPT
Maas et al. Automated Medical Reporting: From Multimodal Inputs to Medical Reports through Knowledge Graphs.
TW202309917A Data analysis system and data analysis method
Badr Guidelines for Health IT Addressing the Quality of Data in EHR Information Systems.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19957691

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021566679

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19957691

Country of ref document: EP

Kind code of ref document: A1