WO2012032734A1 - Misdiagnosis cause detection device and misdiagnosis cause detection method - Google Patents
Misdiagnosis cause detection device and misdiagnosis cause detection method
- Publication number
- WO2012032734A1 (PCT/JP2011/004780)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interpretation
- image
- result
- diagnosis
- interpreter
- Prior art date
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/20—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
Definitions
- the present invention relates to a misdiagnosis cause detection device and a misdiagnosis cause detection method.
- misdiagnosis by doctors with little image interpretation experience has become a major problem due to the chronic shortage of interpreting doctors.
- “errors in presence diagnosis” and “errors in qualitative diagnosis” have a great influence on the prognosis of patients.
- an error in presence diagnosis means overlooking a lesion.
- an error in qualitative diagnosis means misjudging the nature of a detected lesion.
- measures against “errors in presence diagnosis” are taken with CAD (Computer-Aided Diagnosis) technology.
- measures against “errors in qualitative diagnosis” are taken through interpretation education by experienced doctors.
- when a veteran doctor educates a new doctor, the veteran doctor not only judges whether the diagnosis is correct, but also gives detailed instruction according to the cause of the new doctor's misdiagnosis. For example, when the determination of “cancer A” is wrong because the diagnosis flow for determining cancer A is wrong, the diagnosis flow is taught. On the other hand, when the recognition of the image pattern corresponding to the diagnosis flow of cancer A is wrong, the image pattern is taught.
- the technique of Patent Document 1 has the problem that the cause of a misdiagnosis cannot be detected.
- the first cause is that the association between the case and the diagnosis flow is incorrect.
- the second cause is that the association between the case and the image pattern is incorrect.
- the interpreter learns the diagnosis flow for each case and makes a diagnosis according to the diagnosis flow.
- diagnosis is performed while checking the diagnosis items included in the diagnosis flow.
- the interpreter directly associates and memorizes each case with its image pattern, and diagnoses by image pattern matching. That is, an interpreter's misdiagnosis occurs when incorrect knowledge is acquired in either of the learning processes described above.
- an object of the present invention is to provide a device for detecting a cause of misdiagnosis and a method for detecting the cause of misdiagnosis when an interpreter makes a misdiagnosis.
- a misdiagnosis cause detection device includes: an interpretation image presentation unit that presents to an interpreter a target interpretation image, which is the interpretation image of an interpretation report pairing an interpretation image for image diagnosis with a definitive diagnosis result for that image; an interpretation result acquisition unit that acquires a first interpretation result, which is the interpreter's interpretation result for the target interpretation image, and an interpretation time, which is the time the interpreter took to interpret the target interpretation image; an interpretation result determination unit that determines whether the first interpretation result is correct by comparing the definitive diagnosis result for the target interpretation image with the first interpretation result acquired by the interpretation result acquisition unit; and a teaching content attribute selection unit that, when the interpretation result determination unit determines that the first interpretation result is incorrect, executes at least one of (a) a first selection process that, if the interpretation time acquired by the interpretation result acquisition unit is longer than a threshold, selects, as the attribute of the teaching content presented to the interpreter, the attribute of teaching content for teaching the diagnosis flow of the case with the case name indicated by the first interpretation result, and (b) a second selection process that, if the interpretation time is equal to or less than the threshold, selects, as the attribute of the teaching content presented to the interpreter, the attribute of teaching content for teaching the image pattern of that case.
- the above-mentioned misdiagnosis causes can be classified using the interpretation time.
- when the association between a case and a diagnosis flow is wrong, the interpretation time is long, because the interpreter diagnoses while checking the diagnosis flow step by step.
- when the association between a case and an image pattern is wrong, the interpreter no longer needs to check the diagnosis flow and diagnoses mainly from the image pattern, so the interpretation time is short.
- in this way, teaching content that appropriately corrects the cause of the interpreter's misdiagnosis can be selected. Furthermore, the time the interpreter spends searching for teaching content to consult after a misdiagnosis is reduced, and the interpreter's learning time is shortened.
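As a rough illustration, the time-based selection rule described above might be sketched as follows. The function name, argument forms, and returned attribute labels are assumptions for illustration, not the patent's implementation:

```python
from typing import Optional


def select_teaching_content_attribute(first_interpretation_result: str,
                                      definitive_diagnosis_result: str,
                                      interpretation_time: float,
                                      threshold: float) -> Optional[str]:
    """Return the teaching content attribute to present, or None if correct."""
    if first_interpretation_result == definitive_diagnosis_result:
        return None  # not a misdiagnosis: no teaching content needed
    if interpretation_time > threshold:
        # Long interpretation time: the interpreter was stepping through
        # the diagnosis flow, so the flow itself is the likely cause.
        return "diagnosis_flow"
    # Short interpretation time: diagnosis was made mainly from the
    # memorized image pattern, so the pattern is the likely cause.
    return "image_pattern"
```

For example, a wrong answer given after a long deliberation (say 95 s against a 60 s threshold) would select the diagnosis flow attribute, while the same wrong answer given quickly would select the image pattern attribute.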
- the present invention can be realized not only as a misdiagnosis cause detection device including such characteristic processing units, but also as a misdiagnosis cause detection method whose steps correspond to the characteristic processing units included in the device. It can also be realized as a program that causes a computer to function as the characteristic processing units of the misdiagnosis cause detection device, or as a program that causes a computer to execute the characteristic steps of the misdiagnosis cause detection method. Needless to say, such a program can be distributed via a computer-readable non-volatile recording medium such as a CD-ROM (Compact Disc Read-Only Memory) or via a communication network such as the Internet.
- the cause of misdiagnosis can be detected.
- FIG. 1 is a block diagram showing a characteristic functional configuration of an image interpretation education apparatus according to Embodiment 1 of the present invention.
- FIG. 2A is a diagram illustrating an example of an ultrasound image as an interpretation image stored in an interpretation report database.
- FIG. 2B is a diagram illustrating an example of interpretation information stored in the interpretation report database.
- FIG. 3 is a diagram illustrating an example of an image presented by the interpretation image presenting unit.
- FIG. 4 is a diagram illustrating an example of a representative image and an interpretation flow.
- FIG. 5 is a diagram illustrating an example of a histogram of interpretation time.
- FIG. 6 is a diagram illustrating an example of a teaching content database.
- FIG. 7 is a flowchart showing a flow of overall processing executed by the image interpretation education apparatus according to Embodiment 1 of the present invention.
- FIG. 8 is a flowchart showing a detailed processing flow of the teaching content attribute selection process (step S105 in FIG. 7) by the teaching content attribute selection unit.
- FIG. 9 is a diagram illustrating an example of a screen output to the output medium by the output unit.
- FIG. 10 is a diagram illustrating an example of a screen output to the output medium by the output unit.
- FIG. 11 is a block diagram showing a characteristic functional configuration of the image interpretation education apparatus according to Embodiment 2 of the present invention.
- FIG. 12A is a diagram illustrating an example of a misdiagnosis location on an interpretation image.
- FIG. 12B is a diagram illustrating an example of a misdiagnosis location on the diagnosis flow.
- FIG. 13 is a flowchart showing the flow of overall processing executed by the image interpretation education apparatus according to Embodiment 2 of the present invention.
- FIG. 14 is a flowchart showing a detailed process flow of the misdiagnosis location extraction process (step S301 in FIG. 13) by the misdiagnosis location extraction unit.
- FIG. 15 is a diagram illustrating an example of representative images and diagnosis items of two cases.
- FIG. 16 is a diagram illustrating an example of a screen output to the output medium by the output unit.
- FIG. 17 is a diagram illustrating an example of a screen output to the output medium by the output unit.
- when a misdiagnosis occurs in the interpretation of an image such as an ultrasound image, a CT (Computed Tomography) image, or a nuclear magnetic resonance image, the misdiagnosis cause detection device determines, from the input diagnosis result (hereinafter also referred to as the “interpretation result”) and the diagnosis time (hereinafter also referred to as the “interpretation time”), whether the cause of the misdiagnosis lies in the image pattern or in the diagnosis flow, and presents teaching content suited to the cause of the interpreter's misdiagnosis.
- the misdiagnosis cause detection apparatus presents to the interpreter a target interpretation image, which is the interpretation image of an interpretation report pairing an interpretation image for image diagnosis with a definitive diagnosis result for that image.
- the second selection process selects, as the attribute of the teaching content presented to the interpreter, the attribute of teaching content for teaching the image pattern of the case with the case name indicated by the first interpretation result. The teaching content attribute selection unit executes at least one of the two selection processes.
- the cause of misdiagnosis can be classified using the interpretation time.
- when an error in the association between a case and a diagnosis flow occurs, the interpretation time is long, because the interpreter diagnoses while checking the diagnosis flow step by step.
- when an error in the association between a case and an image pattern occurs, it is considered that the interpreter has already finished learning the diagnosis flow and is familiar with it. The interpreter therefore no longer needs to check the diagnosis flow and diagnoses mainly from the image pattern, so the interpretation time is short.
- in this way, teaching content that appropriately corrects the cause of the interpreter's misdiagnosis can be selected. Furthermore, the time the interpreter spends searching for teaching content to consult after a misdiagnosis is reduced, and the interpreter's learning time is shortened.
- teaching content that appropriately corrects the diagnosis flow causing the misdiagnosis can thus be selected for the interpreter.
- the interpreter can immediately find the teaching content that teaches the diagnosis flow as the content to consult after a misdiagnosis, shortening the interpreter's learning time.
- when the interpretation time is less than or equal to the threshold, it can be determined that the misdiagnosis arose from an “incorrect association between a case and an image pattern”. The attribute of teaching content for teaching the image pattern can therefore be selected.
- teaching content that appropriately corrects the image pattern causing the misdiagnosis can thus be selected for the interpreter.
- the interpreter can immediately find the teaching content that teaches the image pattern as the content to consult after a misdiagnosis, shortening the interpreter's learning time.
- the interpretation report may further include a second interpretation result, which is an interpretation result already made for the interpretation image, and the interpretation image presentation unit may present to the interpreter the interpretation image of an interpretation report in which the definitive diagnosis result matches the second interpretation result.
- the misdiagnosis cause detection device described above further includes an output unit that acquires, from a teaching content database storing, for each case name, teaching content for teaching the diagnosis flow of the case and teaching content for teaching the image pattern of the case, the teaching content of the attribute selected by the teaching content attribute selection unit for the case with the case name indicated by the first interpretation result, and outputs the acquired teaching content.
- the interpretation report may further include a diagnosis result for each of a plurality of diagnosis items, the interpretation result acquisition unit may further acquire the diagnosis result made by the interpreter for each of the plurality of diagnosis items, and the misdiagnosis cause detection device may further include a misdiagnosis location extraction unit that extracts the diagnosis items for which the diagnosis result included in the interpretation report differs from the diagnosis result acquired by the interpretation result acquisition unit.
- the misdiagnosis cause detection device described above further includes an output unit that creates, from the teaching content database storing, for each case name, teaching content for teaching the diagnosis flow of the case and teaching content for teaching the image pattern of the case, teaching content in which the portion corresponding to the diagnosis item extracted by the misdiagnosis location extraction unit is emphasized, and outputs the created teaching content.
- the threshold value may be different for each case name indicated by the first interpretation result.
- hereinafter, the misdiagnosis cause detection apparatus and misdiagnosis cause detection method of the present invention will be described.
- the misdiagnosis cause detection device of the present invention is applied to an interpretation education device for an interpreter.
- the present invention can also be applied to devices other than the interpretation education device described in the following embodiments, for example, a device that detects the cause of a misdiagnosis and presents it to the interpreter while the interpreter is actually making a diagnosis by interpretation.
- FIG. 1 is a block diagram showing a characteristic functional configuration of an interpretation education apparatus 100 according to Embodiment 1 of the present invention.
- an interpretation education apparatus 100 is an apparatus that presents educational content according to an interpretation result of an interpreter.
- the interpretation education apparatus 100 includes an interpretation report database 101, an interpretation image presentation unit 102, an interpretation result acquisition unit 103, an interpretation result determination unit 104, a teaching content attribute selection unit 105, a teaching content database 106, and an output unit 107.
- the interpretation report database 101 is a storage device including, for example, a hard disk and a memory.
- the interpretation report database 101 is a database that stores an interpretation image to be presented to an interpreter and interpretation information corresponding to the interpretation image.
- the interpretation image is an image used for image diagnosis and indicates image data stored in an electronic medium.
- the interpretation information is information indicating not only the interpretation result of the interpretation image but also the result of definitive diagnosis such as biopsy performed after the image diagnosis.
- FIGS. 2A and 2B are diagrams showing, respectively, an example of an ultrasound image as the interpretation image 20 and an example of the interpretation information 21 stored in the interpretation report database 101.
- the interpretation information 21 includes a patient ID 22, an image ID 23, a definitive diagnosis result 24, an interpreter ID 25, a diagnostic item determination result 26, an image finding 27, and an interpretation time 28.
- Patient ID 22 indicates information for identifying a patient who is a subject of an interpretation image.
- the image ID 23 indicates information for identifying the interpretation image 20.
- the definitive diagnosis result 24 indicates the definitive diagnosis result of the patient indicated by the patient ID 22.
- the definitive diagnosis result is a diagnosis result that reveals the true state of the patient, obtained by microscopic pathological examination of a specimen taken by surgery or biopsy, or by various other means.
- the interpreter ID 25 indicates information for identifying the interpreter who interprets the interpretation image 20 with the image ID 23.
- the diagnosis item determination result 26 indicates information on the diagnosis result for each diagnosis item (described as item 1, item 2, etc. in FIG. 2B) defined for the interpretation image 20 with the image ID 23.
- the image finding 27 is information indicating a result of diagnosis performed by the image interpreter with the image interpreter ID 25 on the image interpretation image 20 with the image ID 23.
- the image findings 27 are information indicating a diagnosis result (interpretation result) including a disease name and a diagnosis reason (interpretation reason).
- the interpretation time 28 is information indicating the time from the start of interpretation to the end of interpretation.
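The interpretation information fields 22 to 28 listed above could be modeled, for illustration only, as a simple record; all field names below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class InterpretationInfo:
    patient_id: str             # 22: identifies the patient (subject of the image)
    image_id: str               # 23: identifies the interpretation image 20
    definitive_diagnosis: str   # 24: result of e.g. pathological examination
    interpreter_id: str         # 25: identifies the interpreting doctor
    item_results: dict          # 26: diagnosis result per diagnosis item
    image_findings: str         # 27: disease name and interpretation reason
    interpretation_time: float  # 28: seconds from start to end of interpretation
```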
- the interpretation report database 101 is provided in the interpretation education apparatus 100.
- the interpretation education apparatus to which the present invention is applied is not limited to such an interpretation education apparatus.
- the interpretation report database 101 may be provided on a server connected to the interpretation education apparatus via a network.
- the interpretation information 21 may be included as attached data in the interpretation image 20.
- the interpretation image presentation unit 102 acquires the interpretation image 20 that is the subject of the diagnostic test from the interpretation report database 101. Further, the interpretation image presentation unit 102 displays the acquired interpretation image 20 (target interpretation image) on a monitor such as a liquid crystal display (not shown) or a television together with an input form of diagnostic items and image findings for the interpretation image 20.
- FIG. 3 is a diagram showing an example of a presentation screen by the interpretation image presentation unit 102.
- the presentation screen includes the interpretation image 20 to be used for the diagnostic test, an input form for the diagnosis result of each diagnosis item (the diagnostic item input area 30 is an example), and an input form for image findings (the image finding input area 31 is an example).
- in the diagnostic item input area 30, items corresponding to the diagnostic item determination result 26 of the interpretation report database 101 are listed.
- in the image finding input area 31, items corresponding to the image findings 27 of the interpretation report database 101 are listed.
- the interpretation image presentation unit 102 may select only the interpretation image 20 in which the definitive diagnosis result 24 and the image findings 27 match when acquiring the interpretation image 20 to be subjected to the diagnostic test from the interpretation report database 101.
- in the interpretation report database 101, there may be an interpretation image 20 from which a lesion matching the definitive diagnosis result cannot be identified from the image alone, due to image noise or the characteristics of the imaging device.
- such an image is inappropriate for interpretation education, which aims to estimate the lesion using only the interpretation image 20.
- a case where the definitive diagnosis result 24 and the image finding 27 match, by contrast, guarantees that the same lesion as the definitive diagnosis can be pointed out from the interpretation image 20.
- the interpretation result acquisition unit 103 acquires the interpreter's interpretation result for the interpretation image 20 presented by the interpretation image presentation unit 102. For example, it acquires the information entered with a keyboard or mouse into the diagnostic item input area 30 and the image finding input area 31 and stores it in a memory or the like. The interpretation result acquisition unit 103 also acquires the time from when the interpreter starts interpretation until the interpreter finishes (the interpretation time), and outputs the acquired information and interpretation time to the interpretation result determination unit 104 and the teaching content attribute selection unit 105. The interpretation time is measured by a timer (not shown) provided in the interpretation education apparatus 100.
- the interpretation result determination unit 104 refers to the interpretation report database 101, compares the interpreter's interpretation result acquired from the interpretation result acquisition unit 103 with the interpretation information 21 recorded in the interpretation report database 101, and determines whether the interpreter's interpretation result is correct. Specifically, the interpretation result determination unit 104 compares the interpreter's input to the image finding input area 31, acquired from the interpretation result acquisition unit 103, with the definitive diagnosis result 24 of the interpretation image 20 acquired from the interpretation report database 101. It determines that the interpretation result is correct when they match, and that it is erroneous (a misdiagnosis) when they do not.
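A minimal sketch of this correct/incorrect determination, assuming a plain string comparison of the entered disease name against the stored definitive diagnosis (the whitespace/case normalization is an illustrative choice, not stated in the text):

```python
def is_interpretation_correct(entered_finding: str,
                              definitive_diagnosis: str) -> bool:
    """Compare the image finding entered by the interpreter with the
    definitive diagnosis result 24 from the interpretation report database."""
    return entered_finding.strip().lower() == definitive_diagnosis.strip().lower()
```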
- the teaching content attribute selection unit 105 selects the attribute of the teaching content to be presented to the interpreter, using the interpretation result and interpretation time acquired from the interpretation result acquisition unit 103 and the correct/incorrect determination acquired from the interpretation result determination unit 104. It then notifies the output unit 107 of the selected teaching content attribute. A specific selection method is described later; here, the teaching content attribute itself is explained.
- a teaching content attribute is identification information assigned to teaching content that teaches the accurate diagnosis method for a case.
- there are two types of teaching content attributes: the image pattern attribute and the diagnosis flow attribute.
- the teaching content to which the image pattern attribute is given indicates content related to the representative interpretation image 20 for the case name.
- the teaching content to which the diagnosis flow attribute is assigned indicates content related to the diagnosis flow for the case name.
- FIG. 4 is a diagram showing an example of image pattern attribute content and diagnostic flow attribute content for “case name: hard cancer”.
- the image pattern attribute content 40 is an interpretation image 20 showing a typical example of hard cancer.
- diagnosis flow attribute content 41 is a flowchart for diagnosing hard cancer.
- in FIG. 4(b), a diagnosis flow for diagnosing hard cancer is described in terms of the diagnosis items “border unclear” or “border clear and rough”, “front/rear tear”, “back echo weak”, and “internal echo extremely low”.
- misdiagnosis has roughly two causes. The first is an error in the association between a case and the diagnosis flow memorized by the interpreter. The second is an error in the association between a case and the image pattern memorized by the interpreter.
- the interpreter determines individual diagnosis items for the interpretation image 20, and then combines the determination results of the diagnosis items with reference to the diagnosis flow to make a final diagnosis.
- an interpreter who is not yet familiar with interpretation checks the diagnosis flow item by item, so interpretation takes time.
- an interpreter who has completed the first half of the learning process proceeds to the second half, in which the interpreter builds up, in his or her head, a typical image pattern for each case name from the determinations of typical diagnosis items, and diagnoses almost instantaneously by referring to the constructed image pattern.
- the interpretation time of an interpreter in the second half of learning is therefore shorter than that of an interpreter in the first half. By experiencing many interpretations of the same case, the interpreter becomes familiar with the diagnosis flow and no longer needs to consult it, and so diagnoses mainly from the image pattern.
- the interpretation education apparatus 100 determines whether the cause of the misdiagnosis is “an error in the association between a case and a diagnosis flow (diagnosis flow attribute)” or “an error in the association between a case and an image pattern (image pattern attribute)”.
- by presenting the interpreter with teaching content of the attribute corresponding to the cause of the misdiagnosis, the interpretation education apparatus 100 can provide content matched to the cause of the interpreter's misdiagnosis.
- FIG. 5 is a diagram showing a typical example of a histogram of interpretation time in the radiology department of a hospital.
- the frequency (number of interpretations) in the histogram is approximated by a curved waveform.
- the peak with the shorter interpretation time corresponds to diagnoses made from the image pattern
- the peak with the longer interpretation time corresponds to diagnoses made by following the diagnosis flow
- the teaching content database 106 is a database in which teaching content related to two attributes of an image pattern attribute and a diagnostic flow attribute selected by the teaching content attribute selection unit 105 is recorded.
- FIG. 6 is an example of the teaching content database 106.
- the teaching content database 106 includes a content attribute 60, a case name 61, and a content ID 62.
- the teaching content database 106 lists the content IDs 62 so that the content IDs 62 can be easily obtained from the content attributes 60 and the case names 61.
- for example, when the content attribute 60 is the diagnosis flow attribute and the case name 61 is "hard cancer", the content ID 62 of the teaching content is F_001.
- the teaching content corresponding to the content ID 62 is stored in the teaching content database 106.
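The lookup described above can be sketched as a mapping from a (content attribute 60, case name 61) pair to a content ID 62. The dictionary schema and the ID "P_001" below are illustrative assumptions; only F_001 appears in the source:

```python
# Hypothetical model of the teaching content database lookup of FIG. 6:
# content IDs are keyed by (content attribute, case name).
TEACHING_CONTENT_DB = {
    ("diagnosis_flow", "hard cancer"): "F_001",  # ID from the example above
    ("image_pattern", "hard cancer"): "P_001",   # assumed ID for illustration
}

def get_content_id(content_attribute, case_name):
    """Return the content ID for the given attribute and case name,
    or None if no teaching content is registered."""
    return TEACHING_CONTENT_DB.get((content_attribute, case_name))
```

With this shape, the output unit needs only the selected attribute and the misdiagnosed case name to retrieve the content.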
- the teaching content need not be stored in the teaching content database 106; it may be stored in, for example, an external server.
- the output unit 107 acquires, by referring to the teaching content database 106, the content ID corresponding to the content attribute selected by the teaching content attribute selection unit 105 and the case name misdiagnosed by the interpreter.
- the output unit 107 outputs the teaching content corresponding to the acquired content ID to the output medium.
- the output medium is a monitor such as a liquid crystal display or a television.
- FIG. 7 is a flowchart showing an overall flow of processing executed by the image interpretation education apparatus 100.
- the interpretation image presentation unit 102 acquires the interpretation image 20 that is the subject of the diagnostic test from the interpretation report database 101.
- the interpretation image presentation unit 102 presents the acquired interpretation image 20 to the interpreter by displaying it, together with an input form for diagnosis items and image findings for the interpretation image 20, on a monitor (not shown) such as a liquid crystal display or a television (step S101).
- the image interpretation image 20 to be processed may be selected by the image interpreter or may be selected randomly.
- the interpretation result acquisition unit 103 acquires the interpretation result of the interpreter for the interpretation image 20 presented by the interpretation image presentation unit 102.
- the interpretation result acquisition unit 103 stores information input from a keyboard, a mouse, or the like in a memory or the like. Then, the interpretation result acquisition unit 103 notifies the interpretation result determination unit 104 and the teaching content attribute selection unit 105 of the acquired input (step S102). Specifically, the interpretation result acquisition unit 103 acquires the information input to the diagnosis item input area 30 and the image findings input area 31 presented by the interpretation image presentation unit 102.
- the interpretation result acquisition unit 103 acquires the interpretation time.
- the interpretation result determination unit 104 refers to the interpretation report database 101 to compare the interpretation result of the interpreter acquired from the interpretation result acquisition unit 103 with the interpretation information 21 recorded in the interpretation report database 101.
- the interpretation result determination unit 104 determines from the comparison result whether the interpreter's interpretation result is correct (step S103). Specifically, the interpretation result determination unit 104 compares the interpreter's input to the image findings input area 31, acquired from the interpretation result acquisition unit 103, with the definitive diagnosis result 24 of the interpretation image 20 acquired from the interpretation report database 101.
- the interpretation result determination unit 104 determines that the interpretation result is correct when they match, and determines that the interpretation result is erroneous (misdiagnosis) when they do not match.
- for example, when the interpreter's image findings input acquired in step S102 for interpretation image A is "hard cancer" and the definitive diagnosis result acquired from the interpretation report database 101 is also "hard cancer", the two match, so it is determined that there is no misdiagnosis (correct answer).
- conversely, when the interpreter's image findings input acquired in step S102 for interpretation image A is "hard cancer" and the definitive diagnosis acquired from the interpretation report database 101 is other than "hard cancer", the two do not match, and a misdiagnosis is determined.
- when a plurality of diagnosis names are acquired in step S102, it may be determined that the answer is correct if any one of them matches the definitive diagnosis result acquired from the interpretation report database 101.
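The correctness check in step S103, including the note that a result with several diagnosis names counts as correct if any one matches, can be sketched as follows (function name is an illustrative assumption):

```python
def is_correct(entered_diagnoses, definitive_diagnosis):
    """Step S103 sketch: the interpretation is correct if any entered
    diagnosis name matches the definitive diagnosis result 24."""
    return definitive_diagnosis in entered_diagnoses

# Single entry that matches -> correct; mismatch -> misdiagnosis.
assert is_correct(["hard cancer"], "hard cancer")
assert not is_correct(["hard cancer"], "mucinous cancer")
# Several candidate names: correct if one of them matches.
assert is_correct(["cyst", "mucinous cancer"], "mucinous cancer")
```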
- when the teaching content attribute selection unit 105 is notified by the interpretation result determination unit 104 that a misdiagnosis occurred (Yes in step S104), it acquires the input to the image findings input area 31 and the interpretation time from the interpretation result acquisition unit 103. The teaching content attribute selection unit 105 then selects the teaching content attribute using the interpretation time, and notifies the output unit 107 of the selected attribute (step S105). Details of the teaching content attribute selection process (step S105) will be described later.
- the output unit 107 refers to the teaching content database 106 and acquires the content ID corresponding to the teaching content attribute selected by the teaching content attribute selection unit 105 and the case name misdiagnosed by the interpreter. The output unit 107 then acquires the teaching content corresponding to the acquired content ID from the teaching content database 106 and outputs it to the output medium (step S106).
- FIG. 8 is a flowchart showing a detailed processing flow of the teaching content attribute selection process (step S105 in FIG. 7) by the teaching content attribute selection unit 105.
- the teaching content attribute selection method using the interpretation time of the interpreter will be described with reference to FIG. 8.
- the teaching content attribute selection unit 105 acquires the image findings input by the interpreter from the interpretation result acquisition unit 103 (step S201).
- the teaching content attribute selection unit 105 acquires the interpretation time of the interpreter from the interpretation result acquisition unit 103 (step S202).
- the interpretation time of the interpreter may be measured by a timer provided inside the interpretation education apparatus 100. For example, the user presses a start button displayed on the screen at the start of interpretation (when an interpretation image is presented), and the user presses an end button displayed on the screen at the end of interpretation.
- the teaching content attribute selection unit 105 may acquire, as the interpretation time, the time counted by the timer from when the start button is pressed until the end button is pressed.
- the teaching content attribute selection unit 105 calculates a threshold of interpretation time for determining the teaching content attribute (step S203).
- one threshold calculation method is to create a histogram of interpretation times from the data recorded in the interpretation report database 101 and determine a discriminant threshold (see the non-patent document "Image Processing Handbook", p. 278, Shokodo, 1992). This sets the threshold in the valley between the two peaks of the histogram shown in FIG. 5.
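The discriminant threshold selection cited above can be sketched as follows — a minimal, dependency-free reimplementation for illustration only; the bin count, units, and function name are assumptions, not details from the cited handbook:

```python
def discriminant_threshold(times, n_bins=32):
    """Discriminant (Otsu-style) threshold over an interpretation-time
    histogram: pick the bin boundary that maximizes the between-class
    variance, which lands in the valley between the two peaks."""
    lo, hi = min(times), max(times)
    if hi == lo:
        return lo  # degenerate histogram: no valley to find
    width = (hi - lo) / n_bins
    hist = [0] * n_bins
    for t in times:
        hist[min(int((t - lo) / width), n_bins - 1)] += 1
    total = len(times)
    centers = [lo + (i + 0.5) * width for i in range(n_bins)]
    total_sum = sum(c * h for c, h in zip(centers, hist))
    best_threshold, best_var = lo, -1.0
    w0 = s0 = 0.0
    for i in range(n_bins - 1):  # candidate split after bin i
        w0 += hist[i]
        s0 += centers[i] * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = s0 / w0, (total_sum - s0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var = var_between
            best_threshold = lo + (i + 1) * width
    return best_threshold
```

For a bimodal set of times (a cluster of fast image-pattern readings and a cluster of slow diagnosis-flow readings), the returned threshold falls between the two clusters.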
- the threshold of interpretation time may be obtained for each case name judged by the interpreter.
- the frequency with which each diagnosis flow is used and the frequency with which each case occurs vary depending on the site to be examined or the case name. For this reason, interpretation times may differ.
- case names with a short diagnosis flow include some hard cancers and invasive lobular cancers. Because these case names can be determined from the boundary properties of the tumor alone, the time needed to determine them is shorter than for other case names.
- case names with a long diagnosis flow include some cysts and mucinous cancers. Because determining these case names requires the shape and aspect ratio in addition to the tumor boundary, their interpretation time is longer than that of the hard cancers and invasive lobular cancers above.
- the interpretation time varies depending on the occurrence frequency of case names. For example, the occurrence frequency of “hard cancer” in breast disease is about 30%, while the occurrence frequency of “medullary cancer” is about 0.5%. Cases with a high frequency of occurrence are frequently seen clinically, so that diagnosis can be made earlier, and the interpretation time is shortened compared with cases with a low frequency of occurrence.
- the accuracy of attribute classification can be improved by obtaining a threshold value for each part or for each case name.
- the threshold for the interpretation time may be calculated in synchronization with the update of the interpretation report database 101 and stored in the interpretation report database 101.
- the threshold calculation may be performed by the teaching content attribute selection unit 105 or by another processing unit. Precomputing the threshold in this way eliminates the need to recalculate it every time the interpreter inputs diagnosis items, shortening the processing time of the image interpretation education apparatus 100 and allowing the teaching content to be presented to the interpreter sooner.
- the teaching content attribute selecting unit 105 determines whether or not the interpretation time of the interpreter acquired in step S202 is greater than the threshold calculated in step S203 (step S204). If the interpretation time is greater than the threshold (Yes in step S204), the teaching content attribute selection unit 105 selects a diagnostic flow attribute as the teaching content attribute (step S205). On the other hand, when the interpretation time is equal to or less than the threshold (No in step S204), the teaching content attribute selection unit 105 selects an image pattern attribute as the teaching content attribute (step S206).
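The branch in steps S204 to S206 amounts to a single comparison; the attribute strings below are illustrative names, not identifiers from the source:

```python
def select_teaching_attribute(interpretation_time, threshold):
    """Steps S204-S206: a long interpretation time suggests the interpreter
    relied on the diagnosis flow, a short one on a memorized image pattern."""
    if interpretation_time > threshold:
        return "diagnosis_flow"  # Yes in S204 -> step S205
    return "image_pattern"       # No in S204 (time <= threshold) -> step S206
```

Note that a time exactly equal to the threshold selects the image pattern attribute, matching the "equal to or less than" condition in the text.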
- the teaching content attribute selection unit 105 can select the teaching content attribute according to the cause of misdiagnosis of the interpreter.
- FIG. 9 is a diagram illustrating an example of a screen output to the output medium by the output unit 107 when the teaching content attribute selection unit 105 selects an image pattern attribute.
- the output unit 107 presents an interpretation image that the interpreter makes a determination error, an interpretation result (interpreter answer) of the interpreter, and a definitive diagnosis result (correct answer).
- the output unit 107 presents a representative image of the case name corresponding to the interpreter's answer along with the presentation.
- since the image pattern attribute was selected, the interpreter is considered to be familiar with the diagnosis flow of "hard cancer"; the representative image is therefore presented to correct the memorized image pattern.
- FIG. 10 is a diagram illustrating an example of a screen output to the output medium by the output unit 107 when the teaching content attribute selection unit 105 selects the diagnosis flow attribute.
- as in FIG. 9, the output unit 107 presents the interpretation image that the interpreter misjudged, the interpreter's interpretation result (the interpreter's answer), and the definitive diagnosis result (the correct answer).
- the output unit 107 presents the diagnosis flow of the case name corresponding to the interpreter's answer along with the presentation.
- a misdiagnosis occurs because the interpreter's association with the diagnosis flow for "hard cancer" is incorrect. Therefore, by presenting the correct diagnosis flow for "hard cancer", the interpreter's answer, the interpreter's mistaken knowledge of the "hard cancer" diagnosis flow can be corrected.
- the image interpretation education apparatus 100 can present teaching content according to the cause of the interpreter's misdiagnosis by executing steps S101 to S106 shown in FIG. 7. This shortens the learning time and allows an efficient interpretation method to be learned.
- with the image interpretation education apparatus 100, the cause of misdiagnosis can be determined using the interpreter's interpretation time, and teaching content can be selected automatically according to the determined cause. The interpreter can therefore learn the interpretation method efficiently, without being given unnecessary teaching content.
- Embodiment 2: An image interpretation education apparatus according to Embodiment 2 of the present invention will be described.
- in Embodiment 1, the image interpretation education apparatus 100 classifies the cause of an interpreter's misdiagnosis into one of the two attributes "diagnosis flow attribute" and "image pattern attribute" using the interpretation time, and presents the teaching content corresponding to that attribute.
- the image interpretation education apparatus according to the present embodiment additionally emphasizes, in the teaching content presented when the interpreter misdiagnoses, the location that caused the misdiagnosis (the misdiagnosis location).
- the background of the problem to be solved by this embodiment is as follows. For example, when an interpreter misdiagnoses "papillary duct cancer" as "hard cancer" in ultrasound image diagnosis of the mammary gland, the differences between the diagnosis flow of "hard cancer" and that of "papillary duct cancer" include "internal echo", "posterior echo", and "boundary properties". To learn the interpretation method correctly, the interpreter must recognize all of these differences. However, simply presenting the diagnosis flows of hard cancer and papillary duct cancer side by side risks the interpreter overlooking some of the differences and failing to learn correctly. Moreover, the act of searching for the differences between the two diagnosis flows itself increases the learning time and lowers learning efficiency.
- the image interpretation education apparatus according to the present embodiment can present teaching content with the interpreter's misdiagnosis location emphasized, improving learning efficiency.
- FIG. 11 is a block diagram showing a characteristic functional configuration of the image interpretation education apparatus 200 according to Embodiment 2 of the present invention.
- in FIG. 11, the same components as those in FIG. 1 are given the same reference numerals, and their descriptions are omitted.
- the image interpretation education apparatus 200 includes the interpretation report database 101, interpretation image presentation unit 102, interpretation result acquisition unit 103, interpretation result determination unit 104, teaching content attribute selection unit 105, teaching content database 106, output unit 107, and a misdiagnosis location extraction unit 201.
- the difference between the image interpretation education apparatus 200 shown in FIG. 11 and the image interpretation education apparatus 100 shown in FIG. 1 is that the former has the misdiagnosis location extraction unit 201, which extracts the interpreter's misdiagnosis location from the results input to the diagnosis item input area 30 and acquired from the interpretation result acquisition unit 103.
- the misdiagnosis location extraction unit 201 includes, for example, a CPU and a memory storing a program executed by the CPU.
- based on the determination results input to the diagnosis item input area 30 and acquired by the interpretation result acquisition unit 103, and on the diagnosis item determination results 26 included in the interpretation information 21 stored in the interpretation report database 101, the misdiagnosis location extraction unit 201 extracts the interpreter's misdiagnosis location and notifies the output unit 107. A specific misdiagnosis location extraction method will be described later.
- a misdiagnosis location is defined as a diagnosis item, or a region on the representative image, at which the misdiagnosis occurred during the interpretation process.
- the interpretation process can be roughly divided into two processes, “visual recognition” and “diagnosis”.
- a misdiagnosis location in the visual recognition process corresponds to a specific image region on the interpretation image 20, and a misdiagnosis location in the diagnosis process corresponds to a specific diagnosis item in the diagnosis flow.
- FIG. 12A and FIG. 12B show an example of a misdiagnosis location in a mammary gland ultrasound image.
- as shown in FIG. 12A, the misdiagnosis location on the interpretation image 20 is indicated as the misdiagnosis location 70, the corresponding image region. As shown in FIG. 12B, the misdiagnosis location on the diagnosis flow is indicated as the misdiagnosis location 71, the misjudged diagnosis item.
- presenting misdiagnosis locations reduces the time the interpreter needs to find them, improving learning efficiency.
- FIG. 13 is a flowchart showing the overall flow of processing executed by the image interpretation education apparatus 200. In FIG. 13, steps in which the same processing as in the image interpretation education apparatus 100 of Embodiment 1 shown in FIG. 7 is performed are given the same step numbers, and their descriptions are omitted.
- the processing of the image interpretation education apparatus 200 differs from that of the image interpretation education apparatus 100 of Embodiment 1 in that it extracts the interpreter's misdiagnosis location from the determination results input to the diagnosis item input area 30 and acquired from the interpretation result acquisition unit 103. The other processes are the same as in the image interpretation education apparatus 100 of Embodiment 1. Specifically, in FIG. 13, the processing of steps S101 to S105 executed by the image interpretation education apparatus 200 is the same as that of the image interpretation education apparatus 100 of Embodiment 1 shown in FIG. 7.
- the misdiagnosis location extraction unit 201 extracts the misdiagnosis location of the interpreter using the determination result input to the diagnosis item input area 30 acquired from the interpretation result acquisition unit 103 (step S301).
- the output unit 107 acquires the teaching content from the teaching content database 106 and outputs it to the output medium, similarly to step S106 shown in FIG. However, the output unit 107 performs output after emphasizing the misdiagnosis location extracted by the misdiagnosis location extraction unit 201 when outputting the teaching content (step S302). A specific example of emphasis will be described later.
- FIG. 14 is a flowchart showing a detailed process flow of the process (step S301 in FIG. 13) by the misdiagnosis point extraction unit 201.
- a method for extracting the interpreter's misdiagnosis location will be described with reference to FIG. 14.
- the misdiagnosis location extraction unit 201 acquires the determination result input to the diagnostic item input area 30 from the interpretation result acquisition unit 103 (step S401).
- the misdiagnosis location extraction unit 201 acquires the diagnosis item determination result 26 having the same image findings 27 as the confirmed diagnosis result 24 for the interpretation image to be diagnosed from the interpretation report database 101 (step S402).
- the misdiagnosis location extraction unit 201 extracts the diagnosis items for which the determination results input to the diagnosis item input area 30 by the interpreter, acquired in step S401, differ from the diagnosis item determination results 26 acquired in step S402 (step S403). That is, the misdiagnosis location extraction unit 201 extracts these differing diagnosis items as misdiagnosis locations.
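The extraction in step S403 is a per-item comparison of two sets of determinations. A sketch, with each set of diagnosis items modeled as a dictionary (an assumption; the patent does not specify a data structure):

```python
def extract_misdiagnosis_items(interpreter_results, correct_results):
    """Step S403 sketch: return the diagnosis items whose determination in
    the interpreter's input differs from the diagnosis item determination
    result 26 of the confirmed diagnosis."""
    return [item for item, value in correct_results.items()
            if interpreter_results.get(item) != value]

# Illustrative values modeled on the A cancer / B cancer example:
a_cancer = {"internal echo": "low", "posterior echo": "attenuated",
            "boundary": "irregular"}
b_cancer = {"internal echo": "very low", "posterior echo": "unchanged",
            "boundary": "irregular"}
```

Here "internal echo" and "posterior echo" differ while "boundary" matches, so the first two would be extracted as misdiagnosis locations.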
- FIG. 15 shows representative images of “A cancer” and “B cancer” and examples of diagnostic items.
- the difference processing in step S403 will be described with reference to FIG. 15. Suppose the interpreter misdiagnoses as A cancer an interpretation image whose correct answer is B cancer. To identify which parts were learned incorrectly, it suffices to extract the diagnosis items whose determination results differ between the diagnosis items judged by the interpreter for the misdiagnosed A cancer and the diagnosis items of the correct B cancer. In the example of FIG. 15, the internal echo 80 and the posterior echo 81, the diagnosis items whose determination results differ, are extracted as misdiagnosis locations. That is, regarding the internal echo 80, the determination for A cancer is "low" while that for B cancer is "very low", so the internal echo 80 is extracted as a misdiagnosis location. Regarding the posterior echo 81, the determination for A cancer is "attenuated" while that for B cancer is "unchanged", so the posterior echo 81 is extracted as a misdiagnosis location.
- the misdiagnosis location extraction unit 201 can extract the misdiagnosis location of the interpreter.
- the processing by the output unit 107 (step S302 in FIG. 13) will be described with a specific example.
- FIG. 16 is a diagram illustrating an example of a screen output to the output medium by the output unit 107 when a misdiagnosis location is extracted by the misdiagnosis location extraction unit 201.
- the output unit 107 emphasizes and presents an image region corresponding to a misdiagnosis location that is a difference in diagnosis items from a correct case on a representative image of a case name misdiagnosed by an interpreter.
- in FIG. 16, the image regions corresponding to "posterior echo" and "internal echo", the diagnosis items whose determination results differ between "hard cancer" and "invasive ductal carcinoma", are highlighted with arrows.
- the position information of the image region to be emphasized may be stored in the teaching content database 106 in association with the diagnosis item in advance.
- the output unit 107 refers to the teaching content database 106 based on the misdiagnosis location (diagnosis item) extracted by the misdiagnosis location extraction unit 201, acquires the position information of the image region to be emphasized, and highlights the region based on the acquired position information.
- the position information of the image region to be emphasized may be stored in a place other than the teaching content database 106, or may not be stored anywhere; in the latter case, the output unit 107 may detect the image region to be emphasized by image processing.
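One way to realize the position lookup described above is to key highlight regions by (case name, diagnosis item). The coordinates and schema below are purely illustrative assumptions:

```python
# Hypothetical store of highlight regions (x, y, width, height) keyed by
# (case name, diagnosis item), standing in for the position information
# associated with diagnosis items in the teaching content database 106.
HIGHLIGHT_REGIONS = {
    ("hard cancer", "posterior echo"): (120, 180, 40, 30),
    ("hard cancer", "internal echo"): (110, 140, 60, 35),
}

def regions_to_highlight(case_name, misdiagnosis_items):
    """Collect the stored image regions for the extracted misdiagnosis
    items; items without stored positions are skipped (the output unit
    could fall back to image processing for those)."""
    return [HIGHLIGHT_REGIONS[(case_name, item)]
            for item in misdiagnosis_items
            if (case_name, item) in HIGHLIGHT_REGIONS]
```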
- FIG. 17 is a diagram illustrating an example of a screen output to the output medium by the output unit 107 when the misdiagnosis location extraction unit 201 extracts a misdiagnosis location.
- the output unit 107 emphasizes, on the diagnosis flow of the case name misdiagnosed by the interpreter, the diagnosis flow regions corresponding to the diagnosis items that differ from the correct case.
- in FIG. 17, the diagnosis flow regions corresponding to "posterior echo" and "internal echo", the diagnosis items whose determination results differ between "hard cancer" and "invasive ductal carcinoma", are highlighted by being enclosed in broken lines and indicated with arrows.
- in this way, the image interpretation education apparatus 200 can present the interpreter's misdiagnosis location via the output unit 107, reducing both oversight of misdiagnosis locations and search time, and improving learning efficiency.
- the essential components of the image interpretation education apparatus are the interpretation image presentation unit 102, the interpretation result acquisition unit 103, the interpretation result determination unit 104, and the teaching content attribute selection unit 105; the other components are not necessarily required.
- each of the above devices may be specifically configured as a computer system including a microprocessor, ROM, RAM, hard disk drive, display unit, keyboard, mouse, and the like.
- a computer program is stored in the RAM or hard disk drive.
- Each device achieves its functions by the microprocessor operating according to the computer program.
- the computer program is configured by combining a plurality of instruction codes indicating instructions for the computer in order to achieve a predetermined function.
- the system LSI is a super multifunctional LSI manufactured by integrating a plurality of components on a single chip; specifically, it is a computer system including a microprocessor, a ROM, a RAM, and the like.
- a computer program is stored in the RAM.
- the system LSI achieves its functions by the microprocessor operating according to the computer program.
- each of the above-described devices may be configured from an IC card or a single module that can be attached to and detached from each device.
- the IC card or module is a computer system that includes a microprocessor, ROM, RAM, and the like.
- the IC card or the module may include the super multifunctional LSI described above.
- the IC card or the module achieves its function by the microprocessor operating according to the computer program. This IC card or this module may have tamper resistance.
- the present invention may be the method described above. Further, the present invention may be a computer program that realizes these methods by a computer, or may be a digital signal composed of the computer program.
- the present invention may also be realized as a computer-readable non-volatile recording medium on which the computer program or the digital signal is recorded, such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray Disc (registered trademark)), or a semiconductor memory.
- the digital signal may be recorded on these non-volatile recording media.
- the computer program or the digital signal may be transmitted via an electric communication line, a wireless or wired communication line, a network represented by the Internet, data broadcasting, or the like.
- the present invention may also be a computer system including a microprocessor and a memory.
- the memory may store the computer program, and the microprocessor may operate according to the computer program.
- the present invention can be used as a device for detecting the cause of misdiagnosis from the interpretation input of the interpreter.
Description
FIG. 1 is a block diagram showing a characteristic functional configuration of the image interpretation education apparatus 100 according to Embodiment 1 of the present invention. As shown in FIG. 1, the image interpretation education apparatus 100 is an apparatus that presents educational content according to the interpretation result of an interpreter. The image interpretation education apparatus 100 includes an interpretation report database 101, an interpretation image presentation unit 102, an interpretation result acquisition unit 103, an interpretation result determination unit 104, a teaching content attribute selection unit 105, a teaching content database 106, and an output unit 107.
An image interpretation education apparatus according to Embodiment 2 of the present invention will be described.
FIG. 11 is a block diagram showing a characteristic functional configuration of the image interpretation education apparatus 200 according to Embodiment 2 of the present invention. In FIG. 11, the same components as those in FIG. 1 are given the same reference numerals, and their descriptions are omitted.
21 Interpretation information
22 Patient ID
23 Image ID
24 Definitive diagnosis result
25 Interpreter ID
26 Diagnosis item determination result
27 Image findings
28 Interpretation time
40 Image pattern attribute content
41 Diagnosis flow attribute content
60 Content attribute
61 Case name
62 Content ID
70, 71 Misdiagnosis location
80 Internal echo
81 Posterior echo
100, 200 Image interpretation education apparatus
101 Interpretation report database
102 Interpretation image presentation unit
103 Interpretation result acquisition unit
104 Interpretation result determination unit
105 Teaching content attribute selection unit
106 Teaching content database
107 Output unit
201 Misdiagnosis location extraction unit
Claims (8)
- 1. A misdiagnosis cause detection apparatus comprising: an interpretation image presentation unit that presents to an interpreter a target interpretation image, the target interpretation image being the interpretation image of an interpretation report that is a pair of an interpretation image for image diagnosis and a definitive diagnosis result for the interpretation image; an interpretation result acquisition unit that acquires a first interpretation result, which is the interpreter's interpretation result for the target interpretation image, and an interpretation time, which is the time the interpreter required to interpret the target interpretation image; an interpretation result determination unit that determines whether the first interpretation result is correct by comparing the definitive diagnosis result for the target interpretation image with the first interpretation result acquired by the interpretation result acquisition unit; and a teaching content attribute selection unit that, when the interpretation result determination unit determines that the first interpretation result is erroneous, executes at least one of (a) a first selection process of selecting, as an attribute of teaching content to be presented to the interpreter, an attribute of teaching content for teaching the diagnosis flow of the case with the case name indicated by the first interpretation result when the interpretation time acquired by the interpretation result acquisition unit is greater than a threshold, and (b) a second selection process of selecting, as the attribute of the teaching content to be presented to the interpreter, an attribute of teaching content for teaching the image pattern of the case with the case name indicated by the first interpretation result when the interpretation time is equal to or less than the threshold.
- 2. The misdiagnosis cause detection apparatus according to claim 1, wherein the interpretation report further includes a second interpretation result that is an interpretation result previously made for the interpretation image, and the interpretation image presentation unit presents to the interpreter the interpretation image included in an interpretation report in which the definitive diagnosis result and the second interpretation result match.
- 3. The misdiagnosis cause detection apparatus according to claim 1 or 2, further comprising an output unit that acquires, from a teaching content database in which, for each case name, teaching content for teaching the diagnosis flow of the case with that case name and teaching content for teaching the image pattern of the case with that case name are stored, the teaching content of the attribute selected by the teaching content attribute selection unit for the case with the case name indicated by the first interpretation result, and outputs the acquired teaching content.
- 4. The misdiagnosis cause detection apparatus according to claim 1 or 2, wherein the interpretation report further includes a diagnosis result for each of a plurality of diagnosis items, the interpretation result acquisition unit further acquires a diagnosis result made by the interpreter for each of the plurality of diagnosis items, and the misdiagnosis cause detection apparatus further comprises a misdiagnosis location extraction unit that extracts diagnosis items for which the diagnosis result included in the interpretation report differs from the diagnosis result acquired by the interpretation result acquisition unit.
- 5. The misdiagnosis cause detection apparatus according to claim 4, further comprising an output unit that acquires, from a teaching content database in which, for each case name, teaching content for teaching the diagnosis flow of the case with that case name and teaching content for teaching the image pattern of the case with that case name are stored, the teaching content of the attribute selected by the teaching content attribute selection unit for the case with the case name indicated by the first interpretation result, creates teaching content in which the locations corresponding to the diagnosis items extracted by the misdiagnosis location extraction unit are emphasized, and outputs the created teaching content.
- 6. The misdiagnosis cause detection apparatus according to claim 1, wherein the threshold differs for each case name indicated by the first interpretation result.
- 7. A misdiagnosis cause detection method comprising: an interpretation image presentation step in which a computer presents to an interpreter a target interpretation image, the target interpretation image being the interpretation image of an interpretation report that is a pair of an interpretation image for image diagnosis and a definitive diagnosis result for the interpretation image; an interpretation result acquisition step in which a computer acquires a first interpretation result, which is the interpreter's interpretation result for the target interpretation image, and an interpretation time, which is the time the interpreter required to interpret the target interpretation image; an interpretation result determination step in which a computer determines whether the first interpretation result is correct by comparing the definitive diagnosis result for the target interpretation image with the first interpretation result acquired in the interpretation result acquisition step; and a teaching content attribute selection step in which, when the first interpretation result is determined to be erroneous in the interpretation result determination step, a computer executes at least one of (a) a first selection process of selecting, as an attribute of teaching content to be presented to the interpreter, an attribute of teaching content for teaching the diagnosis flow of the case with the case name indicated by the first interpretation result when the interpretation time acquired in the interpretation result acquisition step is greater than a threshold, and (b) a second selection process of selecting, as the attribute of the teaching content to be presented to the interpreter, an attribute of teaching content for teaching the image pattern of the case with the case name indicated by the first interpretation result when the interpretation time is equal to or less than the threshold.
- 8. A program for causing a computer to execute: an interpretation image presentation step of presenting to an interpreter a target interpretation image, the target interpretation image being the interpretation image of an interpretation report that is a pair of an interpretation image for image diagnosis and a definitive diagnosis result for the interpretation image; an interpretation result acquisition step of acquiring a first interpretation result, which is the interpreter's interpretation result for the target interpretation image, and an interpretation time, which is the time the interpreter required to interpret the target interpretation image; an interpretation result determination step of determining whether the first interpretation result is correct by comparing the definitive diagnosis result for the target interpretation image with the first interpretation result acquired in the interpretation result acquisition step; and a teaching content attribute selection step of, when the first interpretation result is determined to be erroneous in the interpretation result determination step, executing at least one of (a) a first selection process of selecting, as an attribute of teaching content to be presented to the interpreter, an attribute of teaching content for teaching the diagnosis flow of the case with the case name indicated by the first interpretation result when the interpretation time acquired in the interpretation result acquisition step is greater than a threshold, and (b) a second selection process of selecting, as the attribute of the teaching content to be presented to the interpreter, an attribute of teaching content for teaching the image pattern of the case with the case name indicated by the first interpretation result when the interpretation time is equal to or less than the threshold.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180007768.6A CN102741849B (zh) | 2010-09-07 | 2011-08-29 | 误诊原因检测装置以及误诊原因检测方法 |
JP2011553996A JP4945705B2 (ja) | 2010-09-07 | 2011-08-29 | 誤診原因検出装置及び誤診原因検出方法 |
US13/454,239 US20120208161A1 (en) | 2010-09-07 | 2012-04-24 | Misdiagnosis cause detecting apparatus and misdiagnosis cause detecting method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-200373 | 2010-09-07 | ||
JP2010200373 | 2010-09-07 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/454,239 Continuation US20120208161A1 (en) | 2010-09-07 | 2012-04-24 | Misdiagnosis cause detecting apparatus and misdiagnosis cause detecting method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012032734A1 true WO2012032734A1 (ja) | 2012-03-15 |
Family
ID=45810345
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/004780 WO2012032734A1 (ja) | 2010-09-07 | 2011-08-29 | Misdiagnosis cause detecting apparatus and misdiagnosis cause detecting method |
Country Status (4)
Country | Link |
---|---|
US (1) | US20120208161A1 (ja) |
JP (1) | JP4945705B2 (ja) |
CN (1) | CN102741849B (ja) |
WO (1) | WO2012032734A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016177418A (ja) * | 2015-03-19 | 2016-10-06 | Konica Minolta, Inc. | Interpretation result evaluation apparatus and program |
JP2017107553A (ja) * | 2015-12-09 | 2017-06-15 | J-MAC System, Inc. | Interpretation training support apparatus, interpretation training support method, and interpretation training support program |
JP2020537233A (ja) * | 2017-10-06 | 2020-12-17 | Koninklijke Philips N.V. | Addendum-based report quality scorecard generation |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9355569B2 (en) * | 2012-08-30 | 2016-05-31 | Picmonic Inc. | Systems, methods, and computer program products for providing a learning aid using pictorial mnemonics |
CN102982242A (zh) * | 2012-11-28 | 2013-03-20 | Xuzhou Medical College | Intelligent error-alerting system for medical image reading |
CN103595775B (zh) | 2013-11-04 | 2018-01-19 | Huizhou TCL Mobile Communication Co., Ltd. | Media file sharing method and system |
CN106415555B (zh) * | 2013-11-26 | 2019-10-18 | Koninklijke Philips N.V. | System and method for associating pathology reports with radiology reports |
WO2015134668A1 (en) * | 2014-03-04 | 2015-09-11 | The Regents Of The University Of California | Automated quality control of diagnostic radiology |
JP6525527B2 (ja) * | 2014-08-07 | 2019-06-05 | Canon Inc. | Interpretation report creation support apparatus, interpretation report creation support method, and program |
US11902396B2 (en) | 2017-07-26 | 2024-02-13 | Amazon Technologies, Inc. | Model tiering for IoT device clusters |
US10980085B2 (en) * | 2017-07-26 | 2021-04-13 | Amazon Technologies, Inc. | Split predictions for IoT devices |
US11108575B2 (en) | 2017-07-26 | 2021-08-31 | Amazon Technologies, Inc. | Training models for IOT devices |
US11611580B1 (en) | 2020-03-02 | 2023-03-21 | Amazon Technologies, Inc. | Malware infection detection service for IoT devices |
US12041094B2 (en) | 2020-05-01 | 2024-07-16 | Amazon Technologies, Inc. | Threat sensor deployment and management |
US12058148B2 (en) | 2020-05-01 | 2024-08-06 | Amazon Technologies, Inc. | Distributed threat sensor analysis and correlation |
US11489853B2 (en) | 2020-05-01 | 2022-11-01 | Amazon Technologies, Inc. | Distributed threat sensor data aggregation and data export |
US11989627B1 (en) | 2020-06-29 | 2024-05-21 | Amazon Technologies, Inc. | Automated machine learning pipeline generation |
WO2022013265A1 (en) * | 2020-07-16 | 2022-01-20 | Koninklijke Philips N.V. | Automatic certainty evaluator for radiology reports |
CN118116584A (zh) * | 2024-04-23 | 2024-05-31 | Dingtai (Nanjing) Clinical Medical Research Co., Ltd. | Adjustable big-data-based medical auxiliary diagnosis system and method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006277219A (ja) * | 2005-03-29 | 2006-10-12 | Konica Minolta Medical & Graphic Inc | Medical image interpretation system |
JP2010057727A (ja) * | 2008-09-04 | 2010-03-18 | Konica Minolta Medical & Graphic Inc | Medical image interpretation system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6301462B1 (en) * | 1999-01-15 | 2001-10-09 | Unext. Com | Online collaborative apprenticeship |
JP2007275408A (ja) * | 2006-04-10 | 2007-10-25 | Fujifilm Corp | Similar image search apparatus, method, and program |
JP5337992B2 (ja) * | 2007-09-26 | 2013-11-06 | FUJIFILM Corporation | Medical information processing system, medical information processing method, and program |
JP2009082182A (ja) * | 2007-09-27 | 2009-04-23 | Fujifilm Corp | Examination work support apparatus and method, and examination work support system |
JP2009078085A (ja) * | 2007-09-27 | 2009-04-16 | Fujifilm Corp | Medical image processing system, medical image processing method, and program |
US20110039249A1 (en) * | 2009-08-14 | 2011-02-17 | Ronald Jay Packard | Systems and methods for producing, delivering and managing educational material |
CN101706843B (zh) * | 2009-11-16 | 2011-09-07 | Hangzhou Dianzi University | Interactive reading method for breast CR images |
2011
- 2011-08-29 WO PCT/JP2011/004780 patent/WO2012032734A1/ja active Application Filing
- 2011-08-29 CN CN201180007768.6A patent/CN102741849B/zh active Active
- 2011-08-29 JP JP2011553996A patent/JP4945705B2/ja active Active

2012
- 2012-04-24 US US13/454,239 patent/US20120208161A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006277219A (ja) * | 2005-03-29 | 2006-10-12 | Konica Minolta Medical & Graphic Inc | Medical image interpretation system |
JP2010057727A (ja) * | 2008-09-04 | 2010-03-18 | Konica Minolta Medical & Graphic Inc | Medical image interpretation system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016177418A (ja) * | 2015-03-19 | 2016-10-06 | Konica Minolta, Inc. | Interpretation result evaluation apparatus and program |
JP2017107553A (ja) * | 2015-12-09 | 2017-06-15 | J-MAC System, Inc. | Interpretation training support apparatus, interpretation training support method, and interpretation training support program |
JP2020537233A (ja) * | 2017-10-06 | 2020-12-17 | Koninklijke Philips N.V. | Addendum-based report quality scorecard generation |
JP7319256B2 (ja) | Addendum-based report quality scorecard generation |
Also Published As
Publication number | Publication date |
---|---|
JP4945705B2 (ja) | 2012-06-06 |
CN102741849B (zh) | 2016-03-16 |
US20120208161A1 (en) | 2012-08-16 |
CN102741849A (zh) | 2012-10-17 |
JPWO2012032734A1 (ja) | 2014-01-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4945705B2 (ja) | Misdiagnosis cause detecting apparatus and misdiagnosis cause detecting method | |
KR102043130B1 (ko) | Computer-aided diagnosis method and apparatus | |
CN104584018B (zh) | Automatic detection and retrieval of prior annotations relevant to an imaging study for efficient viewing and reporting | |
AU2004266022B2 (en) | Computer-aided decision support systems and methods | |
JP5475923B2 (ja) | Similar case search apparatus and similar case search method | |
JP5100285B2 (ja) | Medical diagnosis support apparatus, control method therefor, program, and storage medium | |
US20110137132A1 (en) | Mammography Information System | |
US20120166211A1 (en) | Method and apparatus for aiding imaging diagnosis using medical image, and image diagnosis aiding system for performing the method | |
JP2006500124A (ja) | Method and system for reading medical images guided by computer-aided detection (CAD) | |
JP2009516551A (ja) | Method and system for quantitative and qualitative computer-aided analysis of medical images | |
JP2007279942A (ja) | Similar case search apparatus, similar case search method, and program therefor | |
KR102049336B1 (ko) | Computer-aided diagnosis apparatus and method | |
JP2000276587A (ja) | Abnormal shadow detection processing method and system | |
US12046367B2 (en) | Medical image reading assistant apparatus and method providing hanging protocols based on medical use artificial neural network | |
US20220172826A1 (en) | Medical image reading assistant apparatus and method for adjusting threshold of diagnostic assistant information based on follow-up examination | |
US11996182B2 (en) | Apparatus and method for medical image reading assistant providing representative image based on medical use artificial neural network | |
CN100507928C (zh) | Computer-aided detection system and method for ensuring manual review of computer marks in medical images | |
JP2007275440A (ja) | Similar image search apparatus, method, and program | |
JP7452068B2 (ja) | Information processing apparatus, information processing method, and program | |
JP6448588B2 (ja) | Medical diagnosis support apparatus, medical diagnosis support system, information processing method, and program | |
JP2013041428A (ja) | Medical diagnosis support apparatus and medical diagnosis support method | |
EP3362925B1 (en) | Systems and methods for generating correct radiological recommendations | |
JP6316325B2 (ja) | Information processing apparatus, operation method of information processing apparatus, and information processing system | |
JP2009045110A (ja) | Image processing apparatus, image processing method, and image processing program | |
JP5279996B2 (ja) | Image extraction apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 201180007768.6; Country of ref document: CN |
WWE | Wipo information: entry into national phase | Ref document number: 2011553996; Country of ref document: JP |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11823215; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 11823215; Country of ref document: EP; Kind code of ref document: A1 |