WO2019180120A1 - Medical radiology report validation and augmentation system - Google Patents


Info

Publication number
WO2019180120A1
Authority
WO
WIPO (PCT)
Prior art keywords
anatomical structures
computing device
patient body
data
report
Prior art date
Application number
PCT/EP2019/057051
Other languages
French (fr)
Inventor
Prescott Peter KLASSEN
Lyubomir Georgiev Zagorchev
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2019180120A1 publication Critical patent/WO2019180120A1/en

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • A61B5/055Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves  involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00Arrangements or instruments for measuring magnetic variables
    • G01R33/20Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48NMR imaging systems
    • G01R33/54Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
    • G01R33/56Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R33/5608Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00Medical imaging apparatus involving image processing or analysis

Definitions

  • the present disclosure relates generally to radiological examination, and in particular, to devices, systems, and methods for augmenting and validating findings in a radiology report.
  • Embodiments of the present disclosure are configured to validate and augment findings in a radiology report.
  • MR: magnetic resonance
  • a radiology report of findings and impressions is prepared by the radiologist.
  • the same MR image is automatically segmented by a computing device to delineate the geometries of the anatomical structures in the MR image.
  • the geometries of these anatomical structures are compared to normative data or the patient’s historical data to determine features or anomalies concerning some of the anatomical structures.
  • the radiology report is analyzed by the computing device to identify text descriptions associated with some anatomical structures.
  • the computing device then compares the features determined by the computing device with the text descriptions in the radiology report to generate a comparison result that may include inconsistencies and omissions.
  • the computing device of the present disclosure can also output a visual representation of the comparison result to a display device for the radiologist’s review.
  • the embodiments of the present disclosure augment the radiology report, help control the quality of the radiology report, and aid in identifying good practices in radiology examinations.
  • a method for magnetic resonance (MR) examination includes receiving, using a computing device, MR data of a patient body obtained from a magnetic resonance imaging (MRI) device, the patient body including a plurality of anatomical structures; segmenting, using the computing device, the MR data of the patient body to obtain geometries of the plurality of anatomical structures; comparing, using the computing device, the geometries of the plurality of anatomical structures to reference image data to determine features associated with the plurality of anatomical structures; receiving, at the computing device, a report comprising findings representative of the plurality of anatomical structures; analyzing, by the computing device, the report to identify, within the findings, text descriptions associated with the plurality of anatomical structures; comparing, by the computing device, the determined features associated with the plurality of anatomical structures with the identified text descriptions associated with the plurality of anatomical structures to generate a comparison result; and outputting, to a display device in communication with the computing device, a visual representation of the comparison result.
  • the visual representation of the comparison result includes an indication of an error in the analyzing of the report or the comparing of the determined features with the identified text descriptions. In some embodiments, the visual representation includes an indication of an inconsistency between one of the determined features and one of the identified text descriptions. In some implementations, the visual representation includes an indication of one of the determined features that does not correspond to one of the identified text descriptions. In some implementations, the visual representation includes additional information associated with the determined features and not described in the identified text descriptions. In some embodiments, analyzing the report includes parsing the report. In some embodiments, analyzing the report includes recognizing text in the report.
  • segmenting the MR data of the patient body includes receiving a three-dimensional (3D) model of the patient body, and segmenting the MR data of the patient body based on the 3D model of the patient body.
  • the patient body is a brain and the 3D model of the patient body is a shape-constrained deformable brain model.
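The shape-constrained deformation underlying such a segmentation can be caricatured in a few lines. The sketch below is a toy illustration under assumed forces, not the model of the cited references: each mesh vertex is pulled toward boundary evidence detected in the image, while a shape term pulls it back toward the reference model so the result stays anatomically plausible.

```python
import numpy as np

def deform_model(mean_shape, boundary_targets, alpha=0.5, beta=0.3, iters=50):
    """Toy shape-constrained deformation (illustrative only).

    mean_shape       : (N, 3) vertices of the reference brain model
    boundary_targets : (N, 3) boundary points detected in the MR image
    alpha            : pull toward image evidence (external force)
    beta             : pull back toward the mean shape (shape constraint)
    """
    v = np.asarray(mean_shape, dtype=float).copy()
    targets = np.asarray(boundary_targets, dtype=float)
    for _ in range(iters):
        external = targets - v       # move toward the image boundary
        internal = mean_shape - v    # resist implausible deformation
        v += alpha * external + beta * internal
    return v
```

The iteration converges to a weighted equilibrium between the image evidence and the prior shape, which is the intuition behind keeping the segmentation "shape-constrained."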
  • a magnetic resonance (MR) examination system includes a computing device.
  • the computing device is operable to receive MR data of a patient body obtained from a magnetic resonance imaging (MRI) device, the patient body including a plurality of anatomical structures; segment the MR data of the patient body to obtain geometries of the plurality of anatomical structures; compare the geometries of the plurality of anatomical structures to reference image data to determine features associated with the plurality of anatomical structures; receive a report comprising findings representative of the plurality of anatomical structures; analyze the report to identify, within the findings, text descriptions associated with the plurality of anatomical structures; compare the determined features with the identified text descriptions to generate a comparison result; and output, to a display device, a visual representation of the comparison result.
  • MRI: magnetic resonance imaging
  • the computing device is in communication with the MRI device. In some embodiments, the computing device is operable to control the MRI device to obtain the MR data of the patient body.
  • the system further includes the MRI device. In some embodiments, the system further includes the display device. In some implementations, the visual representation of the comparison result includes an indication of an error in analyzing of the report or comparing of the determined features with the identified text descriptions. In some implementations, the visual representation includes an indication of an inconsistency between one of the determined features and one of the identified text descriptions. In some embodiments, the visual representation includes an indication of one of the determined features that does not correspond to one of the identified text descriptions. In some implementations, the visual representation includes additional information associated with the determined features and not described in the identified text descriptions.
  • FIG. 1 is a schematic diagram of a system for MR examination, according to aspects of the present disclosure.
  • FIG. 2 is a flowchart illustrating a method of performing MR examinations, according to aspects of the present disclosure.
  • FIG. 3 is a schematic diagram of a 3D brain model of a human brain, according to aspects of the present disclosure.
  • FIG. 4 is an MR image of a patient’s brain overlaid with a segmented model of an anatomy, according to aspects of the present disclosure.
  • FIG. 5 is a schematic diagram of a report being analyzed for anatomical structures and text descriptions, according to aspects of the present disclosure.
  • FIG. 6 is a schematic table illustrating an exemplary visual representation of a comparison between the features determined by the computing device and the text descriptions in a report, according to aspects of the present disclosure.
  • FIG. 7 is an annotated MR image illustrating an exemplary visual representation of a comparison between the features determined by the computing device and the text description in a report, according to aspects of the present disclosure.
  • FIG. 8 is an annotated report illustrating another exemplary visual representation of a comparison between the features determined by the computing device and the text description in a report, according to aspects of the present disclosure.
  • Referring to FIG. 1, shown therein is a schematic diagram of a system 100 for MR examination.
  • the system 100 includes a computing device 120 connected to a magnetic resonance imaging (MRI) device 110, a user input device 130, and a display device 140.
  • the computing device 120 includes an imaging processor 121, a language engine 122, a graphics engine 123 and a database 124.
  • the computing device 120 can be a workstation or a controller that serves as an interface between the MRI device 110 and the display device 140.
  • the user input device 130 serves as an interface between a user and the computing device 120 and allows the user to interact with the computing device 120 by entering user inputs.
  • the user input device 130 can be at least one of a keyboard, a camera, a scanner, a mouse, a touchpad, a trackpad, a touchscreen mounted on the display device 140, a communication port, a USB port, a hand gesture control device, a virtual reality glove, or another input device.
  • the computing device 120 performs several functions.
  • the computing device 120 can receive magnetic resonance (MR) data from the MRI device 110, process the same by use of the imaging processor 121 and output MR image data to the display device 140 for display of the MR images.
  • the imaging processor 121 of the computing device 120 can automatically segment anatomical structures in received MR data based on a segmentation protocol.
  • the imaging processor 121 can automatically segment the anatomic structures in the MR data based on a three-dimensional (3D) model of a brain or a patient body.
  • the computing device 120 of the system 100 receives a 3D brain model from a storage media or through wired or wireless connection to a server or a remote workstation.
  • the 3D brain model can be stored in the database 124 or a storage device retrievable by the computing device 120.
  • the 3D brain model is a shape-constrained deformable brain model.
  • the 3D brain model may be the brain model described in “Evaluation of traumatic brain injury patients using a shape-constrained deformable model,” by L. Zagorchev, C. Meyer, T. Stehle, R. Kneser, S. Young and J. Weese, 2011, in Multimodal Brain Image Analysis, Liu T., Shen D., Ibanez L., Tao X. (eds), MBIA 2011, Lecture Notes in Computer Science, vol. 7012.
  • the 3D brain model may be the deformable brain model described in U.S. Pat. No. 9,256,951, titled “SYSTEM FOR RAPID AND ACCURATE QUANTITATIVE ASSESSMENT OF TRAUMATIC BRAIN INJURY,” or the shape-constrained deformable brain model described in U.S. Pat. App. Pub. No. 20150146951, titled “METHOD AND SYSTEM FOR QUANTITATIVE EVALUATION OF IMAGE SEGMENTATION.”
  • the computing device 120 can obtain the geometries of these anatomical structures in the MR data by delineating boundaries of these anatomical structures.
  • the database 124 of the computing device 120 stores normative data of volumes, shapes and other attributes of human anatomical structures.
  • the normative data is organized by gender, race, and age such that a patient’s imaging data can be better compared to normative data pools more germane to his or her gender, race and age groups.
  • the database 124 stores historical radiological imaging data from the patient’s prior scans.
  • the imaging processor 121 can compare the geometries of anatomical structures from segmentation of the MR data to the normative data or historical radiological imaging data of the patient to determine features associated with one or more anatomical structures.
  • the imaging processor 121 can detect that an anatomical structure of the patient is substantially smaller/larger than a normative anatomical structure for his/her age, gender and race group and determine that size deviation is a feature. For another example, upon the comparison, the imaging processor 121 can detect that an anatomical structure of the patient increases/decreases in volume by 10% when compared to a prior scan performed nine weeks earlier and determine that such abnormal volume increase/decrease is a feature. It is noted that while the database 124 is depicted as an integrated element of the computing device 120, the database 124 can be a remote database or server connected to the computing device 120 by wire or wirelessly. In some embodiments, the database 124 can be a cloud-based service provided by a third-party medical database service provider.
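The feature-determination step just described can be sketched as follows. The z-score criterion and both thresholds are illustrative assumptions; the disclosure gives the 10% volume change only as an example, not a fixed rule.

```python
def detect_features(volume, norm_mean, norm_std, prior_volume=None,
                    z_thresh=2.0, change_thresh=0.10):
    """Return feature strings for one anatomical structure (sketch).

    volume        : measured volume from segmentation (e.g., in mm^3)
    norm_mean/std : normative statistics for the matching
                    gender/race/age pool
    prior_volume  : volume from the patient's prior scan, if available
    z_thresh      : assumed deviation threshold in standard deviations
    change_thresh : assumed longitudinal change threshold (10% example)
    """
    features = []
    z = (volume - norm_mean) / norm_std
    if abs(z) >= z_thresh:
        features.append("smaller than normative pool" if z < 0
                        else "larger than normative pool")
    if prior_volume is not None:
        change = (volume - prior_volume) / prior_volume
        if abs(change) >= change_thresh:
            features.append(f"volume changed {change:+.0%} vs prior scan")
    return features
```

Looking up `norm_mean`/`norm_std` from pools organized by gender, race, and age mirrors the normative-data organization described above.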
  • the computing device 120 can receive a radiology report prepared by a radiologist and analyze the radiology report by use of the language engine 122.
  • the radiology report includes various findings and impressions of the radiologist after he or she examines the MR images that can be displayed on the display device 140.
  • the radiology report can be a handwritten or computer generated hardcopy or a computer readable soft copy.
  • the user input device 130 can be a camera or a scanner that captures an image of the hardcopy.
  • the language engine 122 can then operate to recognize the text in the captured image of the hardcopy and convert the same into a form readable by the language engine 122.
  • the radiology report is a computer readable softcopy
  • the text recognition operation can be omitted.
  • the computer readable or recognizable text is then analyzed or parsed by the language engine 122 to identify text descriptions associated with anatomical structures of the patient.
  • the language engine 122 may come across an error in analyzing the radiology report.
  • the language engine 122 may record an error.
  • the radiology report is generated by an interactive computer interface where the radiologist picks an anatomical structure from a pull-down list of selections and then chooses one or more findings from a pull-down list of selections. When the radiology report is generated through such an interactive computer interface, no text recognition or parsing operations are needed as the text descriptions are automatically associated with the selected anatomical structures.
  • the computing device 120, by use of the language engine 122 or the imaging processor 121, can compare the features determined by the imaging processor 121 with the text descriptions identified by the language engine 122 on an anatomical-structure-by-anatomical-structure level.
  • the computing device 120 can record inconsistencies between the features and the text description with respect to a given anatomical structure. In cases where the imaging processor 121 determines a feature associated with an anatomical structure while the radiology report is completely silent on the same anatomical structure, the computing device 120 can record an omission from the radiology report. In some embodiments, when the text descriptions of the radiology report and the feature determined by the imaging processor 121 are consistent, the computing device 120 can record additional information obtained or detected by the imaging processor 121.
  • the imaging processor 121 can record additional information such as the percentage of volumetric decrease as compared to imaging data obtained nine weeks earlier.
  • the computing device 120 by use of the imaging processor 121 or the language engine 122, can generate a comparison result that includes the recorded inconsistencies, omissions and additional information.
  • the comparison result generated by either the imaging processor 121 or the language engine 122 can be sent to the graphics engine 123 for generation of a visual representation of the comparison result.
  • the graphics engine 123 can overlay text and schematics over an MR image to indicate the recorded inconsistencies, omissions and additional information.
  • the graphics engine 123 can generate a comparison chart or comparison table laying out the comparison result with respect to each of the anatomical structures.
  • the graphics engine 123 can overlay text and schematics over the image of the radiology report to indicate the recorded inconsistencies, omissions and additional information.
  • the graphics engine 123 can generate hyperlinks or pop-up dialog boxes and incorporate them in the visual representations. By clicking on the hyperlinks or pop-up dialog boxes, a user or a radiologist can learn more about the recorded inconsistencies, omissions and additional information.
  • the visual representation may include an indication of an error that the language engine 122 comes across when analyzing or parsing the radiology report.
  • FIG. 2 is a flowchart illustrating a method 200 of performing MR examinations.
  • the method 200 includes operations 202, 203, 204, 206, 208, 210, 212, and 214. It is understood that the operations of method 200 may be performed in a different order than shown in FIG. 2, additional operations can be provided before, during, and after the operations, and/or some of the operations described can be replaced or eliminated in other embodiments.
  • the operations of the method 200 can be carried out by a computing device in a radiological imaging system, such as the computing device 120 of the system 100. The method 200 will be described below with reference to FIGS. 3, 4, 5, 6, 7 and 8.
  • MR data of a patient body is obtained.
  • the MR data is obtained from the MRI device 110, and in some embodiments, the MRI device 110 is in communication with the computing device 120.
  • the operation 202 also includes controlling, using the computing device 120, the MRI device 110 to obtain the MR data of the patient body.
  • the patient body can be a part of a patient’s body or an organ of a patient.
  • the operations of the method 200 will be described based on MR examination of a patient’s brain.
  • the MR data of the patient body includes a plurality of anatomical structures. In the case of a brain, the MR data of the brain includes anatomical structures of a human brain.
  • the computing device 120 receives the MR data of the patient body. In some embodiments, the computing device 120 receives the MR data of the patient body from the MRI device 110.
  • the MR data of the patient body is segmented by the imaging processor 121 of the computing device 120 to obtain geometries of the plurality of anatomical structures.
  • the MR data of the patient body can be automatically segmented by the imaging processor 121 based on a segmentation protocol.
  • the MR data of the patient body can be automatically segmented based on a three-dimensional (3D) model of the patient body, such as the 3D brain model 300 shown in FIG. 3.
  • the computing device 120 of the system 100 receives a 3D brain model from a storage media or through wired or wireless connection to a server or a remote workstation.
  • the 3D brain model can be stored in the database 124 or a storage device retrievable by the computing device 120.
  • the 3D brain model is a shape-constrained deformable brain model.
  • the 3D brain model may be the brain model described in “Evaluation of traumatic brain injury patients using a shape-constrained deformable model,” by L. Zagorchev, C. Meyer, T. Stehle, R. Kneser, S. Young and J. Weese, 2011, in Multimodal Brain Image Analysis, Liu T., Shen D., Ibanez L., Tao X. (eds), MBIA 2011, Lecture Notes in Computer Science, vol. 7012.
  • the 3D brain model may be the deformable brain model described in U.S. Pat. No. 9,256,951, titled “SYSTEM FOR RAPID AND ACCURATE QUANTITATIVE ASSESSMENT OF TRAUMATIC BRAIN INJURY,” or the shape-constrained deformable brain model described in U.S. Pat. App. Pub. No. 20150146951, titled “METHOD AND SYSTEM FOR QUANTITATIVE EVALUATION OF IMAGE SEGMENTATION,” each of which is hereby incorporated by reference in its entirety.
  • the segmentation at operation 204 is exemplarily illustrated in FIG. 4.
  • FIG. 4 shows segmentation of an amygdala-hippocampal complex 410 (AHC 410) in an MR image 400.
  • the segmentation based on a 3D brain model delineates the boundary of the AHC 410 so that the geometry of the AHC 410 can be obtained.
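Once the boundary of a structure such as the AHC 410 is delineated, a geometric attribute like volume follows from simple voxel arithmetic. A minimal sketch, assuming the segmentation is available as a binary voxel mask with known voxel spacing:

```python
import numpy as np

def structure_volume(mask, spacing=(1.0, 1.0, 1.0)):
    """Volume of a segmented structure from a binary voxel mask.

    mask    : 3D boolean array, True inside the delineated boundary
    spacing : voxel size in mm along each axis
    """
    voxel_volume = float(np.prod(spacing))   # mm^3 per voxel
    return int(mask.sum()) * voxel_volume
```

Surface area, centroid, or shape descriptors can be derived from the same mask; volume is singled out here because the disclosure's examples compare volumes against normative and prior-scan data.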
  • the geometries of the plurality of anatomical structures are compared, by use of the imaging processor 121, to reference image data to determine features associated with the plurality of anatomical structures.
  • the reference image data are stored in the database 124 of the computing device 120.
  • the reference image data can include normative data of volumes, shapes and other attributes of human anatomical structures.
  • the normative data is organized by gender, race, and age such that a patient’s imaging data can be better compared to normative data pools more germane to his or her gender, race and age groups.
  • the reference image data include historical radiological imaging data from the patient’s prior scans.
  • the imaging processor 121 can compare the geometries of anatomical structures from segmentation of the MR data to the reference image data to determine features associated with one or more anatomical structures of the patient. For example, upon such comparison, the imaging processor 121 can detect that an anatomical structure of the patient is substantially smaller/larger than a normative anatomical structure for his/her age, gender and race group and determine that size deviation is a feature associated with the anatomical structure. For another example, upon the comparison, the imaging processor 121 can detect that an anatomical structure of the patient increases/decreases in volume by 10% when compared to a prior scan performed nine weeks earlier and determine that such abnormal volume increase/decrease is a feature associated with the anatomical structure.
  • a radiology report is received by the computing device 120.
  • the radiology report includes findings representative of or associated with the plurality of anatomical structures or at least some of the anatomical structures in the MR image.
  • An exemplary radiology report 500 can be found in FIG. 5. Besides the patient’s information and identification, the radiology report 500 includes various findings and impressions of the radiologist after he or she examines the MR images.
  • the radiology report 500 can be a handwritten or computer generated hardcopy or a computer readable soft copy. In cases where the radiology report 500 is a hardcopy, the radiology report 500 is received by the computing device 120 by use of the user input device 130, such as a camera or a scanner.
  • the camera or scanner can capture a scanned image of the radiology report 500.
  • the radiology report 500 is received by the computing device 120 by use of the user input device 130, such as a communication port, a USB port or by a wired or wireless connection to a database or server where the radiology report 500 is stored.
  • the radiology report, such as one similar to the radiology report 500, is analyzed by the language engine 122 of the computing device 120 to identify, within the findings of the radiology report, text descriptions associated with the plurality of anatomical structures or at least some of the anatomical structures in the MR image.
  • the language engine 122 can operate to recognize the text in the scanned image of the hardcopy and convert the same into a form readable by the language engine 122.
  • the radiology report is a computer readable softcopy
  • the text recognition operation can be omitted.
  • the computer readable or recognizable text is then analyzed or parsed by the language engine 122 to identify text descriptions associated with anatomical structures of the patient.
  • the language engine 122 can identify anatomical structures or anatomies referred to in the radiology report 500.
  • the identification of anatomical structures (or anatomies) can be based on a comparison with a dictionary or a list of anatomical structures stored in the database 124.
  • the dictionary or list can include shorthands and synonyms to ensure correct identification of the anatomical structures or anatomies. As shown in FIG. 5, the language engine 122 identifies Anatomy A 501, Anatomy B 502, Anatomy C 503, Anatomy D 504, Anatomy E 505, Anatomy F 506, Anatomy G 507, Anatomy H 508 and Anatomy I 509 in the Findings section. In some instances, anatomical structures are discussed or mentioned in the Impression section. As shown in FIG. 5, the language engine 122 also identifies Anatomy J 510, Anatomy K 511, and Anatomy L 512 in the Impression section. In some implementations, the language engine 122 may come across an error when analyzing the radiology report and can record the error.
  • the language engine 122 can resolve the context and identify text descriptions (or descriptions) associated with the identified anatomical structures. As shown in the illustrative example in FIG. 5, the language engine 122 identifies Description I 520, Description II 521, Description III 522, Description IV 523, Description V 524, Description VI 525, and Description VII 526 associated with Anatomy A 501, Anatomy B 502, Anatomy E 505, Anatomy G 507, Anatomy J 510, Anatomy K 511, and Anatomy L 512, respectively.
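The dictionary-based identification with shorthands and synonyms, followed by association of descriptions with anatomies, might look like the sketch below. The synonym table and the one-sentence-of-context rule are assumptions for illustration, not the actual language engine 122.

```python
import re

# Hypothetical anatomy dictionary mapping shorthands and synonyms
# to canonical names (the stored dictionary/list described above).
ANATOMY_DICT = {
    "hippocampus": "hippocampus",
    "ahc": "amygdala-hippocampal complex",
    "amygdala-hippocampal complex": "amygdala-hippocampal complex",
    "lateral ventricle": "lateral ventricle",
}

def identify_descriptions(report_text):
    """Map each anatomy mentioned in the report to the sentence
    describing it (the 'text description' for that anatomy)."""
    found = {}
    # crude sentence split on terminal punctuation
    for sentence in re.split(r"(?<=[.!?])\s+", report_text):
        lower = sentence.lower()
        for term, canonical in ANATOMY_DICT.items():
            if re.search(rf"\b{re.escape(term)}\b", lower):
                found[canonical] = sentence.strip()
    return found
```

For reports produced through the interactive pull-down interface mentioned below, this parsing is unnecessary: the anatomy-to-description mapping is already explicit.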
  • the radiology report is generated by an interactive computer interface where the radiologist picks an anatomical structure from a pull down list of selections and then chooses one or more findings from a pull-down list of selections.
  • the radiology report is generated through such an interactive computer interface, no text recognition or parsing operations are needed as the text descriptions are automatically associated with the selected anatomical structures.
  • the features determined by the imaging processor 121 are compared with the identified text descriptions in the radiology report, on an anatomical-structure-by-anatomical-structure basis.
  • the comparison at operation 212 can be performed either by the imaging processor 121 or the language engine 122.
  • the computing device 120 is referred to as the device that performs the operation 212.
  • the computing device 120 can define the universe of comparison by identifying all of the anatomical structures ever mentioned in the radiology report or depicted in the MR data. Once the universe of comparison is defined, the computing device 120 can go through all the anatomical structures when comparing the determined features and identified text descriptions.
  • the computing device 120 can record inconsistencies between the determined features and the identified text description with respect to a given anatomical structure. In cases where the imaging processor 121 determines a feature associated with an anatomical structure while the radiology report is completely silent on the same anatomical structure, the computing device 120 can record an omission from the radiology report. In some embodiments, when the text descriptions of the radiology report and the feature determined by the imaging processor 121 are consistent, the computing device 120 can record additional information obtained or detected by the imaging processor 121. For example, in cases where both text descriptions in the radiology report and the determined feature consistently indicate a volumetric decrease of an anatomical structure, the computing device 120 can record additional information such as percentage of volumetric decrease as compared to imaging data obtained 9 weeks ago. At operation 212, the computing device 120, by use of the imaging processor 121 or the language engine 122, can generate a comparison result that includes the recorded inconsistencies, omissions and additional information.
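The structure-by-structure bookkeeping of inconsistencies, omissions, and additional information can be sketched as below. The keyword-overlap consistency test is a stand-in assumption for whatever matching the language engine actually performs.

```python
def compare(features, descriptions):
    """Compare determined features with report text descriptions.

    features     : dict anatomy -> list of feature strings (operation 206)
    descriptions : dict anatomy -> report sentence (operation 210)
    Returns the comparison result: inconsistencies, omissions,
    and additional information, as described in the text.
    """
    result = {"inconsistency": [], "omission": [], "additional": []}
    for anatomy, feats in features.items():
        if not feats:
            continue                              # nothing notable detected
        if anatomy not in descriptions:
            result["omission"].append(anatomy)    # report silent on anatomy
        else:
            text = descriptions[anatomy].lower()
            for feat in feats:
                # crude consistency check: any shared keyword
                if any(word in text for word in feat.lower().split()):
                    result["additional"].append((anatomy, feat))
                else:
                    result["inconsistency"].append((anatomy, feat))
    return result
```

When report and system agree, the feature (e.g., the exact percentage of volumetric decrease) is retained as additional information rather than discarded, matching the behavior described above.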
  • a visual representation of the comparison result is output to a display device, such as the display device 140.
  • the comparison result generated by either the imaging processor 121 or the language engine 122 at operation 212 can be sent to the graphics engine 123 for generation of a visual representation of the comparison result, and then the visual representation of the comparison result can be output to the display device 140.
  • the graphics engine 123 can generate a comparison chart or comparison table laying out the comparison result with respect to each of the anatomical structures.
  • An exemplary comparison table 600 is illustrated in FIG. 6.
  • the comparison table 600 can be a visual representation of a comparison result involving the radiology report 500 in FIG. 5.
  • the comparison table 600 includes five columns: “Regions,” “Report,” “System,” “Alert,” and “Additional Information.”
  • the entries in the“Regions” column represent the universe of the comparison performed in operation 212.
  • the “Regions” column includes Anatomy A through Anatomy P, whose geometries are obtained by the imaging processor 121 at operation 204.
  • the radiology report 500 in FIG. 5 only mentions Anatomies A through L.
  • the comparison table 600 therefore shows N/A (Not Available) for Anatomies M, N, O, and P.
  • the visual representation may include an indication of an error that the language engine 122 comes across when analyzing or parsing the radiology report. The indication of error can alert the user or radiologist of possible errors to watch out for.
  • the comparison table 600 shows a corresponding text description in the “Report” column.
  • the radiology report 500 includes Description I associated with Anatomy A and the comparison table 600 shows Description I in the row for Anatomy A.
  • the radiology report 500 may indicate certain anatomies to be normal.
  • the comparison table 600 shows “Normal” in the corresponding rows of the “Report” column. Here, the radiology report 500 finds Anatomy C normal and “Normal” appears in the row for Anatomy C.
  • the “System” column of the comparison table 600 includes the features determined by the imaging processor 121 at operation 206. If the imaging processor 121 determines a feature associated with a given anatomical structure or anatomy, the feature may be entered in the corresponding row. For example, as “Feature 4” is determined by the imaging processor 121 as being associated with Anatomy E, “Feature 4” is entered into the row for Anatomy E. In instances where no feature is determined, “neutral” is entered into the corresponding row.
  • The “Alert” column is used to show the comparison result generated at operation 212. When a description in the “Report” column is inconsistent with a feature in the “System” column, the phrase “Possible Inconsistency” is entered into the corresponding row of the “Alert” column.
  • An example is shown in the row for Anatomy B.
  • the phrase “Possible Omission” may appear in the corresponding row of the “Alert” column.
  • An example is shown in the row for Anatomy D.
  • operation 206 may generate additional information that can augment the radiology report 500 or otherwise be helpful to the radiologist.
  • An example is the “Additional Information” shown in the row for Anatomy G.
  • the content and layout of the comparison table 600, including the choice of words and phrases, are only exemplary and should not be construed as limiting. People of ordinary skill in the art, upon examination of the present disclosure, may construct a comparison table with different styles, different layouts, and different phrases to visually represent the omissions, inconsistencies and additional information.
  • the graphics engine 123 can overlay text and schematics over an MR image 700 to indicate the recorded inconsistencies, omissions and additional information. As shown in FIG. 7, the graphics engine 123 can overlay a dotted oval 701 over an anatomy of the patient body to show a feature omitted from the radiology report or otherwise indicated as normal in the radiology report. In FIG. 7, the dotted oval 701 indicates that the choroid plexus of the lateral ventricle of the patient decreases in size as compared to the MR image taken nine weeks prior. By pointing out the omission or inconsistency, the system 100 and method 200 of the present disclosure afford the radiologist another opportunity to review the findings in the radiology report. If the radiologist does not revise the radiology report for any reason, the visual representation of the comparison result can still be made available to other radiologists and physicians for future reference or quality control purposes.
  • the graphics engine 123 can also overlay an arrow 702 to indicate an area of calcification omitted from the radiology report or otherwise indicated as normal in the radiology report.
  • the schematics overlaid on the MR image 700 are only exemplary and should not be construed as limiting. People of ordinary skill in the art, upon examination of the present disclosure, may use different schematics to visually represent the omissions, inconsistencies and additional information.
  • annotated report 800 illustrating another exemplary visual representation of the comparison result between the features determined by the computing device and the text description in a report.
  • the annotated report 800 can be a radiology report similar to the radiology report 500, overlaid with text, schematics, or icons.
  • each of the anatomical structures (or anatomies) is highlighted by a rectangular box and the corresponding descriptions are marked for consistency, inconsistencies and additional information. For example, Description I 820 is circled by a solid oval with a plus sign indicating additional information.
  • Description II 821 and Description III 822 are each circled by a dotted oval to indicate their consistency with the determined features.
  • Descriptions IV and VI are circled by a solid oval with a slashed equal sign indicating that they are inconsistent with the features determined by the imaging processor 121.
  • the graphics engine 123 can generate hyperlinks or pop-up dialog boxes and incorporate them in the visual representations. By moving a cursor over or clicking on the hyperlinks or pop-up dialog boxes, a user or a radiologist can learn more about the recorded inconsistencies, omissions and additional information. For example, when a cursor 810 is moved over the plus sign next to Description I 820, a pop-up dialog box 831 can appear, providing additional information, such as “volume change (-5%)” or other quantitative or percentage information. As another example, when a cursor is moved over the slashed equal sign next to Description IV 823, a pop-up dialog box 832 can appear, indicating a possible inconsistency between Description IV 823 and the feature determined by the imaging processor 121.
  • the comparison results can be stored in a database, such as the database 124. Over time, the stored comparison results can be analyzed to identify good radiological practices or procedures such that the overall quality of the radiology reports can be improved. The stored comparison results can also be analyzed to monitor performance of radiologists.
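The database-backed analysis described above can be sketched as follows. This is a minimal illustration in which a relational store such as SQLite stands in for the database 124; the schema, function names, and the quality metric are hypothetical assumptions, not part of the disclosed system.

```python
import sqlite3

def store_comparison_results(conn, exam_id, results):
    """Persist per-anatomy comparison rows so they can be analyzed later
    for quality-control trends. The schema is a hypothetical sketch."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS comparison_results (
               exam_id TEXT, anatomy TEXT, report_text TEXT,
               system_feature TEXT, alert TEXT, extra_info TEXT)"""
    )
    conn.executemany(
        "INSERT INTO comparison_results VALUES (?, ?, ?, ?, ?, ?)",
        [(exam_id, r["anatomy"], r.get("report"), r.get("feature"),
          r.get("alert"), r.get("extra")) for r in results],
    )
    conn.commit()

def inconsistency_rate(conn, exam_ids):
    """Fraction of rows flagged 'Possible Inconsistency' across a set of
    exams -- one conceivable metric for monitoring report quality."""
    placeholders = ",".join("?" * len(exam_ids))
    total = conn.execute(
        f"SELECT COUNT(*) FROM comparison_results WHERE exam_id IN ({placeholders})",
        exam_ids).fetchone()[0]
    flagged = conn.execute(
        f"SELECT COUNT(*) FROM comparison_results WHERE exam_id IN ({placeholders}) "
        "AND alert = 'Possible Inconsistency'", exam_ids).fetchone()[0]
    return flagged / total if total else 0.0
```

Aggregating such rows over many exams is one way the stored results could support the performance-monitoring use mentioned above.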

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Surgery (AREA)
  • Signal Processing (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Systems and methods for magnetic resonance (MR) examination are provided. In an embodiment, a method for magnetic resonance (MR) examination includes obtaining MR data of a patient body, the patient body including a plurality of anatomical structures; segmenting, using the computing device, the MR data of the patient body to obtain geometries of the plurality of anatomical structures; comparing, using the computing device, the geometries of the plurality of anatomical structures to reference image data to determine features associated with the plurality of anatomical structures; receiving, at the computing device, a report comprising findings representative of the plurality of anatomical structures; analyzing, by the computing device, the report to identify, within the findings, text descriptions associated with the plurality of anatomical structures; comparing, by the computing device, the determined features associated with the plurality of anatomical structures with the identified text descriptions associated with the plurality of anatomical structures to generate a comparison result; and outputting, to a display device in communication with the computing device, a visual representation of the comparison result.

Description

MEDICAL RADIOLOGY REPORT VALIDATION AND AUGMENTATION SYSTEM
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This patent application claims the priority benefit under 35 U.S.C. § 119(e) of
U.S. Provisional Application No. 62/645,919 filed on March 21, 2018, the contents of which are herein incorporated by reference.
TECHNICAL FIELD
[0002] The present disclosure relates generally to radiological examination, and in particular, to devices, systems, and methods for augmenting and validating findings in radiology reports.
BACKGROUND
[0003] Conventional radiological evaluation of clinical scans is restricted to subjective visual review of imaging data. Taking magnetic resonance (MR) examination as an example, after a part of a patient’s body is scanned and the MR image is obtained, a radiologist visually examines the MR image to detect abnormalities, such as volumetric abnormalities. There are problems associated with this conventional “eyeballing” technique. First, the human eye is simply not sensitive enough to capture subtle volumetric differences in imaging data. Second, because this technique is highly subjective and visual acuity varies from radiologist to radiologist, the conventional technique suffers from low reliability, which prevents reproducible tracking of volumetric changes. In some situations where an anatomical structure of a patient changes over time, a chronic increase or decrease in volume may not be detected by the conventional “eyeballing” technique until it is too late for remediating intervention. Augmentation and validation of the conventional technique is therefore desired.
SUMMARY
[0004] Embodiments of the present disclosure are configured to validate and augment findings in a radiology report. After a magnetic resonance (MR) image of a patient body is obtained by use of an MRI device, the MR image is reviewed by a radiologist, who prepares a radiology report of findings and impressions. The same MR image is automatically segmented by a computing device to delineate the geometries of the anatomical structures in the MR image. The geometries of these anatomical structures are compared to normative data or the patient’s historical data to determine features or anomalies concerning some of the anatomical structures. The radiology report is analyzed by the computing device to identify text descriptions associated with some anatomical structures. The computing device then compares the features determined by the computing device with the text descriptions in the radiology report to generate a comparison result that may include inconsistencies and omissions. The computing device of the present disclosure can also output a visual representation of the comparison result to a display device for the radiologist’s review. By allowing the radiologist to review the comparison result, the embodiments of the present disclosure augment the radiology report, help control the quality of the radiology report, and aid in identifying good practices in radiology examinations.
[0005] Systems and methods for magnetic resonance (MR) examination are provided. In an embodiment, a method for magnetic resonance (MR) examination includes receiving, using a computing device, MR data of a patient body obtained from a magnetic resonance imaging (MRI) device, the patient body including a plurality of anatomical structures; segmenting, using the computing device, the MR data of the patient body to obtain geometries of the plurality of anatomical structures; comparing, using the computing device, the geometries of the plurality of anatomical structures to reference image data to determine features associated with the plurality of anatomical structures; receiving, at the computing device, a report comprising findings representative of the plurality of anatomical structures; analyzing, by the computing device, the report to identify, within the findings, text descriptions associated with the plurality of anatomical structures; comparing, by the computing device, the determined features associated with the plurality of anatomical structures with the identified text descriptions associated with the plurality of anatomical structures to generate a comparison result; and outputting, to a display device in communication with the computing device, a visual representation of the comparison result. In some embodiments, the method further includes controlling, using the computing device, the MRI device to obtain MR data of a patient body.
[0006] In some embodiments, the visual representation of the comparison result includes an indication of an error in the analyzing of the report or the comparing of the determined features with the identified text descriptions. In some embodiments, the visual representation includes an indication of an inconsistency between one of the determined features and one of the identified text descriptions. In some implementations, the visual representation includes an indication of one of the determined features that does not correspond to one of the identified text descriptions. In some implementations, the visual representation includes additional information associated with the determined features and not described in the identified text descriptions. In some embodiments, analyzing the report includes parsing the report. In some embodiments, analyzing the report includes recognizing text in the report. In some instances, segmenting the MR data of the patient body includes receiving a three-dimensional (3D) model of the patient body, and segmenting the MR data of the patient body based on the 3D model of the patient body. In some embodiments, the patient body is a brain and the 3D model of the patient body is a shape-constrained deformable brain model.
[0007] In another embodiment, a magnetic resonance (MR) examination system is provided. The MR examination system includes a computing device. The computing device is operable to receive MR data of a patient body obtained from a magnetic resonance imaging (MRI) device, the patient body including a plurality of anatomical structures; segment the MR data of the patient body to obtain geometries of the plurality of anatomical structures; compare the geometries of the plurality of anatomical structures to reference image data to determine features associated with the plurality of anatomical structures; receive a report comprising findings representative of the plurality of anatomical structures; analyze the report to identify, within the findings, text descriptions associated with the plurality of anatomical structures;
compare the determined features associated with the plurality of anatomical structures with the identified text descriptions associated with the plurality of anatomical structures to generate a comparison result; and output, to a display device in communication with the computing device, a visual representation of the comparison result. In some embodiments, the computing device is in communication with the MRI device. In some embodiments, the computing device is operable to control the MRI device to obtain the MR data of the patient body.
[0008] In some embodiments, the system further includes the MRI device. In some embodiments, the system further includes the display device. In some implementations, the visual representation of the comparison result includes an indication of an error in analyzing of the report or comparing of the determined features with the identified text descriptions. In some implementations, the visual representation includes an indication of an inconsistency between one of the determined features and one of the identified text descriptions. In some embodiments, the visual representation includes an indication of one of the determined features that does not correspond to one of the identified text descriptions. In some implementations, the visual representation includes additional information associated with the determined features and not described in the identified text descriptions.
[0009] Other devices, systems, and methods specifically configured to interface with such devices and systems, and/or implement such methods are also provided.
[0010] Additional aspects, features, and advantages of the present disclosure will become apparent from the following detailed description along with the drawings.
BRIEF DESCRIPTIONS OF THE DRAWINGS
[0011] Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures. It is emphasized that, in accordance with the standard practice in the industry, various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
[0012] FIG. 1 is a schematic diagram of a system for MR examination, according to aspects of the present disclosure.
[0013] FIG. 2 is a flowchart illustrating a method of performing MR examinations, according to aspects of the present disclosure.
[0014] FIG. 3 is a schematic diagram of a 3D brain model of a human brain, according to aspects of the present disclosure.
[0015] FIG. 4 is an MR image of a patient’ s brain overlaid with a segmented model of an anatomy, according to aspects of the present disclosure.
[0016] FIG. 5 is a schematic diagram of a report being analyzed for anatomical structures and text descriptions, according to aspects of the present disclosure.
[0017] FIG. 6 is a schematic table illustrating an exemplary visual representation of a comparison between the features determined by the computing device and the text descriptions in a report, according to aspects of the present disclosure.
[0018] FIG. 7 is an annotated MR image illustrating an exemplary visual representation of a comparison between the features determined by the computing device and the text description in a report, according to aspects of the present disclosure.
[0019] FIG. 8 is an annotated report illustrating another exemplary visual representation of a comparison between the features determined by the computing device and the text description in a report, according to aspects of the present disclosure.
DETAILED DESCRIPTION
[0020] For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates.
[0021] Referring now to FIG. 1, shown therein is a schematic diagram of a system 100 for
MR examination (which can also be referred to as an MR examination system 100). The system 100 includes a computing device 120 connected to a magnetic resonance imaging (MRI) device 110, a user input device 130, and a display device 140. The computing device 120 includes an imaging processor 121, a language engine 122, a graphics engine 123, and a database 124. The computing device 120 can be a workstation or a controller that serves as an interface between the MRI device 110 and the display device 140. The user input device 130 serves as an interface between a user and the computing device 120 and allows the user to interact with the computing device 120 by entering user inputs. The user input device 130 can be at least one of a keyboard, a camera, a scanner, a mouse, a touchpad, a trackpad, a touchscreen mounted on the display device 140, a communication port, a USB port, a hand gesture control device, a virtual reality glove, or another input device.
[0022] The computing device 120 performs several functions. In some embodiments, the computing device 120 can receive magnetic resonance (MR) data from the MRI device 110, process the same by use of the imaging processor 121 and output MR image data to the display device 140 for display of the MR images. In some implementations, the imaging processor 121 of the computing device 120 can automatically segment anatomical structures in received MR data based on a segmentation protocol. In some instances, the imaging processor 121 can automatically segment the anatomic structures in the MR data based on a three-dimensional (3D) model of a brain or a patient body. In instances where the patient body includes a brain, the computing device 120 of the system 100 receives a 3D brain model from a storage media or through wired or wireless connection to a server or a remote workstation. In some
implementations, the 3D brain model can be stored in the database 124 or a storage device retrievable by the computing device 120. In some instances, the 3D brain model is a shape-constrained deformable brain model. In some instances, the 3D brain model may be the brain model described in “Evaluation of traumatic brain injury patients using a shape-constrained deformable model,” by L. Zagorchev, C. Meyer, T. Stehle, R. Kneser, S. Young and J. Weese, 2011, in Multimodal Brain Image Analysis by Liu T., Shen D., Ibanez L., Tao X. (eds), MBIA 2011, Lecture Notes in Computer Science, vol. 7012, Springer, Berlin, Heidelberg, the entirety of which is hereby incorporated by reference. In some instances, the 3D brain model may be the deformable brain model described in U.S. Pat. No. 9,256,951, titled “SYSTEM FOR RAPID AND ACCURATE QUANTITATIVE ASSESSMENT OF TRAUMATIC BRAIN INJURY,” or the shape-constrained deformable brain model described in U.S. Pat. App. Pub. No. 20150146951, titled “METHOD AND SYSTEM FOR QUANTITATIVE EVALUATION OF IMAGE SEGMENTATION,” each of which is hereby incorporated by reference in its entirety. By segmenting the anatomical structures in the MR data, the computing device 120 can obtain the geometries of these anatomical structures in the MR data by delineating the boundaries of these anatomical structures.
[0023] In some embodiments, the database 124 of the computing device 120 stores normative data of volumes, shapes and other attributes of human anatomical structures. In some implementations, the normative data is organized by gender, race, and age such that a patient’s imaging data can be better compared to normative data pools more germane to his or her gender, race and age groups. In some embodiments, the database 124 stores historical radiological imaging data from the patient’s prior scans. The imaging processor 121 can compare the geometries of anatomical structures from segmentation of the MR data to the normative data or historical radiological imaging data of the patient to determine features associated with one or more anatomical structures. For example, upon such comparison, the imaging processor 121 can detect that an anatomical structure of the patient is substantially smaller/larger than a normative anatomical structure from his/her age, gender and race group and determine that size deviation is a feature. For another example, upon the comparison, the imaging processor 121 can detect that an anatomical structure of the patient increases/decreases in volume by 10% when compared to a scan performed nine weeks earlier and determine that such abnormal volume increase/decrease is a feature. It is noted that while the database 124 is depicted as an integrated element of the computing device 120, the database 124 can be a remote database or server connected to the computing device 120 by wire or wirelessly. In some embodiments, the database 124 can be cloud-based services provided by a third-party medical database service provider.
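The feature determination described in this paragraph can be sketched as a simple comparison against normative statistics and a prior scan. The z-score threshold, dictionary layout, and function names below are illustrative assumptions; the disclosure does not specify how deviations are scored.

```python
def detect_features(volumes_mm3, normative, z_threshold=2.0):
    """Compare segmented structure volumes against normative mean/SD and
    flag large deviations as features. `normative` maps structure name
    -> (mean_mm3, sd_mm3) for the patient's demographic group."""
    features = {}
    for structure, vol in volumes_mm3.items():
        if structure not in normative:
            continue
        mean, sd = normative[structure]
        z = (vol - mean) / sd
        if abs(z) >= z_threshold:
            direction = "larger" if z > 0 else "smaller"
            features[structure] = f"{direction} than normative (z = {z:+.1f})"
    return features

def volume_change(current_mm3, prior_mm3):
    """Percent volume change relative to a prior scan of the same patient."""
    return 100.0 * (current_mm3 - prior_mm3) / prior_mm3
```

A structure whose volume falls more than the chosen number of standard deviations from the normative mean, or changes markedly from a prior scan, would be recorded as a feature.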
[0024] In some embodiments, the computing device 120 can receive a radiology report prepared by a radiologist and analyze the radiology report by use of the language engine 122.
The radiology report includes various findings and impressions of the radiologist after he or she examines the MR images that can be displayed on the display device 140. The radiology report can be a handwritten or computer generated hardcopy or a computer readable soft copy. In implementations where the radiology report is a hardcopy, the user input device 130 can be a camera or a scanner that captures an image of the hardcopy. The language engine 122 can then operate to recognize the text in the captured image of the hardcopy and convert the same into a form readable by the language engine 122. In implementations where the radiology report is a computer readable softcopy, the text recognition operation can be omitted. The computer readable or recognizable text is then analyzed or parsed by the language engine 122 to identify text descriptions associated with anatomical structures of the patient. In some implementations, the language engine 122 may come across an error in analyzing the radiology report. In those implementations, the language engine 122 may record an error. In some embodiments, the radiology report is generated by an interactive computer interface where the radiologist picks an anatomical structure from a pull-down list of selections and then chooses one or more findings from a pull-down list of selections. When the radiology report is generated through such an interactive computer interface, no text recognition or parsing operations are needed as the text descriptions are automatically associated with the selected anatomical structures.
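One minimal way to approximate the parsing step of the language engine 122 is a sentence-level keyword match against a list of known anatomical structures. This sketch is an assumption for illustration only; a production language engine would rely on clinical natural-language processing rather than regular expressions.

```python
import re

def extract_descriptions(report_text, known_anatomies):
    """Naive matcher: associate each sentence of the findings with any
    known anatomical structure it names. Sentences naming no known
    structure are recorded as parsing errors, mirroring the error
    recording described above."""
    descriptions = {}
    errors = []
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", report_text) if s.strip()]
    for sentence in sentences:
        matched = False
        for anatomy in known_anatomies:
            if re.search(rf"\b{re.escape(anatomy)}\b", sentence, re.IGNORECASE):
                descriptions.setdefault(anatomy, []).append(sentence)
                matched = True
        if not matched:
            errors.append(f"no known anatomy in: {sentence!r}")
    return descriptions, errors
```

The returned mapping of anatomy to sentences is the form in which text descriptions could be handed to the comparison step, with the error list available for the visual representation's error indication.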
[0025] In some embodiments, the computing device 120, by use of the language engine
122 or the imaging processor 121, can compare the features determined by the imaging processor 121 with the text descriptions identified by the language engine 122 on an anatomical-structure-by-anatomical-structure level. The computing device 120 can record inconsistencies between the features and the text descriptions with respect to a given anatomical structure. In cases where the imaging processor 121 determines a feature associated with an anatomical structure while the radiology report is completely silent on the same anatomical structure, the computing device 120 can record an omission from the radiology report. In some embodiments, when the text descriptions of the radiology report and the feature determined by the imaging processor 121 are consistent, the computing device 120 can record additional information obtained or detected by the imaging processor 121. For example, in cases where both the text descriptions in the radiology report and the determined feature consistently indicate a volumetric decrease of an anatomical structure, the imaging processor 121 can record additional information such as the percentage of volumetric decrease as compared to imaging data obtained nine weeks prior. The computing device 120, by use of the imaging processor 121 or the language engine 122, can generate a comparison result that includes the recorded inconsistencies, omissions and additional information.
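The anatomical-structure-by-anatomical-structure comparison described in this paragraph can be sketched as follows. The consistency heuristic (treating a "Normal" description as conflicting with a detected feature) and the row layout are hypothetical simplifications; the disclosure leaves the exact comparison logic open.

```python
def compare_findings(determined_features, text_descriptions, all_anatomies):
    """Walk the universe of anatomical structures and classify each as
    unremarkable, possibly inconsistent, or omitted from the report.
    `determined_features` and `text_descriptions` map anatomy name to a
    feature string and a report description, respectively."""
    rows = []
    for anatomy in all_anatomies:
        feature = determined_features.get(anatomy)    # e.g. "volume -10%"
        description = text_descriptions.get(anatomy)  # e.g. "Normal"
        if feature and description is None:
            alert = "Possible Omission"               # feature, but report is silent
        elif feature and description.strip().lower() == "normal":
            alert = "Possible Inconsistency"          # feature contradicts "Normal"
        else:
            alert = None
        rows.append({"anatomy": anatomy, "report": description or "N/A",
                     "system": feature or "neutral", "alert": alert})
    return rows
```

The resulting rows correspond directly to the columns of a comparison table such as table 600, with "N/A" and "neutral" standing in for absent descriptions and features.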
[0026] In some embodiments, the comparison result generated by either the imaging processor 121 or the language engine 122 can be sent to the graphics engine 123 for generation of a visual representation of the comparison result. In some implementations, the graphics engine 123 can overlay text and schematics over an MR image to indicate the recorded inconsistencies, omissions and additional information. In some other embodiments, the graphics engine 123 can generate a comparison chart or comparison table laying out the comparison result with respect to each of the anatomical structures. In some other embodiments, the graphics engine 123 can overlay text and schematics over the image of the radiology report to indicate the recorded inconsistencies, omissions and additional information. In some implementations, the graphics engine 123 can generate hyperlinks or pop-up dialog boxes and incorporate them in the visual representations. By clicking on the hyperlinks or pop-up dialog boxes, a user or a radiologist can learn more about the recorded inconsistencies, omissions and additional information. In some implementations, the visual representation may include an indication of an error that the language engine 122 comes across when analyzing or parsing the radiology report.
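The pop-up dialog boxes mentioned above could, in one conceivable rendering, be realized as HTML spans whose `title` attribute supplies the hover text. The markers (a plus sign for additional information, a not-equal sign for inconsistency) and the function below are illustrative assumptions, not the disclosed implementation.

```python
import html

def annotate_description(description, alert=None, extra=None):
    """Wrap one report description in a span whose title attribute acts
    as the hover pop-up; the trailing markers mimic the plus and
    slashed-equal icons of the annotated report."""
    marker, tooltip = "", ""
    if alert:
        marker, tooltip = " &ne;", alert   # inconsistency marker
    elif extra:
        marker, tooltip = " +", extra      # additional-information marker
    if tooltip:
        return (f'<span title="{html.escape(tooltip)}">'
                f"{html.escape(description)}{marker}</span>")
    return html.escape(description)
```

A browser-based viewer would then show the recorded inconsistency or additional information when the cursor rests on the marked description.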
[0027] FIG. 2 is a flowchart illustrating a method 200 of performing MR examinations.
The method 200 includes operations 202, 203, 204, 206, 208, 210, 212, and 214. It is understood that the operations of method 200 may be performed in a different order than shown in FIG. 2, additional operations can be provided before, during, and after the operations, and/or some of the operations described can be replaced or eliminated in other embodiments. The operations of the method 200 can be carried out by a computing device in a radiological imaging system, such as the computing device 120 of the system 100. The method 200 will be described below with reference to FIGS. 3, 4, 5, 6, 7 and 8.
[0028] At operation 202, MR data of a patient body is obtained. In some embodiments, the MR data is obtained from the MRI device 110, and in some embodiments, the MRI device 110 is in communication with the computing device 120. In some embodiments, the operation 202 also includes controlling, using the computing device 120, the MRI device 110 to obtain the MR data of the patient body. In some embodiments, the patient body can be a part of a patient’s body or an organ of a patient. For illustration purposes, the operations of the method 200 will be described based on MR examination of a patient’s brain. The MR data of the patient body includes a plurality of anatomical structures. In the case of a brain, the MR data of the brain includes anatomical structures of a human brain.
[0029] At operation 203, the computing device 120 receives the MR data of the patient body. In some embodiments, the computing device 120 receives the MR data of the patient body from the MRI device 110.
[0030] At operation 204, the MR data of the patient body is segmented by the imaging processor 121 of the computing device 120 to obtain geometries of the plurality of anatomical structures. In some implementations, the MR data of the patient body can be automatically segmented by the imaging processor 121 based on a segmentation protocol. In some instances, the MR data of the patient body can be automatically segmented based on a three-dimensional (3D) model of the patient body, such as the 3D brain model 300 shown in FIG. 3. In those instances, the computing device 120 of the system 100 receives a 3D brain model from a storage medium or through a wired or wireless connection to a server or a remote workstation. In some implementations, the 3D brain model can be stored in the database 124 or a storage device retrievable by the computing device 120. In some instances, the 3D brain model is a shape-constrained deformable brain model. In some instances, the 3D brain model may be the brain model described in “Evaluation of traumatic brain injury patients using a shape-constrained deformable model,” by L. Zagorchev, C. Meyer, T. Stehle, R. Kneser, S. Young and J. Weese, 2011, in Multimodal Brain Image Analysis, Liu T., Shen D., Ibanez L., Tao X. (eds), MBIA 2011, Lecture Notes in Computer Science, vol. 7012, Springer, Berlin, Heidelberg, the entirety of which is hereby incorporated by reference. In some instances, the 3D brain model may be the deformable brain model described in U.S. Pat. No. 9,256,951, titled “SYSTEM FOR RAPID AND ACCURATE QUANTITATIVE ASSESSMENT OF TRAUMATIC BRAIN INJURY,” or the shape-constrained deformable brain model described in U.S. Pat. App. Pub. No. 20150146951, titled “METHOD AND SYSTEM FOR QUANTITATIVE EVALUATION OF IMAGE SEGMENTATION,” each of which is hereby incorporated by reference in its entirety. The segmentation at operation 204 is exemplarily illustrated in FIG. 4. FIG. 4 shows segmentation of an amygdala-hippocampal complex 410 (AHC 410) in an MR image 400. In the illustrated example, the segmentation based on a 3D brain model delineates the boundary of the AHC 410, and the geometry of the AHC 410 can be obtained.
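The geometries obtained at operation 204 can include, for example, a structure's volume. The following is a minimal illustrative sketch of deriving such a volume from a segmentation label mask; the function name, label IDs, and voxel spacing are assumptions for illustration and are not part of the disclosure:

```python
# Hypothetical sketch: deriving a structure's volume (one of the
# "geometries" from operation 204) by counting labeled voxels in a
# segmentation mask and scaling by the voxel volume.

def structure_volume_mm3(label_mask, target_label, voxel_dims_mm):
    """Count voxels carrying `target_label` and scale by voxel volume."""
    voxel_volume = voxel_dims_mm[0] * voxel_dims_mm[1] * voxel_dims_mm[2]
    count = sum(
        1
        for slice_ in label_mask
        for row in slice_
        for voxel in row
        if voxel == target_label
    )
    return count * voxel_volume

# Tiny 2x2x2 volume where label 3 marks a structure in 3 voxels,
# with 1 x 1 x 2 mm voxels -> 3 voxels * 2 mm^3 = 6 mm^3.
mask = [[[3, 0], [3, 0]], [[0, 3], [0, 0]]]
print(structure_volume_mm3(mask, 3, (1.0, 1.0, 2.0)))  # 6.0
```

A model-based segmentation such as the shape-constrained deformable model would produce the label mask itself; the sketch only shows the subsequent geometry measurement.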
[0031] At operation 206, the geometries of the plurality of anatomical structures are compared, by use of the imaging processor 121, to reference image data to determine features associated with the plurality of anatomical structures. In some implementations, the reference image data are stored in the database 124 of the computing device 120. In some embodiments, the reference image data can include normative data of volumes, shapes and other attributes of human anatomical structures. In some implementations, the normative data is organized by gender, race, and age such that a patient’s imaging data can be better compared to normative data pools more germane to his or her gender, race and age groups. In some implementations, the reference image data include historical radiological imaging data from the patient’s prior scans. The imaging processor 121 can compare the geometries of anatomical structures from segmentation of the MR data to the reference image data to determine features associated with one or more anatomical structures of the patient. For example, upon such comparison, the imaging processor 121 can detect that an anatomical structure of the patient is substantially smaller/larger than a normative anatomical structure from his/her age, gender and race group and determine that the size deviation is a feature associated with the anatomical structure. For another example, upon the comparison, the imaging processor 121 can detect that an anatomical structure of the patient has increased/decreased in volume by 10% when compared to a scan performed nine weeks earlier and determine that such an abnormal volume increase/decrease is a feature associated with the anatomical structure.
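The two example comparisons above, against a matched normative pool and against the patient's own prior scan, can be sketched as follows. The thresholds, field names, and z-score formulation are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of operation 206: flag features by comparing a
# measured volume against a gender/race/age-matched normative pool and
# against a prior scan of the same patient.

def detect_features(volume_mm3, normative_mean, normative_sd,
                    prior_volume_mm3=None, z_threshold=2.0,
                    change_threshold=0.05):
    features = []
    # Deviation from the matched normative pool, expressed as a z-score.
    z = (volume_mm3 - normative_mean) / normative_sd
    if abs(z) > z_threshold:
        features.append(("size deviation", round(z, 2)))
    # Longitudinal change relative to the patient's own prior scan.
    if prior_volume_mm3 is not None:
        change = (volume_mm3 - prior_volume_mm3) / prior_volume_mm3
        if abs(change) > change_threshold:
            features.append(("volume change", round(change, 2)))
    return features

# A structure that shrank 10% since a scan nine weeks earlier:
print(detect_features(1800.0, 2000.0, 150.0, prior_volume_mm3=2000.0))
# [('volume change', -0.1)]
```

The returned tuples stand in for the "features" that operation 212 later compares against the report's text descriptions.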
[0032] At operation 208 of the method 200, a radiology report is received by the computing device 120. The radiology report includes findings representative of or associated with the plurality of anatomical structures or at least some of the anatomical structures in the MR image. An exemplary radiology report 500 can be found in FIG. 5. Besides the patient’s information and identification, the radiology report 500 includes various findings and impressions of the radiologist after he or she examines the MR images. The radiology report 500 can be a handwritten or computer-generated hardcopy or a computer readable softcopy. In cases where the radiology report 500 is a hardcopy, the radiology report 500 is received by the computing device 120 by use of the user input device 130, such as a camera or a scanner. The camera or scanner can capture a scanned image of the radiology report 500. In cases where the radiology report 500 is a computer readable softcopy, the radiology report 500 is received by the computing device 120 by use of the user input device 130, such as a communication port, a USB port, or a wired or wireless connection to a database or server where the radiology report 500 is stored.
[0033] At operation 210, the radiology report, such as one similar to the radiology report 500, is analyzed by the language engine 122 of the computing device 120 to identify, within the findings of the radiology report, text descriptions associated with the plurality of anatomical structures or at least some of the anatomical structures in the MR image. In instances where the radiology report is a hardcopy and is received as a scanned image thereof, the language engine 122 can operate to recognize the text in the scanned image of the hardcopy and convert the same into a form readable by the language engine 122. In implementations where the radiology report is a computer readable softcopy, the text recognition operation can be omitted. The computer readable or recognizable text is then analyzed or parsed by the language engine 122 to identify text descriptions associated with anatomical structures of the patient. FIG. 5 illustrates how a radiology report, such as the radiology report 500, can be parsed and analyzed. The language engine 122 can identify anatomical structures or anatomies referred to in the radiology report 500. The identification of anatomical structures (or anatomies) can be based on a comparison with a dictionary or a list of anatomical structures stored in the database 124. The dictionary or list can include shorthand terms and synonyms to ensure correct identification of the anatomical structures or anatomies. As shown in FIG. 5, the language engine 122 identifies Anatomy A 501, Anatomy B 502, Anatomy C 503, Anatomy D 504, Anatomy E 505, Anatomy F 506, Anatomy G 507, Anatomy H 508 and Anatomy I 509 in the Findings section. In some instances, anatomical structures are discussed or mentioned in the Impression section. As shown in FIG. 5, the language engine 122 also identified Anatomy J 510, Anatomy K 511, and Anatomy L 512 in the Impression section. In some implementations, the language engine 122 may come across an error when analyzing the radiology report and can record the error.
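The dictionary-based identification described above can be sketched as follows. The synonym table and sample findings text are invented for illustration; a real system would draw both from the database 124:

```python
# Hedged sketch of the dictionary/synonym lookup performed by the
# language engine. Entries and report text are illustrative only.

ANATOMY_DICTIONARY = {
    "hippocampus": ["hippocampus", "hippocampal formation"],
    "lateral ventricle": ["lateral ventricle", "lat. ventricle"],
    "amygdala": ["amygdala"],
}

def identify_anatomies(findings_text):
    """Return canonical anatomy names whose synonyms appear in the text."""
    text = findings_text.lower()
    found = []
    for canonical, synonyms in ANATOMY_DICTIONARY.items():
        if any(s in text for s in synonyms):
            found.append(canonical)
    return found

findings = "The lat. ventricle is mildly enlarged; the amygdala is unremarkable."
print(identify_anatomies(findings))  # ['lateral ventricle', 'amygdala']
```

Matching shorthand forms ("lat. ventricle") back to a canonical name is what lets the later comparison proceed on an anatomical-structure-by-anatomical-structure basis.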
[0034] By parsing the text in the radiology report 500, the language engine 122 can resolve the context and identify text descriptions (or descriptions) associated with the identified anatomical structures. As shown in the illustrative example in FIG. 5, the language engine 122 identified Description I 520, Description II 521, Description III 522, Description IV 523, Description V 524, Description VI 525, and Description VII 526 associated with Anatomy A 501, Anatomy B 502, Anatomy E 505, Anatomy G 507, Anatomy J 510, Anatomy K 511, and Anatomy L 512, respectively. In some embodiments, the radiology report is generated by an interactive computer interface where the radiologist picks an anatomical structure from a pull-down list of selections and then chooses one or more findings from a pull-down list of selections. When the radiology report is generated through such an interactive computer interface, no text recognition or parsing operations are needed as the text descriptions are automatically associated with the selected anatomical structures.
[0035] At operation 212 of the method 200, the features determined by the imaging processor 121 are compared with the identified text descriptions in the radiology report, on an anatomical-structure-by-anatomical-structure basis. The comparison at operation 212 can be performed either by the imaging processor 121 or the language engine 122. For ease of reference, the computing device 120 is referred to as the device that performs the operation 212. At operation 212, the computing device 120 can define the universe of comparison by identifying all of the anatomical structures mentioned in the radiology report or depicted in the MR data. Once the universe of comparison is defined, the computing device 120 can go through all the anatomical structures when comparing the determined features and identified text descriptions. In some embodiments, the computing device 120 can record inconsistencies between the determined features and the identified text descriptions with respect to a given anatomical structure. In cases where the imaging processor 121 determines a feature associated with an anatomical structure while the radiology report is completely silent on the same anatomical structure, the computing device 120 can record an omission from the radiology report. In some embodiments, when the text descriptions of the radiology report and the feature determined by the imaging processor 121 are consistent, the computing device 120 can record additional information obtained or detected by the imaging processor 121. For example, in cases where both the text descriptions in the radiology report and the determined feature consistently indicate a volumetric decrease of an anatomical structure, the computing device 120 can record additional information such as the percentage of volumetric decrease as compared to imaging data obtained nine weeks earlier.
At operation 212, the computing device 120, by use of the imaging processor 121 or the language engine 122, can generate a comparison result that includes the recorded inconsistencies, omissions and additional information.
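The comparison logic of operation 212, recording inconsistencies, omissions, and additional information over the universe of anatomies, can be sketched as follows. The record shapes and anatomy names are invented for illustration:

```python
# Minimal sketch of operation 212. Features and descriptions are keyed
# by anatomy; the comparison result collects possible inconsistencies,
# possible omissions, and extra quantitative detail.

def compare(features, descriptions):
    result = []
    # Universe of comparison: every anatomy seen by either side.
    universe = sorted(set(features) | set(descriptions))
    for anatomy in universe:
        feature = features.get(anatomy)
        description = descriptions.get(anatomy)
        if feature and description is None:
            result.append((anatomy, "possible omission"))
        elif feature and description and feature["label"] != description:
            result.append((anatomy, "possible inconsistency"))
        elif feature and description:
            # Consistent finding: keep any extra detail, such as the
            # percentage change versus a scan nine weeks earlier.
            result.append((anatomy, feature.get("detail", "consistent")))
    return result

features = {
    "Anatomy B": {"label": "enlarged"},
    "Anatomy D": {"label": "volume decrease"},
    "Anatomy G": {"label": "volume decrease", "detail": "-10% vs prior scan"},
}
descriptions = {"Anatomy B": "normal", "Anatomy G": "volume decrease"}
print(compare(features, descriptions))
```

The three output records correspond to the inconsistency, omission, and additional-information cases described above.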
[0036] At operation 214, a visual representation of the comparison result is output to a display device, such as the display device 140. In some embodiments, the comparison result generated by either the imaging processor 121 or the language engine 122 at operation 212 can be sent to the graphics engine 123 for generation of a visual representation of the comparison result, and then the visual representation of the comparison result can be output to the display device 140. In some other embodiments, the graphics engine 123 can generate a comparison chart or comparison table laying out the comparison result with respect to each of the anatomical structures. An exemplary comparison table 600 is illustrated in FIG. 6. The comparison table 600 can be a visual representation of a comparison result involving the radiology report 500 in FIG. 5. The comparison table 600 includes five columns: “Regions,” “Report,” “System,” “Alert,” and “Additional Information.” The entries in the “Regions” column represent the universe of the comparison performed in operation 212. As shown in FIG. 6, the “Regions” column includes Anatomy A through Anatomy P, whose geometries are obtained by the imaging processor 121 at operation 204. The radiology report 500 in FIG. 5 only mentions Anatomies A through L. The comparison table 600 therefore shows N/A (Not Available) for Anatomies M, N, O, and P. In some implementations, the visual representation may include an indication of an error that the language engine 122 comes across when analyzing or parsing the radiology report. The indication of error can alert the user or radiologist of possible errors to watch out for.
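A text rendering of such a comparison table might be produced as sketched below. Column names follow the description above; the rows, entries, and default fillers ("N/A", "Neutral") are illustrative assumptions:

```python
# Illustrative rendering of a comparison table like the one in FIG. 6,
# filling "N/A" for anatomies absent from the report and "Neutral"
# where the imaging processor determined no feature.

def render_table(regions, report, system, alert):
    header = f"{'Regions':<10}{'Report':<16}{'System':<12}{'Alert':<20}"
    lines = [header]
    for region in regions:
        lines.append(
            f"{region:<10}"
            f"{report.get(region, 'N/A'):<16}"
            f"{system.get(region, 'Neutral'):<12}"
            f"{alert.get(region, ''):<20}"
        )
    return "\n".join(lines)

print(render_table(
    ["Anat. A", "Anat. M"],
    {"Anat. A": "Description I"},
    {"Anat. A": "Feature 1"},
    {"Anat. A": "Possible Omission"},
))
```

An "Additional Information" column could be appended in the same way for consistent findings that carry extra quantitative detail.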
[0037] Reference is still made to FIGS. 5 and 6. Whenever the radiology report 500 includes text descriptions associated with an anatomical structure (or an anatomy), the comparison table 600 shows a corresponding text description in the “Report” column. For example, the radiology report 500 includes Description I associated with Anatomy A, and the comparison table 600 shows Description I in the row for Anatomy A. The same applies to other text descriptions associated with other anatomical structures. In some instances, the radiology report 500 may indicate certain anatomies to be normal. In those situations, the comparison table 600 shows “Normal” in the corresponding rows of the “Report” column. Here, the radiology report 500 finds Anatomy C normal and “Normal” appears in the row for Anatomy C. The “System” column of the comparison table 600 includes the features determined by the imaging processor 121 at operation 206. If the imaging processor 121 determines a feature associated with a given anatomical structure or anatomy, the feature may be entered in the corresponding row. For example, as “Feature 4” is determined by the imaging processor 121 as being associated with Anatomy E, “Feature 4” is entered into the row for Anatomy E. In instances where no feature is determined, “Neutral” is entered into the corresponding row. The “Alert” column is used to show the comparison result generated at operation 212. When a description in the “Report” column is inconsistent with a feature in the “System” column, the phrase “Possible Inconsistency” is entered into the corresponding row of the “Alert” column. An example is shown in the row for Anatomy B. When a feature in the “System” column finds no counterpart in the “Report” column, the phrase “Possible Omission” may show in the corresponding row of the “Alert” column. An example is shown in the row for Anatomy D. Still referring to FIG.
6, even when the text description in the report is consistent with the feature determined by the imaging processor 121, operation 206 may generate additional information that can augment the radiology report 500 or otherwise be helpful to the radiologist. An example is the“Additional Information” shown in the row for Anatomy G. The content and layout of the comparison table 600, including the choice of words and phrases, is only exemplary and should not be construed as limiting. People of ordinary skill in the art, upon examination of the present disclosure, may construct a comparison table with different styles, different layouts, and different phrases to visually represent the omissions, inconsistencies and additional information.
[0038] Reference is now made to FIG. 7. In some embodiments, the graphics engine 123 can overlay text and schematics over an MR image 700 to indicate the recorded inconsistencies, omissions and additional information. As shown in FIG. 7, the graphics engine 123 can overlay a dotted oval 701 over an anatomy of the patient body to show a feature omitted from the radiology report or otherwise indicated as normal in the radiology report. In FIG. 7, the dotted oval 701 indicates that the choroid plexus of the lateral ventricle of the patient decreases in size as compared to the MR image taken nine weeks prior. By pointing out the omission or inconsistency, the system 100 and method 200 of the present disclosure afford the radiologist another opportunity to review the findings in the radiology report. If the radiologist does not revise the radiology report for any reason, the visual representation of the comparison result can still be made available to other radiologists and physicians for future reference or quality control purposes. The graphics engine 123 can also overlay an arrow 702 to indicate an area of calcification omitted from the radiology report or otherwise indicated as normal in the radiology report. The schematics overlaid on the MR image 700 are only exemplary and should not be construed as limiting. People of ordinary skill in the art, upon examination of the present disclosure, may use different schematics to visually represent the omissions, inconsistencies and additional information.
[0039] Referring now to FIG. 8, shown therein is an annotated report 800 illustrating another exemplary visual representation of the comparison result between the features determined by the computing device and the text description in a report. In some embodiments, the annotated report 800 can be a radiology report similar to the radiology report 500, overlaid with text, schematics, or icons. In embodiments represented by FIG. 8, each of the anatomical structures (or anatomies) is highlighted by rectangle boxes and the corresponding descriptions are marked for consistency, inconsistencies and additional information. For example,
Description I 820 is circled by a solid oval with a plus sign indicating additional information.
For another example, Description II 821 and Description III 822 are circled by a dotted oval to indicate their consistency with the determined features. For yet another example, Descriptions IV and VI are circled by a solid oval with a slashed equal sign indicating that they are inconsistent with the features determined by the imaging processor 121. In some
implementations, the graphics engine 123 can generate hyperlinks or pop-up dialog boxes and incorporate them in the visual representations. By moving a cursor over or clicking on the hyperlinks or pop-up dialog boxes, a user or a radiologist can learn more about the recorded inconsistencies, omissions and additional information. For example, when a cursor 810 is moved over the plus sign next to the Description I 820, a pop-up dialog box 831 can appear, providing additional information, such as“volume change (-5%)” or other quantitative or percentage information. For another example, when a cursor is moved over the slashed equal sign next to the Description IV 823, a pop-up dialog box 832 can appear, indicating possible inconsistency between Description IV 823 and the feature determined by the imaging processor 121.
[0040] In some embodiments, the comparison results can be stored in a database, such as the database 124. Over time, the stored comparison results can be analyzed to identify good radiological practices or procedures such that the overall quality of the radiology reports can be improved. The stored comparison results can also be analyzed to monitor performance of radiologists.
[0041] Persons skilled in the art will recognize that the apparatus, systems, and methods described above can be modified in various ways. Accordingly, persons of ordinary skill in the art will appreciate that the embodiments encompassed by the present disclosure are not limited to the particular exemplary embodiments described above. In that regard, although illustrative embodiments have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the foregoing without departing from the scope of the present disclosure.
Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure.

Claims

WHAT IS CLAIMED IS:
1. A method for magnetic resonance (MR) examination, the method comprising:
receiving, using a computing device, MR data of a patient body obtained from a magnetic resonance imaging (MRI) device, the patient body including a plurality of anatomical structures;
segmenting, using the computing device, the MR data of the patient body to obtain geometries of the plurality of anatomical structures;
comparing, using the computing device, the geometries of the plurality of anatomical structures to reference image data to determine features associated with the plurality of anatomical structures;
receiving, at the computing device, a report comprising findings representative of the plurality of anatomical structures;
analyzing, by the computing device, the report to identify, within the findings, text descriptions associated with the plurality of anatomical structures;
comparing, by the computing device, the determined features associated with the plurality of anatomical structures with the identified text descriptions associated with the plurality of anatomical structures to generate a comparison result; and
outputting, to a display device in communication with the computing device, a visual representation of the comparison result.
2. The method of claim 1, wherein the visual representation of the comparison result comprises an indication of an error in the analyzing of the report or the comparing of the determined features with the identified text descriptions.
3. The method of claim 1, wherein the visual representation comprises an indication of an inconsistency between one of the determined features and one of the identified text descriptions.
4. The method of claim 1, wherein the visual representation comprises an indication of one of the determined features that does not correspond to one of the identified text descriptions.
5. The method of claim 1, wherein the visual representation comprises additional information associated with the determined features and not described in the identified text descriptions.
6. The method of claim 1, further comprising controlling, using the computing device, an MRI device to obtain the MR data of the patient body.
7. The method of claim 1, wherein the visual representation includes a graphical representation.
8. The method of claim 1, wherein segmenting the MR data of the patient body comprises:
receiving a three-dimensional (3D) model of the patient body; and
segmenting the MR data of the patient body based on the 3D model of the patient body.
9. The method of claim 8, wherein the patient body is a brain and the 3D model of the patient body is a shape-constrained deformable brain model.
10. A magnetic resonance (MR) examination system, comprising:
a computing device operable to:
receive MR data of a patient body obtained from a magnetic resonance imaging (MRI) device, the patient body including a plurality of anatomical structures;
segment the MR data of the patient body to obtain geometries of the plurality of anatomical structures;
compare the geometries of the plurality of anatomical structures to reference image data to determine features associated with the plurality of anatomical structures;
receive a report comprising findings representative of the plurality of anatomical structures;
analyze the report to identify, within the findings, text descriptions associated with the plurality of anatomical structures;
compare the determined features associated with the plurality of anatomical structures with the identified text descriptions associated with the plurality of anatomical structures to generate a comparison result; and
output, to a display device in communication with the computing device, a visual representation of the comparison result.
11. The system of claim 10, further comprising the MRI device, wherein the computing device is in communication with the MRI device, and the computing device is further operable to control the MRI device to obtain the MR data of the patient body.
12. The system of claim 10, wherein the visual representation includes a graphical representation.
13. The system of claim 10, wherein the visual representation of the comparison result comprises an indication of an error in analyzing of the report or comparing of the determined features with the identified text descriptions.
14. The system of claim 10, wherein the visual representation comprises an indication of an inconsistency between one of the determined features and one of the identified text descriptions.
15. The system of claim 10, wherein the visual representation comprises an indication of one of the determined features that does not correspond to one of the identified text descriptions.
16. The system of claim 10, wherein the visual representation comprises additional information associated with the determined features and not described in the identified text descriptions.
PCT/EP2019/057051 2018-03-21 2019-03-21 Medical radiology report validation and augmentation system WO2019180120A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862645919P 2018-03-21 2018-03-21
US62/645,919 2018-03-21

Publications (1)

Publication Number Publication Date
WO2019180120A1 (en) 2019-09-26

Family

ID=65995679

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/057051 WO2019180120A1 (en) 2018-03-21 2019-03-21 Medical radiology report validation and augmentation system

Country Status (1)

Country Link
WO (1) WO2019180120A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112735569A (en) * 2020-12-31 2021-04-30 四川大学华西医院 System and method for outputting glioma operation area result before multi-modal MRI of brain tumor
WO2022143080A1 (en) * 2020-12-31 2022-07-07 武汉联影生命科学仪器有限公司 Experimental information management system, method, and scanning imaging system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060241353A1 (en) * 2005-04-06 2006-10-26 Kyoko Makino Report check apparatus and computer program product
US20070112599A1 (en) * 2005-10-26 2007-05-17 Peiya Liu Method and system for generating and validating clinical reports with built-in automated measurement and decision support
WO2011036585A1 (en) * 2009-09-28 2011-03-31 Koninklijke Philips Electronics N.V. Medical information system with report validator and report augmenter
US20150146951A1 (en) 2012-05-31 2015-05-28 Koninklijke Philips N.V. Method and system for quantitative evaluation of image segmentation
US9256951B2 (en) 2009-12-10 2016-02-09 Koninklijke Philips N.V. System for rapid and accurate quantitative assessment of traumatic brain injury


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
L. ZAGORCHEV; C. MEYER; T. STEHLE; R. KNESER; S. YOUNG; J. WEESE: "Multimodal Brain Image Analysis", vol. 7012, 2011, SPRINGER, article "Evaluation of traumatic brain injury patients using a shape-constrained deformable model"



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19714566

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19714566

Country of ref document: EP

Kind code of ref document: A1