FI126036B - Computer-aided medical imaging report - Google Patents

Computer-aided medical imaging report

Info

Publication number
FI126036B
FI126036B
Authority
FI
Finland
Prior art keywords
user
image
edge
highlighted
pixel
Prior art date
Application number
FI20155339A
Other languages
Finnish (fi)
Swedish (sv)
Other versions
FI20155339A (en)
Inventor
Olli Santanen
Original Assignee
Carespace Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Carespace Oy filed Critical Carespace Oy
Priority to FI20155339A priority Critical patent/FI126036B/en
Priority to PCT/FI2016/050296 priority patent/WO2016181037A1/en
Application granted granted Critical
Publication of FI126036B publication Critical patent/FI126036B/en
Publication of FI20155339A publication Critical patent/FI20155339A/en

Classifications

    • A61B 6/469 — Arrangements for interfacing with the operator or the patient characterised by special input means for selecting a region of interest [ROI]
    • A61B 6/5211 — Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • G01R 33/5608 — Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
    • G06F 3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04812 — Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 — Selection of displayed objects or displayed text elements
    • G06T 7/0012 — Biomedical image inspection
    • G06T 7/12 — Edge-based segmentation
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G16H 15/00 — ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H 30/20 — ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40 — ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G06T 2207/20104 — Interactive definition of region of interest [ROI]
    • G06T 2207/30012 — Spine; Backbone
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • Human Computer Interaction (AREA)
  • Primary Health Care (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Optics & Photonics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Multimedia (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Description

Computer-aided medical imaging report
FIELD OF THE INVENTION
The present invention relates to a computer-aided method of classifying objects in medical images and generating a report.
BACKGROUND OF THE INVENTION
Nowadays, populations are ageing, and people are increasingly interested in and concerned about their health, resulting in an increasing demand for medical services. There is also a growing demand for more accurate and reliable diagnoses, which can only be met by increasing the use of medical imaging and laboratory services. Furthermore, many nations struggle with a shortage of physicians and with the high cost of the medical services they subsidize for their citizens. As a consequence, there is a strong need to increase the productivity of doctors' workflows in order to reduce total healthcare costs.
In addition to the need to speed up the medical workflow, there is also a need to increase the quality and repeatability of radiological reports. If cost pressures force radiologists to analyse images faster than today, the quality of reports may suffer. Radiologists, being human, differ in their ways of working, their efficiency, their experience, their knowledge and their carefulness, which leaves room for improving the quality of their work.
Patent application EP 2657866 A1, representing the current state of the art, introduces a system for creating a radiology report based on annotations made by a user. Although this application may help somewhat, it does not solve the problems described above. New tools are still needed to relieve the shortage of radiologists, to speed up radiologists' workflow and to increase the quality and repeatability of radiological reports.
BRIEF DESCRIPTION OF THE INVENTION
An object of the present invention is thus to provide a method, software and a system implementing the method that alleviate the above disadvantages. The objects of the invention are achieved by a method, software and a system which are characterized by what is stated in the independent claims. The preferred embodiments of the invention are disclosed in the dependent claims.
The invention is based on the idea that the system highlights a pixel representing an edge in an image presented on the display, depending on the location of the pointer/cursor of the pointing device, thus enabling the user to quickly confirm the exact location and shape of a logical object/pattern/finding and to classify it. The system is preferably configured such that the highlighted pixels form only one continuous curve at a time, to make the selection simple for the user without disturbing the overall appearance of the image. The data of the highlighted and classified edge pixels can then be employed either in the generation of an automated medical report or in the teaching and testing of an artificial intelligence system.
An advantage of the invention is that the confirmation of the exact locations and dimensions of classified objects in medical images can be executed so cost-effectively that it enables the automatic generation of detailed reports, thus potentially speeding up radiologists' workflow and improving report quality.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following the invention will be described in greater detail by means of preferred embodiments with reference to the attached drawings, in which
Figures 1a-1d illustrate a T2-weighted para-sagittal MRI image with an L4-5 disc herniation displayed on a screen. On the left are the views visible to the user; on the right is the corresponding image after edge detection. The highlighted edge (white dashed line) depends on the cursor location (arrow). The user extends the highlighted edge, then confirms and classifies it by selecting items from the popup menu.
Figures 2a-2c illustrate a T2-weighted axial MRI image with an L4-5 disc herniation displayed on a screen. On the left are the views visible to the user; on the right is the corresponding image after edge detection. The user draws a polygon (white polyline) with the cursor (arrow) by clicking the mouse close to distinguishable object edges, which triggers the system to find the edges close to the borders of the polygon and present them (black dashed line). The user confirms and classifies the edges by selecting items from the popup menu.
DETAILED DESCRIPTION OF THE INVENTION
In a first aspect, the invention provides a method in which a pixel representing an edge in a medical image is highlighted on the display depending on the current location of the cursor of the pointing device. The highlighting is based on an automated edge detection step applied to the image. Typically at least the edge pixel closest to the cursor location is highlighted, and preferably the highlighted pixels form only one continuous curve at a time. This makes it easy for the user to select the highlighted pixels and confirm that they represent an edge of a logical object/finding in the image. When only one edge is presented, it does not prevent the user from seeing the overall appearance of the image. The invention enables the user to quickly confirm that the highlighted pixels represent an edge of a logical object and to classify the determined object with an exact location and shape. The user of the method may be a healthcare professional such as a radiologist, a physician or a nurse, a non-professional who has otherwise been trained for the task, or any combination of these.
In the following, a method according to the invention is discussed in detail. It has to be understood that one or more of the method steps may be optional and one or more of the method steps may be mandatory in order to carry out the method according to an embodiment of the invention. One or more steps may be carried out more than once in some embodiments of the invention.
The method comprises a step of presenting the image to be analysed on the display, a step of applying edge detection to the image, a step of highlighting a pixel representing a detected edge depending on the edge detection step and on the location of the cursor on the image, a step of enabling the user to confirm whether the highlighted pixel represents an edge of a logical object, and a step of enabling the user to classify the object determined by the highlighted pixel.
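By way of illustration only, the cursor-driven highlighting may be sketched as follows. This is a minimal Python example assuming a precomputed binary edge map; the function names are illustrative and not part of the claimed method.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_edge_index(edge_map: np.ndarray):
    """Index all edge pixels of a binary edge map for fast nearest-neighbour lookup."""
    edge_coords = np.argwhere(edge_map > 0)  # (row, col) of every edge pixel
    return cKDTree(edge_coords), edge_coords

def highlight_nearest_edge_pixel(tree, coords, cursor_rc):
    """Return the edge pixel closest to the cursor; this is the pixel to highlight."""
    _, idx = tree.query(cursor_rc)
    return tuple(coords[idx])
```

On each mouse-move event, the returned pixel, together with the continuous curve it belongs to, would be drawn on an overlay of the original image.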
When the image is presented on the display, the user is preferably prompted to move the cursor close to an edge of a logical object in the image. The logical object may be a normal or abnormal anatomic structure, e.g. an organ, a bone, a muscle, a nerve or a blood vessel, or any other normal or abnormal visual pattern/finding in the image, e.g. a tumour, liquid, gas or solid material. In this document, the word 'object' denotes such a logical object. When the cursor is on or close to an edge detected by the edge detection step, the pixel(s) representing the edge are presented, e.g. as a highlighted curve on top of the image, after which the user is able both to confirm whether he/she considers that it represents the edge of a logical object which has a real counterpart, and to classify it.
The edge detection step may consist of several phases, e.g. a smoothing phase and an actual edge detection phase, combined to achieve an optimal result. The edge detection phase may employ e.g. a classical edge detection filter, wavelet edge detection or multi-scale wavelet edge detection. Classical filters include e.g. the Roberts, Prewitt, Sobel, Frei-Chen and Canny filters. The edge detection phase may also combine different edge detection methods. The list of exemplary edge detection methods is not meant to be restrictive; any other method serving the purpose may also be employed. In Figures 1 and 2 the edge detection step has been applied to the images on the right side; the edge detection pipeline consists of smoothing followed by multi-scale wavelet edge detection. A window-width procedure has been applied to the right-side images in Figures 1 and 2 to increase the contrast for the viewer.
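A minimal sketch of such a two-phase pipeline is given below, assuming Python with OpenCV. The pipeline described above uses smoothing followed by multi-scale wavelet edge detection; the Canny filter is substituted here merely as a readily available classical alternative.

```python
import cv2
import numpy as np

def detect_edges(image: np.ndarray, sigma: float = 1.5,
                 low: int = 50, high: int = 150) -> np.ndarray:
    """Smoothing phase followed by an edge detection phase (Canny as a stand-in)."""
    if image.ndim == 3:
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Smoothing phase: Gaussian blur with kernel size derived from sigma.
    smoothed = cv2.GaussianBlur(image, ksize=(0, 0), sigmaX=sigma)
    # Edge detection phase: returns a binary map with 255 at edge pixels.
    return cv2.Canny(smoothed, low, high)
```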
In an embodiment, the edge detection is applied to a restricted area around the current cursor location on the image in order to better adapt to local aspects of the image e.g. local maximum and minimum.
In an embodiment, the edge detection applied is adapted to the image type, the object to be analysed, the anatomic location of the image, the resolution of the image or the angle of view of the image.
In an embodiment, the user has an option to choose the applied edge detection algorithm, e.g. from a drop-down list or by rolling the mouse wheel.
In an embodiment, the user is able to increase and decrease the number of highlighted edge pixels. As an example, the user can shorten the highlighted edge with the mouse wheel to quickly select only the part of the highlighted edge that he/she considers to represent the object. In Figure 1 the user has extended the highlighted edge 10, depending on the cursor 11 location, by rolling the mouse wheel.
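One possible realization is sketched below under the assumption that the highlighted curve is grown pixel by pixel along 8-connected edge pixels; the names and the breadth-first growth strategy are illustrative simplifications, not a description of the patented mechanism.

```python
from collections import deque

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def grow_highlight(edge_map, seed, length):
    """Collect up to `length` 8-connected edge pixels starting from the seed.

    Rolling the mouse wheel would re-run this with a larger or smaller
    `length`, extending or shortening the highlighted curve.
    """
    h, w = edge_map.shape
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) < length:
        r, c = queue.popleft()
        for dr, dc in NEIGHBOURS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and edge_map[nr, nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```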
In an embodiment, the method provides an automatic pre-selection step for the edges found in the edge detection step. The purpose of this step is to highlight those segments of edges which appear to mark out a rational shape, one that probably fits a logical object in that kind of image or matches an object the user has already classified, thus helping the user to rapidly confirm the edges of an object. The pre-selection step may construct several sets of edge collections forming continuous, non-forking paths, e.g. a curve that appears to form the smallest area around the cursor. In Figure 1c the shortest path of edge pixels that encloses an object 12 is highlighted with a white dashed curve. The competing paths may be compared to the named and classified objects previously saved in the database, or an artificial intelligence analysis may be made of them. The pre-selection step may take into account the properties of the image (e.g. its type) and of the area (e.g. the dimension, shape, colour, texture and location of the area). In an embodiment, the method may primarily highlight those segments of edges which the user has already confirmed for an adjacent object. In an embodiment, the method may primarily highlight those segments of edges which the user has already confirmed for the same logical object in another image, e.g. in an adjacent image in the shared series of images or in another image projection constructed from the shared series of images.
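The "smallest area around the cursor" heuristic can be approximated, for illustration, with OpenCV contours; this is an assumption about one possible implementation, not a description of the patented pre-selection algorithm.

```python
import cv2

def smallest_enclosing_contour(edge_map, cursor_xy):
    """Among closed edge contours containing the cursor, pick the smallest.

    Mirrors the pre-selection heuristic of highlighting the curve forming
    the smallest area around the cursor (cf. Figure 1c).
    cursor_xy is an (x, y) tuple of floats in image coordinates.
    """
    contours, _ = cv2.findContours(edge_map, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    # Keep contours whose interior (or boundary) contains the cursor.
    enclosing = [c for c in contours
                 if cv2.pointPolygonTest(c, cursor_xy, False) >= 0]
    return min(enclosing, key=cv2.contourArea, default=None)
```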
In an embodiment, the user may browse the different competing paths, e.g. with the mouse wheel or by clicking a mouse button or a key, and select the path that highlights the object best.
In an embodiment, the user is able to draw the correct edges using a freehand tool, a spline tool or the like if he/she considers the edges highlighted by the system incorrect. This is important both for enabling exact medical reports and for improving the edge detection and selection algorithms used.
In an embodiment, the user is able to guide the system as to which pixels/edges to highlight by selecting two or more pixels, i.e. a path, e.g. using the mouse or his/her finger on a touch screen. The user may further draw the path using e.g. a freehand tool, line tool, spline tool, rectangle tool, ellipse tool or the like. This embodiment is especially useful in cases where the edges of an object cannot be detected automatically, i.e. the edges are not continuous, as presented in Figure 2c. The system is configured to read the user input and to find for each input pixel a corresponding pixel that represents an edge, e.g. the closest edge pixel, which is then highlighted. For example, if the user selects a pixel on the original image to be analysed, the pixel coordinates are read by the system and employed in finding the pixel representing the closest edge in the corresponding image to which the edge detection step has been applied, after which this pixel is highlighted for the user on an overlay of the original image. In Figure 2a the user has drawn a polyline from location 20 to location 23 by selecting line end points 21. In an embodiment, the corrected/translated user input pixels are highlighted and joined to form a polygon or polyline. In an embodiment, the polygons and polylines are smoothed by fitting them to a curve. The automatic pixel correction increases the quality of the determined objects and potentially speeds up the determination process compared to the case in which the user would draw the edges freehand. In an embodiment, the method provides a step allowing the user to set a range limit on how far away from the user-selected pixel the system is allowed to highlight a pixel. In other words, the user is able to change the range limit on how far along the normal of the user-drawn shape in the image the system is allowed to search for and highlight edge pixels. In Figure 2a the edge pixel 22 is within the allowed range limit and is thus highlighted. In Figure 2b the edge pixel 25, located on the normal of the polygon 26 drawn by the user, is not within the allowed range limit, so another pixel 24 is highlighted instead; this pixel lies on the edge of the same polygon after rounding/smoothing. The purpose of the smoothing is to find a more realistic/probable shape for the classified object.
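The snapping of user-selected pixels to nearby edges under a range limit might look as follows. This is a sketch assuming edge pixel coordinates have already been extracted from the edge map; all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def snap_to_edge(user_points, edge_coords, range_limit):
    """Replace each user-selected pixel by the nearest edge pixel within
    the user-set range limit; points with no edge in range are kept as-is."""
    tree = cKDTree(edge_coords)
    # query returns inf distance (and an out-of-range index) when no edge
    # pixel lies within range_limit of a point.
    dists, idxs = tree.query(user_points, distance_upper_bound=range_limit)
    snapped = []
    for p, d, i in zip(user_points, dists, idxs):
        snapped.append(tuple(edge_coords[i]) if np.isfinite(d) else tuple(p))
    return snapped
```

The snapped points would then be joined into a polyline or polygon and optionally smoothed by curve fitting, as described above.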
In an embodiment, the method provides a step to enclose an object automatically inside a polygon, e.g. in the following way: 1) the user clicks inside an object and classifies it; 2) the system fits the typical shape of the classified object inside the detected edges to find the edge pixels that collide/join with the pixels of the shape, these forming the corners of the polygon. This polygon may then be used as the basis for the user to determine other edge pixels, e.g. as done in Figure 2, or automatically by means of coded logic.
In an embodiment, the method comprises a step of presenting the highlighted edges together with the visual data for revision. The confirmed highlighted edges are shown to the user on top of the image so that errors and mistakes in the input process can be noticed, and the user can revise the markings and remove or amend them if necessary.
The user is able to confirm the highlighted pixels representing an edge, e.g. by clicking the mouse or pressing a key, and then to classify the object. The method presents the user with a control for choosing an item or items when he/she wants to classify an object, e.g. by clicking the mouse. The control may be e.g. a treeview, a menu, a drop-down list or a text field. In the case of the text field, the user may write at least the first letters of e.g. the name of an object, enabling him/her to view the list of object names matching the written characters. In Figure 1d the user has clicked the right mouse button to open a popup menu 13 in which the user is choosing a submenu item 14, which simultaneously confirms the highlighted edge 15 and classifies the object 16. In an embodiment, the control contains the names and classifications of all possible objects found in similar images. In another embodiment, the control contains the names and classifications of all possible objects found at the restricted anatomic location determined by the cursor location in similar images. In another embodiment, the control contains only the most probable names and classifications considered adequate after an additional data processing step, e.g. a programmatically coded sequence of logical operations or an artificial intelligence analysis, which aims e.g. to reduce the list by comparing the properties or derived properties of the object to predetermined limits. The property may be e.g. the dimension, shape, colour, texture or location of the object or of its adjacent object, or e.g. the type of the image. Other kinds of data, e.g. anamnestic or demographic data, may also be employed to reduce the list. Reducing the number of list items potentially speeds up the user's classification process.
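The text-field behaviour reduces, in essence, to prefix filtering of the candidate list; a trivial sketch with hypothetical object names:

```python
def filter_classifications(candidates, typed):
    """Narrow a control's item list to object names matching the typed
    prefix, case-insensitively — the text-field behaviour described above."""
    prefix = typed.strip().lower()
    return [name for name in candidates if name.lower().startswith(prefix)]

# e.g. filter_classifications(["Disc L4-L5", "Dural sac", "L5 nerve root"], "di")
# -> ["Disc L4-L5"]
```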
In an embodiment, the method may offer the name and classifications of the object whose edge was most recently confirmed as the first item in the control until the object is fully enclosed. In another embodiment, the method may offer as the first item in the menu the name and classifications of an already classified object whose pixels match the current object to be confirmed. In Figure 2c the classifications in the first menu item 28 are the same as those already chosen in Figure 1d, because the object 27 in Figure 2c and the object 16 in Figure 1d share pixels in three-dimensional space and thus refer to the same real object. In Figure 2c the user has selected the submenu item 29, "Compresses L5-root", since that information can be seen from the axial view. The user may choose the first item in the control e.g. by clicking the mouse or pressing a key such as the enter key. When the object is named, it is preferably simultaneously classified into its superclasses; e.g. the abdominal aorta may belong to the hierarchical classes aorta, artery, blood vessel and circulatory system.
In an embodiment, after confirming and naming the first edge of an object, or after choosing the object name/class to be selected, the user or preferably the system chooses the pre-selection strategy to be applied for selecting the edges optimal for the object under analysis. The pre-selection strategy of finding an enclosed area suits both localized structures/organs, such as the kidney, and cross-sections of more tubular structures, such as blood vessels, whereas the pre-selection strategy of finding parallel lines suits only tubular structures. The method may clearly be configured to prioritize certain kinds of shapes, e.g. more circular shapes, depending on the classified object, e.g. a blood vessel.
In an embodiment, the user is able to name or classify the finding before confirming the edge of the logical object. If the user considers that the highlighted pixels do not represent an edge of any logical object, or does not want to name it, the user does not have to do anything but move the cursor onward. If he/she does not find a certain object in the image to be analysed, the user may also report this, e.g. by choosing it from a list or checking a checkbox. In an embodiment, the user may further classify the object he/she has named into different subclasses. In this embodiment, the user may e.g. click a mouse button or press a key to view a list of subclasses, or use a text field in which writing at least the first letters shows the possible subclasses matching the written characters. The list of subclasses suggested by the system may also be shortened, e.g. according to the dimension, shape, location, colour and texture of the object. The subclass could be any word that describes the object, e.g. a measurement of the object, a simple descriptive class such as "small", "large", "round", "red", "uneven", "diffuse" or "infiltrating", a more statistical class such as "normal" or "abnormal", or any more specific classification. After the user has finished classifying an object, he/she is able to repeat the confirming and classifying steps for another object. In an embodiment, the method may classify objects automatically into subclasses without the need for the user to confirm the process, provided the system can be sufficiently sure that this is safe, e.g. when 1) the object matches the control object(s) with the same classification closely and any competing control object poorly, 2) the analysis depends on a large number of control objects, and 3) a risk analysis has been made for the process.
In an embodiment, the method provides a checklist to ensure the user does not forget to look for objects/findings that should be checked in the type of image the user is working on. The checklist may contain the most common findings found in similar images. A finding may be the name or classification of an object, or their combination. The checklist may contain all the findings that have been found earlier in similar images. In an embodiment, the user has to either confirm the corresponding edges of each item in the checklist or indicate the absence of the checklisted item from the image. In addition to the image, the method presents a finding such as the name of an anatomic structure (e.g. a tonsil) or an abnormal finding (e.g. liquid in the maxillary sinus) on the display. A listed finding may e.g. serve as a link to a separate site with further information on the matter. The user is prompted to move the cursor to the location of the image which corresponds to the finding and to select and confirm its edges. After the user has finished edge confirmation, the method may enable the user to classify the finding into its subclasses. The method presents new findings repeatedly until the edges of all the essential findings have been confirmed or reported not to exist in the image. The number of findings presented depends e.g. on the anatomic site and type of the image. The essential findings, e.g. anatomic structures and abnormalities, in the checklist have been listed for the use of the system by a medical professional or preferably a team of medical professionals.
In an embodiment, the method provides at least one example image for each item/finding in said checklist, e.g. an object name, to help the user remember e.g. the shape and location of the object in that kind of image. In addition to the image to be analysed and the name of the object, the method presents at least one example image in which the edges of a similar object are highlighted. The user may browse example images e.g. with the mouse wheel. The user is prompted to move the cursor to the location of the image to be analysed which corresponds to the object highlighted in the example image, and then to select and confirm the edges of the object in the image to be analysed. After the user has finished confirming and classifying the object, the method may present a new example object in a new or the same image, and the user is prompted to repeat the edge confirmation step.
In an embodiment, the user is able to browse example images with a similar object so that he/she can get a good understanding of the normal variance of the object. In an embodiment, the user is able to browse the example images as a video.
In an embodiment, the user is able to insert free text annotations into the image.
In an embodiment, the method provides an automatic shape detection step, preferably based on the edge detection step. This step may be based on artificial intelligence or statistical analysis. As a result of the shape detection step, the method may e.g. classify the object, highlight its edges and present a variety of information on the display, such as the name and classifications of the object and an estimate of the probability that the classification is correct. The user may confirm that he/she considers the object and its classifications valid, or edit or reject the object and/or its classifications.
The method provides a step of generating an automated medical report relating to the confirmed objects. The report generation is preferably executed after the user has finished confirming and classifying the objects in all the images of a study, to ensure that all dependencies between the classified objects are properly taken into account. Such a dependency may be, e.g., an enlarged object adjacent to a narrowed object in the image, suggesting that the enlarged object may compress the narrowed one. The classification of an object as "enlarged" may be done automatically, e.g. by comparing the dimensions and shape of the object both to other objects in the same image, to normalize its dimensions, and to other similar objects in the database, to get its position within the normal variation. Such compression is especially probable if the objects tend to behave that way, as e.g. an intervertebral disc may protrude and compress a nerve root.
The report generated for each object depends on the combination of its classifications, e.g. such that the method selects a pre-written text that matches the combination, or the text is generated more dynamically from the individual classifications. As an example, in Figure 2c the classifications 28 and 29 may result in the following report text: "Disc L4-L5 protrudes into the spinal canal on the right side, compressing the L5 nerve root against the lamina." The report may also contain automatically measured dimensions and other classifications, such as density approximations based on the colour of the object.
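In the simplest pre-written-text variant, the mapping may be sketched as a lookup table keyed by classification sets; the keys and report fragments below are hypothetical illustrations, not text from the patent.

```python
# Hypothetical pre-written report fragments keyed by classification combinations.
REPORT_TEMPLATES = {
    frozenset({"disc L4-L5", "protrusion", "right", "compresses L5-root"}):
        "Disc L4-L5 protrudes into the spinal canal on the right side, "
        "compressing the L5 nerve root against the lamina.",
}

def generate_report(objects):
    """Concatenate pre-written fragments matching each object's classification set."""
    lines = []
    for classifications in objects:
        text = REPORT_TEMPLATES.get(frozenset(classifications))
        if text:
            lines.append(text)
    return "\n".join(lines)
```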
The report may take into account the properties of the image(s) (e.g. type, anatomic area) and of the area (e.g. the dimension, shape, colour, texture and location of the area). In an embodiment, the method may also utilize three-dimensional data (e.g. the number of and distance between images, different projections, a three-dimensional model) computed from the images of a series or study when generating the medical report.
The list of exemplary parameters which may be utilized for report generation is not meant to be restrictive but any other parameter may also be used.
In an embodiment, the method provides a step of presenting a list of confirmed objects, each item of which may serve as a link to the image(s) where the object is highlighted. In another embodiment, the method provides a step of presenting a list of confirmed objects, each item of which may serve as a link to the automatically generated text for the object. In an embodiment, the user has the option to accept, edit or remove the automatically generated text or any part of it, e.g. by using a typical text editor.
The method provides a step of storing the data related to the confirmed edges in a memory of the electronic device. In an embodiment, the method provides a step of storing the generated medical report in a memory of the electronic device.
In an embodiment, the method provides a step in which the user analyses whether the quality of an image is good enough for any further analysis. In an embodiment, the method provides a step in which the screen type and resolution are checked to be good enough for analysing images. In an embodiment, the method provides a step in which a test image is shown to the user in order to check the quality of the screen. In an embodiment, the method provides a step in which a vision test is administered to the user who wants to analyse medical images, in order to check that the user can see well enough. In an embodiment, the method provides a step in which the IP address of the user is checked to ensure that only one user is using the user session at a time. In an embodiment, the method provides a step in which the identity of the user who analyses images is checked, e.g. by bank identification, smart card identification or other identification means. In an embodiment, the method provides a step in which the identity of the user who analyses medical images is checked during each session. In an embodiment, the method has authentication and authorization steps. In an embodiment, the method is an independent part, or preferably an integrated part, of medical image viewing software.
In a second aspect, the invention provides a software program comprising program code means configured to implement all the method steps of the first aspect when executed on a computer, a laptop, a tablet, a smart phone or another electronic device capable of running the software. Features described for the method may also apply to the software when appropriate.
In a third aspect, the invention provides a system for implementing the method of the first aspect. Features described for the method may also apply to the system when appropriate. The system may comprise e.g. a computer, a laptop, a server, a mobile device, a tablet, a smart phone or a combination of one or more of said devices, preferably functionally connected to each other, e.g. via a network. The system comprises a display for presenting information, a graphical user interface (GUI), a pointing device for receiving user input via the GUI, an electronic device having computing means for processing data and memory means for holding gathered data in the electronic device during its use, a storage device for storing gathered data permanently, and preferably software implementing the method. The system may also comprise a keyboard for inputting data. The listed parts of the electronic device may be physically or functionally connected with each other.
The GUI will typically be part of the interactive software. The pointing device can comprise e.g. a touch screen, a keyboard or a mouse. The user may then confirm a highlighted edge in the image using the mouse, or using his/her finger or an electronic pen on the touch screen. The confirmed edge is the user input, and a simple tap with a finger may give important data about whether or not the presented highlighted edge represents the edge of a logical object in the image having a real counterpart.
Finally, the user input data is stored in a memory of the electronic device. The memory can be volatile or non-volatile. The memory can be within the electronic device or within a peripheral device functionally connected to it; e.g. a user device may be connected via the internet to a server and a database.
Each type of medical image, e.g. an image of the pharynx taken by a camera or an MRI image of the lumbar spine, contains a limited number of essential anatomical structures and known abnormalities, i.e. objects, all of which can be listed. Human beings have a natural ability to detect borders and shapes in images, which remains challenging for automated/computerized systems. The invention intuitively combines the rapidity of computerized edge detection with human reassurance to enable the user to quickly confirm object edges in medical images. The invention maximizes the utilization of the user's ability to detect and identify shapes so effectively that it finally allows cost-effective classification of objects in medical images. Distinguishing the edges of an object reliably in a medical image is a prerequisite for any further analysis of that object.
An advantage of the invention is that the confirmation of the classified objects edges enables the automatic generation of detailed imaging reports increasing potentially both radiologists' work flow and the quality of reports.
Another advantage of the invention is that the user input data, the confirmed and classified object edges, can easily be used as teaching and testing material for artificial intelligence systems such as neural networks. Gathering the same kind of data in any other way, e.g. by a radiologist drawing edges with a freehand tool, would be slow and extremely expensive. As the data gathered using the invention accumulates, it is probable that artificial intelligence systems will become increasingly perceptive, resulting in better object classifications. The invention brings fully automatic report generation for medical images closer than ever.
An advantage of the invention is that the gathered edge data can be directly transformed into virtual objects with exactly known dimensions, which may be utilized as such or in health monitoring. As an example, the three-dimensional virtual objects calculated from an MRI image series may be used to customize a patient's personal three-dimensional human model. If the same study is executed more than once to update the virtual model, the velocity/acceleration of changes in the object dimensions can be calculated, certainly bringing valuable new information for diagnostics. The same calculation is possible for other aspects of objects derived from their colour and texture.
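The velocity calculation itself is elementary; a sketch, with units and names assumed purely for illustration:

```python
from datetime import date

def volume_change_rate(vol_mm3_then: float, date_then: date,
                       vol_mm3_now: float, date_now: date) -> float:
    """Velocity of volume change (mm^3 per day) between two studies of the
    same virtual object; the sign indicates growth or shrinkage."""
    days = (date_now - date_then).days
    if days == 0:
        raise ValueError("studies must be taken on different dates")
    return (vol_mm3_now - vol_mm3_then) / days

# e.g. volume_change_rate(1200.0, date(2015, 1, 10), 1350.0, date(2015, 5, 11))
# -> ~1.24 mm^3/day
```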
An advantage of the invention is that it enables the transfer of radiologists' routine work to other skilled persons, e.g. radiographers, who are capable of previewing images, confirming object edges and pre-classifying objects, leaving the most demanding tasks to the radiologist. A radiographer is typically a nurse educated for the task. Radiologists can detect abnormalities much more quickly and reliably if the abnormalities are marked for them, and can save considerable time if most or all of the objects have already been classified. This would reduce costs and potentially improve patient safety, with the prepared classifications providing a second opinion.
The invention enables better quality assessment of radiologists' work. The confirmed objects of one radiologist may be compared to the corresponding objects of other radiologists in the same image to find out whether variance exists. Routine, continuous cross-checking ensures that the quality of the radiologists' work remains high, that any classification negligence or erroneous edge is noticed immediately, and that persons producing poor-quality work can e.g. be informed about the matter.
Referring to the advantages described above, the invention has in practice such a revolutionary impact on the process of radiological reporting that it may be considered to yield an unexpected effect.
It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.

Claims (9)

1. A method for generating a medical imaging report, comprising the following steps performed by a system: presenting an image to a user on a display; enabling the user, by means of a user interface and a pointing device, to move a cursor within the image; characterized in that the system performs an edge detection process to find a pixel corresponding to an edge in said image; highlights a pixel depending on said edge detection step and on the cursor location; enables the user to input data via the user interface to confirm whether said highlighted pixel represents an edge of a logical object; reads said data input by the user; and stores data related to said user input in a memory of an electronic device.

2. A method according to claim 1, characterized in that the system presents to the user, via the user interface, a menu with selectable items for classifying the logical object into at least one class.

3. A method according to claim 1 or 2, characterized in that the system classifies said logical object into at least one class.

4. A method according to any one of claims 1-3, characterized in that the system provides a checklist of items which should be confirmed by means of said confirmation step.

5. A method according to any one of claims 1-4, characterized in that the system presents at least one example image for each item in the checklist.

6. A method according to any one of claims 1-3, characterized in that the system enables the user to set a range limit on how far from said user-selected image pixel the system is allowed to highlight a pixel.

7. A computer program, characterized in that it comprises program code means arranged to perform all the steps of a method according to any one of claims 1-6 when said computer program is executed on a computer.

8. A system performing all the steps of a method according to any one of claims 1-6, characterized in that it comprises a display for presenting information, a graphical user interface and a pointing device for receiving user input, an electronic device having means for processing data, memory means for holding gathered data in the electronic device during its use, and a storage device, functionally connected to the electronic device, for permanently storing the gathered data.

9. A system according to claim 8, characterized in that it comprises a server having means for processing data and a network connection, the server being connected to said electronic device and to said storage device via a network and being capable of sending data to and receiving data from said devices, said storage device being a database.
FI20155339A 2015-05-11 2015-05-11 Computer-aided medical imaging report FI126036B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FI20155339A FI126036B (en) 2015-05-11 2015-05-11 Computer-aided medical imaging report
PCT/FI2016/050296 WO2016181037A1 (en) 2015-05-11 2016-05-06 Computer aided medical imaging report

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20155339A FI126036B (en) 2015-05-11 2015-05-11 Computer-aided medical imaging report

Publications (2)

Publication Number Publication Date
FI126036B true FI126036B (en) 2016-06-15
FI20155339A FI20155339A (en) 2016-06-15

Family

ID=56106270

Family Applications (1)

Application Number Title Priority Date Filing Date
FI20155339A FI126036B (en) 2015-05-11 2015-05-11 Computer-aided medical imaging report

Country Status (2)

Country Link
FI (1) FI126036B (en)
WO (1) WO2016181037A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070116357A1 (en) * 2005-11-23 2007-05-24 Agfa-Gevaert Method for point-of-interest attraction in digital images
US8391603B2 (en) * 2009-06-18 2013-03-05 Omisa Inc. System and method for image segmentation
US8619093B2 (en) * 2010-07-20 2013-12-31 Apple Inc. Keying an image
US8503801B2 (en) * 2010-09-21 2013-08-06 Adobe Systems Incorporated System and method for classifying the blur state of digital image pixels

Also Published As

Publication number Publication date
FI20155339A (en) 2016-06-15
WO2016181037A1 (en) 2016-11-17

Similar Documents

Publication Publication Date Title
US8442280B2 (en) Method and system for intelligent qualitative and quantitative analysis of digital radiography softcopy reading
US7925653B2 (en) Method and system for accessing a group of objects in an electronic document
EP4170673A1 (en) Auto-focus tool for multimodality image review
CN105167793B (en) Image display device, display control unit and display control method
US10607122B2 (en) Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review
US20190172581A1 (en) Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review
CN111971752B (en) Display of medical image data
JP2006500124A (en) Method and system for reading medical images with a computer aided detection (CAD) guide
US11562587B2 (en) Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review
US20080166070A1 (en) Method for providing adaptive hanging protocols for image reading
JP6039427B2 (en) Using a structured library of gestures in a multi-touch clinical system
US20170262584A1 (en) Method for automatically generating representations of imaging data and interactive visual imaging reports (ivir)
JP2005510326A (en) Image report creation method and system
US9361711B2 (en) Lesion-type specific reconstruction and display of digital breast tomosynthesis volumes
US20230098785A1 (en) Real-time ai for physical biopsy marker detection
CN111383328A (en) 3D visualization method and system for breast cancer focus
FI126036B (en) Computer-aided medical imaging report
CN115249527A (en) Method and system for generating and structuring medical examination information
US20240062857A1 (en) Systems and methods for visualization of medical records
EP4339961A1 (en) Methods and systems for providing a template data structure for a medical report
JP2008029703A (en) Three-dimensional image display device
Kohlmann et al. Smart Linking of 2D and 3D Views in Medical Applications

Legal Events

Date Code Title Description
FG Patent granted

Ref document number: 126036

Country of ref document: FI

Kind code of ref document: B

MM Patent lapsed