WO2021207109A1 - Intelligent scrolling system
Intelligent scrolling system
- Publication number
- WO2021207109A1 (application PCT/US2021/025835)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- user
- dependent
- refresh rate
- timer
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
- G06F3/03543—Mice or pucks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- aspects of this disclosure are generally related to improving scrolling techniques on a mouse.
- a typical case might have 500 axial, 500 sagittal and 500 coronal slices for a total of 1,500 slices to be viewed for each case.
- a radiologist must rapidly scroll through these images using a computer mouse with a wheel button. The radiologist may use a one finger or two finger technique. To meet the time constraints the radiologist might only be able to spend 0.25 seconds per slice.
- the purpose of this patent is to provide an improved method of viewing and reporting on medical images.
- a method, software suite and apparatus are disclosed.
- the method presents images to a user at a user-controlled image refresh rate, generates a triggering event wherein the triggering event is associated with a timer-dependent image refresh rate, monitors for the triggering event and, when the triggering event occurs, presents images to the user at the timer-dependent image refresh rate.
- Some embodiments comprise wherein the user-controlled image refresh rate is performed by at least one of the group comprising: a rollerball on the mouse; a scroll wheel on a mouse; a click and drag movement on a mouse; and arrow keys on a keyboard.
- the triggering event comprises at least one of the group of: an imaging finding; an aspect of patient metadata; user eye tracking metrics; predetermined responses; report elements; and, user facial recognition metrics.
- the timer-dependent image refresh rate comprises at least one of the group comprising: a first timer-dependent image refresh rate that causes pausing at a single image for a minimum period of time wherein, after the minimum period of time has passed, the user-controlled refresh rate resumes; a second timer-dependent image refresh rate utilized for at least two consecutive images wherein the second timer-dependent image refresh rate is slower than the user-controlled refresh rate; a third timer-dependent image refresh rate utilized for at least two consecutive images wherein the third timer-dependent image refresh rate is faster than the user-controlled refresh rate; a fourth timer-dependent image refresh rate utilized only a limited number of times, such that after the limited number of times is exceeded, the refresh rate is user-controlled; and a fifth timer-dependent image refresh rate that is variable wherein a first image refresh rate is utilized when a first set of images is first presented and a second image refresh rate is utilized when the first set of images is presented a second time.
- Some embodiments comprise wherein a user notification is presented when a triggering event occurs wherein the user notification comprises at least one of the group of: visual annotation marker on the image; visual image manipulation techniques; auditory notification; and, tactile notification.
- Some embodiments comprise wherein the images comprise at least one of the group comprising: cross-sectional images; volume rendered images; and, images displayed on an extended reality headset.
- Some embodiments comprise an opportunity to turn off monitoring of the triggering event.
- Some embodiments comprise presenting images to a user at a user-controlled viewing parameter, generating a triggering event wherein the triggering event is associated with an image-dependent viewing parameter, monitoring for the triggering event and when the triggering event occurs, presenting the images to the user with the image-dependent viewing parameter.
- the user-controlled viewing parameter is performed by at least one of the group comprising: a user-performed strike of a hotkey on a keyboard; a user-performed click and drag movement on a mouse; a user-performed movement of a scroll wheel on a mouse; and a user-performed point and click on a drop-down menu.
- the purpose of the performance of the maneuvers is to achieve at least one of the group comprising: a user-desired window and level setting; a user-desired false color setting; a user-desired zoom setting; a user-desired image rotation position; a user-desired convergence point; a user-desired viewing angle setting; and a user-desired manipulation of voxels.
- Some embodiments comprise wherein the triggering event comprises at least one of the group of: an imaging finding; an aspect of patient metadata; user eye tracking metrics; predetermined responses; report elements; and, user facial recognition metrics.
- the image-dependent viewing parameter comprises at least one of the group comprising: a first image-dependent viewing parameter wherein the first image-dependent viewing parameter is a window width and window level setting for the entire dataset; a second image-dependent viewing parameter wherein the second image-dependent viewing parameter includes setting a window and level parameter for a first image slice independently from a window and level parameter for a second image slice; a third image-dependent viewing parameter wherein the third image-dependent viewing parameter includes displaying simultaneously a first visual representation adjustment logic for a first segmented structure and a second visual representation adjustment logic for a second segmented structure wherein the first visual representation adjustment logic is different from the second visual representation adjustment logic; a fourth image-dependent viewing parameter wherein the fourth image-dependent viewing parameter is a false color setting; a fifth image-dependent viewing parameter wherein the fifth image-dependent viewing parameter is a zoom setting; a sixth image-dependent viewing parameter wherein the sixth image-dependent viewing parameter is an image rotation setting; and a seventh image-dependent viewing parameter wherein the seventh image-dependent viewing parameter is a viewing angle setting.
- Some embodiments comprise wherein a user notification is presented when a triggering event occurs wherein the user notification comprises at least one of the group of: visual annotation marker on the image; visual image manipulation techniques; auditory notification; and, tactile notification.
- Some embodiments comprise wherein the images comprise at least one of the group comprising: cross-sectional images; volume rendered images; and, images displayed on an extended reality headset.
- Some embodiments comprise an opportunity to turn off monitoring of the triggering event.
- Some embodiments comprise presenting an image reporting system to a user with user-controlled reporting parameters, generating a triggering event wherein the triggering event is associated with an image-dependent reporting parameter, monitoring for the triggering event and when the triggering event occurs, presenting the image reporting system to the user with the image-dependent reporting parameter.
- the user-controlled reporting parameter is performed by at least one of the group comprising: a user-performed strike of a button on a microphone; a user-performed strike of a hotkey on a keyboard; a user-performed click and drag movement on a mouse; a user-performed movement of a scroll wheel on a mouse; and a user-performed point and click on a drop-down menu.
- the purpose of the user inputs is to achieve at least one of the group comprising: a user-desired input of text; a user-desired alteration of text; a user-desired deletion of text; and a user-desired navigation from a first section of a report to a second section of a report.
- Some embodiments comprise wherein the triggering event comprises at least one of the group of: an imaging finding; an aspect of patient metadata; user eye tracking metrics; and, user facial recognition metrics.
- the image-dependent reporting parameter comprises at least one of the group comprising: a first image-dependent reporting parameter wherein text is automatically inputted into a section of a report; a second image-dependent reporting parameter wherein text in a section of a report is automatically altered; a third image-dependent reporting parameter wherein text in a section of a report is automatically deleted; and a fourth image-dependent reporting parameter wherein a cursor is automatically moved from a first section of a report to a second section of a report.
- Some embodiments comprise wherein a user notification is presented when a triggering event occurs wherein the user notification comprises at least one of the group of: visual annotation marker on the image; visual image manipulation techniques; auditory notification; and, tactile notification.
- the images comprise at least one of the group comprising: cross-sectional images; volume rendered images; and, images displayed on an extended reality headset.
- Some embodiments comprise an opportunity to turn off monitoring of the triggering event.
- Some embodiments comprise a method of reviewing images and reporting comprising: presenting images to a user at a user-controlled image refresh rate and user-controlled viewing parameter; presenting an image reporting system to the user with user-controlled reporting parameters; generating a first triggering event wherein the first triggering event is associated with a timer-dependent image refresh rate; generating a second triggering event wherein the second triggering event is associated with an image-dependent viewing parameter; generating a third triggering event wherein the third triggering event is associated with an image-dependent reporting parameter; monitoring for the first triggering event; monitoring for the second triggering event; monitoring for the third triggering event; when the first triggering event occurs, presenting the images to the user at the timer-dependent image refresh rate; when the second triggering event occurs, presenting the images to the user with the image-dependent viewing parameter; and when the third triggering event occurs, presenting the image reporting system to the user with the image-dependent reporting parameter.
- Types of patient medical condition data would include, but not be limited to: doctor's examination data; the patient's health records; artificial intelligence (AI)/machine learning; the patient's responses to medical personnel questioning; and medical images taken in previous visits to the hospital or medical imaging facility.
- AI: artificial intelligence
- the medical image slices containing portions of the lungs could be treated as explained in the embodiments that follow.
- the medical image slices containing portions of the heart could be treated as explained in the embodiments that follow.
- the medical image slices containing portions of the abdomen and appendix could be treated as explained in the embodiments that follow.
- for a patient in for a scheduled follow-up appointment on the status of a tumor, the medical images from a previous imaging session could be retrieved and displayed in conjunction with currently obtained medical images.
- the scrolling process would automatically pause for a period of time, the duration of which could be specified by the individual radiologist, specified by the best practices of the medical facility, or derived from studies regarding miss rate as a function of viewing time.
- the pause would, in the preferred embodiment, override the finger scrolling technique. Therefore, even if the finger were still moving on the scroll wheel, the same image (e.g., a single axial slice) would remain on the screen for the duration of the pause. This would allow the radiologist additional saccadic eye movements and additional fixation points on the screen for that particular slice.
- the pause can be event triggered.
- Triggering events include, but are not limited to, the following: an AI-detected abnormality; a specific anatomic feature which is known to require additional time for review; and the same slice as a prior exam wherein a lesion was identified and marked. Other triggering events are discussed in the figures.
- In some embodiments, a portion of the image slice(s) could be highlighted through a change of contrast (e.g., increasing contrast in a region pertaining to pertinent patient medical condition data or subduing contrast in regions not related to pertinent patient medical condition data).
- external symbols could be employed to annotate the slice(s) of the region(s) pertaining to the patient medical condition data.
- Examples of the external symbols would include, but not be limited to, the following: an arrow, a circle, or a color change of the image slice boundary.
- medical image slice(s) could blink as a signal to the radiologist that the region(s) pertaining to the patient medical condition data was about to be viewed so that the radiologist could self-modify the scrolling and/or viewing technique.
- the radiologist could self-modify the scrolling and/or viewing technique.
- he/she could use the finger tracing technique.
- an external device could be attached to the radiologist's viewing system which gives an audio signal when the radiologist is scrolling through slice(s) of the region(s) pertaining to the patient medical condition data.
- an external device could be attached to the radiologist's viewing system which gives a tactile signal when the radiologist is scrolling through slice(s) of the region(s) pertaining to the patient medical condition data.
- the audio and tactile devices could be combined in a system such as a buzzer.
- the medical image data could be segmented into the regions/organs within the body. This would include, but not be limited to: heart, lungs, liver, kidney, bone structure, stomach, intestines, and spleen (all of the specific items on the medical facility checklist). These segmented regions could be presented to the radiologist by various means such as, but not limited to: being called up by a pull-down menu; an audio command by the radiologist; arrangement in the sequence of the checklist; or arrangement in priority of the region(s) pertaining to the patient medical condition data.
- the radiologist may set a scroll rate which would obviate the need for the two-finger scroll. This rate would be under the control of the viewing radiologist, and he/she could have multiple scroll rate default values depending on the type of medical images, the checklist item, and the type of view (e.g., sagittal vs. coronal).
- the scroll rate could automatically change in region(s) pertaining to the patient medical condition data.
- the scrolling could be interrupted by the radiologist at any time by interactions with the workstation through the keyboard, mouse or verbal command.
- the medical image slice data could be combined to form volumetric data per U.S. Patent 8,384,771 and displayed in 3D for the radiologist.
- This volumetric data would nominally be segmented and filtered in accordance with the medical facility checklist.
- the scrolling could change the viewing position of the radiologist with respect to the volumetric data.
- the scrolling could be done automatically for all four sides, plus the top and bottom.
- the scroll rate could also automatically change in region(s) pertaining to the patient medical condition data.
- the segmentation process described above could be combined with enlarging the regions/organs presented on the radiologist's display pertaining to the patient medical condition data.
- scrolling techniques coupled with the region(s) pertaining to the patient medical condition data could be used individually or in combination based on radiologist preference or by medical facility best practices. For example, the scrolling could be paused and an arrow shown to indicate a region pertaining to the patient medical condition data.
- Figure 1A illustrates prior art wherein the left index finger, at the top of the mouse roller wheel, is moving the mouse roller wheel toward the user while the right index finger, at the bottom of the mouse roller wheel, is just lifting off of the mouse roller wheel at time point #1.
- Figure 1B illustrates prior art wherein the left index finger, at the bottom of the mouse roller wheel, is just lifting off of the mouse roller wheel while the right index finger, at the top of the mouse roller wheel, is moving the mouse roller wheel toward the user at time point #2.
- Figure 2 illustrates prior art showing scroll rates.
- FIG. 3 illustrates a processing diagram for the key steps in this patent.
- Figure 4 illustrates an apparatus for implementing the process illustrated in figure 3.
- Figure 5 illustrates a description of the user-controlled refresh rate including input methods and what the input methods accomplish.
- Figure 6 illustrates a description of the user-controlled viewing parameters including input methods and what the input methods accomplish.
- Figure 7 illustrates a description of the user-controlled reporting parameters including input methods and what the input methods accomplish.
- Figure 8 illustrates an example list of triggering events.
- Figure 9 illustrates a chart showing the three categories of predetermined responses.
- Figure 10 illustrates a list of the predetermined responses category #1, which is the timer-dependent image refresh rate (first of three figures teaching this method).
- Figure 11 illustrates a list of the predetermined responses category #1, which is the timer-dependent image refresh rate (second of three figures teaching this method).
- Figure 12 illustrates a list of the predetermined responses category #1, which is the timer-dependent image refresh rate (third of three figures teaching this method).
- Figure 13 illustrates a list of the predetermined responses category #2, which is the image-dependent viewing parameter.
- Figure 14 illustrates an example list of the predetermined responses category #2, which is the image-dependent viewing parameter.
- Figure 15 illustrates an example list of the predetermined responses category #2, which is the image-dependent viewing parameter.
- Figure 16 illustrates an example list of the predetermined responses category #2, which is the image-dependent viewing parameter.
- Figure 17 illustrates an example list of the predetermined responses category #3, which is the image-dependent reporting parameter.
- Figure 18 illustrates an example list of the predetermined responses category #3, which is the image-dependent reporting parameter.
- Figure 19 illustrates an example of triggering events matched to the predetermined response of timer-dependent image refresh rate.
- Figure 20 illustrates a method of automatically performing window and level settings for improved viewing of a lesion detected by an AI algorithm.
- Figure 21 illustrates a method of automatically performing window and level settings for improved viewing of a lesion detected by an AI algorithm.
- Figure 22 illustrates the new scrolling technique implemented in this patent.
- Figure 23A illustrates the integration of multiple factors to determine the optimum amount of time spent on each slice.
- Figure 23B illustrates application of the algorithm to a first example, slice #300, which corresponds to the second row of the table in Figure 23A.
- Figure 23C illustrates application of the algorithm to a second example, slice #270, which corresponds to the third row of the table in Figure 23A.
- Figure 23D illustrates application of the algorithm to a third example, slice #150, which corresponds to the fourth row of the table in Figure 23A.
- Figure 24A illustrates a first example of patient conditions wherein the embodiments could be employed to enhance the radiologist's accuracy of diagnosis.
- Figure 24B illustrates a second example of patient conditions wherein the embodiments could be employed to enhance the radiologist's accuracy of diagnosis.
- Figure 24C illustrates a third example of patient conditions wherein the embodiments could be employed to enhance the radiologist's accuracy of diagnosis.
- Figure 25A illustrates a slice (e.g., slice N within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- Figure 25B illustrates a slice (e.g., slice N+1 within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- Figure 25C illustrates a slice (e.g., slice N+2 within the sequence) of a set of medical images which is displayed for a duration of 3.5 seconds (note: this particular slice is linked to a triggering event to cause a predetermined response of a slowing of the timer-dependent image refresh rate).
- Figure 25D illustrates a slice (e.g., slice N+3 within the sequence) of a set of medical images which is displayed for a duration of 3.5 seconds (note: this particular slice is linked to a triggering event to cause a predetermined response of a slowing of the timer-dependent image refresh rate).
- Figure 25E illustrates a slice (e.g., slice N+4 within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- Figure 25F illustrates a slice (e.g., slice N+5 within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- Figure 26A illustrates a slice of medical imagery which has, within the slice, a tumor at time point #2.
- Figure 26B illustrates a slice of medical imagery which has, within the slice, a tumor at time point #1.
- Figure 27 illustrates an example type of triggering event including utilization of eye tracking in relation to anatomic feature.
- Figure 28A illustrates a slice with conventional “abdomen window and level” setting.
- Figure 28B illustrates a text box with an example 3-pronged triggering event and a matched predetermined response of an image-dependent viewing parameter.
- Figure 28C illustrates a second example of “halo windowing” as described in US Patent Application #16/785,606.
- Figure 29A illustrates example applications of image-dependent viewing parameters for advanced viewing on extended reality displays.
- Figure 29B illustrates an example left eye view of a breast cancer that has undergone the segmentation process with the majority of non-breast-cancer matter subtracted through a filtration process.
- Figure 29C illustrates an example right eye view of a breast cancer that has undergone the segmentation process with the majority of non-breast-cancer matter subtracted through a filtration process.
- Figure 29D illustrates an example left eye view of a breast cancer that has undergone the segmentation process with the majority of non-breast-cancer matter subtracted through a filtration process, with the viewing position zoomed inward closer to the breast cancer as compared to the viewing position from Figure 29B.
- Figure 29E illustrates an example right eye view of a breast cancer that has undergone the segmentation process with the majority of non-breast-cancer matter subtracted through a filtration process, with the viewing position zoomed inward closer to the breast cancer as compared to the viewing position from Figure 29C.
- Figure 30A illustrates an annotation of a particular region of interest on a slice by an arrow pointing to a region(s) pertaining to a finding identified by a CAD/AI algorithm, which is an example of a notification of a triggering event.
- Figure 30B illustrates an annotation of a particular region of interest on a slice by a circle encircling a region(s) pertaining to a finding identified by a CAD/AI algorithm, which is an example of a notification of a triggering event.
- Figure 30C illustrates an annotation of the outer edges of the slice with a colored line pertaining to a finding identified by a CAD/AI algorithm, which is an example of a notification of a triggering event.
- Figure 31A illustrates an example layout of a radiologist workstation with an audio addition (transmit only or transmit/receive) to alert the radiologist that the displayed slice(s) pertain to a region(s) indicating that a triggering event has occurred.
- Figure 31B illustrates an example layout of a radiologist workstation with a vibration mechanism to alert the radiologist that the displayed slice(s) pertain to a region(s) indicating that a triggering event has occurred.
- Figure 31C illustrates an example layout of a radiologist workstation with a buzzer that both emits a sound and also vibrates to alert the radiologist that the displayed slice(s) pertain to a region(s) indicating that a triggering event has occurred.
- Figure 32A illustrates a flow chart showing the triggering event criteria and the matched predetermined response of a timer-dependent image refresh rate.
- Figure 32B illustrates application of the triggering event criteria and the matched predetermined response of a timer-dependent image refresh rate in Figure 32A.
- Figure 33 illustrates the integration of triggering events, timer-dependent image refresh rate, image-dependent viewing parameter, and image-dependent reporting parameter utilization into the interpretation of a chest x-ray.
- Figure 1A illustrates prior art wherein the left index finger, at the top of the mouse roller wheel, is moving the mouse roller wheel toward the user while the right index finger, at the bottom of the mouse roller wheel, is just lifting off of the mouse roller wheel at time point #1.
- 100 illustrates the mouse.
- 101 illustrates the left mouse button.
- 102 illustrates the right mouse button.
- 103 illustrates the mouse roller wheel.
- 104 illustrates the left hand with the left index finger at the top of the mouse roller wheel moving the mouse roller wheel 103 toward the user.
- 105 illustrates the right index finger at the bottom of the mouse roller wheel 103 just lifting off of the mouse roller wheel 103.
- This illustration represents a first time point.
- Figure 1B illustrates prior art wherein the left index finger, at the bottom of the mouse roller wheel, is just lifting off of the mouse roller wheel while the right index finger, at the top of the mouse roller wheel, is moving the mouse roller wheel toward the user at time point #2.
- 100 illustrates the mouse.
- 101 illustrates the left mouse button.
- 102 illustrates the right mouse button.
- 103 illustrates the mouse roller wheel.
- 106 illustrates the left hand with the left index finger at the bottom of the mouse roller wheel just lifting off of the mouse roller wheel 103.
- 107 illustrates the right index finger at the top of the mouse roller wheel 103 moving the mouse roller wheel 103 toward the user.
- This illustration represents a second time point. Note that the first time point and second time point both show snapshots in time of the mouse roller wheel 103 moving toward the user. This serves to generate a faster scrolling technique and speeds imaging interpretation. Time is not lost, as it would be using only a single finger, which, having completed one scroll, must move back to the top of the roller wheel and re-commence scrolling.
- FIG. 2 illustrates prior art showing scroll rates.
- the first method is the scroll wheel as shown in Figure 1 wherein the radiologist uses a single finger or both fingers to move from slice to slice.
- the second method is by holding down the arrow key.
- a radiologist was asked on a standard Radiology Picture Archiving Communication System (PACS) to hold the down arrow key and see how long it took for all images to be displayed.
- PACS: Radiology Picture Archiving Communication System
- the radiologist reviewed a stack of 512 slices. It took 83 seconds of holding the down arrow key to go through all 512 slices. This is equivalent to 0.16 seconds per slice. All slices are shown for the exact same amount of time.
- FIG. 3 illustrates a processing diagram for the key steps in this patent.
- Step 300 illustrates generating a list of predetermined triggering events.
- Step 301 illustrates generating a list of predetermined responses comprising at least one of the group of: a timer-dependent image refresh rate; an image-dependent viewing parameter; and an image-dependent reporting parameter.
- Step 302 illustrates matching each triggering event with at least one predetermined response.
- Step 303 illustrates presenting images to a user with user-controlled image refresh rate, user-controlled viewing parameter(s) and presenting an image reporting system to a user with user-controlled reporting parameter(s).
- Step 304 illustrates monitoring for a triggering event.
- Step 305 illustrates a time step wherein no triggering event occurs, in which case the next processing step is to return to step 304 wherein monitoring for a triggering event is performed.
- Step 306 illustrates a time step wherein a predetermined triggering event occurs.
- Step 307 illustrates performing the predetermined image adjustment(s) matched to the triggering event. Upon completing the predetermined image adjustment(s) from step 307, the next processing step is to return to step 304 and continue monitoring for a triggering event.
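- As a rough illustration of the Figure 3 processing loop, the following Python sketch (all function and event names are hypothetical, not taken from the patent) matches triggering events to predetermined responses and cycles between monitoring (step 304) and performing the matched adjustment (step 307):

```python
# Minimal sketch of the Figure 3 loop; every name here is hypothetical.
import time
from typing import Callable, Dict, List

def pause_on_slice() -> None:
    time.sleep(1.00)  # predetermined response: timer-dependent refresh rate

def apply_preset_window_level() -> None:
    print("  response: apply preset window/level")  # image-dependent viewing parameter

# Steps 300-302: list the predetermined triggering events, each matched to a response.
RESPONSES: Dict[str, Callable[[], None]] = {
    "ai_nodule_on_slice": pause_on_slice,
    "checklist_item_changed": apply_preset_window_level,
}

def detect_events(slice_index: int) -> List[str]:
    # Placeholder detector: pretend an AI finding sits on slice 2.
    return ["ai_nodule_on_slice"] if slice_index == 2 else []

def review_loop(num_slices: int, user_rate_s: float = 0.16) -> None:
    for i in range(num_slices):
        print(f"display slice {i}")          # step 303: user-controlled presentation
        for event in detect_events(i):       # step 304: monitor for triggering events
            RESPONSES[event]()               # steps 306-307: matched predetermined response
        time.sleep(user_rate_s)              # step 305: no event, keep scrolling and monitoring

review_loop(num_slices=5)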
- FIG. 4 illustrates an apparatus for implementing the process illustrated in figure 3.
- a radiologic imaging system 400 (e.g., X-ray, ultrasound, CT (computed tomography), PET (positron emission tomography), or MRI (magnetic resonance imaging)) generates the medical images 402.
- the medical images 402 are provided to an image processor 406 that includes processors 408 (e.g., CPUs and GPUs), volatile memory 410 (e.g., RAM), and non-volatile storage 412 (e.g., HDDs and SSDs).
- a program 414 running on the image processor implements one or more of the steps described in figure 3.
- the IO device 416 may include a virtual reality headset, mixed reality headset, augmented reality headset, monitor, tablet computer, PDA (personal digital assistant), mobile phone, or any of a wide variety of devices, either alone or in combination.
- the IO device 416 may include a touchscreen, and may accept input from external devices (represented by 418) such as a keyboard, mouse, and any of a wide variety of equipment for receiving various inputs. However, some or all the inputs could be automated, e.g. by the program 414.
- a series of processing strategies 420 are implemented, which facilitate viewing of medical images by medical personnel.
- Figure 5 illustrates a description of the user-controlled refresh rate including input methods and what the input methods accomplish.
- 500 illustrates an overview of the user-controlled refresh rate.
- User inputs include, but are not limited to, the following: a user-performed finger movement on a rollerball on the mouse; a user-performed finger movement on a scroll wheel on a mouse; a user-performed click and drag movement on a mouse; and a user-performed finger strike of an arrow key on a keyboard.
- What the user inputs accomplish include, but are not limited to, the following: user-desired movement of a single step (e.g., from one image slice to the adjacent image slice).
- Figure 6 illustrates a description of the user-controlled viewing parameter including input methods and what the input methods accomplish.
- 600 illustrates an overview of the user-controlled viewing parameter.
- User inputs include, but are not limited to, the following: a user-performed strike of a hotkey on a keyboard; a user-performed click and drag movement on a mouse; a user-performed movement of a scroll wheel on a mouse; and a user-performed point and click on a drop-down menu.
- What the user inputs accomplish include, but are not limited to, the following: a user-desired window and level setting; a user-desired false color setting; a user-desired zoom setting; a user-desired image rotation position; a user-desired convergence point; and a user-desired viewing angle setting.
- Figure 7 illustrates a description of the user-controlled reporting parameters including input methods and what the input methods accomplish. 700 illustrates an overview of the user-controlled reporting parameter.
- User inputs include, but are not limited to, the following: a user-performed strike of a button on a microphone; a user-performed strike of a hotkey on a keyboard; a user-performed click and drag movement on a mouse; a user-performed movement of a scroll wheel on a mouse; and a user-performed point and click on a drop-down menu.
- What the user inputs accomplish include, but are not limited to, the following: a user-desired input of text; a user-desired alteration of text; a user-desired deletion of text; and, a user-desired navigation from a first section of a report to a second section of a report.
- Figure 8 illustrates an example list of triggering events.
- 800 illustrates the list of triggering events.
- the list of triggering events includes, but is not limited to, the following: imaging findings; imaging metadata; user eye tracking data; user facial recognition data; and combinations thereof.
- a first example is findings categorized by AI (e.g., an AI algorithm determines a head CT scan is abnormal, showing an intracranial hemorrhage; an AI algorithm determines that a head CT scan is normal; etc.).
- a second example includes findings of a segmented structure. Some small structures are very important in the human body and other small structures are not as important clinically. One such structure that is important is the pituitary stalk. This is a thin structure connecting the pituitary gland to the hypothalamus. This structure has a disproportionate amount of pathology and therefore deserves a disproportionate amount of attention from a radiologist. Therefore, when the pituitary stalk appears on the image, this appearance event is an example of a triggering event.
- Another example is the property of a structure being reviewed. It is easier to hide a subtle finding in a complex scene, such as finding Waldo in the children's game Where's Waldo. Waldo can easily be hidden because the scene is so complex. Therefore, the complexity of the scene can serve as a triggering event (e.g., highly complex, heterogeneous scenes can cause a triggering event). Additionally, some slices may contain several important structures (e.g., a coronal image through the sella and pituitary stalk and cavernous sinuses), which can also serve as a triggering event.
- the radiologist may prefer to use a standard window and level setting for the abdomen. Then, after the last checklist item is reviewed using the standard window and level setting for the abdomen, the radiologist may elect to move to the bone items on the checklist. The action of moving to the next item on the checklist itself can act as a triggering event.
- Metadata associated with the imaging examination can also be used as a triggering event.
- a first example is patient history relevant to a particular structure being imaged. For example, if the patient’s medical record showed a history of kidney disease and elevated creatinine, then this metadata can serve as a triggering event.
- the preferred embodiment would be for metadata to be a component of a triggering event, rather than a trigger in isolation.
- a dual triggering event would occur when both of the following occur: first, a history of kidney disease is identified in the patient’s medical record; and, second, the kidney is within the active item being reviewed by the radiologist.
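- A minimal sketch of that dual triggering event in Python (hypothetical names; the patent does not specify an implementation):

```python
# Hypothetical sketch: metadata alone does not trigger; both conditions must hold.
def kidney_dual_trigger(history_of_kidney_disease: bool,
                        kidney_in_active_item: bool) -> bool:
    return history_of_kidney_disease and kidney_in_active_item

assert kidney_dual_trigger(True, True)          # both conditions -> triggers
assert not kidney_dual_trigger(True, False)     # history alone is not enough
assert not kidney_dual_trigger(False, True)     # anatomy alone is not enough
```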
- user eye tracking metrics can serve as a triggering event.
- the user can also perform zooming and panning on his own, and eye tracking can be performed in accordance with methods disclosed in US Patent Application 62/985,363.
- metrics such as the number of fixation points within a certain distance of a segmented structure can be computed.
- a triggering event could be that a minimum of 5 seconds and 10 fixation points need to be performed on each imaging slice containing the pituitary stalk.
- a triggering event could occur once these metrics are achieved.
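- A sketch of such an eye-tracking trigger, using the hypothetical thresholds from the example above (the patent does not prescribe specific values):

```python
# Hypothetical check: slices containing the pituitary stalk require a minimum
# of 5 seconds of dwell time and 10 fixation points before the trigger fires.
def eye_metrics_met(dwell_seconds: float, fixation_points: int,
                    contains_pituitary_stalk: bool) -> bool:
    if not contains_pituitary_stalk:
        return True  # no minimum applies to other slices
    return dwell_seconds >= 5.0 and fixation_points >= 10

assert eye_metrics_met(6.2, 12, contains_pituitary_stalk=True)
assert not eye_metrics_met(3.0, 12, contains_pituitary_stalk=True)  # too little dwell time
```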
- user facial expression recognition metrics can be performed. For example, a metric relating to attentiveness is an example of a triggering event.
- Predetermined response criteria can also act as a triggering event.
- a timer-dependent image refresh rate can act as a triggering event and cause an image-dependent viewing parameter to occur.
- any of the above can be put together as a triggering event.
- Figure 9 illustrates a chart showing the three categories of predetermined responses.
- 900 illustrates the text box.
- the first category of predetermined response is the timer-dependent image refresh rate.
- the second category is the image-dependent viewing parameter.
- the third category is the image-dependent reporting parameter.
- Figure 10 illustrates a list of the predetermined responses category #1, which is the timer-dependent image refresh rate. This is the first of three figures to teach this method. 1000 illustrates a text box with three types of timer-dependent image refresh rates.
- the first timer-dependent image refresh rate causes a pause at a single image for a minimum period of time. For example, assume the user-controlled scrolling rate is to see each image for 0.16 seconds per slice. The timer-dependent refresh rate, when triggered, causes a pause of 1.00 seconds. After this minimum period of time has passed, the user-controlled refresh rate (0.16 seconds per slice) resumes. This is useful because it forces the user to slow down when a triggering event occurs (e.g., the triggering event is the detection by an AI algorithm of a small pulmonary nodule on only a single slice), which could otherwise easily be missed by a user.
- triggering event is the detection by an AI algorithm of a small pulmonary nodule on only a single slice
- a triggering event has caused the predetermined response of the timer-dependent image refresh rate to pause at a single image for a minimum period of time.
- the second timer-dependent image refresh rate is utilized for at least two consecutive images wherein the timer-dependent image refresh rate is slower than the user-controlled refresh rate. For example, assume the user-controlled scrolling rate is to see each image for 0.16 seconds per slice.
- the timer-dependent refresh rate, when triggered, causes a pause of 1.00 seconds on two consecutive slices. After the 1.00 second pause on the first slice, the second slice is displayed. After the 1.00 second pause on the second slice, the user-controlled refresh rate (0.16 seconds per slice) resumes for the third slice and onward.
- triggering event is the detection by an AI algorithm of a small pulmonary nodule on two slices, which could otherwise easily be missed by a user.
- a triggering event has caused the predetermined response of the timer-dependent image refresh rate to be utilized for at least two consecutive images wherein the timer-dependent image refresh rate is slower than the user-controlled refresh rate.
- the timer-dependent refresh rate, when triggered, causes a pause of 0.01 seconds on two consecutive slices. After the 0.01 second pause on the first slice, the second slice is displayed. After the 0.01 second pause on the second slice, the user-controlled refresh rate (0.16 seconds per slice) resumes. This is useful because it forces the user to speed up on non-important data (e.g., the triggering event is the detection by an AI algorithm of the air gap above the patient's head on a head CT scan).
- a triggering event has caused the predetermined response of the timer-dependent image refresh rate to be utilized for at least two consecutive images wherein the timer-dependent image refresh rate is faster than the user-controlled refresh rate.
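- The three behaviors above (pause, slow down, speed up) can be summarized in a small Python sketch; the structure is hypothetical, but the durations follow the examples in the text:

```python
# Sketch of the first three timer-dependent refresh rates from Figure 10.
USER_RATE_S = 0.16  # user-controlled rate, seconds per slice

def slice_duration_s(i: int, pause: set, slow: set, fast: set) -> float:
    if i in pause:
        return 1.00        # type 1: pause at a single image for a minimum time
    if i in slow:
        return 1.00        # type 2: slower rate over >= 2 consecutive images
    if i in fast:
        return 0.01        # type 3: faster rate over >= 2 consecutive images
    return USER_RATE_S     # user-controlled refresh rate resumes

# Example: air gap on slices 0-1 (fast), AI-detected nodule on slices 4-5 (slow).
print([slice_duration_s(i, set(), {4, 5}, {0, 1}) for i in range(7)])
# -> [0.01, 0.01, 0.16, 0.16, 1.0, 1.0, 0.16]
```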
- Figure 11 illustrates a list of the predetermined responses category #1, which is the timer-dependent image refresh rate. This is the second of three figures to teach this method. 1100 illustrates a text box with two types of timer-dependent image refresh rates.
- the fourth timer-dependent image refresh rate is utilized only a limited number of times, such that after the limited number of times is exceeded, the refresh rate is user-controlled. For example, first, assume the user-controlled scrolling rate is to see each image for 0.16 seconds per slice. The timer-dependent refresh rate, when a triggering event occurs, causes a pause of 1.00 seconds. After this minimum period of time has passed, the user-controlled refresh rate (0.16 seconds per slice) resumes. Next, assume that the user scrolls back over the same slice whose triggering event caused the pause of 1.00 seconds. An option at this juncture is to no longer require the 1.00 second delay, so that the second time the slice is presented, it is shown for 0.16 seconds (not 1.00 seconds).
- triggering event is the detection by an AI algorithm of a small pulmonary nodule on only a single slice
- a triggering event has caused the predetermined response of the timer-dependent image refresh rate to be utilized only a limited number of times as described.
- the fifth timer-dependent image refresh rate is variable wherein a first image refresh rate is utilized when a first set of images is first presented and wherein a second image refresh rate is utilized when the first set of images is presented a second time.
- the user-controlled scrolling rate is to see each image for 0.16 seconds per slice.
- the timer-dependent refresh rate, when triggered, causes a pause of 1.00 seconds.
- after this minimum period of time has passed, the user-controlled refresh rate (0.16 seconds per slice) resumes.
- An option at this juncture is to reduce the length of the delay to somewhere in between 0.16 seconds and 1.00 seconds. This is useful because it forces the user to slow down on an important finding (e.g., the triggering event is the detection by an AI algorithm of a small pulmonary nodule on only a single slice), which could otherwise easily be missed by a user.
- a triggering event has caused the predetermined response of the timer-dependent image refresh rate to be variable as described.
- Figure 12 illustrates a list of the predetermined responses category #1, which is the timer-dependent image refresh rate.
- 1200 illustrates a text box with two timer-dependent image refresh rates. This is the third of three figures to teach this method.
- the sixth timer-dependent refresh rate is user-dependent wherein a first set of images is presented to a first user at a first timer-dependent image refresh rate and the first set of images is presented to a second user at a second timer-dependent image refresh rate.
- User #1 has a first preference.
- User #2 has a second preference. The first preference and second preference are different.
- User #1 sets a first preference for a 1.00 second delay each time a certain triggering event occurs (e.g., triggering event is the detection by an AI algorithm of a small pulmonary nodule on only a single slice), which could otherwise easily be missed by a user.
- User #2 sets a different preference setting as compared to User #1. For example, User #2 sets a 1.50 second delay each time the same triggering event occurs (e.g., the triggering event is the detection by an AI algorithm of a small pulmonary nodule on only a single slice), which could otherwise easily be missed by a user. This is useful because different users may have different abilities or personal preferences. Additionally, the same user could choose to vary the timer-dependent refresh rates for a variety of other reasons, such as the time of day. Thus, a triggering event has caused the predetermined response of the timer-dependent image refresh rate to be user-dependent as described.
- the seventh timer-dependent refresh rate is independent of the user-inputted image refresh rate and occurs over at least two consecutive images.
- assume a user wants to take a hands-off approach and is therefore not scrolling, and also assume that some of the slices are tied to triggering events and some of the slices are not tied to triggering events. This is discussed in detail in Figure 22. This is useful because different users may have different preferences. Additionally, the same user could choose to vary the timer-dependent refresh rates for a variety of other reasons or allow the timer-dependent refresh rate to be determined by an AI algorithm. Thus, a triggering event has caused the predetermined response of the timer-dependent image refresh rate to be independent of the user-inputted image refresh rate. A sketch of this hands-off mode follows below.
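- This sketch is hypothetical; the dwell times follow the 0.25-second and 3.5-second examples used in Figures 25A-25F:

```python
# Hypothetical hands-off scrolling: the system advances on its own, dwelling
# longer on slices tied to triggering events, independent of user input.
import time

def auto_scroll(num_slices: int, triggered: set,
                base_s: float = 0.25, triggered_s: float = 3.5) -> None:
    for i in range(num_slices):
        dwell = triggered_s if i in triggered else base_s
        print(f"slice {i}: shown for {dwell:.2f} s")
        time.sleep(dwell)

auto_scroll(num_slices=6, triggered={2, 3})  # slices N+2 and N+3 slow to 3.5 s
```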
- Figure 13 illustrates a list of the predetermined responses category #2, which is the image-dependent viewing parameter. 1300 illustrates a text box.
- the first image-dependent viewing parameter is performing conventional windowing and leveling for the entire dataset. First, assume the radiologist has applied a first window and level setting to the entire CT abdomen and pelvis dataset. Assume that the radiologist has completed all checklist items that required the first window and level setting and is now moving to a subsequent item on the checklist that requires a second window and level setting. The triggering event (e.g., moving to an item on the checklist linked (e.g., see processing block 302 in Figure 3) to a preferred window and level setting different from the current window and level setting) has caused the predetermined response of the image-dependent viewing parameter of performing conventional windowing and leveling for the entire dataset.
- the triggering event (e.g., moving to an item on the checklist linked (e.g., see processing block 302 in Figure 3) to a preferred window and level setting different from the current window and level setting)
- the radiologist may elect to move to the bone items on the checklist.
- the action of moving to the next item on the checklist itself can act as a triggering event. This is useful because it instantly provides an improved visual analysis of the next item on the checklist, saves the step of the user mentally deciding what the preferred window and level setting is, saves the step of the user striking a hotkey, and saves the step of the mouse click and drag for windowing and leveling.
- a triggering event has caused the predetermined response of the image-dependent viewing parameter to perform conventional windowing and leveling for the entire dataset.
- assume a triggering event occurs (e.g., a mass lesion in the liver discovered by an AI algorithm, with Hounsfield units similar to those of normal liver parenchyma). This is useful because it instantly provides an improved visual analysis of the mass lesion identified by the AI algorithm, saves the step of detection, saves the step of mentally deciding whether to window and level, saves the step of the user striking a hotkey, and saves the step of the mouse click and drag for windowing and leveling.
- a triggering event has caused the predetermined response of the image-dependent viewing parameter to set a window and level parameter for a first image slice independently from a window and level parameter for a second image slice.
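- Windowing and leveling is a standard display transform; the sketch below (illustrative values, not the patent's settings) shows how one slice could receive its own window independently of the rest of the dataset:

```python
# Standard window/level transform, applied per slice; values are illustrative.
import numpy as np

def window_level(slice_hu: np.ndarray, width: float, level: float) -> np.ndarray:
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(slice_hu, lo, hi)          # clamp to the window
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)  # map to display gray levels

slice_hu = np.array([[40.0, 60.0], [80.0, 100.0]])  # liver-range Hounsfield units
print(window_level(slice_hu, width=400, level=50))  # wide window for the whole dataset
print(window_level(slice_hu, width=40, level=70))   # narrow window for the lesion slice only
```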
- Figure 14 illustrates an example list of the predetermined responses category #2, which is the image-dependent viewing parameter. 1400 illustrates a text box.
- the third image-dependent viewing parameter includes displaying simultaneously a first visual representation adjustment logic for a first segmented structure and a second visual representation adjustment logic for a second segmented structure wherein the first visual representation adjustment logic is different from the second visual representation adjustment logic.
- the radiologist may elect to move to the pancreas on the checklist.
- the action of moving to the next item on the checklist itself can act as a triggering event. This is useful because it instantly provides an improved visual analysis of the next item on the checklist, saves the step of the user mentally deciding what the preferred window and level setting is, saves the step of the user striking a hotkey, and saves the step of the mouse click and drag for windowing and leveling.
- a triggering event has caused the predetermined response of the image-dependent viewing parameter to perform dual windowing and leveling for the pancreas.
- the fourth image-dependent viewing parameter is a false color setting.
- the radiologist is viewing volume rendered images on a 2D monitor. Assume that the radiologist prefers a color schematic wherein the blood vessels are light pink colored when they are not actively being examined (e.g., such as when the radiologist is reviewing the bones), but appear bright red when actively being examined.
- the triggering event is the action of moving from the bones item on the checklist to the blood vessel item on the checklist.
- the predetermined response of the image-dependent viewing parameter is the implementation of red false color of the blood vessels.
- Figure 15 illustrates an example list of the predetermined responses category #2, which is the image-dependent viewing parameter. 1500 is a text box.
- the fifth image-dependent viewing parameter is performing zooming. First, assume the radiologist is viewing a stack of CT slices through the lungs and a triggering event (e.g., a small 5 mm pulmonary nodule detected by an AI algorithm on the image slice) occurs. This small pulmonary nodule is seen by the radiologist in an un-zoomed image, but the radiologist needs to zoom in to better characterize it. In this example, the triggering event of the small pulmonary nodule appearing on the screen causes the predetermined response of an image-dependent viewing parameter of performing the action of zooming.
- a triggering event (e.g., a small 5 mm pulmonary nodule detected by an AI algorithm on the image slice)
- the sixth image-dependent viewing parameter is an image rotation.
- assume a triggering event occurs (e.g., a small fracture of the posterior malleolus of the tibia enters the field of view on an axial slice). This small fracture is seen by the radiologist, but the radiologist needs to better characterize it with a volume rendered image.
- the triggering event causes the predetermined response of an image-dependent viewing parameter of performing the action of generating a side panel on the radiology monitor with a volume rendered image of the posterior malleolus with a rotation.
- Additional options include automatic image markup with annotations (e.g., arrow, circle).
- a triggering event has caused the predetermined response of the image-dependent viewing parameter to perform image rotation.
- Figure 16 illustrates an example list of the predetermined responses category #2, which is the image-dependent viewing parameter.
- 1600 is a text box.
- the seventh image-dependent viewing parameter is a viewing angle setting.
- assume a triggering event occurs (e.g., a small fracture of the posterior malleolus of the tibia enters the field of view on an axial slice). This small fracture is seen by the radiologist, but the radiologist needs to better characterize it with a volume rendered image.
- the triggering event causes the predetermined response of an image-dependent viewing parameter of performing the action of generating a side panel on the radiology monitor with a volume rendered image of the posterior malleolus with six viewing positions (from top, from bottom, from left, from right, from front, from back), all of which have viewing angles directed toward the fracture.
- This is useful because it instantly provides an improved visual analysis of the small finding, saves the step of detection, saves the step of mentally deciding whether to create volume rendered images, saves the step of the user striking a hotkey, and saves the step of the mouse click and drag to rotate.
- a triggering event has caused the predetermined response of the image-dependent viewing parameter to generate viewing angles.
- the eighth image-dependent viewing parameter includes advanced image processing techniques.
- a radiologist is viewing contiguous cross-sectional imaging slices and a triggering event (e.g., detection by an AI algorithm of a small pulmonary nodule on only a single slice) occurs. This small nodule is seen by the radiologist, but the radiologist needs to better characterize it with advanced image processing techniques.
- the triggering event causes the predetermined response of at least one of the group of: viewing on an extended reality display; 3D cursor usage (see US Patent Application 15/878,463); virtual tool usage (see PCT/US19/47891); voxel manipulation (see US Patent Application 16/195,251); and incorporating data unit assurance markers (see US Patent Application 16/785,506).
- This is useful because it instantly provides an improved visual analysis of the small finding and saves the steps of detection, mentally deciding whether to create volume rendered images, striking a hotkey, and clicking and dragging the mouse to rotate.
- a triggering event has caused the predetermined response of the image-dependent viewing parameter to generate advanced image processing techniques.
- Figure 17 illustrates an example list of the predetermined responses category #3, which is the image-dependent reporting parameter.
- 1700 is a text box.
- the first image-dependent reporting parameter is one wherein text is automatically inputted into a section of a report.
- assume a triggering event occurs (e.g., detection by an AI algorithm of a pulmonary nodule in the right lung that measures 5 mm in greatest axial dimension).
- the triggering event causes the predetermined response of an image-dependent reporting parameter of performing the action of entering text stating “5 mm right lung pulmonary nodule”.
- Figure 18 illustrates an example list of the predetermined responses category #3, which is the image-dependent reporting parameter. 1800 is a text box.
- the third image-dependent reporting parameter is one wherein text in a section of a report is automatically deleted.
- assume a triggering event occurs (e.g., detection by an AI algorithm of a bone lucency determined to be 50% likely to represent a fracture and 50% likely to represent a nutrient foramen).
- the triggering event causes the predetermined response of an image-dependent reporting parameter of performing the action of entering text stating “lucency in femur, which may represent a fracture or nutrient foramen”.
- a second triggering event occurs (e.g., detection by an AI algorithm of a bone lucency determined to be 100% likely to represent a fracture).
- the second triggering event causes the predetermined response of an image-dependent reporting parameter of performing the action of deleting text (i.e., “lucency in femur, which may represent a fracture or nutrient foramen”) such that the report now states “femur fracture”.
- the fourth image-dependent reporting parameter is one wherein a cursor is automatically moved from a first section of a report to a second section of a report.
- assume a triggering event occurs (e.g., axial cross-sectional imaging slices include the kidney).
- the triggering event causes the predetermined response of an image-dependent reporting parameter of performing the action of moving the cursor from a first section of the report (e.g., adrenal gland section) to a second section of the report (e.g., kidney section). This is useful because it saves the time of manually switching between sections.
- a triggering event has caused the predetermined response of the image-dependent reporting parameter to switch to a new section of the radiology report.
- Figure 19 illustrates an example of triggering events matched to the predetermined response of timer-dependent image refresh rate.
- the size of the image can serve as a triggering event to determine the timer-dependent image refresh rate.
- the slices have varying size. Consider a first slice of the breast located close to the nipple having breast tissue in a roughly circular region measuring approximately 2 cm in radius, such that the area of the breast tissue on this slice would be approximately 12.56 cm². Consider a second slice of the same breast located closer to the chest wall having breast tissue in a roughly circular region, but measuring 5 cm in radius, such that the area of the breast tissue on this slice would be approximately 78.50 cm². The second slice therefore contains 6.25 times the number of pixels.
- the image refresh rate is adjusted such that it is slower for the area with the larger field of view. This is useful because, with a smart scroll system, the amount of time on each image can be better allocated: the smaller region receives a smaller amount of time and the larger region a larger amount of time, as sketched below. This therefore improves image analysis.
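- A minimal sketch of this size-based allocation, assuming a simple linear scaling between a base display time and the tissue area on the slice (the helper names and constants below are illustrative, not taken from the patent):

```python
import math

BASE_TIME_S = 0.16          # illustrative default display time per slice
REFERENCE_AREA_CM2 = 12.56  # area of the example slice with a 2 cm radius

def circle_area_cm2(radius_cm: float) -> float:
    """Area of a roughly circular tissue region."""
    return math.pi * radius_cm ** 2

def display_time_for_area(area_cm2: float) -> float:
    """Scale display time linearly with tissue area: larger slices get more time."""
    return BASE_TIME_S * (area_cm2 / REFERENCE_AREA_CM2)

small_slice = circle_area_cm2(2.0)  # ~12.57 cm^2, near the nipple
large_slice = circle_area_cm2(5.0)  # ~78.54 cm^2, near the chest wall
print(display_time_for_area(small_slice))  # 0.16 s
print(display_time_for_area(large_slice))  # ~1.0 s (6.25x longer)
```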
- the complexity of the image can serve as a triggering event to determine the timer-dependent image refresh rate.
- the slices have varying complexity.
- a first slice of the breast containing homogenous fatty tissue.
- a second slice of a different breast containing heterogeneous tissue with some areas of fat, some areas of calcification and some areas of glandular tissue.
- the second breast demonstrates a higher degree of scene complexity compared to the first breast.
- the image refresh rate is adjusted such that it is slower for the second more complex breast as compared to the first homogeneous breast.
- the contrast between the background and the pathology can serve as a triggering event to determine the timer-dependent image refresh rate.
- the area being examined is the pancreas and the purpose of the exam is to detect a pancreatic tumor, which is notoriously difficult to interpret because pancreatic cancer is of similar Hounsfield Units to normal pancreas tissue. This low image contrast resolution is taken into account when determining the timer-dependent refresh rate and causes the timer-dependent refresh rate to slow down.
- the area being examined is a diffusion weighted image of the brain and the purpose of the exam is to detect a stroke. Acute strokes show up as very bright white pixels whereas the background normal brain is mid-gray.
- This high contrast resolution is taken into account when determining the timer-dependent refresh rate and causes the timer-dependent refresh rate to speed up. This is useful because, with a smart image refresh rate system, the amount of time on each image can be better allocated: high-contrast regions are allocated a smaller amount of time and low-contrast regions a larger amount of time, as sketched below. This therefore improves image analysis.
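- A minimal sketch of this contrast-based allocation, assuming a contrast-to-noise ratio computed from lesion and background data units; the thresholds and multipliers are illustrative assumptions, not values from the patent:

```python
def refresh_multiplier_from_contrast(lesion_value: float, background_value: float,
                                     noise_sd: float) -> float:
    """Map a contrast-to-noise ratio (CNR) to a display-time multiplier.

    Low CNR (e.g., pancreatic tumor vs. normal pancreas) -> longer display
    time; high CNR (e.g., acute stroke on DWI) -> shorter display time.
    """
    cnr = abs(lesion_value - background_value) / max(noise_sd, 1e-6)
    if cnr < 1.0:    # nearly isointense pathology: slow down
        return 3.0
    elif cnr < 3.0:  # moderate conspicuity
        return 1.5
    else:            # high-contrast pathology: speed up
        return 0.5

# Pancreatic tumor (30 HU) vs. normal pancreas (32 HU), noise SD of 5
print(refresh_multiplier_from_contrast(30, 32, 5))  # 3.0 (slow down)
```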
- the probability of pathology in the image can serve as a triggering event to determine the timer-dependent image refresh rate.
- the area being examined is the lungs in a 75 year old man who has a history of 100 pack years and emphysema. This patient is at high risk of lung cancer, and this high risk is taken into account in the timer-dependent image refresh rate, which is slowed.
- the area being examined is the lungs in a 10 year old child, which has a finite probability of harboring a lung cancer, albeit exceedingly rare. This exceedingly low probability of incidentally discovering lung cancer is taken into account in the timer-dependent image refresh rate, which is faster than in the first example of the 75 year old man.
- the severity of pathology in the image can serve as a triggering event to determine the timer-dependent image refresh rate.
- a patient with a history of lung cancer with widespread metastatic disease is being imaged.
- the area being examined is the brain for metastatic disease, which is a very severe pathology and requires neurosurgical management.
- the timer-dependent image refresh rate should be slower when reviewing this severe finding.
- the area being examined is the vertebral body endplates for osteophytosis. In the grand scheme of things, this is a minor finding, as compared to the pressing issue of cancer staging and assessment for brain metastases.
- the timer-dependent image refresh rate should be faster when reviewing these relatively benign findings.
- the radiologist’s personal characteristics can serve as a triggering event to determine the timer-dependent image refresh rate.
- the personal characteristic of age influences timer-dependent image refresh.
- the timer-dependent image refresh rate slows down.
- personal preference may slow down or speed up the timer-dependent image refresh rate. It is also important to factor eye tracking data and facial recognition data into the timer-dependent image refresh rate, which is discussed elsewhere throughout this patent.
- External characteristics can serve as a triggering event to determine the timer-dependent image refresh rate.
- the time of the day can also influence timer-dependent image refresh. For example, on the very first case of the morning, the timer-dependent image refresh rate may be set to a slower speed. As one gets warmed up, the timer-dependent image refresh rate may be set to a higher speed.
- Figure 20 illustrates a method of automatically performing window and level settings for improved viewing of a lesion detected by an AI algorithm.
- the window and level settings can be set automatically such that the mass, once detected by the AI algorithm, is optimally visualized.
- Processing block 2000 illustrates the step of performing an AI algorithm to detect a lesion (e.g., tumor).
- the preferred embodiment comprises neural networks, such as is described in US Patent Application 16/506,073.
- Processing block 2001 illustrates the step of performing segmentation of the lesion. This can be accomplished by techniques described in US Patent 10,586,400.
- Processing block 2002 illustrates the step of performing segmentation of at least one surrounding structure (e.g., the organ from which the tumor arises). This can also be accomplished by techniques described in US Patent 10,586,400.
- Processing block 2003 illustrates the step of analyzing data units (e.g., Hounsfield Units) of the lesion (e.g., mean, standard deviation, range, etc.).
- Processing block 2004 illustrates the step of analyzing data units (e.g., Hounsfield Units) of the at least one surrounding structure (e.g., mean, standard deviation, range, etc.).
- Processing block 2005 illustrates the step of selecting visual representation adjustment logic for optimal visual analysis, which is further discussed in Figure 21.
- Processing block 2006 illustrates the step of performing a single window / level option.
- Processing block 2007 illustrates the step of utilizing at least two different visual representation adjustment logic.
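- The data-unit analysis of processing blocks 2003 and 2004 can be sketched as follows; the AI detection and segmentation steps (blocks 2000-2002) are stubbed out here with synthetic masks, since they are performed by the separately referenced methods:

```python
import numpy as np

def analyze_data_units(ct_slice: np.ndarray, mask: np.ndarray) -> dict:
    """Blocks 2003/2004: summary statistics of Hounsfield Units inside a mask."""
    hu = ct_slice[mask]
    return {"mean": float(hu.mean()),
            "std": float(hu.std()),
            "range": (float(hu.min()), float(hu.max()))}

# In practice, lesion_mask and organ_mask come from the AI detection and
# segmentation steps (blocks 2000-2002); synthetic stand-ins are used here.
ct = np.random.normal(30, 5, (512, 512))
lesion_mask = np.zeros_like(ct, dtype=bool); lesion_mask[200:220, 200:220] = True
organ_mask = np.zeros_like(ct, dtype=bool); organ_mask[150:300, 150:300] = True

lesion_stats = analyze_data_units(ct, lesion_mask)  # block 2003
organ_stats = analyze_data_units(ct, organ_mask)    # block 2004
```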
- Figure 21 illustrates a method of automatically performing window and level settings for improved viewing of a lesion detected by an AI algorithm.
- a single window / level setting could be performed or at least two different visual representation adjustment logic could be performed.
- This flow diagram illustrates an algorithm for determining which type of visual representation adjustment logic to perform. The decision tree is as follows.
- in processing block 2100, the question of “is the lesion itself substantially similar in data units compared to at least one surrounding structure?” is raised. If the answer is yes, then proceed to processing block 2101.
- Processing block 2101 is to utilize at least two different visual representation adjustment logic per US Patent #10,586,400. Assume the lesion is a pancreatic cancer with an average Hounsfield Unit of 30 and a standard deviation of 5 Hounsfield Units, and the pancreas has an average Hounsfield Unit of 32 with a standard deviation of 5 Hounsfield Units. There would be large overlap of the standard deviation error bars. This would be very poorly visualized using a single window and level setting, wherein the pancreatic tumor is nearly indistinguishable from the pancreas itself.
- the dual windowing technique as described in US Patent #10,586,400 overcomes this limitation by performing segmentation and a dual windowing technique. If the answer to processing block 2100 is no, then proceed to processing block 2102. For example, a situation wherein the answer is no is a lesion that can easily be distinguished from the background. For example, assume a lesion in the liver has Hounsfield Units of 80 with a standard deviation of 5 and the background liver has Hounsfield Units of 30 with a standard deviation of 5. This is not substantially similar to the surrounding liver. Therefore, the next step is to proceed to processing block 2102.
- in processing block 2102, the question “are the internal characteristics of the lesion of interest deemed important in the visual analysis?” is raised. If the answer is yes, then proceed to processing block 2103.
- Processing block 2103 is to utilize at least two different visual representation adjustment logic per US Patent #10,586,400. Assume that the lesion can easily be distinguished from the background. For example, assume a lesion in the liver has Hounsfield Units of 80 with a standard deviation of 5 and the background liver has Hounsfield Units of 30 with a standard deviation of 5. In order to best interpret the internal characteristics of the lesion and the surrounding liver tissue simultaneously, a dual window level setting can be applied.
- in processing block 2104, the question of “would subduing the structures of non-interest improve analysis?” is raised. If the answer is yes, then proceed to processing block 2105.
- Processing block 2105 is to utilize at least two different visual representation adjustment logic per US Patent #10,586,400. For example, if the radiologist is viewing a CT scan of the abdomen and the patient has taken oral contrast, then there will be a lot of high density material in the stomach and bowel loops. This may interfere with the ability to see the pancreas. Therefore, the radiologist may benefit from implementation of US Patent Application #16/785,606, Improving image processing via a modified segmented structure.
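- The decision tree of Figure 21 can be sketched as a single function. As an illustrative assumption, “substantially similar in data units” is approximated here as overlapping mean ± 1 standard deviation intervals:

```python
def choose_windowing(lesion_mean, lesion_sd, organ_mean, organ_sd,
                     internal_characteristics_important=False,
                     subdue_non_interest_helps=False) -> str:
    """Sketch of the Figure 21 decision tree for visual representation
    adjustment logic. The overlap test is an illustrative assumption."""
    overlap = abs(lesion_mean - organ_mean) <= (lesion_sd + organ_sd)
    if overlap:                             # block 2100 -> block 2101
        return "dual windowing"
    if internal_characteristics_important:  # block 2102 -> block 2103
        return "dual windowing"
    if subdue_non_interest_helps:           # block 2104 -> block 2105
        return "dual windowing"
    return "single window/level"            # default when no criterion applies

# Pancreatic tumor (30 +/- 5 HU) in pancreas (32 +/- 5 HU): intervals overlap
print(choose_windowing(30, 5, 32, 5))  # dual windowing
# Liver lesion (80 +/- 5 HU) in liver (30 +/- 5 HU): easily distinguished
print(choose_windowing(80, 5, 30, 5))  # single window/level
```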
- FIG. 22 illustrates the new scrolling technique implemented in this patent. For example, a constant image refresh rate of 0.16 seconds per slice is established. All slices wherein there is no event trigger are shown for 0.16 seconds. Note that slices that are determined to have a triggering event are given an alternative display in accordance with the matched predetermined response of a timer-dependent image refresh rate.
- slices 1-3 were not noted to have a triggering event, thus a matched predetermined response of a timer-dependent image refresh rate is not applicable and the time spent under the timer-dependent image refresh rate is 0.16 seconds per slice.
- slice 4 was noted to have a triggering event of the prior exam showing an abnormality at this location and the matched predetermined response of a timer-dependent image refresh rate is a 2.0 second delay.
- an alert is displayed illustrating the triggering event, with an image markup (e.g., red circle) at the site on the current scan where the abnormality was noted on the prior scan.
- slices 5-6 were not noted to have a triggering event, thus a matched predetermined response of a timer-dependent image refresh rate is not applicable and the time spent under the new scrolling process is 0.16 seconds per slice.
- slice 7 was noted to have a triggering event of the artificial intelligence (AI) algorithm detecting an abnormality at this slice and the matched predetermined response of a timer-dependent image refresh rate is a 3.5 second delay.
- the markup can be shown for the whole 3.5 second delay or part of the 3.5 second delay (e.g., shown for the last 1.5 seconds).
- the user can, of course, pause the scrolling for longer than 3.5 seconds if necessary.
- slices 8-10 were not noted to have a triggering event, thus a matched predetermined response of a timer-dependent image refresh rate is not applicable and the time spent under the new timer-dependent image refresh rate is 0.16 seconds per slice.
- slice 11 was noted to have a triggering event of the slice containing anatomy relevant to clinical history (e.g., right eye pain and slices contain right orbit) and the matched predetermined response of a timer-dependent image refresh rate is a 1.0 second delay per slice.
- the user can, of course, pause the scrolling for longer than 1.0 second if necessary.
- slices 12-13 were not noted to have a triggering event, thus a matched predetermined response of a timer-dependent image refresh rate is not applicable and the time spent under the new timer-dependent image refresh rate process is 0.16 seconds per slice.
- slice 14 was noted to have a triggering event of the slice containing an anatomic feature that statistically needs more careful review (e.g., certain anatomic features, such as the pituitary stalk need more careful review than other anatomic features) and the matched predetermined response of a timer-dependent image refresh rate is a 1.5 second delay per slice.
- the user can, of course, pause the scrolling for longer than 1.5 seconds if necessary.
- the preferred embodiment is to assign a minimum time and fixation points to each anatomic structure in the body.
- this system would be integrated with an eye tracking system for the radiologist.
- for example, for one anatomic structure, a certain amount of time (e.g., 1.5 seconds) and a certain number of fixation points (e.g., 10) may be required; for another structure, a certain amount of time (e.g., 2.0 seconds) and a certain number of fixation points (e.g., 15) within a certain distance (e.g., 0.5 cm) may be required.
- a certain amount of time (e.g., 2.0 seconds) and a certain number of fixation points (e.g., 20) are needed within a certain distance (e.g., 2.0 cm) of the midbrain. And so on; a sketch of such a per-structure check appears below.
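- A minimal sketch of this per-structure check, assuming the eye tracking system reports fixations as (x, y, duration) tuples in image coordinates; the interface is an assumption, not the patent's:

```python
import math

def structure_adequately_inspected(fixations, structure_xy, min_time_s,
                                   min_fixations, max_dist_cm):
    """Check the per-structure criterion: at least `min_fixations` fixation
    points and `min_time_s` of total dwell time, all within `max_dist_cm`
    of the structure's location."""
    sx, sy = structure_xy
    near = [(x, y, d) for (x, y, d) in fixations
            if math.hypot(x - sx, y - sy) <= max_dist_cm]
    total_time = sum(d for (_, _, d) in near)
    return len(near) >= min_fixations and total_time >= min_time_s

# Midbrain example from the text: 2.0 s and 20 fixation points within 2.0 cm
fixations = [(10.1, 5.2, 0.15)] * 20  # 20 fixations of 0.15 s each (3.0 s total)
print(structure_adequately_inspected(fixations, (10.0, 5.0), min_time_s=2.0,
                                     min_fixations=20, max_dist_cm=2.0))  # True
```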
- Another example organ where it is prudent for a radiologist to slow down is the pancreas, since pancreatic cancers are commonly missed because they are of similar Hounsfield Units to the background pancreatic parenchyma.
- radiologist alertness levels (e.g., via EEG analysis, via facial recognition) can also be factored in.
- slices 15-18 were not noted to have a triggering event, thus a matched predetermined response of a timer-dependent image refresh rate is not applicable and the time spent under the new scrolling process is 0.16 seconds per slice.
- slices 19-20 were noted to have a triggering event of the slice not containing patient data (e.g., imaged air gap above the patient’s head), and the matched predetermined response of a timer-dependent image refresh rate is to have a 0.00 second delay and the time spent under the new scrolling process is 0.00 seconds per slice. This serves to speed up review of regions that do not contain patient data.
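- The Figure 22 walk-through can be summarized as a lookup from triggering events to per-slice delays, with the constant 0.16 second refresh rate as the default; the dictionary below simply restates the example's numbers:

```python
DEFAULT_DELAY_S = 0.16  # constant refresh rate for slices with no trigger

# Triggering events matched to timer-dependent delays (Figure 22 example)
trigger_delays = {
    4:  2.0,   # prior exam showed an abnormality at this location
    7:  3.5,   # AI algorithm detected an abnormality on this slice
    11: 1.0,   # anatomy relevant to clinical history (right orbit)
    14: 1.5,   # anatomy that statistically needs more careful review
    19: 0.0,   # no patient data (air gap above the head): skip
    20: 0.0,
}

def slice_delay(slice_number: int) -> float:
    """Display duration for a slice under the timer-dependent refresh rate."""
    return trigger_delays.get(slice_number, DEFAULT_DELAY_S)

schedule = [(n, slice_delay(n)) for n in range(1, 21)]
```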
- Figure 23A illustrates the integration of multiple factors to determine the optimum amount of time spent on each slice.
- An example algorithm to determine the optimum amount of time spent on each slice is as follows. Set a default time of 0.10 seconds per slice. Modify the default time by application of additional factors to determine the optimum amount of time to spend on each slice.
- a value of “0” indicates a perfectly homogeneous image with a standard deviation of 0 and would automatically cause the amount of time spent on that slice to be 0.01 seconds.
- a value of “1” indicates a mildly heterogeneous image (e.g., single anatomic feature with a small standard deviation of less than 5 Hounsfield Units) and would indicate a 1X multiplier on the amount of time spent on that slice.
- a value of “2” indicates a moderately or severely heterogeneous image (e.g., more than one anatomic feature or a standard deviation of more than 5 Hounsfield Units) and would indicate a 2X multiplier on the amount of time spent on that slice.
- a value of “0” indicates that there are no pixels containing anatomy on the image slice and would automatically cause the amount of time spent on that slice to be 0.01 seconds.
- a value of “1” indicates that there is less than 10 cm² of data on the slice and would indicate a 1X multiplier on the amount of time spent on that slice.
- a value of “2” indicates that there is greater than or equal to 10 cm² of data on the slice and would indicate a 2X multiplier on the amount of time spent on that slice.
- a finding detected by AI serves as an additive factor. For example, a +2 second addition could be utilized if the AI algorithm detected pathology. A +0 second addition could be utilized if the AI algorithm did not detect pathology.
- the preferred AI algorithm is a neural network. Other types of machine learning (ML) and computer aided detection (CAD) can also be incorporated.
- the first row of the table shows the triggering events used in determining the timer-dependent image refresh rate.
- the second row shows a slice number of 300, complexity of 0, size of image of 0, presence of pathology on prior examination as not applicable, the finding detected by AI as not applicable, and the time spent on image slice 300 of 0.01 seconds.
- the third row shows a slice number of 270, complexity of 1, size of image of 1, presence of pathology on prior examination as not applicable, the finding detected by AI as not applicable, and the time spent on image slice 270 of 0.2 seconds.
- the fourth row shows a slice number of 150, complexity of 2, size of image of 2, presence of pathology on prior examination as not applicable, a positive finding detected by AI, and the time spent on image slice 150 of 2.4 seconds. Note that this algorithm is equivalent to a regression. Note that it is also possible to integrate AI to determine the optimum amount of time to spend per slice. Additionally, eye tracking is an important factor in determining the amount of time spent on each slice and is discussed in detail in other sections of this patent. Note that a combination of embodiments can be performed. A sketch of this scheme appears below.
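- A minimal sketch of this multiplier-and-additive scheme follows; the constants mirror the rows that the text spells out (slices #300 and #150), and a real system would presumably tune the default time and multipliers empirically:

```python
def time_per_slice(complexity: int, size: int, ai_positive: bool,
                   default_s: float = 0.10) -> float:
    """Sketch of the Figure 23A scheme: a default display time modified by
    complexity and size multipliers plus an additive factor for AI findings.
    A score of 0 on either factor short-circuits to 0.01 seconds."""
    if complexity == 0 or size == 0:
        return 0.01                # homogeneous image or no anatomy on slice
    multiplier = {1: 1.0, 2: 2.0}  # 1X and 2X multipliers from the text
    t = default_s * multiplier[complexity] * multiplier[size]
    if ai_positive:
        t += 2.0                   # +2 second additive factor for an AI finding
    return t

print(time_per_slice(0, 0, ai_positive=False))  # slice #300 -> 0.01 s
print(time_per_slice(2, 2, ai_positive=True))   # slice #150 -> 0.1*2*2 + 2 = 2.4 s
```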
- Figure 23B illustrates application of the algorithm to a first example slice #300, which corresponds to the second row of the table in Figure 23A.
- Figure 23B is a slice that is in front of the patient’s breast and contains no patient data, just the air in front of the breast.
- a radiologist saves 0.19 seconds on this slice. Cumulatively, over multiple slices, it saves several seconds.
- the second row shows slice number of 300, complexity of 0 (because there is no patient anatomic information), size of image of 0 (because there is no patient anatomic information), no prior pathology on prior examination at this location, the finding detected by AI of not applicable (because there is no patient anatomic data on the current examination) and time spent on the image slice 300 of 0.01 seconds.
- Figure 23C illustrates application of the algorithm to a second example slice #270, which corresponds to the third row of the table in Figure 23A.
- Figure 23C is a slice that is at the anterior most aspect of the breast and contains only a small amount of relatively homogeneous breast tissue.
- the third row shows slice number of 270, complexity of 1 (the breast tissue within the slice appears relatively homogeneous and is expected to have a small standard deviation), size of image of 1 (small size of less than 10 cm²), no prior pathology on prior examination at this location, no abnormality detected by AI at this location, and time spent on image slice 270 of 0.2 seconds.
- Figure 23D illustrates application of the algorithm to a third example slice #150, which corresponds to the fourth row of the table in Figure 23A.
- This image contains a small enhancing breast cancer 2300.
- the fourth row shows slice number of 150, complexity of 2 (heterogeneity of the breast), size of image of 2 (greater than 10 cm²), no prior pathology on prior examination, a positive finding detected by AI, and time spent on image slice 150 of 2.4 seconds.
- FIG. 24A illustrates a first example of a patient's conditions wherein the embodiments could be employed to enhance the radiologist's accuracy of diagnosis.
- the triggering event comprises when two parts occur at the same time.
- the first part of the triggering event is a symptom, such as shortness of breath 2400.
- the second part of the triggering event is when a certain portion of the examination, such as the CT slices including portions of the lungs 2401, is displayed.
- the predetermined response of the timer-dependent image refresh rate is linked to the triggering event and occurs when the triggering event occurs.
- the predetermined response is a slow timer- dependent image refresh rate through the CT slices including portions of the lungs 2402.
- the patient’s symptom of shortness of breath 2400 is correlated to the region being examined by a radiologist (e.g., lung) 2401.
- the rationale in this example is that the radiologist must pay special attention to the lung region and, therefore, probably should spend additional time (e.g., the embodiment of automatic pausing for a specified period on each slice during scrolling) on each slice that contains lung tissue. By using this pausing during the scroll, the probability of correct diagnosis would be enhanced.
- the radiologist could turn the smart scrolling function on and off, as needed to enhance overall speed and accuracy of the review.
- FIG. 24B illustrates a second example of a patient's conditions wherein the embodiments could be employed to enhance the radiologist's accuracy of diagnosis.
- the triggering event comprises when two parts occur at the same time.
- the first part of the triggering event is a symptom, such as pain in the chest 2403.
- the second part of the triggering event is when a radiologist scrolls to a certain portion of the examination, such as the CT slices including portions of the coronary arteries 2404.
- the predetermined response is linked to the triggering event and occurs when the triggering event occurs.
- the predetermined response is a slow timer-dependent image refresh rate through the CT slices including portions of the coronary arteries 2405, wherein the patient's symptom of chest pain 2403 is correlated to the region being examined by a radiologist (e.g., coronary arteries) 2404.
- the rationale in this example is that the radiologist must pay special attention to the coronary arteries and, therefore, probably should spend additional time (e.g., the embodiment of automatic pausing for a specified period on each slice during scrolling) on each slice that contains coronary arteries. By using this pausing during the scroll, the probability of correct diagnosis would be enhanced.
- the radiologist could turn the smart scrolling function on and off, as needed to enhance overall speed and accuracy of the review.
- a differential diagnosis is required and special attention to potential areas is required. Note that since there are multiple possible causes of chest pain, the preferred embodiment is to have multiple combinations of triggering events matched to multiple predetermined responses. If the patient presented with pain in the chest, the medical image slices containing portions of the heart could be treated as explained in the Summary section. The logic in this example is that the patient may have suffered a heart attack and the slices containing heart tissue need careful examination. Alternatively, if the initial examining physician reported that the patient had been in an automobile accident, then the area of focus would be the bone structure.
- FIG. 24C illustrates a third example of a patient's conditions wherein the embodiments could be employed to enhance the radiologist's accuracy of diagnosis.
- the triggering event comprises when two parts occur at the same time.
- the first part of the triggering event is the purpose of the examination, such as a cancer patient presenting for tumor follow up 2406.
- the second part of the triggering event is when a radiologist scrolls to a certain portion of the examination, such as the area of the current examination where a tumor was known to be previously present from the prior examination 2407.
- the predetermined response of the slow timer-dependent image refresh rate is linked to the triggering event and occurs when the triggering event occurs.
- the predetermined response is the slow timer-dependent image refresh rate through the area of the current examination where a tumor was known to be previously present from a prior imaging examination 2408 wherein the cancer patient presenting for tumor follow up 2406 is correlated to the region being examined by a radiologist (e.g., the area of the current examination where a tumor was known to be previously present from a prior imaging examination) 2407.
- the rationale in this example is that the radiologist must pay special attention to the area of the current examination where a tumor was known to be previously present from a prior imaging examination and, therefore, probably should spend additional time (e.g., the embodiment of automatic pausing for a specified period on each slice implemented by the timer-dependent image refresh rate) on each slice that contains the area of the current examination where a tumor was known to be previously present from a prior imaging examination.
- the medical images from a previous imaging session could be retrieved and displayed in conjunction with currently obtained medical images.
- the scrolling process embodiment of pausing could be applied to both current and previous images simultaneously to look back and forth and make a careful assessment of changes, if any, in the tumor.
- Figure 25A illustrates a slice (e.g., slice N within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- This is an example of the short time duration (e.g., 0.25 seconds) that a radiologist, using the two-finger-on-mouse roller ball technique, would typically spend examining an individual slice of medical imagery. During this time, he/she is expected to discern anomalous tissue which may be small in physical size and have a gray scale very close to that of adjacent tissue.
- This particular figure is an arbitrary slice within a set of medical images and, for discussion purposes, the first in a sequence. For illustration purposes, there is no triggering event (e.g., relevant patient data, AI detected abnormality, etc.) pertaining to this particular slice.
- Figure 25B illustrates a slice (e.g., slice N+1 within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- This is the next slice in the set of medical images - the second slice in the illustration sequence.
- there is also no triggering event (e.g., relevant patient data, AI detected abnormality, etc.) pertaining to this particular slice.
- the radiologist again spends only 0.25 seconds on this slice.
- Figure 25C illustrates a slice (e.g., slice N+2 within the sequence) of a set of medical images which is displayed for a duration of 3.5 seconds (note: this particular slice is linked to a region(s) pertaining to a triggering event and the predetermined response is to cause the slice to be displayed for 3.5 seconds).
- Army studies have investigated how long it takes to find a target in differing types of areas. The average results vary from 3-5 seconds depending on the complexity of the scene. As the scene becomes more complex, it becomes impossible for some individuals to locate the target. It is reasonable that, if the anomalous tissue is small and its gray scale blends in with the surrounding tissue in the slice, it will take a radiologist about the same time to locate this anomalous tissue.
- an automatic pause is injected into the scrolling process (e.g., due to a triggering event of an AI identified finding) to permit the radiologist to better study the display at hand and, if anomalous tissue is present, have a significantly improved chance of finding this tissue and making a more accurate diagnosis.
- a time of 3.5 seconds was injected into the timing, during which the display of the medical image is paused.
- This pause was automatically injected into the scrolling when there was a triggering event pertaining to this particular slice. This 3.5 seconds is for illustrative purposes only and the pause duration would be established by other means.
- the triggering event was a small focus of intraventricular hemorrhage 2501 detected by an artificial intelligence algorithm.
- Figure 25D illustrates a slice (e.g., slice N+3 within the sequence) of a set of medical images which is displayed for a duration of 3.5 seconds (note: this particular slice is linked to a region(s) pertaining to a triggering event and the predetermined response is to cause the slice to be displayed for 3.5 seconds).
- the triggering event was a small focus of intraventricular hemorrhage 2502 detected by an artificial intelligence algorithm.
- a fruitful area for study spawned by this patent would be to investigate false negative rates as a function of scroll rates. It is anticipated that a variety of publications will emerge. These studies could be the basis for setting nominal values.
- Figure 25E illustrates a slice (e.g., slice N+4 within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- This 5th slice in the sequence has no triggering event pertaining to this particular slice, hence the radiologist again spends only 0.25 seconds on this slice.
- Figure 25F illustrates a slice (e.g., slice N+5 within the sequence) of a set of medical images which is displayed for a duration of 0.25 seconds (note: this particular slice is not linked to a triggering event to cause a predetermined response of a slowing or speeding of the timer-dependent image refresh rate).
- This 6th slice in the sequence has no triggering event pertaining to this particular slice, hence the radiologist again spends only 0.25 seconds on this slice.
- Figure 26A illustrates a slice of medical imagery which has, within the slice, a tumor at time point #2. The imagery was recently taken (e.g., in 2020). The tumor 2600 in the year 2020 is shown.
- One of the key tasks for a radiologist is to track changes, if any, in tumors over time. Patients accumulate sets of medical images taken over months/years. The question arises whether any change has occurred that may indicate a change in the patient's condition. Hence the question whether the tumor has changed in either size or shape in this current image slice (e.g., in 2020) as compared to one taken month(s)/year(s) previously.
- Figure 26B illustrates a slice of medical imagery which has, within the slice, a tumor at time point #1. This slice of medical imagery was previously taken (e.g., in 2019), retrieved from the patient's records, and displayed for comparative purposes. The tumor 2601 is shown in 2019. Under this patent, the previous image set would be retrieved from the patient's records and displayed simultaneously with the current images. Note that the smart localization system as described in US provisional patent application #62/939,685 can be utilized for improved localization from the 2019 to the 2020 examination. The portions of both image sets containing the tumor would be displayed side-by-side.
- the scrolling can be rapid through non-tumor containing slices, but a mandatory slow down can be implemented during the tumor containing slices (e.g., an AI algorithm detects a tumor and this serves as a triggering event, which causes a predetermined response of a timer-dependent image refresh rate).
- the scrolling process embodiment of pausing could be applied to both current and previous images simultaneously to look back and forth and make a careful assessment of changes, if any, in the tumor. Subjectively, the size of the tumor appears to have grown when comparing the tumor 2600 in the year 2020 with the tumor 2601 in the year 2019.
- Figure 27 illustrates an example type of triggering event including utilization of eye tracking in relation to anatomic feature.
- An example algorithm to accomplish this is discussed.
- a midline sagittal MRI image of the brain is shown 2700.
- an eye tracking system is established, as is described in US Provisional Patent Application 62/985,363.
- the pituitary stalk is segmented (e.g., atlas based segmentation).
- the midpoint of the pituitary stalk is determined. Assume the slice is in the (x, y) plane.
- This point can be determined by taking the highest x-direction pixel pertaining to the pituitary stalk and the lowest x-direction pixel pertaining to the pituitary stalk and averaging these two x-values to determine the midpoint of the pituitary stalk in the x-direction, called “mid-x-point-of-pituitary-stalk”.
- the midpoint in the y-direction, “mid-y-point-of-pituitary-stalk”, is determined analogously. The center point 2701 (shown as a red dot for illustrative purposes only) of the pituitary stalk can then be determined, by this algorithm, to be the pixel located at (“mid-x-point-of-pituitary-stalk”, “mid-y-point-of-pituitary-stalk”). A sketch of this computation appears below.
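- A minimal sketch of this midpoint computation, assuming the segmentation is available as a boolean pixel mask:

```python
import numpy as np

def mask_midpoint(mask: np.ndarray) -> tuple:
    """Midpoint of a segmented structure per the described algorithm:
    average the lowest and highest x (and y) pixel indices of the mask."""
    ys, xs = np.nonzero(mask)  # pixel coordinates of the segmented structure
    mid_x = (xs.min() + xs.max()) / 2.0  # "mid-x-point-of-pituitary-stalk"
    mid_y = (ys.min() + ys.max()) / 2.0  # "mid-y-point-of-pituitary-stalk"
    return (mid_x, mid_y)

# stalk_mask would come from atlas-based segmentation of the pituitary stalk;
# a synthetic stand-in is used here.
stalk_mask = np.zeros((256, 256), dtype=bool)
stalk_mask[120:128, 100:104] = True
center = mask_midpoint(stalk_mask)  # center point 2701
```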
- a region of interest around the center point is denoted by a red circle 2702 (for illustrative purposes only). Note that an example triggering event would be the achievement of 1.5 seconds looking in this region and a minimum of 5 fixation points. This would indicate that the structure has been adequately inspected by the radiologist.
- An example predetermined response would be to lock the image via the timer-dependent image refresh rate until the requisite triggering event (time and number of fixation points) has been performed on each anatomic structure or at least the critical anatomic structures.
- an alert method (e.g., an annotation such as the red circle 2702) can be utilized to increase the number of fixation points at that anatomic feature.
- Figure 28A illustrates a slice with conventional “abdomen window and level” setting.
- 2800 shows a CT slice with a window level setting of 40 and window width setting of 350.
- Figure 28B illustrates the text box with a two-pronged triggering event and a matched predetermined response of an image-dependent viewing parameter.
- a text box 2801 is shown.
- the person must have a history of liver disease.
- the image slices must include portions of the liver.
- a text box 2802 is shown. This is an example of a predetermined response. The liver is being reviewed using halo windowing technique, as described in US Patent Application #16/785,606.
- Figure 28C illustrates a second example of “halo windowing” as described in US Patent Application #16/785,606.
- 2803 illustrates the CT slice with “halo windowing”.
- 2804 illustrates the liver with an optimized grayscale setting, a window level of 117 and a window width of 166, which are the best possible settings for visualization of the liver.
- 2805 illustrates a modified segmented region with a “halo” appearance, with the grayscale setting for the liver halo set to a window level of 71 and a window width of 357.
- 2806 illustrates the grayscale setting for the remainder of the structures in the CT slice, set to a window level of 475 and a window width of 2618.
- This overall process improves upon the existing art by modifying the images so that the user (e.g., radiologist) focuses on the liver during the liver portion of the examination and is not distracted by other bright voxels.
- multiple halos with each halo having a unique window level setting can be performed so as to slowly alter the window level settings in a radial fashion outward from the organ; a sketch of a simple halo windowing implementation appears below.
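- A minimal three-zone sketch of the Figure 28C halo windowing follows; the window settings restate the figure's values, while the halo width, pixel size, and use of a distance transform are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def window(hu, level, width):
    """Standard window/level mapping of Hounsfield Units to [0, 1] grayscale."""
    lo = level - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

def halo_windowed_image(ct, liver_mask, halo_cm=1.0, pixel_cm=0.1):
    """The liver gets its optimized setting, a surrounding halo gets an
    intermediate setting, and the remainder gets a broad setting."""
    dist_cm = distance_transform_edt(~liver_mask) * pixel_cm
    out = window(ct, 475, 2618)            # 2806: remainder of the slice
    halo = (dist_cm > 0) & (dist_cm <= halo_cm)
    out[halo] = window(ct[halo], 71, 357)  # 2805: halo region
    out[liver_mask] = window(ct[liver_mask], 117, 166)  # 2804: liver itself
    return out
```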
- the triggering event causes the slice to have increased contrast of a region(s) pertaining to the patient medical condition (e.g., liver disease).
- One of the embodiments of this patent, to alert the radiologist that there is relevant patient data pertaining to this particular slice, is to change the contrast of the region containing the relevant tissue with respect to the surrounding tissue.
- this change in contrast could alert the radiologist of the presence of relevant information.
- Other options include displaying all non-important findings (as determined by AI) as subdued and all important findings (as determined by AI) with optimized contrast.
- the radiologist would then have the option of self-initiated slowing, automatic slowing or pausing the scrolling process.
- the liver has increased contrast.
- this slice also has decreased contrast external to a region(s) pertaining to the patient medical condition data. Specifically, it decreases the contrast of tissue not relevant to patient data (e.g., liver disease) pertaining to this particular slice. If volume rendering were performed, an option would be to change the transparency and thus achieve the same alerting function.
- Figure 29A illustrates applications of image-dependent viewing parameters for advanced viewing on extended reality displays. Advanced technologies are being introduced to the radiological community. A text box 2900 is shown. For example, a 3D volume can be created out of the 2D medical slices and a slightly different picture presented to each eye in a head display unit such that the radiologist sees a 3D version of the patient's anatomy. Examples of image-dependent viewing parameters for advanced viewing on extended reality displays include: rotating the imaging volume; changing the location of the viewing perspective; zooming; and converging.
- Figure 29B illustrates a left eye view of a breast cancer that has undergone the segmentation process with non-breast-cancer matter subtracted through a filtration process. Note that a 3D volume cursor 2901 surrounds the breast cancer lesion. In this figure the breast cancer has been segmented out and is located inside the 3D volume cursor 2901.
- Figure 29C illustrates a right eye view of a breast cancer that has undergone the segmentation process with non-breast-cancer matter subtracted through a filtration process. Note that a 3D volume cursor 2901 surrounds the breast cancer lesion. In this figure the breast cancer has been segmented out and is located inside the 3D volume cursor 2901.
- Figure 29D illustrates a left eye view of a breast cancer that has undergone the segmentation process with non-breast-cancer matter subtracted through a filtration process, with the viewing position zoomed inward closer to the breast cancer as compared to the viewing position of Figure 29B.
- the key anatomic features are enlarged on the screen. This enables a higher number of fixation points per unit area and may improve characterization of subtle imaging features.
- the spiculated margins of the tumor, as denoted by the arrows 2902 and circle 2903, can be appreciated on the zoomed-in view, as seen in Figure 29D, but are not well appreciated in Figure 29B.
- the breast cancer becomes more noticeable, thus improving the probability of detection and accuracy of characterization by the radiologist, which is caused by an increased number of fixation points on the breast cancer during saccadic eye movements.
- Figure 29E illustrates a right eye view of a breast cancer that has undergone the segmentation process with non-breast-cancer matter subtracted through a filtration process, with the viewing position zoomed inward closer to the breast cancer as compared to the viewing position of Figure 29C.
- the key anatomic features are enlarged on the screen. This enables a higher number of fixation points per unit area and may improve characterization of subtle imaging features.
- the spiculated margins of the tumor, as denoted by the arrows 2902 and circle 2903, can be appreciated on the zoomed-in view, as seen in Figure 29E, but are not well appreciated in Figure 29C.
- the breast cancer becomes more noticeable, thus improving the probability of detection and accuracy of characterization by the radiologist, which is caused by an increased number of fixation points on the breast cancer during saccadic eye movements.
- Figure 30A illustrates annotation of a particular region of interest on a slice by an arrow pointing to a region(s) pertaining to a finding identified by a CAD/AI algorithm, which is an example of a notification of a triggering event.
- 3000 illustrates a CT slice. This figure illustrates use of an external symbol to annotate an area or region of relevant patient data pertaining to this particular slice.
- An arrow 3001 was the symbol selected for this annotation and is placed and oriented such that the radiologist can quickly spot the area of concern per the AI finding.
- Figure 30B illustrates annotation of a particular region of interest on a slice by a circle encircling a region(s) pertaining to a finding identified by a CAD/AI algorithm, which is an example of a notification of a triggering event.
- 3000 illustrates a CT slice. This is the same image as Figure 30A with a change of symbol from arrow 3001 to circle 3002 surrounding the area detected by CAD/AI. Note that there is a multitude of symbols which could be selected, two of which are depicted in Figure 30B and Figure 30A. A variety of colors, shapes, and dashed/solid lines of various widths could be used to draw attention to the region(s) of relevant patient data pertaining to this particular slice. These markers serve to indicate that a triggering event has occurred on this particular slice. The radiologist, having this information, could self-initiate the process of pausing, or a pause could be initiated through implementation of a timer-dependent image refresh rate.
- Figure 30C illustrates annotation of the outer edges of the slice with a colored line pertaining to a finding identified by a CAD/AI algorithm, which is an example of a notification of a triggering event.
- 3000 illustrates a CT slice.
- the border of the slice 3003 is an example method to draw attention to the fact that a triggering event has occurred on this particular slice.
- the radiologist, having this information, could self-initiate the process of pausing, or a pause could be initiated through implementation of a timer-dependent image refresh rate.
- Figure 31A illustrates an example layout of a radiologist workstation with an audio addition (transmit only or transmit/receive) to alert the radiologist that the slice(s) displayed pertain to a region(s) indicating that a triggering event has occurred.
- 3100 illustrates the radiologist’s desk.
- 3101 illustrates the radiologist.
- 3102 illustrates the radiology monitor.
- 3103 illustrates the radiologist’s mouse.
- 3104 illustrates the audio signal emitted from speakers. This figure illustrates the use of an audio device integral with the radiologist workstation.
- the audio signal would be at least one of the following: a continuous signal during the time when a slice is displayed, to draw attention to the fact that there is relevant patient data pertaining to this particular slice; a signal at the start and/or end of a set of slices with relevant data; of a variety of tones and volumes; selected by default or by the individual radiologist.
- the receive function of the audio device, if present, could be used in conjunction with the preparation of notes for the report, or to start/stop the pause function, or to go to the relevant patient data. Note that some triggering events would be tied to the sound alert and other triggering events would not be tied to the sound alert.
- the radiologist, having this information, could self-initiate the process of pausing, or a pause could be initiated through implementation of a timer-dependent image refresh rate.
- Figure 31B illustrates an example layout of a radiologist workstation with a vibration mechanism to alert the radiologist that the slice(s) displayed pertain to a region(s) indicating that a triggering event has occurred.
- 3100 illustrates the radiologist’s desk.
- 3101 illustrates the radiologist.
- 3102 illustrates the radiology monitor.
- 3103 illustrates the radiologist’s mouse.
- 3105 illustrates a buzzing vibration emitted from the radiologist’s mouse.
- This vibration device 3105 would serve the same purpose as the audio transmit device but instead provides tactile feedback, for example, a mouse with a buzzer.
- some triggering events would be tied to the vibration alert and other triggering events would not be tied to the vibration alert.
- the radiologist, having this information, could self-initiate the process of pausing, or a pause could be initiated through implementation of a timer-dependent image refresh rate.
- Figure 31C illustrates an example layout of a radiologist workstation with a buzzer that both emits a sound and also vibrates to alert the radiologist that the slice(s) displayed pertain to a region(s) indicating that a triggering event has occurred.
- 3100 illustrates the radiologist’s desk.
- 3101 illustrates the radiologist.
- 3102 illustrates the radiology monitor.
- 3103 illustrates the radiologist’s mouse.
- 3104 illustrates the audio signal emitted from speakers.
- 3105 illustrates a buzzing vibration emitted from the radiologist’s mouse 3103.
- This device combines both audio 3104 and vibration 3105 capabilities.
- the radiologist, having this information, could self-initiate the process of changing the image refresh rate or pausing. Note that some triggering events can be tied to the sounds and vibration and other triggering events would not be tied to the sounds and vibration.
- the radiologist, having this information, could self-initiate the process of pausing or a pause could be initiated through implementation of a timer-dependent image refresh rate.
- Figure 32A illustrates a flow chart showing the triggering event criteria and the matched predetermined response of a timer-dependent image refresh rate.
- the triggering event criteria 3200 is that a fixation point must be within 4.0 cm of every liver pixel. Note that liver segmentation needs to be performed.
- the predetermined response 3201 is for the timer-dependent image refresh rate to move to the next slice. This overall process would allow a natural viewing of an image and scrolling pattern to roll through the images slice-by-slice once each image is comprehensively viewed. A sketch of this coverage check appears below.
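- A minimal sketch of the coverage criterion, assuming a boolean liver mask and fixation points in pixel coordinates from an eye tracking system (the interfaces are assumptions):

```python
import numpy as np

def liver_fully_inspected(liver_mask, fixations_px, max_dist_cm=4.0,
                          pixel_cm=0.1):
    """Figure 32A criterion: every liver pixel must lie within 4.0 cm of at
    least one fixation point. `fixations_px` is a list of (row, col) tuples."""
    if len(fixations_px) == 0:
        return False
    ys, xs = np.nonzero(liver_mask)
    fy, fx = np.array(fixations_px).T
    # distance in cm from every liver pixel to every fixation point
    d = np.hypot(ys[:, None] - fy[None, :], xs[:, None] - fx[None, :]) * pixel_cm
    return bool((d.min(axis=1) <= max_dist_cm).all())

# Predetermined response (block 3201): advance once the criterion is met, e.g.
# if liver_fully_inspected(liver_mask, fixations): advance_to_next_slice()
```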
- Figure 32B illustrates application of the triggering event criteria and the matched predetermined response in Figure 32A.
- 3203 illustrates a CT slice of the liver with dual windowing technique, as described in US Patent #10,586,400.
- 3204 illustrates a processing block determining whether or not the triggering event has been met, and in this case the triggering event has not been met as there is not a fixation point (as determined by an eye tracking system) within 4.0 cm of every liver pixel.
- 3205 illustrates a processing block illustrating that no predetermined response of the timer- dependent image refresh rate is performed. The next step is therefore to assess for triggering event at the next time interval.
- 3208 illustrates a processing block determining whether or not the triggering event has been met, and in this case the triggering event has not been met as there is not a fixation point (as determined by an eye tracking system) within 4.0 cm of every liver pixel.
- 3209 illustrates a processing block illustrating that no predetermined response of the timer-dependent image refresh rate is performed.
- 3212 illustrates a processing block determining whether or not the triggering event has been met, and in this case the triggering event has been met as there is a fixation point (as determined by an eye tracking system) within 4.0 cm of every liver pixel.
- 3213 illustrates a processing block illustrating that the predetermined response of the timer-dependent image refresh rate is to be performed. The next step is therefore to perform the predetermined response and reset the clock.
- 3215 illustrates a CT slice of the liver with dual windowing technique, as described in US Patent #10,586,400. Note that this is the instant at which the slice 3215 appeared, so there are no fixation points yet.
- 3216 illustrates a processing block determining whether or not the triggering event has been met, and in this case the triggering event has not been met as there is not a fixation point (as determined by an eye tracking system) within 4.0 cm of every liver pixel.
- 3217 illustrates a processing block illustrating that no predetermined response of the timer-dependent image refresh rate is performed. The next step is therefore to assess for triggering event at the next time interval and continue this process.
- Figure 33 illustrates the integration of triggering events, timer-dependent image refresh rate, image-dependent viewing parameter, and image-dependent reporting parameter utilization into the interpretation of a chest x-ray.
- the first time interval during the image interpretation is the time period from when the image is first shown to 3.00 seconds.
- the displayed image is a window level setting of 3,000 and a window width of 30,000, which provides fairly good contrast for all anatomic structures in the field of view.
- the timer-dependent refresh rate utilized for this first time interval is comprised of two components. Both components have to be satisfied before a new image can appear.
- the first component of the timer-dependent image refresh rate is a minimum delay for 3.00 seconds.
- the second component of the timer-dependent image refresh rate is a minimum of 4 fixation points including at least one in each quadrant of the image.
- the user performed 4 fixation points, including at least one in each quadrant of the image, by the time point of 2.00 seconds, thereby satisfying the second timer-dependent image refresh rate component.
- the first component of the timer-dependent image refresh rate was not satisfied until 3.00 seconds. Therefore, only after an additional 1.00 seconds have passed will both the first and second components be satisfied. During that additional 1.00 seconds, the user has performed one additional fixation point, for a total of 5. The location of the fixation points during this time interval is shown.
- the rate limiting event is the first component.
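- A minimal sketch of this two-component check, assuming fixations are reported as (x, y) pixel coordinates; the quadrant test splits the image at its horizontal and vertical midlines:

```python
def quadrant_criterion_met(fixations, width, height, min_total=4):
    """Second component: at least `min_total` fixation points with at least
    one in each quadrant of the image."""
    quadrants = {(x < width / 2, y < height / 2) for (x, y) in fixations}
    return len(fixations) >= min_total and len(quadrants) == 4

def refresh_criteria_met(elapsed_s, fixations, width, height, min_delay_s=3.00):
    """Both components must be satisfied before a new image can appear."""
    return (elapsed_s >= min_delay_s and
            quadrant_criterion_met(fixations, width, height))

# 4 fixations, one per quadrant, but only 2.00 s elapsed: not yet satisfied
fixes = [(100, 100), (900, 100), (100, 900), (900, 900)]
print(refresh_criteria_met(2.00, fixes, 1024, 1024))  # False (need 3.00 s)
print(refresh_criteria_met(3.00, fixes, 1024, 1024))  # True
```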
- the completion of the timer-dependent refresh rate criteria acts as a triggering event for both the first image-dependent viewing parameter and the first image-dependent reporting parameter, which are therefore both implemented at 3.00 seconds.
- the image-dependent reporting parameter does not enter data during this step.
- zooming setting is selected in order to maximize detection of pathology occurring within the trachea.
- the image-dependent viewing parameter is zooming and dual windowing.
- the dual windowing setting shown in this example has a window level of 3,000 and a window width of 30,000 for the trachea.
- for the remainder of the image, a window level setting of 50,000 and a window width setting of 120,000 is implemented.
- a halo is utilized to gradually show transition between the two different window width and window level settings.
- the first aspect is the automatic transition into the trachea section of the radiology report.
- the second aspect is the performance of a CAD/AI algorithm on the image. Tracheal pathology, if detected by an AI algorithm, would be inputted into this section of the radiology report. If the AI algorithm determines that the trachea is normal, then “normal” would be inputted into the report, which is the case in this example.
- the second time interval during the image interpretation is the time period from 3.00 seconds to 9.00 seconds.
- the image displayed during the entirety of this period is the zoomed in image with dual windowing on the trachea. Note that the user can override this option if he/she chooses.
- the report item of “normal” is entered into the report.
- the timer-dependent refresh rate utilized for this second time interval is comprised of two components. Both components have to be satisfied before a new image can appear.
- the first component of the timer-dependent image refresh rate is a minimum delay for 3.00 seconds.
- the second component of the timer-dependent image refresh rate is a minimum of 10 fixation points. In this example, the user performed 5 fixation points by the time point of 3.00 seconds. It takes the user another 3.00 seconds to reach 10 fixation points. So, only at 6.00 seconds of this time period (and time mark 9.00 seconds) is the second timer-dependent image refresh rate component satisfied. The location of the fixation points during this time interval is shown.
- the rate-limiting event is the second component.
- the completion of the timer-dependent refresh rate criteria acts as a triggering event for both the second image-dependent viewing parameter and the second image-dependent reporting parameter, which are therefore both implemented at 9.00 seconds.
- the zooming setting is selected in order to maximize detection of pathology occurring within the lungs.
- the image-dependent viewing parameter is zooming in and dual windowing. Please see US Patent 10,586,400 and US Patent Application 16/785,506 for details on how to perform the dual-windowing technique.
- the dual windowing setting shown in this example has a window level of 10,000 and a window width of 30,000 for the lungs.
- a window level setting of 50,000 and a window width setting of 120,000 are implemented. Also, please note that a halo is utilized to gradually show the transition between the two different window width and window level settings.
- both aspects of the image-dependent reporting parameter are implemented.
- the first aspect is the automatic transition into the lungs section of the radiology report.
- the second aspect is the performance of a CAD/AI algorithm on the image. Lung pathology, if detected by an AI algorithm, would be inputted into this section of the radiology report. If the AI algorithm determines that the lungs are normal, then “normal” would be inputted into the report. In this case, a right lung nodule is found by the AI algorithm and “right lung pulmonary nodule” is entered into the lung section of the report.
- the third time interval during the image interpretation is the time period from 9.00 seconds to 20.00 seconds.
- the image displayed during the entirety of this period is the zoomed-in image with dual windowing on the lungs. Note that the user can override this option if he/she chooses.
- the report item of “right lung pulmonary nodule” is entered into the report.
- the timer-dependent refresh rate utilized for this third time interval comprises three components. All three components of the timer-dependent image refresh rate must be satisfied before a new image can appear.
- the first component of the timer-dependent image refresh rate is a minimum delay of 8.00 seconds.
- the second component of the timer-dependent image refresh rate is a minimum of 12 fixation points.
- the third component of the timer-dependent image refresh rate is a minimum of 1 fixation point within 4 cm of each pixel on the screen corresponding to lung tissue.
- the user performed 12 fixation points by the time point of 8.00 seconds, thereby satisfying the first and second components; however, the third component of 1 fixation point within 4 cm of each pixel on the screen corresponding to lung tissue is not yet met. It takes the user another 3.00 seconds to meet the third component.
- a total of 20 fixation points and 11.00 seconds have passed. So, only at 11.00 seconds of this time period (and time mark 20.00 seconds) is the third component of the timer-dependent image refresh rate satisfied. The location of the fixation points during this time interval is shown.
- the rate-limiting event is the third component.
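This spatial coverage component can be checked with a brute-force distance test. The sketch below assumes fixation positions and pixel size are expressed in centimetres on the screen; it is illustrative rather than the patent's implementation:

```python
import numpy as np

def coverage_met(fixations_cm, lung_mask, pixel_size_cm, radius_cm=4.0):
    """True when every on-screen pixel corresponding to lung tissue lies
    within `radius_cm` of at least one fixation point."""
    fix = np.asarray(fixations_cm, dtype=np.float64)  # shape (n_fix, 2)
    if fix.size == 0:
        return False
    ys, xs = np.nonzero(lung_mask)
    pixels = np.stack([xs, ys], axis=1) * pixel_size_cm  # pixel centres in cm
    # Brute force: distance from every lung pixel to its nearest fixation point.
    d = np.linalg.norm(pixels[:, None, :] - fix[None, :, :], axis=2)
    return bool(np.all(d.min(axis=1) <= radius_cm))
```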
- an annotation (e.g., a red circle) can be used to mark up the image.
- Another example markup of the image would be a timer (e.g., a countdown of the minimum time required to be spent on the image).
- the completion of the timer-dependent refresh rate criteria acts as a triggering event for both the third image-dependent viewing parameter and the third image-dependent reporting parameter, which are therefore both implemented at 20.00 seconds.
- the zooming setting is selected in order to maximize detection of pathology occurring within the heart.
- the image-dependent viewing parameter is zooming in and dual windowing.
- the dual windowing setting shown in this example has a window level of 5,000 and a window width of 30,000 for the heart.
- a window level setting of 50,000 and a window width setting of 120,000 are implemented.
- a halo is utilized to gradually show the transition between the two different window width and window level settings.
- the first aspect is the automatic transition into the heart section of the radiology report.
- the second aspect is the performance of a CAD/AI algorithm on the image.
- heart pathology, if detected by an AI algorithm, would be inputted into this section of the radiology report. If the AI algorithm determines that the heart is normal, then “normal” would be inputted into the report. In this case, no pathology is found by the AI algorithm and “normal” is entered into the heart section of the report.
- the fourth time interval during the image interpretation is the time period from 20.00 seconds to 25.00 seconds.
- the image displayed during the entirety of this period is the zoomed-in image with dual windowing on the heart. Note that the user can override this option if he/she chooses.
- the report item of “normal” is entered into the report.
- the timer-dependent refresh rate utilized for this fourth time interval comprises two components. Both components must be satisfied in order for the examination to be completed (or alternatively, for additional checklist items such as the bones, upper abdomen, etc. to be performed).
- the first component of the timer-dependent image refresh rate is a minimum delay of 5.00 seconds.
- the second component of the timer-dependent image refresh rate is a minimum of 15 fixation points (as determined by an eye tracking system).
- the user performed exactly 15 fixation points by the time point of 5.00 seconds, thereby satisfying the first and second components simultaneously. So, at 5.00 seconds of this time period (and time mark 25.00 seconds), the timer-dependent image refresh rate is satisfied. The location of the fixation points during this time interval is shown.
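How an eye tracking system turns raw gaze samples into fixation points is not specified here; one common approach is a dispersion-threshold scheme, sketched below in simplified form with assumed thresholds:

```python
def detect_fixations(gaze, min_duration_s=0.10, dispersion_px=35.0):
    """Group raw gaze samples [(t, x, y), ...] into fixation points with a
    simplified dispersion-threshold (I-DT style) scheme: a fixation is a run
    of samples that stays within the dispersion limit for long enough."""
    fixations, window = [], []
    for sample in gaze:
        window.append(sample)
        xs = [s[1] for s in window]
        ys = [s[2] for s in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
            # The new sample broke the run; emit the previous run if it
            # lasted at least the minimum fixation duration.
            run = window[:-1]
            if len(run) > 1 and run[-1][0] - run[0][0] >= min_duration_s:
                fixations.append((sum(s[1] for s in run) / len(run),
                                  sum(s[2] for s in run) / len(run)))
            window = [sample]
    if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration_s:
        fixations.append((sum(s[1] for s in window) / len(window),
                          sum(s[2] for s in window) / len(window)))
    return fixations
```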
- the completion of the timer-dependent refresh rate criteria acts as a triggering event for the examination to be completed (or alternatively, additional checklist items such as the bones, upper abdomen, etc. to be performed).
- This example therefore illustrates the interleaving of eye tracking with fixation points and multiple different types of triggering events.
- the user would just need to look at the screen. Over the 25-second period that follows, once the minimum time periods and fixation points occur, the image would be optimized for each anatomic structure on the checklist (e.g., zoom, window and level) and the report would automatically be filled in step by step.
- the user did not override (or turn off) the triggering events; the adjustment of the image parameters and the report generation were automated, with user eye tracking factored in.
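The whole 25-second walkthrough can be condensed into a checklist-driven loop. The sketch below reuses `apply_reporting_parameter` from the earlier sketch; the `CHECKLIST` values echo this example, while `display` and `tracker` are assumed interfaces, and the extra per-structure criteria (quadrant coverage, lung coverage) are elided for brevity:

```python
import time

# Hypothetical checklist echoing this example: the view held during each
# interval, its (window level, window width), and the dwell criteria
# (minimum delay in seconds, minimum fixation count) before advancing.
CHECKLIST = [
    ("overview", None,             3.0, 4),
    ("trachea",  (3_000, 30_000),  3.0, 10),
    ("lungs",    (10_000, 30_000), 8.0, 12),
    ("heart",    (5_000, 30_000),  5.0, 15),
]

def interpret(display, tracker, report, detect):
    """Hold each checklist view until its timer-dependent refresh criteria are
    met, then trigger the next viewing and reporting parameters. When the last
    entry completes, the examination is done (or further checklist items run)."""
    for structure, window_lw, min_delay, min_fix in CHECKLIST:
        if window_lw is not None:
            display.zoom_to(structure)                 # viewing parameter
            display.set_window(level=window_lw[0], width=window_lw[1])
            apply_reporting_parameter(report, structure, detect, display.image)
        start, fixations = time.monotonic(), []
        while (time.monotonic() - start) < min_delay or len(fixations) < min_fix:
            fixations.extend(tracker.new_fixations())  # poll the eye tracker
```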
- Facial recognition could also be implemented into this overall system and used as a triggering event. It is possible that the face reacts to a dangerous finding in a predictable way, which can be factored in. Further, it is possible that inattentiveness can be detected via facial recognition.
- EEG analysis of the user can be performed as well and utilized as a triggering event.
- Additional options include allowing the user to speed up or slow down the timer.
- the user could also take over manual control (e.g., alter text in the report). User takeover could be accomplished by keyboard input, mouse input, controller input, or voice recognition input.
- the user can also perform zooming and panning on his/her own, with eye tracking performed in accordance with methods disclosed in US Patent Application 62/985,363.
- AI controlled panning and zooming integrated with eye tracking metrics can also be performed.
- reporting metrics can be incorporated, including the overall number of fixation points and the number of fixation points per structure.
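Such metrics could be aggregated as follows; `structure_of` is a hypothetical mapping from a fixation location to an anatomic label (e.g., derived from the segmentation used for display):

```python
from collections import Counter

def fixation_metrics(fixations, structure_of):
    """Aggregate reporting metrics: the overall number of fixation points
    and the number of fixation points per structure."""
    per_structure = Counter(structure_of(p) for p in fixations)
    return {"total_fixations": len(fixations),
            "fixations_per_structure": dict(per_structure)}
```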
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Primary Health Care (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Patent disclosing a method and an apparatus for improving the workflow for a radiologist. In particular, this patent improves on current manual scrolling, such as scrolling at a constant rate of 0.2 seconds per image slice, by establishing triggering events that cause a precise timing system to determine exactly the appropriate amount of time to spend on each image. For example, viewing homogeneous regions where there is good contrast between a lesion and the background will serve as a triggering event for fast scrolling. In contrast, viewing heterogeneous, complex regions, or regions where the target is similar in grayscale to the background, will serve as a triggering event for slow scrolling.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/842,631 | 2020-04-07 | ||
US16/842,631 US11003342B1 (en) | 2018-10-10 | 2020-04-07 | Smart scrolling system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021207109A1 true WO2021207109A1 (fr) | 2021-10-14 |
Family
ID=78023421
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2021/025835 WO2021207109A1 (fr) | 2020-04-07 | 2021-04-05 | Système de défilement intelligent |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021207109A1 (fr) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6865718B2 (en) * | 1999-09-29 | 2005-03-08 | Microsoft Corp. | Accelerated scrolling |
US20120327061A1 (en) * | 2010-10-01 | 2012-12-27 | Z124 | Smart pad operation of display elements with differing display parameters |
US20140107471A1 (en) * | 2011-06-27 | 2014-04-17 | Hani Haider | On-board tool tracking system and methods of computer assisted surgery |
US20190146640A1 (en) * | 2013-11-18 | 2019-05-16 | Maestro Devices, LLC | Rapid analyses of medical imaging data |
US10049625B1 (en) * | 2016-12-21 | 2018-08-14 | Amazon Technologies, Inc. | Context-based rendering |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4383270A1 (fr) * | 2022-12-08 | 2024-06-12 | Koninklijke Philips N.V. | Commande de l'affichage d'images médicales |
WO2024120988A1 (fr) * | 2022-12-08 | 2024-06-13 | Koninklijke Philips N.V. | Commande de l'affichage d'images médicales |
CN117806474A (zh) * | 2024-02-21 | 2024-04-02 | 深圳尚睿博科技有限公司 | 一种具有自适应刷新率的电竞鼠标及其响应控制方法 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6081126B2 (ja) | Medical image processing apparatus, diagnostic imaging apparatus, computer system, medical image processing program, and medical image processing method | |
US10521504B2 (en) | Methods and apparatus for obtaining a snapshot of a medical imaging display | |
US9841811B2 (en) | Visually directed human-computer interaction for medical applications | |
US11594002B2 (en) | Overlay and manipulation of medical images in a virtual environment | |
US20130111387A1 (en) | Medical information display apparatus and operation method and program | |
US20120299818A1 (en) | Medical information display apparatus, operation method of the same and medical information display program | |
EP2620885A2 (fr) | Appareil de traitement d'images médicales | |
EP3027107B1 (fr) | Mise en correspondance de résultats entre des ensembles de données d'imagerie | |
Venjakob et al. | Radiologists' eye gaze when reading cranial CT images | |
WO2021207109A1 (fr) | Système de défilement intelligent | |
US11216171B2 (en) | Medical image management apparatus and recording medium | |
US11003342B1 (en) | Smart scrolling system | |
Crepps | Limited Field of View CBCT in Specialty Endodontic Practice: An Eye Tracking Pilot Study | |
JP6930515B2 (ja) | Image display device, image display method, and image display program | |
JP2012529952A (ja) | Single-scan multi-procedure imaging | |
CN112740285B (zh) | Overlay and manipulation of medical images in a virtual environment | |
CA3156974A1 (fr) | Procedes et systemes pour afficher des associations et des chronologies d'etudes medicales | |
Lévêque | Analysing and quantifying visual experience in medical imaging | |
US9474449B2 (en) | Single scan multi-procedure imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21784891; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21784891; Country of ref document: EP; Kind code of ref document: A1