WO2018011105A1 - Systems and methods for three dimensional touchless manipulation of medical images - Google Patents

Systems and methods for three dimensional touchless manipulation of medical images Download PDF

Info

Publication number
WO2018011105A1
WO2018011105A1 (PCT/EP2017/067193; EP2017067193W)
Authority
WO
WIPO (PCT)
Prior art keywords
hand
rendering
commands
dataset
volume
Prior art date
Application number
PCT/EP2017/067193
Other languages
French (fr)
Inventor
Benoit Jean-Dominique Bertrand Maurice MORY
Dorothy Anita Strassner
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2018011105A1 publication Critical patent/WO2018011105A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/08Volume rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling

Definitions

  • images may be rendered in real time or post-data set acquisition.
  • the images may be two dimensional (2D) images, also referred to as slices or planes acquired within a volume, or they may be renderings of a three dimensional (3D) volume.
  • a volume rendering may be generated from a 3D data set acquired from imaging a volume (e.g., an organ or other object of interest in a subject).
  • the volume rendering may be displayed on a two dimensional display (e.g., a conventional display unit such as an LCD, LED, or another type of display), or in a three dimensional display environment (e.g., holographic, virtual or augmented reality displays).
  • volume rendering techniques for displaying a three dimensional image on a two dimensional display may involve casting virtual rays into an imaged 3D volume to obtain a 2D projection of the data that may be displayed in a final rendered image.
  • Use of a simulated light source in rendering the image may provide a user with a sense of depth and how the various anatomic structures are arranged in the 3D volume.
  • the imaged volume may alternatively be rendered more realistically in a three dimensional display environment. Regardless of the type of display used, it may be desirable for a user to be able to manipulate the medical image (i.e., the volume rendering) in three dimensions.
  • manipulation of the medical image is typically performed using a conventional input device such as a keyboard (e.g., up/down buttons for panning, plus/minus buttons for zooming), a mouse, a track ball, a touch pad or touch screen, or text input corresponding to commands specifying the amount of rotation, magnification, or translation of the rendered volume. All of these techniques require the user to physically touch the input device, which can contaminate the medical imaging equipment and/or a sterile surgical environment. Improvements in the available techniques for manipulating medical images of three dimensional data sets may thus be desirable.
  • a method may include receiving a 3D dataset corresponding to an imaged volume in a subject, generating a volume rendering of the 3D dataset, and displaying the volume rendering on a display operatively associated with a touchless interface.
  • the method may further include detecting a position and motion of an object within a tracking field of the touchless interface, initiating a touchless manipulation session when the object is detected to be in a first configuration within the tracking field, generating rendering commands to move the 3D dataset or to adjust a rendering construct based on the motion of the object, and continuously updating the volume rendering responsive to the rendering commands until the object is detected to be in a second configuration within the tracking field or the object is no longer detected within the tracking field.
  • the object may be a hand
  • the first configuration may correspond to the hand being presented within the tracking field with four or more fingers extended away from a palm of the hand
  • the second configuration may correspond to the hand being presented within the tracking field with four or more fingers folded toward the palm of the hand.
  • the method may further include calculating a global displacement of the hand along all axes of a coordinate frame of the tracking field, and wherein the rendering commands include commands to move the 3D dataset based only on displacement of the hand along two of the axes of the coordinate frame.
  • the method may further include detecting a third configuration of the hand corresponding to the hand being presented within the tracking field with one finger extended away from a palm of the hand and at least three fingers folded toward the palm of the hand, and calculating a displacement of the extended finger relative to a coordinate frame of the tracking field while the hand remains in the third configuration.
  • the rendering commands include commands to move a location of a virtual light based only on the displacement of the finger.
  • the rendering commands to move the location of the virtual light may be generated only if a single extended finger is detected within the tracking field.
  • the commands to move the location of the virtual light may include commands to move the light to a location within the 3D data set.
  • the method may further include detecting a fourth configuration corresponding to both hands being presented within the tracking field, and generating first rendering commands for moving the 3D dataset based on movement of the first hand and generating second rendering commands for adjusting a rendering construct based on movement of the second hand.
  • the rendering construct is a cut plane, and wherein a location and an orientation of the cut plane relative to the 3D dataset are dynamically adjusted responsive to detected translation and rotation of the second hand.
  • the 3D dataset may be rendered by projecting a 2D image of the 3D dataset onto a viewing plane, the 3D dataset may be constrained from translation in 3 degrees of freedom (DOF), and the cut plane may be limited to translation only along a direction perpendicular to the viewing plane.
  • the method may further include recording the position of the user's hand when detected to be in the first configuration as an initial reference position, pausing the generating of rendering commands when the hand is detected to be in the second configuration, and resuming the generation of rendering commands when the hand is subsequently detected to be in the first configuration, wherein the position of the hand when subsequently detected to be in the first configuration is recorded as a new reference position, and wherein rendering commands generated after the resuming are based on the motion of the hand relative to the new reference position.
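
The pause/resume behavior described above can be summarized as a small state machine: the hand position at the moment the open (first) configuration is detected becomes the reference, motion is reported relative to that reference, and a closed (second) configuration pauses command generation until a new reference is registered. Below is a minimal sketch of that logic; the frame fields (`hand_present`, `is_open`, `hand_position`) are illustrative assumptions, not the patent's actual API.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class TrackingFrame:
    hand_present: bool    # a hand is detected within the tracking field
    is_open: bool         # four or more fingers extended (first configuration)
    hand_position: Vec3   # palm-center position in the tracking coordinate frame

class TouchlessSession:
    """Tracks a reference position and emits relative displacements only while active."""

    def __init__(self) -> None:
        self.reference: Optional[Vec3] = None   # recorded when the open configuration appears

    def process(self, frame: TrackingFrame) -> Optional[Vec3]:
        if not frame.hand_present:
            self.reference = None               # hand left the field: session ends
            return None
        if not frame.is_open:
            self.reference = None               # closed hand: pause, ignore motion
            return None
        if self.reference is None:
            self.reference = frame.hand_position  # (re)register a new reference position
            return None
        rx, ry, rz = self.reference
        x, y, z = frame.hand_position
        return (x - rx, y - ry, z - rz)         # displacement relative to the reference
```
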
  • the method may be performed by a medical imaging system such as an ultrasound system.
  • the 3D dataset is received by the ultrasound system while ultrasonically imaging the volume with a probe of the ultrasound system.
  • the 3D dataset may be rendered and updated in real-time with live imaging data.
  • Any of the techniques for providing a touchless interface may be embodied in executable instructions stored on non-transitory computer-readable medium, which when executed cause a processor of a visualization system or a medical imaging system to perform the processes embodied thereon.
  • a medical image viewing and manipulation system may include a volume renderer configured to receive a three dimensional (3D) data set corresponding to an imaged volume and generate a volume rendering of the imaged volume, and a touchless interface configured to generate commands responsive to touchless user input.
  • the touchless interface may include a hand-tracking device having a field of view, wherein the hand-tracking device is configured to generate tracking data responsive to movement of a user's hand or a portion thereof within the field of view, and a rendering controller communicatively coupled to the hand-tracking device and the volume renderer, wherein the rendering controller is configured to generate commands for manipulating the 3D dataset based on the tracking data.
  • the system may further include a display configured to display the volume rendering and update the display in real time based on the manipulation commands.
  • the volume renderer may be part of an ultrasound imaging system which includes an ultrasound probe and a signal processor, wherein the signal processor is configured to receive ultrasound echoes from the ultrasound probe to generate the 3D data set.
  • the hand-tracking device may be incorporated into a console of the ultrasound imaging system.
  • the hand-tracking device may be an optical tracking device, which is configured to track a global position of the hand and positions of individual fingers of the hand.
  • the rendering controller may be configured to generate a first set of commands operable to control movement of the 3D dataset responsive to detection of the hand in a first configuration and generate a second set of commands operable to control a rendering construct different than the 3D dataset responsive to detection of the hand in a second configuration.
  • the touchless interface may be configured to ignore movements of the hand following a detection of the hand in a closed configuration until the hand is arranged in another configuration different than the closed configuration.
  • the touchless interface may be configured to independently track movement of both hands of a user, and the rendering controller may be configured to generate first commands for controlling movement of the 3D dataset based on movement of one hand of the user and generate second commands for controlling a rendering construct in relation to the 3D dataset based on movement of the other hand of the user.
  • FIG. 1A is a flow diagram of a process in accordance with the present disclosure.
  • FIG. 1B is an illustration of a 3D data set of an imaged volume in accordance with the present disclosure.
  • Figure 2 is a block diagram of a system for visualizing and manipulating medical imaging data in accordance with the present disclosure.
  • FIG. 3 is a block diagram of components of a touchless interface in accordance with the present disclosure.
  • FIG. 4 is a block diagram of an ultrasound imaging system which includes a touchless interface in accordance with the present disclosure.
  • Figure 5 is an illustration of a portion of an ultrasound imaging system which includes a touchless interface in accordance with the present disclosure.
  • Figures 6A-6D are illustrations of a hand-tracking device and various hand presentation configurations in accordance with the present disclosure.
  • Figure 7 is a flow diagram of a process for manipulating medical images in accordance with the present disclosure.
  • Figure 8 is a flow diagram of another process for manipulating medical images in accordance with the present disclosure.
  • FIG. 1A shows a flow diagram of a process 100 for rendering and manipulating medical images.
  • Figure 1B shows an illustration of a 3D volume which may be rendered in accordance with the examples herein.
  • Process 100 may be used to generate a volume rendering of a three dimensional (3D) dataset (e.g., 3D data set 130).
  • the 3D dataset 130 may include medical imaging data corresponding to a 3D volume in a subject (e.g., a patient).
  • the 3D dataset may be a 3D ultrasound dataset.
  • the 3D dataset may include imaging data acquired with another imaging modality.
  • Process 100 begins by accessing a 3D dataset, as shown in block 110.
  • the 3D dataset 130 may be received by a processor of a visualization system such as system 200, described further below with reference to Figure 2. It will be understood that the 3D volume in Figure 1B is shown as a simple rectangular prism for simplicity of illustration, but in practice three dimensional (3D) datasets of medical imaging data may typically be irregularly shaped (e.g., the shape of an imaged organ or a volume of imaged tissue).
  • the processor may retrieve the 3D dataset 130 from a picture archiving and communication system (PACS) server or another storage device, such as portable non-transitory media.
  • the 3D dataset 130 is received by a volume renderer of a medical imaging system (e.g., an ultrasound system), which in some cases may occur in real-time during acquisition of the imaging data for displaying of the volume renderings in real- or near real-time (accounting for transmission and processing delays).
  • the 3D data set 130 may include one or more regions of interest 135, each of which may be a portion of an object (e.g., wall of blood vessel, valve of heart) or an entire object (e.g., tumor, fetus) within the imaged volume.
  • a volume rendering of the 3D dataset 130 may be generated by projecting a two dimensional (2D) image of the 3D dataset onto a viewing plane 140.
  • a simulated light source 150 may be used to provide a perception of depth.
  • the location and orientation of the 3D dataset 130 within the virtual 3D space 155 and thus relative to the viewing plane 140, as well as the location of the simulated light source 150 may be determined by default settings.
  • the 3D data set 130 may initially be rendered with the X-Y plane of the local coordinate frame of the volume facing the viewing plane and with the light source located midway between volume and viewing plane.
  • the volume rendering (i.e., the 2D projected image of the volume) may be displayed on a display monitor, as shown in block 116 and may be updated responsive to user inputs.
  • User inputs may be received to manipulate the 3D data set within a virtual space.
  • the virtual space may be defined by a coordinate system, having a first coordinate frame 155 (also referred to as rendering coordinate frame) and the 3D dataset may also be associated with its own local coordinate frame, for example as may be defined during imaging data acquisition.
  • User inputs may correspond to commands to move (e.g., translate, rotate, scale or magnify) the 3D dataset 130 in relation to the coordinate frame of the virtual space.
  • Conventional inputs in the form of physical controls for manipulating rendered volumes may not be as efficient or intuitive as may be desirable and may have other shortcomings (e.g., result in contamination of equipment or sterile environment).
  • the display may be operatively associated with a touchless interface.
  • the touchless interface enables a user to touchlessly manipulate the rendered volume within the virtual space, for example to touchlessly control the position and orientation of the rendered volume within the virtual space.
  • the touchless interface may be used to dynamically select and display slices of the volume and/or to control aspects of the simulated light source (also referred to as virtual light source).
  • the touchless interface may include a hand-tracking device, which is described in further detail below.
  • the hand-tracking device may be associated with a tracking field, which may also be referred to herein as a field of view.
  • the touchless interface may be part of a medical imaging data acquisition system, such as an ultrasound imaging system.
  • the touchless interface may be part of a system for visualizing and manipulating medical imaging data, for example an analysis workstation which may not itself be capable of acquiring the medical imaging data but may instead receive the data, for example through a wired or wireless network.
  • the process may continue by tracking position and calculating motion of an object, such as one or both hands of the user or a tool (e.g., a stylus), while disposed within the tracking field of the touchless interface.
  • Tracking data may be transmitted to graphics processing components of the system for generation of manipulation commands, as shown in block 120.
  • the displayed rendering of the volume may be updated responsive to the manipulation commands, as shown in block 122.
  • a touchless manipulation session may be initiated (e.g., responsive to detecting the object in a starting configuration, as described further below) before motion is applied to the 3D data set responsive to motion of the object in the tracking field.
  • hand tracking (e.g., optical hand tracking) may be used to record and output in real-time the relative 6 DOF movements performed by the user's hand(s); this motion data is then translated into relevant parameters for volume rendering.
  • the user may be able to touchlessly rotate, translate or pan, and scale (increase or decrease magnification) a volume to be rendered, as well as easily obtain a 2D slice image of the volume by touchlessly and dynamically adjusting the location of a slice plane relative to the rendered volume.
  • the techniques described may enable the user to touchlessly and intuitively manipulate the location or other parameters of a virtual light source, as well as add annotations (e.g., place markers, or labels associated with the markers or the image in general) on the rendered image.
  • FIG. 2 shows a system 200 for visualizing and manipulating medical imaging data.
  • the system 200 includes a volume renderer 210, a rendering controller 220, and a user interface 230 including a display 232, a control panel 234, and a touchless interface 240.
  • the control panel 234 may include conventional physical controls, for example buttons, switches, rotary encoders, a keyboard, a mouse, a trackball, a touch screen, and touch pad or other touch-sensitive controls.
  • the touchless interface 240 may include a motion tracking device 242 configured to track movements of an object (e.g., precise movements of a user's hand) within the tracking field. The touchless interface 240 may thus enable the user to more easily and intuitively interact with the rendered volume.
  • the motion tracking device 242 may utilize optical tracking, electromagnetic (EM) tracking, inertial tracking, or another type of motion tracking technique to acquire positional information, for example of a user's hand(s).
  • the system 200 is communicatively coupled to memory 201, which stores a 3D dataset
  • the memory 201 may be part of a PACS server or it may be memory of a medical imaging data acquisition system, such as an ultrasound system.
  • the system 200 may be integrated within a medical imaging data acquisition system (e.g., an ultrasound imaging system).
  • the medical imaging data acquisition system may be configured to acquire and process the medical imaging data, for example echo signals, into the 3D dataset 202 and transmit the 3D dataset 202 to the volume renderer in real time (that is, during the acquisition) such that volume renderings may be provided and manipulated via the touchless interface in real time.
  • the volume renderer 210 of the system 200 receives a 3D dataset 202.
  • the 3D dataset 202 may be ultrasonic imaging data which includes echo information obtained by ultrasonically scanning a volume (e.g., a portion of a subject). While the example below is described with reference to ultrasonic imaging data, it will be understood that the techniques described herein may be applied equally to medical imaging data obtained via a different imaging modality (e.g., CT, MR, or others) suitable for acquiring a 3D dataset.
  • the volume renderer 210 generates a projected image 204 of the 3D dataset 202 as perceived by a virtual observer. This projected image 204, which may be interchangeably referred to herein as a volume rendering, is displayed on the display 232. Virtual lighting may be used to enhance the perception of depth.
  • Conventional methods for controlling the positioning of the volume to be rendered include the use of a trackball or touch screen, which are inherently two dimensional and thus allow a change of at most two parameters at once. This can make it more challenging to find a desired point in 3D space, typically requiring multiple operations of separate controls to arrive at the desired point, and the user may need a high level of understanding of 3D space to efficiently operate a sequence of rotations and translations to position the volume as desired using conventional controls.
  • the touchless interface described herein may offer additional advantages, for example in an environment which must be maintained sterile (e.g., an operating room), by obviating the need for physical contact with non-sterile equipment during a surgical procedure as an example.
  • the hand-tracking device may be placed near the operating table and away from the imaging system such that the operator can manipulate the imaging data without additional assistance.
  • This simpler and more direct touchless interaction with the medical imaging data may significantly reduce the additional personnel required during a surgical procedure.
  • Yet further advantages of the examples herein may be obtained in applications, which require precise selections while maintaining an imaging probe in a precise position with respect to a subject.
  • the techniques described may provide a simpler and more intuitive way to indicate in the imaged data, the sample volume for Doppler analysis, particularly when the Doppler sample volume is to be placed in relation to a small blood vessel.
  • common tasks such as freeze and acquire may be accomplished efficiently and intuitively by predetermined hand gestures (e.g., such as moving a given finger, for example an index finger, downward as in pressing a button within the tracked field), which can enable the operator to remain in proximity to the patient and farther away from the imaging system.
  • the volume renderer 210 may receive input 206 from the user interface 230 and may provide output (e.g., volume renderings) to the user interface 230.
  • the input 206 may include a given reference point (e.g., the viewpoint of the virtual observer), location of a simulated light source, and/or properties of the simulated light source for the rendered projected image.
  • the volume renderer 210 may receive input 208 from a rendering controller 220.
  • the input 208 may include manipulation commands, which may be indicative of a change in position, orientation, or magnification of the rendered 3D dataset relative to a positional reference frame, or another property of the volume rendering.
  • the volume renderer 210 sends signals to the display 232, which is configured, responsive to these signals, to update the displayed volume rendering 204 in accordance with the manipulation commands.
  • the volume renderings are coupled from the volume renderer 210 to an image processor 205.
  • the image processor 205 may generate graphic overlays for display with the rendered images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the image processor 205 may receive input from the user interface 230, such as a typed patient name. While described as discrete components, some or all of the functionality of the volume renderer 210, image processor 205, and rendering controller 220, collectively referred to as image processing circuitry, may be incorporated into a single integrated processing circuit.
  • Figure 3 illustrates further aspects of a touchless interface which may be used for visualization and manipulation of medical imaging data.
  • a hand-tracking device 330 is communicatively coupled to a tracking data processor 340.
  • the hand-tracking device 330 may be used to implement the motion tracking device 242 of the system 200.
  • the hand-tracking device 330 may be configured to track the hands and fingers of a user with high precision and a high tracking frame rate.
  • the hand-tracking device 330 includes a tracking sensor 332, a sensor data processor 334, and on-board memory 336.
  • the tracking sensor 332, sensor data processor 334, and memory 336 are operatively arranged to detect and report discrete positions of the user's hand or a portion thereof for determining motion of the user's hand or a portion thereof.
  • the hand-tracking device 330 may be further configured to detect and track objects, such as a manipulation tool (e.g., a stylus), within its field of view 303.
  • the hand-tracking device 330 may be an optical tracking device which uses optical sensors and infrared light to detect hands and fingers of a user and track motion thereof, such as the Leap Motion controller supplied by Leap Motion, Inc.
  • the tracking sensor 332 and processor 334 are configured to determine physical quantities, such as locations of tracked entities, within a field of view 303 of the tracking sensor 332.
  • the tracking sensor 332 detects the position of a tracked entity 305 relative to the coordinate frame of the tracking device 330.
  • the tracked entities 305 may include fingertips and other anatomical markers of the hands (e.g., center of the palm, heel of the hand, metacarpals, phalanges, etc.).
  • the sensor data processor 334 may be programmed to use the sensor data obtained by the tracking sensor 332 with a model 338 of the human hand 337 stored in the on-board memory 336 to resolve user motion into the discrete tracked data.
  • the sensor data processor 334 may determine direction of a tracked entity (e.g., direction in which a finger is pointing, or a direction normal to the palm) based on the model 338 of the human hand.
  • the hand model 338 may further provide information about the identity (e.g., left, right), position/orientation (e.g., x, y, z coordinates of a centroid of the hand or geometric center of a surface estimated as the palm of the hand, a direction of an axis normal to the palm, etc.), and other aspects (e.g., a list/identity of fingers) of a hand detected within the field of view, which may be reported in each tracking data frame.
  • the hand-tracking device 330 may be configured to measure tracked physical quantities within its field of view 303 and to output them in packets or frames 306 at high frame rates (e.g., 1 frame per millisecond or faster, for example 4 or more, 5 or more, 10 frames per millisecond, or another rate).
  • Each tracking data frame 306 includes a list, table, object, or other data structure containing the tracked physical quantities for a given frame, which may also be used to calculate relative displacement, speed, or other parameters related to the tracked entities.
  • a data structure within a frame 306 may contain a list of the hands and fingers detected at each snapshot in time within a given frame.
  • the physical quantities recorded for each hand may include a position, an orientation, and a displacement (i.e., translation and rotation) from the previous recorded position and orientation of each hand relative to the tracking reference frame.
  • a listing of each detected finger as well as the position and pointing direction of each fingertip may be included.
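
A tracking data frame of the kind described above can be modeled as a simple container listing the detected hands and fingers with their positions and directions. The field names in this sketch are illustrative assumptions about what a hand-tracking SDK might report, not the output format of any particular device.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Finger:
    name: str            # e.g. "index", "thumb"
    tip_position: Vec3   # fingertip position in the tracking coordinate frame
    direction: Vec3      # unit vector in the pointing direction
    is_extended: bool    # extended away from the palm vs. folded toward it

@dataclass
class Hand:
    side: str                                  # "left" or "right"
    palm_position: Vec3                        # centroid / geometric center of the palm
    palm_normal: Vec3                          # direction of the axis normal to the palm
    orientation: Tuple[float, float, float]    # e.g. yaw, pitch, roll of the hand
    fingers: List[Finger] = field(default_factory=list)

@dataclass
class TrackingDataFrame:
    timestamp_ms: float
    hands: List[Hand] = field(default_factory=list)
```
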
  • Volume manipulation commands may be generated based on the global and relative motion of the user's hand and fingers as determined from the tracking data.
  • the touchless interface may assign a specific command to a specific finger (e.g., movement of the index or another finger may be mapped as movement of a virtual light source used in rendering the volume).
  • the global movement of the hand may be mapped to globally re-position (rotate and translate) the volume to be rendered.
  • small movements of the fingers may be irrelevant and ignored by the system, so that the user need not maintain the hand in a rigid position but may comfortably rotate the hand while in a relaxed state.
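
One way to realize this mapping is to derive the volume's pose update from the palm's global translation and rotation only, optionally scaled by a sensitivity factor (a 1:1 mapping when the factor is 1.0); per-finger motion is simply not consulted, so finger jitter has no effect. A minimal sketch follows; the pose representation and the `sensitivity` parameter are illustrative assumptions.

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def apply_global_hand_motion(pose: Dict[str, Vec3],
                             palm_translation: Vec3,
                             palm_rotation_deg: Vec3,
                             sensitivity: float = 1.0) -> Dict[str, Vec3]:
    """Map global palm motion to motion of the rendered volume; finger motion is ignored."""
    px, py, pz = pose["position"]
    tx, ty, tz = palm_translation
    rx, ry, rz = pose["rotation_deg"]
    ax, ay, az = palm_rotation_deg
    return {
        # 1:1 mapping when sensitivity == 1.0; scaled otherwise (e.g. for larger displays)
        "position": (px + sensitivity * tx, py + sensitivity * ty, pz + sensitivity * tz),
        "rotation_deg": (rx + sensitivity * ax, ry + sensitivity * ay, rz + sensitivity * az),
    }
```
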
  • the tracking data received by the rendering controller 320 is used to generate manipulation commands 308 as described herein.
  • the manipulation commands instruct the volume renderer 310 to re-position and/or re-orient the 3D dataset within the virtual space, and an updated rendering 304 is generated by the volume renderer 310 for display.
  • volume manipulation techniques are described below with the appreciation that other commands responsive to different inputs may be programmed into the rendering controller 320 in other examples.
  • a first mode of the touchless interface may be invoked responsive to the detection of the tracked object (e.g., the user's hand) in a first configuration.
  • the first mode may be invoked when the user's hand is detected to be presented with at least four fingers extended.
  • translation of the hand within the tracked field (i.e., relative to the coordinate frame of the tracking field and calculated from a reference position of the hand) may be mapped to movement of the 3D dataset within the virtual space.
  • the touchless interface may track translation in all three degrees of freedom (3DOF), e.g., along the x, y, and z axes of the tracking coordinate frame.
  • the translation of the tracked object in 3DOF may be mapped to translation of the 3D dataset in only two DOF, for example mapping the x and y displacements to corresponding x and y displacements while disregarding any displacements in the z direction which may result from the user inadvertently moving his hand in and out of (or transversely to) the tracking field.
  • Rotations in 3DOF (i.e., angular displacements about each of the x, y, and z axes) may be mapped to corresponding rotations of the 3D dataset.
  • the user may be able to pan the rendered volume in plane but not out of plane (i.e., zooming would be disabled), which may ensure that the volume remains at an appropriate magnification during the manipulation and that zooming does not distort the size of structures on the screen.
  • the light source may remain fixed; however in some examples a different detected hand gesture or a conventional physical control may be used for controlling the location, direction, or intensity of the light.
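
A sketch of the first-mode translation constraint described above: the hand's displacement is measured along all three axes of the tracking frame, but only the x and y components are forwarded as panning of the 3D dataset, so inadvertent motion toward or away from the sensor does not zoom or push the volume out of plane. The coordinate convention (z toward the viewer) is an assumption.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def constrain_translation_to_plane(hand_displacement: Vec3) -> Vec3:
    """Keep in-plane panning (x, y); drop the z component so zooming stays disabled."""
    dx, dy, _dz = hand_displacement   # global displacement along all three axes
    return (dx, dy, 0.0)              # only two degrees of freedom are applied
```
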
  • a second mode of the touchless interface may be invoked responsive to the detection of the tracked object in a second configuration (different from the first configuration).
  • the tracked object may be a user's hand and the second mode may be invoked when the user's hand is detected to be presented with at least one finger but less than four fingers extended.
  • the hand configuration which initiates the second mode may be presenting the hand in a fist with only the index finger extended.
  • the touchless interface tracks the position and movement of the finger within the tracked field. Specifically, the position and displacement of the fingertip of the extended finger may be obtained from the tracking data and used to change the position of a virtual light source.
  • the light source is moved on the display dynamically with the movement of the finger.
  • alternatively, the ending position of the finger may be used to relocate the light source.
  • multiple light sources may be utilized when rendering the volume and different fingers of the hand may be associated with different ones of the light sources such that the user may move a first finger (e.g., an index finger) to position a first light source, then fold the first finger and extend a second finger (e.g., the little finger) to position a second light source, and so on.
  • a change in the pointing direction of the extended finger may be detected and used to adjust a direction of the light source.
  • the second mode may be initiated responsive to presentation of the hand with two fingers extended, for example the thumb and index finger. In such instances, a change in the distance between the two extended fingers may be used to control the intensity of the light source, such as by increasing the intensity as the fingertips are spread farther apart and decreasing the intensity as the fingertips are brought closer together.
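
A sketch of the light-control gestures described in this mode, assuming fingertip positions are available from the tracking frame: a single extended finger drags the virtual light, and the distance between two extended fingertips (e.g., thumb and index) scales its intensity. The dictionary keys and scaling behavior are illustrative assumptions.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def update_virtual_light(light: dict, extended_tips: List[Vec3],
                         previous_tips: List[Vec3]) -> dict:
    """Move or re-intensify the virtual light from one- or two-finger gestures."""
    if len(extended_tips) == 1 and len(previous_tips) == 1:
        # One extended finger: translate the light with the fingertip displacement.
        (x, y, z), (px, py, pz) = extended_tips[0], previous_tips[0]
        lx, ly, lz = light["position"]
        light["position"] = (lx + x - px, ly + y - py, lz + z - pz)
    elif len(extended_tips) == 2 and len(previous_tips) == 2:
        # Two extended fingers: spread increases intensity, pinch decreases it.
        ratio = math.dist(*extended_tips) / max(math.dist(*previous_tips), 1e-6)
        light["intensity"] = max(0.0, light["intensity"] * ratio)
    return light
```
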
  • a third mode of the touchless interface may be invoked responsive to the detection of one or more objects in a third configuration.
  • the tracked one or more objects may be the user's hands and the third mode may be invoked when both of the user's hands are detected within the tracking field.
  • movement of one of the user's hands, for example the right hand, may control movements of the 3D dataset, e.g., as described in the first example. Movements of the other hand, in this case the left hand, may be used to derive manipulation commands for a rendering construct other than the 3D dataset.
  • the rendering construct may be the virtual light source and properties of the light source may be manipulated with the left hand e.g., as described in the second example.
  • the rendering construct may be a slice plane (also referred to as cut plane), which may be positioned relative to the volume.
  • the touchless interface is configured to independently and simultaneously track movement of either or both of the user's hands and apply the movements or adjustments in real time.
  • the user interface enables the user to simultaneously rotate the 3D dataset while adjusting the location and orientation of the cut plane for selecting a slice image (e.g., a B-mode image) through the imaged volume.
  • the movement of the 3D dataset is limited to rotations only without displacement (i.e., any displacements of the right hand in the tracking field are ignored by the touchless interface) and the movement of the cut plane is limited to only movement along a direction perpendicular to the viewing plane (i.e., in and out of the plane as the user views the displayed image).
  • This mode may enable the user to intuitively and dynamically select a slice plane by being able to visualize the structures within the volume at changing depth and orientation relative to the volume as the user scrolls the location of the cut plane on the screen.
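
In this constrained two-hand mode, the right hand contributes only rotation to the volume (its translation is discarded) and the left hand contributes only a scroll of the cut plane along the viewing direction. Below is a minimal sketch of applying those constraints; the scene representation and axis convention are illustrative assumptions.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def update_dual_hand_mode(scene: dict,
                          right_hand_rotation_deg: Vec3,
                          left_hand_displacement: Vec3) -> dict:
    """Right hand rotates the volume only; left hand scrolls the cut plane along the view normal."""
    # Volume: rotation only in this mode; any right-hand translation is ignored.
    rx, ry, rz = scene["volume_rotation_deg"]
    drx, dry, drz = right_hand_rotation_deg
    scene["volume_rotation_deg"] = (rx + drx, ry + dry, rz + drz)

    # Cut plane: translate only along the direction perpendicular to the viewing plane
    # (assumed here to be the z axis of the rendering coordinate frame).
    scene["cut_plane_depth"] = scene.get("cut_plane_depth", 0.0) + left_hand_displacement[2]
    return scene
```
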
  • rendering commands may be generated based on the joint position and/or relative position of multiple objects within the tracking field.
  • a starting configuration for this mode may be the presentation of both hands of the user in the open position (e.g., with at least four fingers extended) within the tracking field. Both hands of the user may then be concurrently tracked while both hands remain within the tracking field or until one of the two hands is provided in the closed configuration.
  • the individual position of each tracked object (e.g., left and right individual fingers, left and right palms, etc.) and/or the relative position of one tracked object to another (e.g., the distance between the left and right palms) may be used to generate the rendering commands.
  • the tracking data may be processed to determine the movements of both hands at once to generate for instance a 3D rotation or 3D crop responsive to the relative position of the hands.
  • a 3D rotation of the rendered volume may be touchlessly commanded, for example, responsive to a change in the orientation of a virtual line extending between the two hands (e.g., a virtual line connecting the palm or center of one hand to the palm or center of the other hand) relative to the tracking coordinate frame.
  • Zooming or cropping functions may be implemented based on the relative distance between the hands and/or joint movement of the hands such as to reposition a cropping window.
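
The joint two-hand gesture can be sketched as: form a virtual line between the two palm centers, treat a change in that line's orientation as a rotation command and a change in its length as a zoom (or crop-window resize) command. The sketch below is plain vector math under those assumptions; the returned angle/ratio conventions are illustrative.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def _sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _norm(v: Vec3) -> float:
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def two_hand_gesture(left_prev: Vec3, right_prev: Vec3,
                     left_now: Vec3, right_now: Vec3) -> Tuple[float, float]:
    """Return (rotation_angle_deg, zoom_ratio) from the virtual line between the palms."""
    v_prev, v_now = _sub(right_prev, left_prev), _sub(right_now, left_now)
    # Rotation: angle between the previous and current hand-to-hand vectors.
    dot = sum(a * b for a, b in zip(v_prev, v_now))
    denom = max(_norm(v_prev) * _norm(v_now), 1e-9)
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot / denom))))
    # Zoom: ratio of the current to the previous distance between the palms.
    zoom_ratio = _norm(v_now) / max(_norm(v_prev), 1e-9)
    return angle_deg, zoom_ratio
```
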
  • control sequences may be implemented using the touchless interface described herein.
  • movement of the hand or fingers may be mapped as movement of the volume or other rendering construct in a 1:1 relationship (e.g., 10 mm of detected displacement corresponds to 10 mm of displacement of the volume in the virtual space), or the movements may be scaled to tune the sensitivity of the touchless interface or as may be appropriate when displaying on smaller or larger display monitors.
  • the touchless interface may be configured to seamlessly transition from one mode to another mode without ending a manipulation session upon detection of a change in the configuration of the tracked object. In other words, the touchless interface may initially be operating in the first mode responsive to detecting the object in the first configuration.
  • the touchless interface may monitor for change in the configuration (e.g., may identify a second hand entering the tracking field or a hand exiting the field, or it may detect the folding of the fingers of the presented hand) which may cause the touchless interface to transition to the relevant mode without interrupting or terminating the manipulation session.
  • the touchless interface When entering a given mode, the touchless interface performs a registration of the hand globally or extended finger(s) to obtain a reference position of the hand or finger based upon which movements are then calculated during the session within a given mode. While examples herein have been described with reference to Cartesian coordinate systems, non-Cartesian (e.g., polar coordinate systems) may also be used.
  • FIG 4 shows a block diagram of an ultrasound imaging system 410 constructed in accordance with the principles of the present disclosure.
  • the ultrasound imaging system in Figure 4 may include a touchless interface and graphics processing circuitry in accordance with the examples described with reference to Figures 2 and 3.
  • While an ultrasound imaging system is shown as an explanatory example of embodiments of the invention, embodiments of the invention may be practiced with other medical imaging modalities. Other modalities may include, but are not limited to, magnetic resonance imaging and computed tomography.
  • the ultrasound imaging system 410 in Figure 4 includes an ultrasound probe 412 which includes a transducer array 414 for transmitting ultrasonic waves and receiving echo information.
  • transducer arrays are well known in the art, e.g., linear arrays, convex arrays or phased arrays.
  • the transducer array 414 can include a two dimensional array (as shown) of transducer elements capable of scanning in both elevation and azimuth dimensions for 2D and/or 3D imaging.
  • the transducer array 414 is coupled to a microbeamformer 416 in the ultrasound probe 412 which controls transmission and reception of signals by the transducer elements in the array.
  • the microbeamformer 416 is coupled by the probe cable to a transmit/receive (T/R) switch 418, which switches between transmission and reception and protects the main beamformer 422 from high energy transmit signals.
  • T/R switch 418 and other elements of the system can be included in the ultrasound probe rather than in a separate ultrasound system base.
  • the transmission of ultrasonic beams from the transducer array 414 under control of the microbeamformer 416 is directed by the transmit controller 420 coupled to the T/R switch 418 and the beamformer 422, which receives input from the user's operation of the user interface 424.
  • One of the functions controlled by the transmit controller 420 is the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array, or at different angles for a wider field of view.
  • the partially beamformed signals produced by the microbeamformer 416 are coupled to a main beamformer 422 where partially beamformed signals from individual patches of transducer elements are combined into a fully beamformed signal.
  • the beamformed signals are coupled to a signal processor 426.
  • the signal processor 426 can process the received echo signals in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation.
  • the signal processor 426 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination.
  • the processed signals are coupled to a B-mode processor 428, which can employ amplitude detection for the imaging of structures in the body.
  • the signals produced by the B-mode processor 428 are coupled to a scan converter 430 and a multiplanar reformatter 432.
  • the scan converter 430 arranges the echo signals in the spatial relationship from which they were received in a desired image format.
  • the scan converter 430 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal three dimensional (3D) image.
  • the multiplanar reformatter 432 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image of that plane, as described in U.S. Pat. No. 6,443,896 (Detmer).
  • a volume renderer 434 converts the echo signals of a 3D dataset into a projected 3D image as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.).
  • the volume renderer 434 may receive input from the user interface 424, which may include a conventional control panel 452 and a touchless interface 450 in accordance with any of the examples herein.
  • the input may include the given reference point (e.g., viewpoint of a virtual observer), location of a simulated light source, and/or properties of the simulated light source for the rendered projected image. Any of these inputs may be received via the touchless interface 450 or via the control panel 452.
  • inputs for manipulating a displayed image may be received via the touchless interface 450 in accordance with any of the examples herein.
  • the 2D or 3D images from the scan converter 430, multiplanar reformatter 432, and volume renderer 434 are coupled to an image processor 436 for further enhancement, buffering and temporary storage for display on an image display 438.
  • a graphics processor 440 can generate graphic overlays for display with the ultrasound images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the graphics processor receives input from the user interface 424, such as a typed patient name.
  • the user interface can also be coupled to the multiplanar reformatter 432 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
  • FIG 5 shows a portion of an ultrasound system 500 according to one embodiment.
  • the ultrasound system 500 may include some or all of the components of the ultrasound imaging system of Figure 4.
  • the ultrasound system 500 includes a display 510 and a console 505 which supports a control panel with a variety of physical controls (e.g., a trackball 525 and buttons 530, rotary encoders 520, a touch screen 515, and others).
  • the ultrasound system may be operatively coupled to additional displays (not shown).
  • a main display monitor may be provided elsewhere within the examination or operating room, which may offer a more convenient viewing location for the user during data acquisition.
  • a motion tracking device 550 may be provided on or proximate the console 505.
  • the motion tracking device may be located elsewhere, such as on or near a patient examination table.
  • the motion tracking device 550 may be part of a touchless interface implemented in accordance with the examples herein.
  • the motion tracking device 550 may have a field of view.
  • the field of view may be generally prism-shaped and may extend radially to about 150 degrees and vertically to about 2 feet above the motion tracking device 550.
  • the field of view may be differently shaped depending on the number and arrangement of sensors in the tracking device and the type of tracking device.
  • the field of view of the tracking device 550 may be configurable by the user to limit tracking within a desired portion of the field of view to filter out noise, prevent unintentional inputs to the touchless interface, and generally reduce computational load.
  • the desired portion may begin at a certain distance, for example 4 inches, 5 inches, 6 inches or more above the console, such that the user can manipulate physical controls on the console while the touchless interface remains active without unintentionally initiating a touchless session.
  • the tracking device 550 may have an effective field of view, which may be smaller than the full field of view and may represent the optimal volumetric region within which most accurate tracking data is recorded.
  • the desired portion may be configured to coextend with the effective field of view.
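
A sketch of the configurable-region idea above: tracking positions below a user-set height above the console (or outside the effective field of view) are discarded before they reach the rendering controller, so operating the physical controls does not unintentionally start a touchless session. The 6-inch threshold mirrors the example above; the vertical-axis convention is an assumption.

```python
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]
MIN_HEIGHT_MM = 152.4   # ~6 inches above the console, per the example above

def filter_tracking_position(palm_position: Optional[Vec3]) -> Optional[Vec3]:
    """Drop positions below the configured height so console use does not start a session."""
    if palm_position is None:
        return None
    _x, height, _z = palm_position   # assume y is the vertical axis of the tracker
    return palm_position if height >= MIN_HEIGHT_MM else None
```
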
  • Figures 6A-6D show examples of hand configurations or presentations relative to a hand- tracking device (e.g., hand tracking device 330).
  • Figure 6A shows an open hand configuration or presentation of the right hand 602 within the field of view 303, and specifically within the effective field of view 305 of a tracking device 330.
  • the touchless interface may be configured to enter a first mode in which the rendering controller generates commands to move (i.e., translate and/or rotate) the rendered volume based on detected movement of the presented right hand.
  • Rotation of the hand, for example rotation of the center of the palm, the heel of the palm, or another reference point of the hand (e.g., a centroid of the hand), relative to the x, y, and z axes of the tracking coordinate frame may be calculated, and an equal or scaled rotation may be applied to the volume relative to the coordinate frame of the virtual space.
  • Figure 6B shows a closed hand presentation of the right hand 602 within the field of view 303 of the tracking device 330.
  • the closed configuration or presentation may correspond with the hand being presented within the field of view with four or more of the fingers folded toward the palm of the hand.
  • the touchless interface may be configured to pause a manipulation session upon detecting a closed presentation, as described further below. The relative position of the fingertips and the pointing direction of each finger may be used to determine whether a hand is being presented in an open or closed configuration.
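
Classifying open versus closed presentations can be done, for example, by counting fingers whose tips lie sufficiently far from the palm center: four or more such fingers means open, none means closed, and anything in between is a partially open presentation. The distance threshold below is an illustrative assumption.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
EXTENSION_DISTANCE_MM = 60.0   # assumed minimum fingertip-to-palm distance for "extended"

def count_extended_fingers(palm: Vec3, fingertips: List[Vec3]) -> int:
    return sum(1 for tip in fingertips if math.dist(tip, palm) >= EXTENSION_DISTANCE_MM)

def classify_presentation(palm: Vec3, fingertips: List[Vec3]) -> str:
    extended = count_extended_fingers(palm, fingertips)
    if extended >= 4:
        return "open"            # start or continue a manipulation session
    if extended == 0:
        return "closed"          # pause the session
    return "partially_open"      # e.g. one extended finger: light-control mode
```
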
  • the presented hand may be moved to another location within the field of view before being presented again in the open configuration to re-start a session. The presented hand is again registered to the tracking coordinate frame once placed at the new location and presented in the open configuration.
  • Movement of the hand while the manipulation session is paused is not applied as movement to the volume. In other words, tracking data from frames recorded while the hand is in the closed configuration is ignored. Once the hand has been moved to a new desired position and an open configuration is again detected, positional information associated with the new starting position is recorded, from which relative movement is subsequently calculated.
  • Other hand presentations may include presenting the left hand in an open configuration.
  • the touchless interface may be configured to enter another mode which may be different from the mode associated with presentation of the right hand.
  • presenting the left hand may invoke a slice plane mode, in which movements of the hand are translated to equal or scaled movement of a cut plane.
  • Rotation of the left hand may cause rotation of the cut plane relative to the fixed volume, while translation of the hand may cause translation of the cut plane relative to the fixed volume.
  • the rendering is updated in real time responsive to movements of the left hand thus enabling the user to dynamically select a slice within the rendered volume.
  • a partially open hand presentation may be detected when the user presents a hand with fewer than four fingers extended.
  • the hand may be presented with only one finger extended, for example the index finger, e.g., as shown in Figure 6C.
  • the touchless interface may be configured to enter a third mode, which may be different from the volume translation/rotation and the slice plane mode.
  • the rendering controller may be configured to generate commands for modifying the virtual light source. For example, movement of the extended finger within the field of view may be translated to movement of the light source (e.g., in the x-y plane or in all three dimensions).
  • Relative movement of two extended fingers may be detected, which may result in a command to increase or decrease brightness for example responsive to detecting increase or decrease of the relative distance between the two fingers, or to rotate a direction of the light source for example responsive to detecting rotation of a virtual line connecting the two fingertips.
  • the hand presentations operable to invoke a particular mode may be reversed, for example the left hand invoking the volume translation/rotation mode and the right invoking the slice plane mode, such as to suit users with different hand-dominance or preference.
  • Figure 6D shows another open hand presentation, in this example a presentation of both hands in the open configuration within the field of view 303 of a tracking device 330.
  • this open hand presentation may be used to invoke a specific mode of the touchless interface, in which different manipulations may be performed with each of the different hands during a given manipulation session.
  • relative motion of the right hand may be used to determine relative rotation and translation of the volume within the virtual space.
  • Relative motion of the left hand may be used to dynamically select a position and orientation of a slice plane within the volume.
  • a B-mode image corresponding to the slice plane may be displayed concurrently with the volume rendering.
  • the B-mode image may be displayed in a separate window of the display or it may be overlaid on the volume rendering.
  • Figures 7 and 8 show operations of a touchless interface in accordance with the present disclosure. These operations may be implemented as executable instructions which program a rendering controller to perform the processes illustrated in Figures 7 and 8.
  • the process 700 starts with activating the touchless interface, as shown at block 705.
  • the touchless interface may be activated by powering up the hand-tracking device, by selecting touchless mode via the control panel, or by exposing the tracking sensor of the hand-tracking device, such as by moving or removing a physical cover which is provided over the hand-tracking device to occlude the field of view of the tracking sensor when not in use.
  • the rendering controller Upon activation of the touchless interface, the rendering controller begins to receive tracking data from the hand-tracking device, as shown in block 710. The rendering controller analyzes the tracking data in each frame to determine whether a starting configuration has been detected, as shown in blocks 715 and 720.
  • the starting configuration may be the presentation of at least one hand open (e.g., with at least four fingers extended) within the field of view. In another example, the starting configuration may be the presentation of both hands fully open (e.g., with at least four fingers of each hand extended) within the field of view.
  • unintentional commands may be filtered out by generating an initiation command only if the starting configuration is detected and maintained for a predetermined period of time (e.g., the user maintains the hands open and generally still for at least 1 second).
  • the rendering controller enters a different mode depending on the starting configuration detected.
  • the rendering controller may initiate a touchless manipulation session, as shown in block 730.
  • the tracking data associated with the starting configuration is stored, as shown in block 725, to enable calculations of relative movement for the given session. That is, the positional information of the presented hand(s) in the starting configuration is recorded for that session and stored in memory. Relative movement of the hands and/or fingers may then be calculated as the difference between the detected position of the user's hand(s) and/or fingers in a subsequent frame and the position of the user's hand(s) and/or fingers in the starting configuration. Because relative calculations are related back only to the starting configuration for a given session, additional (e.g., global or absolute) calibration of the touchless interface may not be required, making the touchless interface easy to install and operate.
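
The initiation logic above can be sketched as: require the starting configuration to persist for a short hold time (to filter unintentional gestures), store the hand position seen at that moment as the session reference, and compute every subsequent displacement against that reference rather than against any globally calibrated origin. The hold time and function signature are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]
HOLD_TIME_MS = 1000.0   # starting configuration must persist ~1 second

@dataclass
class SessionState:
    hold_started_ms: Optional[float] = None
    reference_position: Optional[Vec3] = None   # hand position in the starting configuration

def update_session(state: SessionState, timestamp_ms: float,
                   in_starting_configuration: bool,
                   hand_position: Vec3) -> Optional[Vec3]:
    """Return the displacement relative to the session reference, or None if no session yet."""
    if state.reference_position is not None:
        x, y, z = hand_position
        rx, ry, rz = state.reference_position
        return (x - rx, y - ry, z - rz)           # relative to the starting configuration only
    if not in_starting_configuration:
        state.hold_started_ms = None              # gesture broken: reset the debounce timer
        return None
    if state.hold_started_ms is None:
        state.hold_started_ms = timestamp_ms      # starting configuration first seen
    elif timestamp_ms - state.hold_started_ms >= HOLD_TIME_MS:
        state.reference_position = hand_position  # register the reference; session begins
    return None
```
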
  • FIG. 8 shows a flow diagram of a process associated with a touchless manipulation session.
  • the rendering controller analyzes the tracking data in each frame to determine changes in the hand presentation. For example, if an open hand presentation is detected, as shown in block 820, the rendering controller may extract global hand movements from the tracking data frames to generate commands for moving the 3D data set in accordance with the global hand movements.
  • an open hand presentation may be a presentation of the hand with four or more fingers extended (i.e., away from the palm of the hand).
  • a partially open presentation may be detected when fewer than four fingers are detected as extended.
  • the rendering controller may determine displacement of the one or more extended fingers, as shown in block 845, and generate commands for modifying a rendering construct, for example the virtual light source (e.g., move the light relative to the volume, change direction of the light, increase or decrease intensity of the light).
  • a closed presentation may be detected responsive to all fingers being moved towards the palm.
  • the rendering controller may be configured to pause the current manipulation session until an open or partially open presentation is once again detected.
  • the user may freely move the hand within the tracked field without causing changes to the volume rendering.
  • the session may terminate when a hand is no longer detected within the tracked field.
  • a new session may be initiated in accordance with the process in Figure 7, by initially presenting the hand in a starting configuration in order to register the hand to the tracked field.
  • the above-described systems and methods may be implemented using a programmable device, such as a computer-based system or programmable logic.
  • the above-described systems and methods can be implemented using any of various known or later developed programming languages, such as "C”, “C++”, “C#”, “FORTRAN”, “Pascal”, “VHDL” and the like.
  • various storage media such as magnetic computer disks, optical disks, electronic memories and the like, can be prepared that can contain information that can direct a device, such as a computer, to implement the above-described systems and/or methods.
  • the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein.
  • the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
  • processors described herein can be implemented in hardware, software and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention.
  • the functionality of one or more of the processors described herein may be incorporated into a fewer number of processing units or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
  • Although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system.
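The following Python sketch is an illustrative editorial reconstruction of the session flow summarized in the items above (detecting a starting configuration, storing the per-session reference position as in block 725, and dispatching on the detected hand presentation as in blocks 820-845). The data structures, helper names, and finger-count thresholds are assumptions and are not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Hand:
    side: str                                   # "left" or "right"
    palm_position: Tuple[float, float, float]   # mm, tracking-device coordinate frame
    extended_fingers: int                       # fingers extended away from the palm


@dataclass
class TrackingFrame:
    timestamp: float
    hands: List[Hand]


def is_starting_configuration(frame: TrackingFrame) -> bool:
    """Starting configuration: at least one hand open (>= 4 fingers extended)."""
    return any(h.extended_fingers >= 4 for h in frame.hands)


class TouchlessSession:
    def __init__(self, start_frame: TrackingFrame):
        # Block 725: store the starting configuration as the per-session reference.
        self.reference = {h.side: h.palm_position for h in start_frame.hands}
        self.paused = False

    def relative_displacement(self, hand: Hand):
        """Displacement of the palm relative to the stored reference position."""
        ref = self.reference.get(hand.side)
        if ref is None:
            return None
        return tuple(c - r for c, r in zip(hand.palm_position, ref))

    def process(self, frame: TrackingFrame):
        """Dispatch on the detected hand presentation (blocks 820-845, simplified)."""
        if not frame.hands:
            return "END_SESSION"                  # hand left the tracked field
        hand = frame.hands[0]
        if hand.extended_fingers == 0:            # closed hand: pause the session
            self.paused = True
            return "PAUSE"
        if self.paused:                           # re-register on reopening
            self.reference[hand.side] = hand.palm_position
            self.paused = False
        if hand.extended_fingers >= 4:            # open hand: move the 3D dataset
            return ("MOVE_VOLUME", self.relative_displacement(hand))
        # partially open hand: adjust a rendering construct (e.g., the virtual light)
        return ("MOVE_LIGHT", self.relative_displacement(hand))
```

Feeding a stream of frames to `TouchlessSession.process` yields `MOVE_VOLUME` or `MOVE_LIGHT` commands with displacements measured from the per-session reference, and `PAUSE`/`END_SESSION` markers when the hand closes or leaves the field.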

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

A method according to an embodiment includes receiving a 3D dataset corresponding to an imaged volume in a subject, generating a volume rendering of the 3D dataset, and displaying the volume rendering on a display operatively associated with a touchless interface. The method further includes detecting a position and motion of an object within a tracking field of the touchless interface, initiating a touchless manipulation session when the object is detected to be in a first configuration within the tracking field, generating rendering commands to move the 3D dataset or to adjust a rendering construct based on the motion of the object, and continuously updating the volume rendering responsive to the rendering commands until the object is detected to be in a second configuration within the tracking field or the object is no longer detected within the tracking field.

Description

SYSTEMS AND METHODS FOR THREE DIMENSIONAL TOUCHLESS
MANIPULATION OF MEDICAL IMAGES
BACKGROUND
[001] In medical imaging, images may be rendered in real time or post-data set acquisition. The images may be two dimensional (2D) images, also referred to as slices or planes acquired within a volume, or they may be renderings of a three dimensional (3D) volume. A volume rendering may be generated from a 3D data set acquired from imaging a volume (e.g., an organ or other object of interest in a subject). The volume rendering may be displayed on a two dimensional display (e.g., a conventional display unit such as an LCD, LED, or another type of display), or in a three dimensional display environment (e.g., holographic, virtual or augmented reality displays). For example, volume rendering techniques for displaying a three dimensional image on a two dimensional display may involve casting virtual rays into an imaged 3D volume to obtain a 2D projection of the data that may be displayed in a final rendered image. Use of a simulated light source in rendering the image may provide a user with a sense of depth and how the various anatomic structures are arranged in the 3D volume. The imaged volume may alternatively be rendered more realistically in a three dimensional display environment. Regardless of the type of display used, it may be desirable for a user to be able to manipulate the medical image (i.e., the volume rendering) in three dimensions.
[002] In medical imaging, conventional techniques for manipulating volume renderings involve the use of conventional input devices, such as a keyboard (e.g., up/down buttons for panning, plus/minus buttons for zooming), a mouse, a track ball, a touch pad or touch screen, or text input corresponding to commands specifying the amount of rotation, magnification, or translation of the rendered volume. All of these techniques require the user to physically touch the input device, which can contaminate the medical imaging equipment and/or a sterile surgical environment. Improvements in the available techniques for manipulating medical images of three dimensional data sets may thus be desirable.
SUMMARY
[003] A method according to one embodiment may include receiving a 3D dataset corresponding to an imaged volume in a subject, generating a volume rendering of the 3D dataset, and displaying the volume rendering on a display operatively associated with a touchless interface. The method may further include detecting a position and motion of an object within a tracking field of the touchless interface, initiating a touchless manipulation session when the object is detected to be in a first configuration within the tracking field, generating rendering commands to move the 3D dataset or to adjust a rendering construct based on the motion of the object, and continuously updating the volume rendering responsive to the rendering commands until the object is detected to be in a second configuration within the tracking field or the object is no longer detected within the tracking field.
[004] In some embodiments, the object may be a hand, the first configuration may correspond to the hand being presented within the tracking field with four or more fingers extended away from a palm of the hand, and the second configuration may correspond to the hand being presented within the tracking field with four or more fingers folded toward the palm of the hand. In some embodiments, the method may further include calculating a global displacement of the hand along all axes of a coordinate frame of the tracking field, and wherein the rendering commands include commands to move the 3D dataset based only on displacement of the hand along two of the axes of the coordinate frame. In some embodiments, the method may further include detecting a third configuration of the hand corresponding to the hand being presented within the tracking field with one finger extended away from a palm of the hand and at least three fingers folded toward the palm of the hand, and calculating a displacement of the extended finger relative to a coordinate frame of the tracking field while the hand remains in the third configuration. In some embodiments, the rendering commands include commands to move a location of a virtual light based only on the displacement of the finger. In some embodiments, the commands to move the location of the virtual light may be generated only if a single extended finger is detected within the tracking field. In some embodiments, the commands to move the location of the virtual light include commands to move the light to a location within the 3D data set.
[005] In some embodiments, the method may further include detecting a fourth configuration corresponding to both hands being presented within the tracking field, and generating first rendering commands for moving the 3D dataset based on movement of a first hand and generating second rendering commands for adjusting a rendering construct based on movement of the second hand. In some embodiments, the rendering construct is a cut plane and wherein a location and an orientation of the cut plane relative to the 3D dataset is dynamically adjusted responsive to detected translation and rotation of the second hand. In some embodiments, the 3D dataset may be rendered by projecting a 2D image of the 3D dataset onto a viewing plane, the 3D dataset may be constrained from translation in 3 degrees of freedom (DOF), and the cut plane may be limited to translation only along a direction perpendicular to the viewing plane. In some embodiments, the method may further include recording the position of the user's hand when detected to be in the first configuration as an initial reference position, pausing the generating of rendering commands when the hand is detected to be in the second configuration, and resuming the generation of rendering commands when the hand is subsequently detected to be in the first configuration, wherein the position of the hand when subsequently detected to be in the first configuration is recorded as a new reference position, and wherein rendering commands generated after the resuming are based on the motion of the hand relative to the new reference position.
[006] In some embodiments, the method may be performed by a medical imaging system such as an ultrasound system. In some embodiments, the 3D dataset is received by the ultrasound system while ultrasonically imaging the volume with a probe of the ultrasound system. The 3D dataset may be rendered and updated in real-time with live imaging data.
[007] Any of the techniques for providing a touchless interface may be embodied in executable instructions stored on non-transitory computer-readable medium, which when executed cause a processor of a visualization system or a medical imaging system to perform the processes embodied thereon.
[008] In some embodiments, a medical image viewing and manipulation system may include a volume renderer configured to receive a three dimensional (3D) data set corresponding to an imaged volume and generate a volume rendering of the imaged volume, and a touchless interface configured to generate commands responsive to touchless user input. The touchless interface may include a hand-tracking device having a field of view, wherein the hand-tracking device is configured to generate tracking data responsive to movement of a user's hand or a portion thereof within the field of view, and a rendering controller communicatively coupled to the hand-tracking device and the volume renderer, wherein the rendering controller is configured to generate commands for manipulating the 3D dataset based on the tracking data. The system may further include a display configured to display the volume rendering and update the display in real time based on the manipulation commands.
[009] In some embodiments, the volume renderer may be part of an ultrasound imaging system which includes an ultrasound probe and a signal processor, wherein the signal processor is configured to receive ultrasound echoes from the ultrasound probe to generate the 3D data set. In some embodiments, the hand-tracking device may be incorporated into a console of the ultrasound imaging system. In some embodiments, the hand-tracking device may be an optical tracking device, which is configured to track a global position of the hand and positions of individual fingers of the hand.
[010] In some embodiments, the rendering controller may be configured to generate a first set of commands operable to control movement of the 3D dataset responsive to detection of the hand in a first configuration and generate a second set of commands operable to control a rendering construct different than the 3D dataset responsive to detection of the hand in a second configuration. In some embodiments, the touchless interface may be configured to ignore movements of the hand following a detection of the hand in a closed configuration until the hand is arranged in another configuration different than the closed configuration. In some embodiments, the touchless interface may be configured to independently track movement of both hands of a user and the rendering controller may be configured to generate first commands for controlling movement of the 3D dataset based on movement of one hand of the user and generate second commands for controlling a rendering construct in relation to the 3D dataset based on movement of the other hand of the user.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] Figure 1A is a flow diagram of a process in accordance with the present disclosure.
[012] Figure 1B is an illustration of a 3D data set of an imaged volume in accordance with the present disclosure.
[013] Figure 2 is a block diagram of a system for visualizing and manipulating medical imaging data in accordance with the present disclosure.
[014] Figure 3 is a block diagram of components of a touchless interface in accordance with the present disclosure.
[015] Figure 4 is a block diagram of an ultrasound imaging system which includes a touchless interface in accordance with the present disclosure.
[016] Figure 5 is an illustration of a portion of an ultrasound imaging system which includes a touchless interface in accordance with the present disclosure.
[017] Figures 6A-6D are illustrations of a hand-tracking device and various hand presentation configurations in accordance with the present disclosure. [018] Figure 7 is a flow diagram of a process for manipulating medical images in accordance with the present disclosure.
[019] Figure 8 is a flow diagram of another process for manipulating medical images in accordance with the present disclosure.
DETAILED DESCRIPTION
[020] The following description of certain exemplary embodiments is merely exemplary in nature and is in no way intended to limit the invention or its applications or uses. In the following detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the present system. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of the present system. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present system is defined only by the appended claims.
[021] Figure 1A shows a flow diagram of a process 100 for rendering and manipulating medical images and Figure 1B shows an illustration of a 3D volume which may be rendered in accordance with the examples herein. Process 100 may be used to generate a volume rendering of a three dimensional (3D) dataset (e.g., 3D data set 130). The 3D dataset 130 may include medical imaging data corresponding to a 3D volume in a subject (e.g., a patient). In some examples, the 3D dataset may be a 3D ultrasound dataset. In other examples, the 3D dataset may include imaging data acquired with another imaging modality. Process 100 begins by accessing a 3D dataset, as shown in block 110. In some examples, the 3D dataset 130 may be received by a processor of a visualization system such as system 200 described further below with reference to Figure 2. It will be understood that the 3D volume in Figure 1B is shown as a simple rectangular prism for simplicity of illustration but in practice three dimensional (3D) datasets of medical imaging data may typically be irregularly shaped (e.g., the shape of an imaged organ or a volume of imaged tissue). [022] The processor may retrieve the 3D dataset 130 from a picture archiving and communication system (PACS) server or another storage device, such as portable non-transitory media. In other examples, the 3D dataset 130 is received by a volume renderer of a medical imaging system (e.g., an ultrasound system), which in some cases may occur in real-time during acquisition of the imaging data for display of the volume renderings in real- or near real-time (accounting for transmission and processing delays). The 3D data set 130 may include one or more regions of interest 135, each of which may be a portion of an object (e.g., wall of blood vessel, valve of heart) or an entire object (e.g., tumor, fetus) within the imaged volume.
[023] In block 112, a volume rendering of the 3D dataset 130 may be generated by projecting a
2D image of the 3D dataset 130 onto a viewing plane 140 as perceived from a given reference point (e.g., a virtual observer 145). As is generally known, a simulated light source 150 may be used to provide a perception of depth. Initially, the location and orientation of the 3D dataset 130 within the virtual 3D space 155 and thus relative to the viewing plane 140, as well as the location of the simulated light source 150, may be determined by default settings. For example, the 3D data set 130 may initially be rendered with the X-Y plane of the local coordinate frame of the volume facing the viewing plane and with the light source located midway between volume and viewing plane. The volume rendering (i.e., the 2D projected image of the volume) may be displayed on a display monitor, as shown in block 116 and may be updated responsive to user inputs. User inputs may be received to manipulate the 3D data set within a virtual space. The virtual space may be defined by a coordinate system, having a first coordinate frame 155 (also referred to as rendering coordinate frame) and the 3D dataset may also be associated with its own local coordinate frame, for example as may be defined during imaging data acquisition. User inputs may correspond to commands to move (e.g., translate, rotate, scale or magnify) the 3D dataset 130 in relation to the coordinate frame of the virtual space. Conventional inputs in the form of physical controls for manipulating rendered volumes may not be as efficient or intuitive as may be desirable and may have other shortcomings (e.g., result in contamination of equipment or sterile environment).
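As a point of reference, the default scene set-up described above (the dataset's local X-Y plane facing the viewing plane, with the simulated light midway between the volume and the viewing plane) might be expressed as in the following sketch. The concrete distances and the orthographic projection are illustrative assumptions only, not values from the disclosure.

```python
import numpy as np

# Default scene parameters (illustrative values).
viewing_plane_z = 0.0                              # viewing plane at z = 0 of the rendering frame
volume_center = np.array([0.0, 0.0, 200.0])        # assumed default depth of the volume (mm)
volume_rotation = np.eye(3)                        # local X-Y plane initially parallel to the viewing plane
light_position = (volume_center + np.array([0.0, 0.0, viewing_plane_z])) / 2.0   # midway along z


def project_to_viewing_plane(points_local: np.ndarray) -> np.ndarray:
    """Orthographic projection of dataset points (N x 3, local frame) onto the viewing plane."""
    world = points_local @ volume_rotation.T + volume_center
    return world[:, :2]                            # drop depth to obtain 2D projected coordinates
```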
[024] In the example in Figure 1A, the display may be operatively associated with a touchless interface. The touchless interface enables a user to touchlessly manipulate the rendered volume within the virtual space, for example to touchlessly control the position and orientation of the rendered volume within the virtual space. In some examples, the touchless interface may be used to dynamically select and display slices of the volume and/or to control aspects of the simulated light source (also referred to as virtual light source). The touchless interface may include a hand- tracking device, which is described in further detail below. The hand-tracking device may be associated with a tracking field, which may also be referred to herein as a field of view. The touchless interface may be part of a medical imaging data acquisition system, such as an ultrasound imaging system. In some embodiments, the touchless interface may be part of a system for visualizing and manipulating medical imaging data, for example an analysis workstation which may not itself be capable of acquiring the medical imaging data but may instead receive the data, for example through a wired or wireless network.
[025] As shown in block 118, the process may continue by tracking position and calculating motion of an object, such as one or both hands of the user or a tool (e.g., a stylus), while disposed within the tracking field of the touchless interface. Tracking data may be transmitted to graphics processing components of the system for generation of manipulation commands, as shown in block 120. The displayed rendering of the volume may be updated responsive to the manipulation commands, as shown in block 122. In exemplary embodiments, a reference position of the object may first be recorded before motion is applied to the 3D data set responsive to motion of the object in the tracking field.
[026] As will be appreciated, the techniques and systems described herein may improve the ease of visualization and exploration of 3D medical imaging data (e.g., 3D ultrasound datasets) through a more direct interaction with the dataset, which may enhance the hand-eye coordination when manipulating the dataset. In accordance with some examples herein, hand tracking (e.g., optical hand tracking) may be used to record and output in real-time the relative 6 DOF movements performed by the user's hand(s), which motion data is then translated into relevant parameters for volume rendering. In accordance with the examples herein, the user may be able to touchlessly rotate, translate or pan, and scale (increase or decrease magnification) a volume to be rendered, as well as easily obtain a 2D slice image of the volume by touchlessly and dynamically adjusting the location of a slice plane relative to the rendered volume. Additionally, the techniques described may enable the user to touchlessly and intuitively manipulate the location or other parameters of a virtual light source, as well as add annotations (e.g., place markers, or labels associated with the markers or the image in general) on the rendered image.
[027] Figure 2 shows a system 200 for visualizing and manipulating medical imaging data. The system 200 includes a volume renderer 210, a rendering controller 220, and a user interface 230 including a display 232, a control panel 234, and a touchless interface 240. The control panel 234 may include conventional physical controls, for example buttons, switches, rotary encoders, a keyboard, a mouse, a trackball, a touch screen, and touch pad or other touch-sensitive controls. The touchless interface 240 may include a motion tracking device 242 configured to track movements of an object (e.g., precise movements of a user's hand) within the tracking field. The touchless interface 240 may thus enable the user to more easily and intuitively interact with the rendered volume. The motion tracking device 242 may utilize optical tracking, an electromagnetic (EM) tracking, inertial tracking, or other type of motion tracking technique to acquire positional information, for example of a user's hand(s).
[028] The system 200 is communicatively coupled to memory 201, which stores a 3D dataset
202 of medical imaging data. The memory 201 may be part of a PACS server or it may be memory of a medical imaging data acquisition system, such as an ultrasound system. In some examples, the system 200 may be integrated within a medical imaging data acquisition system (e.g., an ultrasound imaging system). In some such examples, the medical imaging data acquisition system may be configured to acquire and process the medical imaging data, for example echo signals, into the 3D dataset 202 and transmit the 3D dataset 202 to the volume renderer in real time (that is, during the acquisition) such that volume renderings may be provided and manipulated via the touchless interface in real time.
[029] The volume renderer 210 of the system 200 receives a 3D dataset 202. The 3D dataset 202 may be ultrasonic imaging data which includes echo information obtained by ultrasonically scanning a volume (e.g., a portion of a subject). While the example below is described with reference to ultrasonic imaging data, it will be understood that the techniques described herein may be applied equally to medical imaging data obtained via a different imaging modality (e.g., CT, MR, or others) suitable for acquiring a 3D dataset. The volume renderer 210 generates a projected image 204 of the 3D dataset 202 as perceived by a virtual observer. This projected image 204, which may be interchangeably referred to herein as a volume rendering, is displayed on the display 232. Virtual lighting may be used to enhance the perception of depth.
[030] In medical imaging, finding a suitable 3D viewpoint and selecting a 2D slice through an imaged volume are two common tasks that may be performed during 3D imaging. These tasks, when performed on a conventional medical imaging or diagnostic system (e.g., an ultrasound system or analysis workstation), may be very cumbersome and challenging as simple intuitive manipulation tools are yet to be developed for manipulating 3D datasets. According to known techniques, these tasks require that the operator (e.g., sonographer, clinician) determine and specify the particular position and orientation in 3D space of the volume to be rendered and/or the 2D slice to be displayed. One difficulty arises from the use of separate controls. Conventional methods for controlling the positioning of the volume to be rendered include the use of a trackball or touch screen, which are inherently two dimensional and thus only allow for a change of at most 2 parameters at once. This can make it more challenging to find a desired point in 3D space, typically requires multiple operations of separate controls to arrive at the desired point in space, and demands a high level of understanding of 3D space from the user to efficiently operate a sequence of rotations and translations to position the volume as may be desired using conventional controls. Furthermore, the touchless interface described herein may offer additional advantages, for example in an environment which must be maintained sterile (e.g., an operating room), by obviating the need for physical contact with non-sterile equipment during a surgical procedure as an example. In such instances, the hand-tracking device may be placed near the operating table and away from the imaging system such that the operator can manipulate the imaging data without additional assistance. This simpler and more direct touchless interaction with the medical imaging data may significantly reduce the additional personnel required during a surgical procedure. Yet further advantages of the examples herein may be obtained in applications which require precise selections while maintaining an imaging probe in a precise position with respect to a subject. For example, the techniques described may provide a simpler and more intuitive way to indicate, in the imaged data, the sample volume for Doppler analysis, particularly when the Doppler sample volume is to be placed in relation to a small blood vessel. In accordance with the principles described herein, common tasks such as freeze and acquire may be accomplished efficiently and intuitively by predetermined hand gestures (e.g., moving a given finger, for example an index finger, downward as in pressing a button within the tracked field), which can enable the operator to remain in proximity to the patient and farther away from the imaging system.
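A gesture such as the "press a button within the tracked field" example above could, for instance, be detected as a quick downward motion of the tracked index fingertip. The sketch below is hypothetical; the thresholds and the fingertip variables are editorial assumptions, not values specified in the disclosure.

```python
def detect_press_gesture(prev_tip_y_mm: float, curr_tip_y_mm: float, dt_s: float,
                         min_drop_mm: float = 10.0, min_speed_mm_s: float = 150.0) -> bool:
    """Return True when the tracked index fingertip moves downward far and fast enough
    to be interpreted as an air "button press" (e.g., mapped to freeze or acquire)."""
    drop = prev_tip_y_mm - curr_tip_y_mm          # positive when the fingertip moves down
    speed = drop / dt_s if dt_s > 0 else 0.0
    return drop >= min_drop_mm and speed >= min_speed_mm_s
```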
Referring back to Figure 2, responsive to user inputs, the volume renderer 210 may receive input 206 from the user interface 230 and may provide output (e.g., volume renderings) to the user interface 230. The input 206 may include a given reference point (e.g., the viewpoint of the virtual observer), location of a simulated light source, and/or properties of the simulated light source for the rendered projected image. The volume renderer 210 may receive input 208 from a rendering controller 220. The input 208 may include manipulation commands, which may be indicative of a change in position, orientation, or magnification of the rendered 3D dataset relative to a positional reference frame, or another property of the volume rendering. The volume renderer 210 sends signals to the display 232, which is configured, responsive to these signals, to update the displayed volume rendering 204 in accordance with the manipulation commands. [032] The volume renderings are coupled from the volume renderer 210 to an image processor
250 for further enhancement, buffering and temporary storage prior to display on the display 232. The image processor 250 may generate graphic overlays for display with the rendered images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the image processor 250 may receive input from the user interface 230, such as a typed patient name. While described as discrete components, some or all of the functionality of the volume renderer 210, image processor 250, and rendering controller 220, collectively referred to as image processing circuitry, may be incorporated into a single integrated processing circuit.
[033] Figure 3 illustrates further aspects of a touchless interface which may be used for visualization and manipulation of medical imaging data. As shown in Figure 3, a hand-tracking device 330 is communicatively coupled to a tracking data processor 340. The hand-tracking device 330 may be used to implement the motion tracking device 242 of the system 200. The hand- tracking device 330 may be configured to track the hands and fingers of a user with high precision and tracking frame rate. The hand-tracking device 330 includes a tracking sensor 332, a sensor data processor 334, and on-board memory 336. The tracking sensor 332, sensor data processor 334, and memory 336 are operatively arranged to detect and report discrete positions of the user's hand or a portion thereof for determining motion of the user's hand or a portion thereof. In some examples, the hand-tracking device 330 may be further configured to detect and track objects, such as a manipulation tool (e.g., a stylus), within its field of view 303. In some embodiments, the hand- tracking device 330 may be an optical tracking device which uses optical sensors and infrared light to detect hands and fingers of a user and track motion thereof such as the Leap Motion controller, supplied by Leap Motion, Inc.
[034] The tracking sensor 332 and processor 334 are configured to determine physical quantities, such as locations of tracked entities, within a field of view 303 of the tracking sensor 332. The tracking sensor 332 detects the position of a tracked entity 305 relative to the coordinate frame of the tracking device 330. The tracked entities 305 may include fingertips and other anatomical markers of the hands (e.g., center of the palm, heel of the hand, metacarpals, phalanges, etc.). The sensor data processor 334 may be programmed to use the sensor data obtained by the tracking sensor 332 with a model 338 of the human hand 337 stored in the on-board memory 336 to resolve user motion into the discrete tracked data. For example, the sensor data processor 334 may determine direction of a tracked entity (e.g., direction in which a finger is pointing, or a direction normal to the palm) based on the model 338 of the human hand. The hand model 338 may further provide information about the identity (e.g., left, right), position/orientation (e.g., x, y, z coordinates of a centroid of the hand or geometric center of a surface estimated as the palm of the hand, a direction of axis normal to palm, etc.), and other aspects (e.g., list/identity of fingers) of a hand detected within the field of view, which may be reported in each tracking data frame.
[035] The hand-tracking device 330 may be configured to measure tracked physical quantities
(e.g., position or orientation of a tracked entity) with high precision (in units of mm relative to the tracking device's coordinate frame) and at high periodicity, e.g., every microsecond. The hand- tracking device 330 may be configured to output the tracked physical quantities in packets or frames 306 at high frame rates (e.g., 1 frame/millisecond or faster, for example 4 frames or more, 5 frames or more, 10 frames per millisecond or other). Each tracking data frame 306 includes a list, table, object, or other data structure containing the tracked physical quantities for a given frame, which may also be used to calculate relative displacement, speed, or other parameters related to the tracked entities. In some examples, a data structure within a frame 306 may contain a list of the hands and fingers detected at each snapshot in time within a given frame. The physical quantities recorded for each hand may include a position, an orientation, and a displacement (i.e., translation and rotation) from the previous recorded position and orientation of each hand relative to the tracking reference frame. For each hand, a listing of each detected finger as well as the position and pointing direction of each fingertip may be included. Volume manipulation commands may be generated based on the global and relative motion of the user's hand and fingers as determined from the tracking data. In some examples, and depending on a mode in which the touchless interface is operating, the touchless interface may assign a specific command to a specific finger (e.g., movement of the index or another finger may be mapped as movement of a virtual light source used in rendering the volume). In other examples, the global movement of the hand may be mapped to globally re-position (rotate and translate) the volume to be rendered. When global movement is utilized for manipulation, small movements of the fingers may be irrelevant and ignored by the system, so that the user need not maintain the hand in a rigid position but may comfortably rotate the hand while in a relaxed state.
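The per-frame content described above (a list of detected hands, each with palm pose, displacement from the previous frame, and per-finger tip positions and pointing directions) could be represented roughly as follows. The field names are illustrative assumptions rather than the actual output format of any particular hand-tracking device.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class FingerRecord:
    name: str                         # e.g., "index"
    tip_position: Vec3                # mm, tracking-device coordinate frame
    pointing_direction: Vec3          # unit vector


@dataclass
class HandRecord:
    side: str                         # "left" or "right"
    palm_position: Vec3               # mm
    palm_normal: Vec3                 # unit vector normal to the palm
    translation_from_previous: Vec3   # displacement since the previous frame
    rotation_from_previous: Vec3      # e.g., roll/pitch/yaw deltas in radians
    fingers: List[FingerRecord] = field(default_factory=list)


@dataclass
class TrackingDataFrame:
    frame_id: int
    timestamp_us: int
    hands: List[HandRecord] = field(default_factory=list)
```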
[036] The tracking data received by the rendering controller 320 is used to generate manipulation commands 308 as described herein. The manipulation commands instruct the volume renderer 310 to re-position and/or re-orient the 3D dataset within the virtual space, and an updated rendering 304 is generated by the volume renderer 310 for display. Several specific examples of volume manipulation techniques are described below with the appreciation that other commands responsive to different inputs may be programmed into the rendering controller 320 in other examples.
[037] Example 1
[038] In accordance with a first example, a first mode of the touchless interface may be invoked responsive to the detection of the tracked object (e.g., the user's hand) in a first configuration. For example, in cases in which the tracked object is the user's hand, the first mode may be invoked when the user's hand is detected to be presented with at least four fingers extended. In this mode, translation of the hand within the tracked field (i.e., relative to the coordinate frame of the tracking field and calculated from a reference position of the hand) is used to derive corresponding translation of the 3D dataset within the virtual space. In some examples, the touchless interface may track translation in all three degrees of freedom, e.g., by calculating displacements along each of the three axes of the coordinate frame of the tracking field. In some examples, the translation of the tracked object in 3DOF may be mapped to translation of the 3D dataset in only two DOF, for example mapping the x and y displacements to corresponding x and y displacements while disregarding any displacements in the z direction which may result from the user inadvertently moving his hand in and out of (or transversely to) the tracking field. Rotations in 3DOF (i.e., angular displacements about each of the x, y, and z axes) may be mapped to corresponding rotations of the 3D data set. Thus, as will be appreciated, in this example, the user may be able to pan the rendered volume in plane but not out of plane (i.e., the zooming would be disabled), which may ensure that the volume remains at an appropriate magnification during the manipulation and zooming does not distort the size of structures on the screen. In this example, the light source may remain fixed; however in some examples a different detected hand gesture or a conventional physical control may be used for controlling the location, direction, or intensity of the light.
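A minimal sketch of this first-mode mapping, under the assumption of simple per-axis gains, is shown below; the function and parameter names are illustrative and not taken from the disclosure.

```python
import numpy as np


def open_hand_to_volume_motion(hand_translation_mm: np.ndarray,
                               hand_rotation_rad: np.ndarray,
                               pan_gain: float = 1.0,
                               rot_gain: float = 1.0):
    """hand_translation_mm and hand_rotation_rad are length-3 arrays measured relative
    to the reference pose recorded at the start of the session/mode."""
    pan_xy = pan_gain * hand_translation_mm[:2]      # keep x and y, drop z (no zoom in this mode)
    volume_rotation = rot_gain * hand_rotation_rad   # apply all three angular displacements
    return pan_xy, volume_rotation
```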
[039] Example 2
[040] In accordance with a second example, a second mode of the touchless interface may be invoked responsive to the detection of the tracked object in a second configuration. For example, the tracked object may be a user's hand and the second mode may be invoked when the user's hand is detected to be presented with at least one finger but fewer than four fingers extended. In some specific instances, the hand configuration which initiates the second mode may be presenting the hand in a fist with only the index finger extended. In this mode, the touchless interface tracks the position and movement of the finger within the tracked field. Specifically, the position and displacement of the fingertip of the extended finger may be obtained from the tracking data and used to change the position of a virtual light source. In some examples, the light source is moved on the display dynamically with the movement of the finger. In other examples, the ending position of the finger is used to relocate the light source. In yet further examples, multiple light sources may be utilized when rendering the volume and different fingers of the hand may be associated with different ones of the light sources such that the user may move a first finger (e.g., an index finger) to position a first light source, then fold the first finger and extend a second finger (e.g., the little finger) to position a second light source, and so on. Additionally or alternatively, a change in the pointing direction of the extended finger may be detected and used to adjust a direction of the light source. In some examples, the second mode may be initiated responsive to presentation of the hand with two fingers extended, for example the thumb and index finger. In such instances, a change in the distance between the two extended fingers may be used to control the intensity of the light source, such as by increasing the intensity as the fingertips are spread farther apart and decreasing the intensity as the fingertips are brought closer together.
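The second-mode behaviour might be sketched as follows, with one function repositioning the virtual light from the fingertip displacement and another scaling intensity from the spread between two fingertips; the gains and clamping limits are assumptions for illustration.

```python
import numpy as np


def update_light_from_finger(light_position: np.ndarray,
                             fingertip_displacement_mm: np.ndarray,
                             gain: float = 1.0) -> np.ndarray:
    """Move the virtual light by the (scaled) displacement of the extended fingertip."""
    return light_position + gain * fingertip_displacement_mm


def update_light_intensity(base_intensity: float,
                           finger_spread_mm: float,
                           reference_spread_mm: float) -> float:
    """Increase intensity as the two fingertips move apart, decrease as they close."""
    scale = finger_spread_mm / max(reference_spread_mm, 1e-6)
    return float(np.clip(base_intensity * scale, 0.0, 10.0))
```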
[041] With reference to the first and second examples, when only a single hand is presented within the tracking field, different control functions may be associated with each of the left and right hands. For example, when the user's right hand is presented in the open configuration described in example 1 , the movements of the hand may be mapped to corresponding movements of the 3D dataset, while movements of a left hand when individually presented in the tracking field may control movement of the reference point from which the 2D projection is taken (e.g., the location of the virtual observer relative to the rendered volume).
[042] Example 3
[043] In accordance with a third example, a third mode of the touchless interface may be invoked responsive to the detection of one or more objects in a third configuration. For example, the tracked one or more objects may be the user's hands and the third mode may be invoked when both of the user's hands are detected within the tracking field. In this mode, movement of one of the user's hands, for example the right hand, may control movements of the 3D dataset, e.g., as described in the first example. Movements of the other hand, in this case the left hand, may be used to derive manipulation commands for a rendering construct other than the 3D dataset. For example, the rendering construct may be the virtual light source and properties of the light source may be manipulated with the left hand, e.g., as described in the second example. In other instances, the rendering construct may be a slice plane (also referred to as cut plane), which may be positioned relative to the volume. When transecting the volume, a portion of the volume on one side of the cut plane is retained in the rendering while the portion of the volume on the opposite side of the cut plane is cut away or removed from the rendering. This can expose internal structures that may otherwise not be visible when the full volume is displayed. Thus, in this third mode, the touchless interface is configured to independently and simultaneously track movement of either or both of the user's hands and apply the movements or adjustments in real time. In this mode, the user interface enables the user to simultaneously rotate the 3D dataset while adjusting the location and orientation of the cut plane for selecting a slice image (e.g., a B-mode image) through the imaged volume. In one specific example, the movement of the 3D dataset is limited to rotations only without displacement (i.e., any displacements of the right hand in the tracking field are ignored by the touchless interface) and the movement of the cut plane is limited to only movement along a direction perpendicular to the viewing plane (i.e., in and out of the plane as the user views the displayed image). This mode may enable the user to intuitively and dynamically select a slice plane by being able to visualize the structures within the volume at changing depth and orientation relative to the volume as the user scrolls the location of the cut plane on the screen.
[044] In yet further examples, rendering commands may be generated based on the joint position and/or relative position of multiple objects within the tracking field. For example, a starting configuration for this mode may be the presentation of both hands of the user in the open position (e.g., with at least four extended fingers) within the tracking field. Both hands of the user may then be concurrently tracked while both hands remain within the tracking field or until one of the two hands is provided in the closed configuration. The individual position for each tracked object (e.g., left and right individual fingers, left and right palms, etc.) as well as relative position of one tracked object to another (e.g., the distance between the left and right palms) may be determined based on the tracking data. For example, the tracking data may be processed to determine the movements of both hands at once to generate, for instance, a 3D rotation or 3D crop responsive to the relative position of the hands. A 3D rotation of the rendered volume may be touchlessly commanded, for example, responsive to a change in the orientation of a virtual line extending between the two hands (e.g., a virtual line connecting the palm or center of one hand to the palm or center of the other hand) relative to the tracking coordinate frame. Zooming or cropping functions may be implemented based on the relative distance between the hands and/or joint movement of the hands such as to reposition a cropping window.
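One possible way to derive the two-hand rotation and zoom commands described above is sketched below: the rotation is taken from the change in orientation of the virtual line between the palms (expressed here as an axis-angle pair, an editorial choice), and the zoom factor from the change in palm-to-palm distance. Degenerate inputs (coincident palms) are not handled in this sketch.

```python
import numpy as np


def two_hand_commands(left_palm: np.ndarray, right_palm: np.ndarray,
                      ref_left: np.ndarray, ref_right: np.ndarray) -> dict:
    """Compute rotation and zoom commands from the current and reference palm positions."""
    ref_line = ref_right - ref_left
    cur_line = right_palm - left_palm
    ref_dir = ref_line / np.linalg.norm(ref_line)
    cur_dir = cur_line / np.linalg.norm(cur_line)
    # Axis-angle rotation that carries the reference line direction onto the current one.
    axis = np.cross(ref_dir, cur_dir)
    angle = float(np.arctan2(np.linalg.norm(axis), np.dot(ref_dir, cur_dir)))
    # Zoom factor derived from the change in distance between the two palms.
    zoom = float(np.linalg.norm(cur_line) / np.linalg.norm(ref_line))
    return {"rotation_axis": axis, "rotation_angle_rad": angle, "zoom": zoom}
```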
[045] These and other combinations of control sequences may be implemented using the touchless interface described herein. In any of the examples described, movement of the hand or fingers may be mapped as movement of the volume or other rendering construct in a 1:1 relationship (e.g., 10 mm of detected displacement corresponds to 10 mm of displacement of the volume in the virtual space) or the movements may be scaled to tune the sensitivity of the touchless interface or as may be appropriate when displaying on smaller or larger display monitors. It will be further understood that the touchless interface may be configured to seamlessly transition from one mode to another mode without ending a manipulation session upon detection of a change in the configuration of the tracked object. In other words, the touchless interface may initially be operating in the first mode responsive to detecting the object in the first configuration. The touchless interface may monitor for a change in the configuration (e.g., may identify a second hand entering the tracking field or a hand exiting the field, or it may detect the folding of the fingers of the presented hand) which may cause the touchless interface to transition to the relevant mode without interrupting or terminating the manipulation session. When entering a given mode, the touchless interface performs a registration of the hand globally or of the extended finger(s) to obtain a reference position of the hand or finger based upon which movements are then calculated during the session within a given mode. While examples herein have been described with reference to Cartesian coordinate systems, non-Cartesian coordinate systems (e.g., polar coordinate systems) may also be used.
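The seamless mode transition described above could be organized as in the following sketch, in which a change in the detected configuration switches the active mode and re-registers the current pose as the new reference without ending the session. The mode names and thresholds are illustrative assumptions.

```python
def select_mode(num_hands: int, extended_fingers: int) -> str:
    """Map the detected hand configuration to a manipulation mode."""
    if num_hands >= 2:
        return "TWO_HAND"             # e.g., volume motion plus cut plane or light control
    if extended_fingers >= 4:
        return "VOLUME_MOTION"        # single open hand
    if extended_fingers >= 1:
        return "LIGHT_CONTROL"        # partially open hand
    return "PAUSED"                   # closed hand


class ModeTracker:
    def __init__(self):
        self.mode = None
        self.reference_pose = None

    def update(self, num_hands: int, extended_fingers: int, current_pose):
        new_mode = select_mode(num_hands, extended_fingers)
        if new_mode != self.mode:
            self.mode = new_mode
            self.reference_pose = current_pose    # re-register without ending the session
        return self.mode
```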
Figure 4 shows a block diagram of an ultrasound imaging system 410 constructed in accordance with the principles of the present disclosure. The ultrasound imaging system in Figure 4 may include a touchless interface and graphics processing circuitry in accordance with the examples described with reference to Figures 2 and 3. Although an ultrasound imaging system is shown in explanatory examples of embodiments of the invention, embodiments of the invention may be practiced with other medical imaging modalities. Other modalities may include, but are not limited to, magnetic resonance imaging and computed tomography. The ultrasound imaging system 410 in Figure 4 includes an ultrasound probe 412 which includes a transducer array 414 for transmitting ultrasonic waves and receiving echo information. A variety of transducer arrays are well known in the art, e.g., linear arrays, convex arrays or phased arrays. The transducer array 414, for example, can include a two dimensional array (as shown) of transducer elements capable of scanning in both elevation and azimuth dimensions for 2D and/or 3D imaging. The transducer array 414 is coupled to a microbeamformer 416 in the ultrasound probe 412 which controls transmission and reception of signals by the transducer elements in the array. In this example, the microbeamformer 416 is coupled by the probe cable to a transmit/receive (T/R) switch 418, which switches between transmission and reception and protects the main beamformer 422 from high energy transmit signals. In some embodiments, for example in portable ultrasound systems, the T/R switch 418 and other elements of the system can be included in the ultrasound probe rather than in a separate ultrasound system base. The transmission of ultrasonic beams from the transducer array 414 under control of the microbeamformer 416 is directed by the transmit controller 420 coupled to the T/R switch 418 and the beamformer 422, which receive input from the user's operation of the user interface 424. One of the functions controlled by the transmit controller 420 is the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array, or at different angles for a wider field of view. The partially beamformed signals produced by the microbeamformer 416 are coupled to a main beamformer 422 where partially beamformed signals from individual patches of transducer elements are combined into a fully beamformed signal.
[047] The beamformed signals are coupled to a signal processor 426. The signal processor 426 can process the received echo signals in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation. The signal processor 426 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination. The processed signals are coupled to a B-mode processor 428, which can employ amplitude detection for the imaging of structures in the body. The signals produced by the B-mode processor 428 are coupled to a scan converter 430 and a multiplanar reformatter 432. The scan converter 430 arranges the echo signals in the spatial relationship from which they were received in a desired image format. For instance, the scan converter 430 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal three dimensional (3D) image. The multiplanar reformatter 432 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image of that plane, as described in U.S. Pat. No. 6,443,896 (Detmer).
[048] A volume renderer 434 converts the echo signals of a 3D dataset into a projected 3D image as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.). In some embodiments, the volume renderer 434 may receive input from the user interface 424, which may include a conventional control panel 452 and a touchless interface 450 in accordance with any of the examples herein. The input may include the given reference point (e.g., viewpoint of a virtual observer), location of a simulated light source, and/or properties of the simulated light source for the rendered projected image. Any of these inputs may be received via the touchless interface 450 or via the control panel 452. Additionally, inputs for manipulating a displayed image, such as a rendered volume, may be received via the touchless interface 450 in accordance with any of the examples herein. The 2D or 3D images from the scan converter 430, multiplanar reformatter 432, and volume renderer 434 are coupled to an image processor 436 for further enhancement, buffering and temporary storage for display on an image display 438. A graphics processor 440 can generate graphic overlays for display with the ultrasound images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the graphics processor receives input from the user interface 424, such as a typed patient name. The user interface can also be coupled to the multiplanar reformatter 432 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
Figure 5 shows a portion of an ultrasound system 500 according to one embodiment. The ultrasound system 500 may include some or all of the components of the ultrasound imaging system of Figure 4. The ultrasound system 500 includes a display 510 and a console 505 which supports a control panel with a variety of physical controls (e.g., a trackball 525 and buttons 530, rotary encoders 520, a touch screen 515, and others). In some examples, the ultrasound system may be operatively coupled to additional displays (not shown). For example, a main display monitor may be provided elsewhere within the examination or operating room, which may provide a more convenient viewing location for the user during data acquisition. A motion tracking device 550 may be provided on or proximate the console 505. In other examples, the motion tracking device may be located elsewhere, such as on or near a patient examination table. The motion tracking device 550 may be part of a touchless interface implemented in accordance with the examples herein. The motion tracking device 550 may have a field of view. In some examples, the field of view may be generally prism-shaped and may extend radially to about 150 degrees and vertically to about 2 feet above the motion tracking device 550. The field of view may be differently shaped depending on the number and arrangement of sensors in the tracking device and the type of tracking device. The field of view of the tracking device 550 may be configurable by the user to limit tracking within a desired portion of the field of view to filter out noise, prevent unintentional inputs to the touchless interface, and generally reduce computational load. For example, the desired portion may begin at a certain distance, for example 4 inches, 5 inches, 6 inches or more above the console, such that the user can manipulate physical controls on the console while the touchless interface remains active without unintentionally initiating a touchless session. Optionally, the tracking device 550 may have an effective field of view, which may be smaller than the full field of view and may represent the optimal volumetric region within which most accurate tracking data is recorded. In some examples, the desired portion may be configured to coextend with the effective field of view.
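Restricting tracking to a desired portion of the field of view, as described above, could be implemented along the lines of the following sketch; the 150 mm (roughly 6 inch) minimum height and the radial limit are assumed values chosen for illustration, and the hand record is assumed to expose a `palm_position` attribute.

```python
def within_active_region(palm_position_mm, min_height_mm: float = 150.0,
                         effective_radius_mm: float = 300.0) -> bool:
    """Return True if a palm position lies inside the configured active tracking region."""
    x, y, z = palm_position_mm            # y: vertical height above the tracking device
    if y < min_height_mm:
        return False                      # too close to the console controls: ignore
    return (x * x + z * z) ** 0.5 <= effective_radius_mm


def filter_frame(hands):
    """Keep only hands inside the active portion of the field of view."""
    return [h for h in hands if within_active_region(h.palm_position)]
```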
[050] Figures 6A-6D show examples of hand configurations or presentations relative to a hand-tracking device (e.g., hand-tracking device 330). Figure 6A shows an open hand configuration or presentation of the right hand 602 within the field of view 303, and specifically within the effective field of view 305 of a tracking device 330. The touchless interface may be configured to enter a first mode in which the rendering controller generates commands to move (i.e., translate and/or rotate) the rendered volume based on detected movement of the presented right hand. Rotation of the hand, for example rotation of the center of the palm, the heel of the palm, or another reference point of the hand (e.g., a centroid of the hand), relative to the x, y, and z axes of the tracking coordinate frame may be calculated and an equal or scaled rotation may be applied to the volume relative to the coordinate frame of the virtual space.
[051] Figure 6B shows a closed hand presentation of the right hand 602 within the field of view
303, and specifically within the effective field of view 305 of a tracking device 330. The closed configuration or presentation may correspond with the hand being presented within the field of view with four or more of the fingers folded toward the palm of the hand. The touchless interface may be configured to pause a manipulation session upon detecting a closed presentation, as described further below. The relative position of the fingertips and the pointing direction of each finger may be used to determine whether a hand is being presented in an open or closed configuration. When a manipulation session is paused, the presented hand may be moved to another location within the field of view before being presented again in the open configuration to re-start a session. The presented hand is again registered to the tracking coordinate frame once placed at the new location and presented in the open configuration. Movement of the hand while the manipulation session is paused is not applied as movement to the volume. In other words, tracking data from frames recorded while the hand is in the closed configuration is ignored. Once the hand has been moved to a new desired position and an open configuration is again detected, positional information associated with the new starting position is recorded, from which relative movement is subsequently calculated.
[052] Other hand presentations may include presenting the left hand in an open configuration.
The touchless interface may be configured to enter another mode, which may be different from the mode associated with presentation of the right hand. For example, presenting the left hand may invoke a slice plane mode, in which movements of the hand are translated to equal or scaled movement of a cut plane. Rotation of the left hand may cause rotation of the cut plane relative to a fixed volume, while translation of the hand may cause translation of the cut plane relative to the fixed volume. The rendering is updated in real time responsive to movements of the left hand, thus enabling the user to dynamically select a slice within the rendered volume.
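One possible sketch of such a slice plane mode follows, storing the cut plane as a point and a unit normal; the palm_position and palm_basis attributes are the same assumed tracker outputs as in the earlier sketches.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def update_cut_plane(plane_point, plane_normal, ref_hand, cur_hand, scale=1.0):
    """Move and tilt the cut plane with the left hand's motion since the reference frame."""
    translation = (np.asarray(cur_hand.palm_position) -
                   np.asarray(ref_hand.palm_position)) * scale
    delta = R.from_matrix(np.asarray(cur_hand.palm_basis) @
                          np.asarray(ref_hand.palm_basis).T)
    new_point = np.asarray(plane_point) + translation    # plane follows hand translation
    new_normal = delta.apply(np.asarray(plane_normal))   # plane tilts with hand rotation
    return new_point, new_normal / np.linalg.norm(new_normal)
```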
[053] In another example, a partially open hand presentation may be detected when the user presents a hand with fewer than four fingers extended. For example, the hand may be presented with only one finger extended, for example the index finger, e.g., as shown in Figure 6C. Responsive to a partially open hand presentation, the touchless interface may be configured to enter a third mode, which may be different from the volume translation/rotation mode and the slice plane mode. In this third mode, the rendering controller may be configured to generate commands for modifying the virtual light source. For example, movement of the extended finger within the field of view may be translated to movement of the light source (e.g., in the x-y plane or in all three dimensions). Relative movement of two extended fingers may be detected, which may result in a command to increase or decrease brightness, for example responsive to detecting an increase or decrease of the relative distance between the two fingers, or to rotate a direction of the light source, for example responsive to detecting rotation of a virtual line connecting the two fingertips. In other examples, the hand presentations operable to invoke a particular mode may be reversed, for example the left hand invoking the volume translation/rotation mode and the right hand invoking the slice plane mode, such as to suit users with different hand-dominance or preference.
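The finger-driven light adjustments described above might be sketched as follows; fingertip positions are assumed to be available as 3D vectors, and the gain and clipping range are illustrative choices rather than disclosed values.

```python
import numpy as np


def move_light(light_position, ref_tip, cur_tip, scale=1.0):
    """One extended finger: translate the virtual light with the fingertip."""
    return np.asarray(light_position) + (np.asarray(cur_tip) - np.asarray(ref_tip)) * scale


def adjust_brightness(brightness, ref_tips, cur_tips, gain=0.005):
    """Two extended fingers: spreading them apart brightens, pinching them together dims."""
    d_ref = np.linalg.norm(np.asarray(ref_tips[0]) - np.asarray(ref_tips[1]))
    d_cur = np.linalg.norm(np.asarray(cur_tips[0]) - np.asarray(cur_tips[1]))
    return float(np.clip(brightness + gain * (d_cur - d_ref), 0.0, 1.0))
```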
[054] Figure 6D shows another open hand presentation, in this example a presentation of both hands in the open configuration within the field of view 303 of a tracking device 330. In some embodiments, this open hand presentation may be used to invoke a specific mode of the touchless interface, in which different manipulations may be performed with each of the different hands during a given manipulation session. For example, relative motion of the right hand may be used to determine relative rotation and translation of the volume within the virtual space. Relative motion of the left hand may be used to dynamically select a position and orientation of a slice plane within the volume. A B-mode image corresponding to the slice plane may be displayed concurrently with the volume rendering. In some examples, the B-mode image may be displayed in a separate window of the display or it may be overlaid on the volume rendering.
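For this two-handed mode, per-hand routing could be sketched as below, reusing the helper functions from the earlier sketches; the hand_type label and the session fields are assumptions made for illustration only.

```python
def route_two_hands(frame, session):
    """Right hand drives the volume pose; left hand drives the slice plane."""
    for hand in frame.hands:
        if classify(hand) != "open":
            continue  # only open hands manipulate in this mode
        if hand.hand_type == "right":
            session.volume_rotation = apply_hand_rotation(
                session.ref_right_basis, hand.palm_basis, session.volume_rotation)
        else:
            session.slice_point, session.slice_normal = update_cut_plane(
                session.slice_point, session.slice_normal,
                session.ref_left_hand, hand)
```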
[055] Figures 7 and 8 show operations of a touchless interface in accordance with the present disclosure. These operations may be implemented as executable instructions which program a rendering controller to perform the processes illustrated in Figures 7 and 8. The process 700 starts with activating the touchless interface, as shown at block 705. The touchless interface may be activated by powering up the hand-tracking device, by selecting touchless mode via the control panel, or by exposing the tracking sensor of the hand-tracking device, such as by moving or removing a physical cover which is provided over the hand-tracking device to occlude the field of view of the tracking sensor when not in use.
[056] Upon activation of the touchless interface, the rendering controller begins to receive tracking data from the hand-tracking device, as shown in block 710. The rendering controller analyzes the tracking data in each frame to determine whether a starting configuration has been detected, as shown in blocks 715 and 720. In one example, the starting configuration may be the presentation of at least one hand open (e.g., with at least four fingers extended) within the field of view. In another example, the starting configuration may be the presentation of both hands fully open (e.g., with at least four fingers of each hand extended) within the field of view. In some examples, unintentional commands may be filtered out by only generating an initiation command if the starting configuration is detected and maintained for a predetermined period of time (e.g., the user maintains the hands open and generally still for at least 1 second). In some examples, the rendering controller enters a different mode depending on the starting configuration detected.
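The dwell-time filter mentioned above (hold the starting configuration, roughly still, for about a second) might look like the following sketch; the thresholds and the classify() helper from the earlier sketch are illustrative assumptions.

```python
import time

import numpy as np

HOLD_SECONDS = 1.0   # how long the starting configuration must be maintained
MAX_DRIFT_MM = 20.0  # how still "generally still" is taken to be


class StartDetector:
    def __init__(self):
        self.hold_start = None
        self.hold_position = None

    def update(self, hand) -> bool:
        """Feed one tracking frame; returns True when a manipulation session should start."""
        if classify(hand) != "open":
            self.hold_start = None
            return False
        position = np.asarray(hand.palm_position)
        if self.hold_start is None or np.linalg.norm(position - self.hold_position) > MAX_DRIFT_MM:
            self.hold_start, self.hold_position = time.monotonic(), position
            return False
        return time.monotonic() - self.hold_start >= HOLD_SECONDS
```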
[057] Upon detection of the starting configuration, the rendering controller may initiate a touchless manipulation session, as shown in block 730. Also, the tracking data associated with the starting configuration is stored, as shown in block 725, to enable calculations of relative movement for the given session. That is, the positional information of the presented hand(s) in the starting configuration is recorded for that session and stored in memory. Relative movement of the hands and/or fingers may then be calculated as the difference between the detected position of the user's hand(s) and/or fingers in a subsequent frame and the position of the user's hand(s) and/or fingers in the starting configuration. Because relative calculations are related back only to the starting configuration for a given session, additional (e.g., global or absolute) calibration of the touchless interface may not be required, making the touchless interface easy to install and operate.
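Because motion is measured only relative to the stored starting configuration, the core calculation reduces to a difference of positions, as in this short sketch (attribute names assumed, as before):

```python
import numpy as np


def relative_displacement(start_hand, current_hand, scale=1.0):
    """Palm displacement since the starting configuration, in tracker units.

    Only the difference between the current and reference frames is used, so no global
    calibration of the tracker to the room or the display is required.
    """
    return (np.asarray(current_hand.palm_position) -
            np.asarray(start_hand.palm_position)) * scale
```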
[058] Once a touchless manipulation session has been initiated, hand motion detected via the touchless interface is translated into manipulation commands, as shown in block 735, and applied in real time to update the volume rendering until the session ends, as shown in block 740.
[059] Figure 8 shows a flow diagram of a process associated with a touchless manipulation session. Upon initiation of a manipulation session, as shown in block 805, the rendering controller analyzes the tracking data in each frame to determine changes in the hand presentation. For example, if an open hand presentation is detected, as shown in block 820, the rendering controller may extract global hand movements from the tracking data frames to generate commands for moving the 3D dataset in accordance with the global hand movements. As described, an open hand presentation may be a presentation of the hand with four or more fingers extended (i.e., away from the palm of the hand). As shown in block 840, a partially open presentation may be detected when fewer than four fingers are detected as extended. If a partially open presentation is detected, the rendering controller may determine displacement of the one or more extended fingers, as shown in block 845, and generate commands for modifying a rendering construct, for example the virtual light source (e.g., move the light relative to the volume, change direction of the light, increase or decrease intensity of the light). As shown in block 860, a closed presentation may be detected responsive to all fingers being moved towards the palm. The rendering controller may be configured to pause the current manipulation session until an open or partially open presentation is once again detected. As previously described, the user may freely move the hand within the tracked field without causing changes to the volume rendering. The session may terminate when a hand is no longer detected within the tracked field. Following the end of the manipulation session, a new session may be initiated in accordance with the process in Figure 7, by initially presenting the hand in a starting configuration in order to register the hand to the tracked field.
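One way to sketch the per-frame dispatch of Figure 8 is shown below, reusing the classify() helper from the earlier sketch; the session object and its methods are hypothetical placeholders for the rendering-controller state and are not part of the disclosure.

```python
def handle_frame(frame, session):
    """Dispatch one tracking frame according to the detected hand presentation."""
    if not frame.hands:
        return "session_ended"                # hand left the tracked field
    hand = frame.hands[0]
    presentation = classify(hand)
    if presentation == "closed":
        session.paused = True                 # pause: subsequent motion is ignored
    elif presentation == "open":
        if session.paused:
            session.re_register(hand)         # new reference at the new hand location
            session.paused = False
        session.apply_volume_motion(hand)     # move/rotate the 3D dataset
    else:                                     # partial: one or two extended fingers
        session.apply_light_adjustment(hand)  # modify the virtual light source
    return "session_active"
```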
[060] In various embodiments where components, systems and/or methods are implemented using a programmable device, such as a computer-based system or programmable logic, it should be appreciated that the above-described systems and methods can be implemented using any of various known or later developed programming languages, such as "C", "C++", "C#", "FORTRAN", "Pascal", "VHDL" and the like. Accordingly, various storage media, such as magnetic computer disks, optical disks, electronic memories and the like, can be prepared that can contain information that can direct a device, such as a computer, to implement the above-described systems and/or methods. Once an appropriate device has access to the information and programs contained on the storage media, the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein. For example, if a computer disk containing appropriate materials, such as a source file, an object file, an executable file or the like, were provided to a computer, the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
[061] In view of this disclosure it is noted that the various methods and devices described herein can be implemented in hardware, software and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention. The functionality of one or more of the processors described herein may be incorporated into a fewer number or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
[062] Although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system. Certain additional advantages and features of this disclosure may be apparent to those skilled in the art upon studying the disclosure, or may be experienced by persons employing the novel system and method of the present disclosure. Another advantage of the present systems and method may be that conventional medical image systems can be easily upgraded to incorporate the features and advantages of the present systems, devices, and methods.
[063] Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods.
[064] Finally, the above-discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.

Claims

What is claimed is:
1. A method comprising:
receiving a 3D dataset corresponding to an imaged volume in a subject;
generating a volume rendering of the 3D dataset;
displaying the volume rendering on a display operatively associated with a touchless interface;
detecting a position and motion of an object within a tracking field of the touchless interface;
initiating a touchless manipulation session when the object is detected to be in a first configuration within the tracking field;
generating rendering commands to move the 3D dataset or to adjust a rendering construct based on the motion of the object;
continuously updating the volume rendering responsive to the rendering commands until the object is detected to be in a second configuration within the tracking field or the object is no longer detected within the tracking field.
2. The method of claim 1, wherein the object is a hand, wherein the first configuration corresponds to the hand being presented within the tracking field with four or more fingers extended away from a palm of the hand, and wherein the second configuration corresponds to the hand being presented within the tracking field with four or more fingers folded toward the palm of the hand.
3. The method of claim 2, further comprising calculating a global displacement of the hand along all axes of a coordinate frame of the tracking field, and wherein the rendering commands include commands to move the 3D dataset based only on displacement of the hand along two of the axes of the coordinate frame.
4. The method of claim 2, further comprising detecting a third configuration of the hand corresponding to the hand being presented within the tracking field with one finger extended away from a palm of the hand and at least three fingers folded toward the palm of the hand, and calculating a displacement of the extended finger relative to a coordinate frame of the tracking field while the hand remains in the third configuration.
5. The method of claim 4, wherein the rendering commands include commands to move a location of a virtual light based only on the displacement of the finger.
6. The method of claim 5, wherein the commands to move the location of the virtual light are generated only if a single extended finger is detected within the tracking field.
7. The method of claim 5, wherein the commands to move the location of the virtual light include commands to move the light to a location within the 3D dataset.
8. The method of claim 2, further comprising detecting a fourth configuration corresponding to both hands being presented within the tracking field, and generating first rendering commands for moving the 3D dataset based on movement of the first hand and generating second rendering commands for adjusting a rendering construct based on movement of the second hand.
9. The method of claim 8, wherein the rendering construct is a cut plane and wherein a location and an orientation of the cut plane relative to the 3D dataset is dynamically adjusted responsive to detected translation and rotation of the second hand.
10. The method of claim 9, wherein the 3D dataset is rendered by projecting a 2D image of the 3D dataset onto a viewing plane, wherein the 3D dataset is constrained from translation in 3 degrees of freedom (DOF), and wherein the cut plane is limited to translation only along a direction perpendicular to the viewing plane.
11. The method of claim 1, wherein the object is a hand, the method further comprising:
recording the position of the hand when detected to be in the first configuration as an initial reference position;
pausing the generating of rendering commands when the hand is detected to be in the second configuration;
resuming the generation of rendering commands when the hand is subsequently detected to be in the first configuration, wherein the position of the hand when subsequently detected to be in the first configuration is recorded as a new reference position, and wherein rendering commands generated after the resuming are based on the motion of the hand relative to the new reference position.
12. The method of claim 1, wherein the 3D dataset is received while ultrasonically imaging the volume.
13. A non-transitory computer-readable medium comprising executable instructions, which when executed cause a processor of a medical imaging system to perform any of the methods of claims 1-12.
14. A medical image viewing and manipulation system comprising:
a volume renderer configured to receive a three dimensional (3D) dataset corresponding to an imaged volume and generate a volume rendering of the imaged volume;
a touchless interface configured to generate commands responsive to touchless user input, wherein the touchless interface comprises:
a hand-tracking device having a field of view, wherein the hand-tracking device is configured to generate tracking data responsive to movement of a user's hand or a portion thereof within the field of view; and
a rendering controller communicatively coupled to the hand-tracking device and the volume renderer, wherein the rendering controller is configured to generate commands for manipulating the 3D dataset based on the tracking data; and
a display configured to display the volume rendering and update the display in real time based on the manipulation commands.
15. The system of claim 14, wherein the volume renderer is part of an ultrasound imaging system which includes an ultrasound probe and a signal processor, wherein the signal processor is configured to receive ultrasound echoes from the ultrasound probe to generate the 3D dataset.
16. The imaging system of claim 15, wherein the hand-tracking device is incorporated into a console of the ultrasound imaging system.
17. The imaging system of claim 14, wherein the hand-tracking device comprises an optical tracking device configured to track a global position of the hand and positions of individual fingers of the hand.
18. The imaging system of claim 17, wherein the rendering controller is configured to generate a first set of commands operable to control movement of the 3D dataset responsive to detection of the hand in a first configuration and generate a second set of commands operable to control a rendering construct different than the 3D dataset responsive to detection of the hand in a second configuration.
19. The imaging system of claim 17, wherein the touchless interface is configured to ignore movements of the hand following a detection of the hand in a closed configuration until the hand is arranged in another configuration different than the closed configuration.
20. The imaging system of claim 15, wherein the touchless interface is configured to independently track movement of both hands of a user and wherein the rendering controller is configured to generate first commands for controlling movement of the 3D dataset based on movement of one hand of the user and generate second commands for controlling a rendering construct in relation to the 3D dataset based on movement of the other hand of the user.
21. The imaging system of claim 15, wherein the touchless interface is configured to track the movement of both hands of a user and wherein the rendering controller is configured to generate commands for manipulating the 3D dataset based on a joint movement of both hands relative to the tracking coordinate frame or based on the movement of one of the hands relative to the other hand.
PCT/EP2017/067193 2016-07-13 2017-07-10 Systems and methods for three dimensional touchless manipulation of medical images WO2018011105A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662361735P 2016-07-13 2016-07-13
US62/361,735 2016-07-13
US201662418313P 2016-11-07 2016-11-07
US62/418,313 2016-11-07

Publications (1)

Publication Number Publication Date
WO2018011105A1 true WO2018011105A1 (en) 2018-01-18

Family

ID=59656016

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/067193 WO2018011105A1 (en) 2016-07-13 2017-07-10 Systems and methods for three dimensional touchless manipulation of medical images

Country Status (1)

Country Link
WO (1) WO2018011105A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6530885B1 (en) 2000-03-17 2003-03-11 Atl Ultrasound, Inc. Spatially compounded three dimensional ultrasonic images
US6443896B1 (en) 2000-08-17 2002-09-03 Koninklijke Philips Electronics N.V. Method for creating multiplanar ultrasonic images of a three dimensional object
US20050289472A1 (en) * 2004-06-29 2005-12-29 Ge Medical Systems Information Technologies, Inc. 3D display system and method
US20110050562A1 (en) * 2009-08-27 2011-03-03 Schlumberger Technology Corporation Visualization controls
US20130033571A1 (en) * 2011-08-03 2013-02-07 General Electric Company Method and system for cropping a 3-dimensional medical dataset
EP2782070A1 (en) * 2013-03-19 2014-09-24 Esaote S.p.A. Imaging method and device for the cardiovascular system
US20160147308A1 (en) * 2013-07-10 2016-05-26 Real View Imaging Ltd. Three dimensional user interface
GB2533777A (en) * 2014-12-24 2016-07-06 Univ Of Hertfordshire Higher Education Corp Coherent touchless interaction with stereoscopic 3D images

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020171907A1 (en) * 2019-02-23 2020-08-27 Microsoft Technology Licensing, Llc Locating slicing planes or slicing volumes via hand locations
US11507019B2 (en) 2019-02-23 2022-11-22 Microsoft Technology Licensing, Llc Displaying holograms via hand location
CN110047126A (en) * 2019-04-25 2019-07-23 北京字节跳动网络技术有限公司 Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN110047126B (en) * 2019-04-25 2023-11-24 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and computer-readable storage medium for rendering image

Similar Documents

Publication Publication Date Title
US11723734B2 (en) User-interface control using master controller
US11662830B2 (en) Method and system for interacting with medical information
JP7197368B2 (en) Systems and methods for generating B-mode images from 3D ultrasound data
US7773074B2 (en) Medical diagnostic imaging three dimensional navigation device and methods
US10896538B2 (en) Systems and methods for simulated light source positioning in rendered images
WO2008076079A1 (en) Methods and apparatuses for cursor control in image guided surgery
JP6887449B2 (en) Systems and methods for illuminating rendered images
EP2733947A2 (en) Medical image generating apparatus and medical image generating method
US7567701B2 (en) Input system for orientation in a three-dimensional visualization and method for visualization of three-dimensional data sets
US20140055448A1 (en) 3D Image Navigation Method
JP5784388B2 (en) Medical manipulator system
Krapichler et al. VR interaction techniques for medical imaging applications
US6616618B2 (en) Method of and device for visualizing the orientation of therapeutic sound waves onto an area to be treated or processed
WO2018011105A1 (en) Systems and methods for three dimensional touchless manipulation of medical images
CN111904462B (en) Method and system for presenting functional data
AU2021256457A1 (en) System and method for augmented reality data interaction for ultrasound imaging
KR20130089645A (en) A method, an apparatus and an arrangement for visualizing information
KR101611484B1 (en) Method of providing medical image
US8576980B2 (en) Apparatus and method for acquiring sectional images
CN109313818B (en) System and method for illumination in rendered images
Krapichler et al. Human-machine interface for a VR-based medical imaging environment
Shukla et al. A Movable Tomographic Display for 3D Medical Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17754264

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17754264

Country of ref document: EP

Kind code of ref document: A1