WO2013136239A1 - Interactive correspondence refinement in stereotactic x-ray imaging - Google Patents

Interactive correspondence refinement in stereotactic x-ray imaging

Info

Publication number
WO2013136239A1
Authority
WO
WIPO (PCT)
Prior art keywords
location
user
image
images
different images
Prior art date
Application number
PCT/IB2013/051857
Other languages
French (fr)
Inventor
André GOOSSEN
Thomas Buelow
Thomas Pralow
Original Assignee
Koninklijke Philips N.V.
Philips Intellectual Property & Standards Gmbh
Priority date: 2012-03-12
Filing date: 2013-03-08
Publication date: 2013-09-19
Application filed by Koninklijke Philips N.V. and Philips Intellectual Property & Standards GmbH
Publication of WO2013136239A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/022 Stereoscopic imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/46 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment with special arrangements for interfacing with the operator or the patient
    • A61B6/461 Displaying means of special interest
    • A61B6/463 Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/44 Constructional features of apparatus for radiation diagnosis
    • A61B6/4429 Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units
    • A61B6/4435 Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being coupled by a rigid structure
    • A61B6/4441 Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being coupled by a rigid structure the rigid structure being a C-arm or U-arm
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/46 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment with special arrangements for interfacing with the operator or the patient
    • A61B6/467 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment with special arrangements for interfacing with the operator or the patient characterised by special input means
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/50 Clinical applications
    • A61B6/502 Clinical applications involving diagnosis of breast, i.e. mammography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20101 Interactive definition of point of interest, landmark or seed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention relates to providing positioning data of an object in X-ray imaging. In order to provide an improved way of providing positioning data of an object in terms of accuracy and attention required by the user, it is proposed to provide (112) image data of at least two images of an object from different directions with a known geometric relation and to present (114) the at least two different images. Next, a first 2D position (122) is determined (124) in a first one of the at least two different images by interaction of a user; and the first position is indicated (126) in the first image. Then, a corresponding range of location (132) is computed (134) for the first position in at least a second one of the at least two different images, and the range of location is indicated (136) in the at least second image. Further, a second 2D position (142) is determined (144) in the corresponding range of location for the first position in at least a second one of the at least two different images by interaction of the user, and the second position is indicated (146) in the second image.

Description

INTERACTIVE CORRESPONDENCE REFINEMENT IN STEREOTACTIC X-RAY IMAGING
FIELD OF THE INVENTION
The present invention relates to a graphical user-interface for providing positioning data of an object, a system for providing positioning data of an object, an X-ray imaging arrangement, a method for providing navigation data of an object, a computer program element and a computer readable medium.
BACKGROUND OF THE INVENTION
Stereotactic examinations are used, for example, in screening and diagnostic mammography. Certain findings, e.g. regions with a suspicious appearance in an X-ray image, are located with stereographic images. By identifying the finding in each of two stereo images, it is possible to derive the spatial location of the finding. This positioning information can then be used, for example, to perform a biopsy and collect a specimen from the suspicious location.
US 6,022,325 relates to a mammographic biopsy apparatus and describes the use of stereoscopic images for the determination of points of interest for the placement of a biopsy needle. However, it has been shown that the manual selection of the respective positions in each of the provided images may lead to inaccuracy and is also cumbersome.
SUMMARY OF THE INVENTION
There may be a need for an improved way of providing positioning data of an object in terms of accuracy and attention required by the user.
The object of the present invention is solved by the subject-matter of the independent claims, wherein further embodiments are incorporated in the dependent claims.
It should be noted that the following described aspects of the invention apply also for the graphical user-interface for providing positioning data of an object, the system for providing positioning data of an object, the X-ray imaging arrangement, the method for providing navigation data of an object, as well as for the computer program element and the computer readable medium.
According to a first aspect of the present invention, a graphical user-interface for providing positioning data of an object is provided, comprising a display unit, an interface controller unit, and a user-interface unit. The display unit comprises at least two display portions configured to present at least two different images of an object from different directions with a known geometric relation. The user-interface unit is configured for a determination of a first 2D position in a first one of the at least two different images by interaction of a user. The interface controller unit is configured to compute a corresponding range of location for the first position in at least a second one of the at least two different images. The user-interface unit is configured for a determination of a second 2D position in the corresponding range of location for the first position in at least a second one of the at least two different images by interaction of the user. The display unit is further configured to indicate the first position in the first image, and to indicate the range of location in the at least second image. The display unit is further configured to indicate the second position in the second image.
According to a second aspect of the present invention, a system for providing positioning data of an object is provided, comprising a data input unit, a processing unit, an interface unit, and a display unit. The data input unit is configured to provide image data of at least two images of an object from different directions with a known geometric relation. The processing unit is configured to compute a corresponding range of location for the first position in at least a second one of the at least two different images. The display unit is configured to present the at least two different images, to indicate the first position in the first image, and to indicate the range of location in the at least second image, as well as to indicate the second position in the second image. The interface unit is configured for an interaction of a user to determine a first 2D position in a first one of the at least two different images, and to determine a second 2D position in the corresponding range of location for the first position in at least a second one of the at least two different images.
According to a further example, a combination of the graphical user-interface and the system is provided.
According to an exemplary embodiment, the interface unit is configured for an interaction of a user to determine a primary 2D position in each of the at least two images. The processing unit is further configured to compute a corresponding range of location for each position for all other images. The processing unit is further configured to determine and provide a proposal for a refined corresponding secondary 2D position in the other images considering the determined primary 2D position in the respective image to the user.
According to a third aspect of the present invention, an X-ray imaging arrangement is provided, comprising an X-ray source, an X-ray detector, and a processing unit. Further, a graphical user-interface is provided according to one of the above-mentioned examples. Further, a system for providing positioning data of an object according to the examples mentioned above may also be provided in addition or as an alternative to the graphical user-interface. The X-ray source and the X-ray detector are configured to acquire the X-ray images from at least two different directions with a known geometric relation.
According to a further exemplary embodiment, the X-ray imaging arrangement further comprises a biopsy device for performing a collection of a specimen from a suspicious location. Upon determination of a spatial location, based on the user interaction in the first image resulting in the first 2D position and in the second image, resulting in the second position, the spatial location is provided to the biopsy device. The biopsy device is configured to collect the specimen from the determined spatial location.
According to a fourth aspect of the present invention, a method for providing navigation data of an object is provided, comprising the following steps:
a) providing image data of at least two images of an object from different directions with a known geometric relation, and presenting the at least two different images;
b) determining a first 2D position in a first one of the at least two different images by interaction of a user, and indicating the first position in the first image;
c) computing a corresponding range of location for the first position in at least a second one of the at least two different images, and indicating the range of location in the at least second image; and
d) determining a second 2D position in the corresponding range of location for the first position in at least a second one of the at least two different images by interaction of the user, and indicating the second position in the second image.
The navigation data relates to positioning data.
According to an exemplary embodiment, a step e) is provided for computing a corresponding first location for the second position in the first one of the at least two different images, and computing a proposal for a correction of the determined first 2D position based on the computed corresponding first location, and indicating the proposal for the corrected first 2D position in the first image. Further, a step f) is provided for refining at least one of the corresponding first and/or second 2D positions in at least one of the at least two images by an interaction of the user.
According to a further exemplary embodiment, in step b) the determined 2D position is a first part of a target structure, and a step g) is provided for determining a further 2D position of a second part of the target structure in the first one of the at least two different images by interaction of a user, and indicating the further position in the first image. Next, a step h) is provided, repeating step e) as mentioned above. Further, a step i) is provided for computing the spatial extension of the target structure, and positioning a computed device maximizing a cover of the target structure extension by the computed device.
According to a further exemplary embodiment, the user corrects at least one of the indicated positions in at least one of the images. The corrected position is used as a secondary input upon which the at least one corresponding position is re-computed. Updated positions are then presented.
According to a further exemplary embodiment, the user determines a primary 2D position in each of the at least two images. A corresponding range of location for each position is then computed for all other images. The proposal for a refined corresponding secondary 2D position in the other images considering the determined primary 2D position in the respective image is provided to the user.
According to a further aspect of the present invention, a combination of generating correspondences in an automatic way and a confirmation required by the operator or user is provided. For example, correspondences are generated automatically in three views, with pixel to sub-pixel accuracy, requiring the operator only to confirm the positions. In order to achieve this, a registration technique is provided, in which cursor positions are registered without deforming the images. Because of the calibrated epipolar geometry, the possible positions for each correspondence can be restricted to a straight line, and even to a certain fraction along the line, due to the limited (and known) depth of the region of interest, for example a female breast, and of course of other anatomical structures as well. Thus, the accuracy of the correspondences is improved and the workflow to be performed is simplified.
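By way of illustration only, the following sketch shows one possible way to obtain such a restricted range of location; it assumes calibrated 3x4 projection matrices for the two views and depth bounds supplied by the user (for example the compressed breast thickness), and the function name and parameters are illustrative assumptions rather than part of the disclosure. The position picked in the first view is back-projected to its viewing ray, and only the part of the ray within the known depth interval is projected into the second view.

```python
import numpy as np

def epipolar_segment(P1, P2, x1, d_min, d_max, n_samples=2):
    """Sketch: candidate range of location in view 2 for a position x1 = (u, v)
    picked in view 1, restricted by the known depth of the object.
    P1, P2 are the 3x4 projection matrices of the calibrated views; d_min and
    d_max bound the distance of the object along the viewing ray (assumed values)."""
    M, p4 = P1[:, :3], P1[:, 3]
    centre = -np.linalg.solve(M, p4)                          # camera centre of view 1
    ray = np.linalg.solve(M, np.array([x1[0], x1[1], 1.0]))   # back-projected ray direction
    ray = ray / np.linalg.norm(ray)

    points = []
    for d in np.linspace(d_min, d_max, n_samples):
        X = np.append(centre + d * ray, 1.0)                  # 3D point on the ray at depth d
        x2 = P2 @ X                                           # project into view 2
        points.append(x2[:2] / x2[2])
    return np.array(points)                                   # endpoints (or samples) of the segment
```

Drawing the returned segment in the second image yields the indicated range of location within which the user only has to pick the corresponding position.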
These and other aspects of the invention will become apparent from and be elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments of the invention will be described in the following with reference to the following drawings:
Fig. 1 schematically shows an exemplary embodiment of a graphical user-interface for providing positioning data of an object, according to an example of the present invention.
Fig. 2 schematically shows a system for providing positioning data of an object according to an example of the present invention.
Fig. 3 shows an X-ray imaging arrangement according to an exemplary embodiment of the present invention.
Fig. 4 shows basic steps of a method for providing navigation data of an object according to an example of the present invention.
Figs. 5, 6 and 7 show further examples of methods according to the present invention.
Figs. 8A and 8B show an example of images provided to the user.
Figs. 9A, 9B, and 9C show portions of images provided to the user in combination with the performance of certain steps.
Figs. 10A and 10B show further images displayed to the user according to the present invention.
Figs. 11A and 11B, Figs. 12A to 12C, and Figs. 13A and 13B show photographic illustrations of the drawings of Figs. 8A and 8B, Figs. 9A to 9C, and Figs. 10A and 10B.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 shows a graphical user-interface 10 for providing positioning data of an object. The graphical user-interface 10 comprises a display unit 12, an interface controller unit 14, and a user-interface unit 16. The display unit 12 comprises at least two display portions 18 configured to present at least two different images of an object from different directions with a known geometric relation. The user-interface unit 16 is configured for a determination of a first 2D position in a first one of the at least two different images by interaction of a user. The interface controller unit 14 is configured to compute a corresponding range of location for the first position in at least a second one of the at least two different images. The user-interface unit is further configured for a determination of a second 2D position in the corresponding range of location for the first position in at least a second one of the at least two different images by interaction of a user. The display unit 12 is further configured to indicate the first position in the first image, and to indicate the range of location in the at least second image, and to indicate the second position in the second image.
In Fig. 1, an arrow 20 indicates a control connection of the interface controller unit 14 with the display unit 12. Further, a connecting line 22 indicates a data connection between the user-interface unit 16 and the interface controller unit 14. Of course, the indicated data connections comprise wire-based connections as well as wireless connections. Further, the display unit 12, the interface controller unit 14, and/or the user-interface unit 16 may be provided in an integral manner, for example in a single housing or apparatus structure.
The user-interface unit 16 may be provided as a separate input element, for example a keyboard, a mouse or a joystick, as well as a trackball or other interaction devices. The user-interface unit 16 may also be provided as a touch-sensitive function of the display unit, i.e. as a touch screen surface of the display unit 12.
The interface controller unit 14 may be configured to determine a spatial location upon the user action in the first and second image, based on the first 2D position and the at least second 2D position. The spatial location may then be provided for further steps.
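A common way to derive such a spatial location from the two picked 2D positions is linear triangulation; the sketch below is an illustrative assumption (a direct linear transform with 3x4 projection matrices known from the calibrated acquisition geometry), not necessarily the computation used by the described unit.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Sketch: estimate the 3D location from the first 2D position x1 = (u1, v1)
    in the first image and the second 2D position x2 = (u2, v2) in the second
    image, given the 3x4 projection matrices P1 and P2 of the two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]              # de-homogenise to (x, y, z)
```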
The interface controller unit 14 may be configured to compute the corresponding first location for the second position in the first one of the at least two different images, and to compute a proposal for a correction of the determined first 2D position based on the computed corresponding first location. Further, the display unit may be configured to indicate a proposal for the corrected first 2D position in the at least first image. The user-interface unit 16 may also be configured for a refinement of at least one of the corresponding first and/or second 2D positions in at least one of the at least two images by interaction of the user.
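One straightforward way to obtain such a correction proposal, shown here only as a sketch that reuses the hypothetical triangulate() helper above, is to re-project the spatial location derived from both picks back into the first view and indicate the re-projected point as the proposed corrected first 2D position.

```python
import numpy as np

def correction_proposal(P1, P2, x1, x2):
    """Sketch: re-project the location triangulated from (x1, x2) into the first
    view; the result can be indicated as a proposal for correcting x1."""
    X = np.append(triangulate(P1, P2, x1, x2), 1.0)   # hypothetical helper sketched above
    x1_proposed = P1 @ X
    return x1_proposed[:2] / x1_proposed[2]
```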
For example, the determined 2D position is a first part of a target structure. The user-interface unit 16 may be configured for determination of a further 2D position of a second part of the target structure in one of the at least two different images by interaction of the user. The display unit 12 may be configured to indicate a further position in the first image, and the interface controller unit 14 may be configured to compute the spatial extension of the target structure, and to position a computed device maximizing a cover of the target structure extension by the computed device.
For example, a first image is shown in the left display portion and a second image is shown in the right display portion. Both images relate to the same object, but are acquired from different directions with a known geometric relation. The user then determines a position in, for example, the left image, and the corresponding range of location is computed and presented in the right image. Of course, this could also be provided vice versa, i.e. determining a first position in the right image and indicating the corresponding range of location in the other, i.e. the left, image. Instead of the user having to identify the exactly corresponding position in the second image himself, which is not only time-consuming but also prone to inaccuracy, the user is provided with a range of location, thus facilitating the identification and determination of the second 2D position in the second image.
Fig. 2 schematically shows a system 30 for providing positioning data of an object, comprising a data input unit 32, a processing unit 34, an interface unit 36, and a display unit 38. The data input unit 32, for example a data exchange interface of the processing unit, is configured to provide image data of at least two images of an object from different directions with a known geometric relation. The processing unit 34 is configured to compute a corresponding range of location for the first position in at least a second one of the at least two different images. The display unit 38 is configured to present the at least two different images (not further shown), and to indicate the first position in the first image, and also to indicate the range of location in the at least second image, as well as to indicate the second position in the second image. The interface unit 36 is configured for an interaction of a user to determine a first 2D position in a first one of the at least two different images, and to determine a second 2D position in the corresponding range of location for the first position in at least a second one of the at least two different images. The functional relations will be explained also further below.
For example, the processing unit 34 is configured to determine a spatial location upon the user interaction in the first and second image, based on the first 2D position and the at least second 2D position. The spatial location may then be provided for further steps, for example via the data input unit 32, acting as a data output unit.
For example, the interface unit 36 is configured for an interaction of a user to determine a primary 2D position in each of the at least two images. The processing unit 34 is further configured to compute a corresponding range of location for each position for all other images. The processing unit 34 is further configured to determine and provide a proposal for a refined corresponding secondary 2D position in the other images, considering the determined primary 2D position in the respective image, to the user. This provides the possibility to correct, and thereby further define or refine, the identified 2D positions by an interaction of the user. For example, upon presentation of the result of a first round, i.e. the determination of the first 2D position and the respective identification of the second 2D position within the provided range, the user can interact in one or more correction loops for further positioning.
For example, the processing unit may further be configured to compute a corresponding first location for the second position in the first one of the at least two different images and to compute a proposal for a correction of the determined first 2D position based on the computed corresponding first location. Thus, the user is further supported by the processing unit, providing a guide or hint for a further refinement. The display unit is further configured to indicate the proposal for the corrected first 2D position in the at least first image. The interface unit is further configured for an interaction of the user to refine at least one of the corresponding first and/or second 2D positions in at least one of the at least two images by interaction of the user.
In a further example, the determined 2D position is a first part of a target structure, and the interface unit is configured for an interaction of a user to determine a further 2D position of a second part of the target structure in one of the at least two different images. The display unit may be configured to indicate the further position in the first image, and the processing unit is configured to compute the spatial extension of the target structure and to position a computed device maximizing a cover of target structure extension by the computed device.
For example, the computed device relates to a biopsy device and the target structure is an identified finding with a three-dimensional extension. Thus, the user is provided with simulation-based guidance in order to evaluate and assess the success of a planned extraction step, for example the collection of a specimen of the identified finding.
Fig. 3 shows an X-ray imaging arrangement 40, comprising an X-ray source 42, an X-ray detector 44, and a processing unit 46 (not further shown in detail). Further, a graphical user-interface 10 may be provided, as described in relation with Fig. 1. In addition, or alternatively, a system 30 for providing positioning data of an object, as described in relation with Fig. 2, may be provided.
The X-ray imaging arrangement 40 is shown as a C-arm structure in Fig. 3, comprising a C-arm support structure 50, movably holding a C-arm 52. The X-ray source 42 and the X-ray detector 44 are provided at opposing ends of the C-arm 52. Thus, it is possible to acquire X-ray images of an object 54 from different directions. Further, a support arrangement 56, for example a patient support table, may be provided. Lighting equipment 58 and further display equipment 60 are also indicated.
According to the present invention, the C-arm arrangement depicted in Fig. 3 is shown as an example only. The present invention is also provided for other X-ray imaging equipment, for example for movable X-ray sources in combination with portable detector tablets, for mammography X-ray imaging systems, and also for CT systems.
Independent of the chosen X-ray imaging structure, the X-ray source and the X-ray detector are configured to acquire the X-ray images from at least two different directions with a known geometric relation.
According to a further example (not further shown in detail), the X-ray imaging arrangement 40 comprises a biopsy device for performing a collection of a specimen from a suspicious location. Upon determination of a spatial location, based on the user interaction in the first image resulting in the first 2D position, and in the second image resulting in the second 2D position, the spatial location is provided to the biopsy device. The biopsy device is configured to collect the specimen from the determined spatial location. For example, the biopsy device is a biopsy needle, for example for mammography examinations.
In the following, the procedure of providing positioning data of an object is further explained.
Fig. 4 shows an example of a method 100 for providing navigation data of an object. The following steps are provided: In a first step 110, image data of at least two images of an object from different directions with a known geometric relation is provided in a first substep 112, and the at least two different images are presented in a second substep 114. In a second step 120, a first 2D position 122 is determined in a first one of the at least two different images by interaction of a user in a first substep 124. In a second substep 126, the first position is indicated in the first image. In a third step 130, a corresponding range of location 132 for the first position in at least a second one of the at least two different images is computed in a first substep 134. The range of location in the at least second image is indicated in a second substep 136. In a fourth step 140, a second 2D position 142 in the corresponding range of location for the first position in at least a second one of the at least two different images is determined in a first substep 144 by interaction of the user. In a second substep 146, the second position in the second image is indicated.
The first step 110 is also referred to as step a), the second step 120 as step b), the third step 130 as step c), and the fourth step 140 as step d).
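Purely as an illustration of how steps a) to d) fit together, the following sketch wires up the hypothetical epipolar_segment() and triangulate() helpers sketched earlier. The user-interface object and its methods, as well as the depth bounds, are placeholders assumed for illustration, not an actual API of the described arrangement.

```python
def interactive_localisation(image_1, image_2, P1, P2, ui, d_min, d_max):
    """Sketch of steps a) to d). `ui` stands for a hypothetical user-interface
    object offering get_click(image, restrict_to=None), show_marker(image, x)
    and show_segment(image, points); it is an assumption, not an actual API."""
    x1 = ui.get_click(image_1)                              # step b): first 2D position
    ui.show_marker(image_1, x1)                             # step b): indicate it
    segment = epipolar_segment(P1, P2, x1, d_min, d_max)    # step c): range of location
    ui.show_segment(image_2, segment)                       # step c): indicate the range
    x2 = ui.get_click(image_2, restrict_to=segment)         # step d): second 2D position
    ui.show_marker(image_2, x2)                             # step d): indicate it
    return triangulate(P1, P2, x1, x2)                      # spatial location, e.g. biopsy target
```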
According to a further example, shown in Fig. 5, a fifth step 150 is provided in which a corresponding first location 152 for the second position in the first one of the at least two different images is computed in a first substep 154, and a proposal 156 for a correction of the determined first 2D position based on the computed corresponding first location is computed in a second substep 158. In a further substep 159, the proposal for the corrected first 2D position in the at least first image is indicated. In a sixth step 160, a refining 162 of at least one of the corresponding first and/or second 2D positions in at least one of the at least two images is performed by interaction of the user.
The fifth step 150 is also referred to as step e), and the sixth step 160 as step f).
For example, the 2D positions are target positions, and in a further step, an optimum device placement is computed maximizing a probability to hit the selected target.
According to a further example, shown in Fig. 6, in step b), the determined 2D position is a first part of a target structure. A first further step 170 is provided with a first substep 172, in which a further 2D position 174 of a second part of the target structure is determined in one of the at least two different images by interaction of a user. In a second substep 176, the further position is indicated in the first image. In a second further step 180, it is provided to compute a corresponding first location 182 for the second position in the first one of the at least two different images in a first substep 184. Then, in a second substep 186, a proposal 188 for a correction of the determined first 2D position is computed based on the computed corresponding first location. In a further substep 189, the proposal for the corrected first 2D position is indicated in the at least first image. In a third further step 190, the spatial extension of the target structure is computed in a first substep 192, and a computed device is positioned in a second substep 194, maximizing a cover of the target structure extension by the computed device.
The first further step 170 is also referred to as step g), the second further step 180 as step h), and the third further step 190 as step i).
It is noted that the substeps of step h) may be provided in accordance with step e) of the example shown in Fig. 5, as described above.
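Step i) can be viewed as a one-dimensional coverage problem along the planned needle axis. The sketch below illustrates one possible formulation under simplifying assumptions (triangulated target points of equal weight, a straight needle axis, a device reduced to its opening length); it is an illustration, not the claimed method.

```python
import numpy as np

def place_aperture(target_points, needle_axis, opening_length):
    """Sketch: slide a biopsy opening of the given length along the needle axis
    and return the centre position covering the most target points, together
    with the spatial extension of the target along that axis.

    target_points : (N, 3) array of triangulated target locations
    needle_axis   : unit 3-vector of the planned needle direction
    """
    s = np.sort(target_points @ needle_axis)       # scalar positions along the axis
    extension = s[-1] - s[0]                       # depth expansion of the target
    best_count, best_centre = -1, None
    for i, left in enumerate(s):
        # number of points falling inside the window [left, left + opening_length]
        count = np.searchsorted(s, left + opening_length, side="right") - i
        if count > best_count:
            best_count, best_centre = count, left + opening_length / 2.0
    return best_centre, best_count, extension
```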
According to a further example (not further shown), before step g), the steps d) to f) are repeated in at least one refinement loop. According to a further example, a spatial location is determined based on the first 2D position and the at least second 2D position. The spatial location is then provided for further steps.
For example, the spatial location is referred to as a 3D position, location in space or the like.
The range of location computed in step c) and used in step d) may be the epipolar line of the first position determined in step b); however, considering the probable location within the object, only a segment of the epipolar line is provided. According to a further example, the range of location is restricted to a line segment arranged inside an object. For example, the epipolar line is restricted to be arranged inside a breast volume.
Fig. 7 shows a further embodiment of a method, wherein the user corrects at least one of the indicated positions in at least one of the images. For example, the first 2D position, as indicated in substep 126, is corrected in a first correction step 210. The corrected position is used as a secondary input 212 upon which the at least one corresponding position is re-computed in a further substep 214, and updated positions are presented in a further substep 216.
A further example of a correction provided by the user is also shown in Fig. 7; it is not meant as a compulsory aspect, but as an alternative correction, which may also be provided in addition. In other words, Fig. 7 shows two different ways of correcting, which may be provided independently of each other or in combination.
As indicated, the user can correct the second position as indicated in substep 146, which correction is indicated with a further frame 220. The corrected position is then used as a secondary input 222 upon which the at least one corresponding position is recomputed 224, and updated positions are presented in substep 226.
According to a further example (not shown), a preview of an interventional device is computed and positioned such that the determined target, which is defined by the at least first and second positions, is maximally covered by the device and the position of the computed device is then presented.
According to a further example (also not shown), the user determines a primary 2D position in each of the at least two images. A corresponding range of location for each position is computed for all other images. Further, a proposal for a refined corresponding secondary 2D position in the other images, considering the determined primary 2D position in the respective image, is provided to the user. For example, instead of an automatic refinement, the user may also determine a respective secondary position in a corresponding range of location for the primary position in all other images.
According to a further example, upon user interaction, at least one of the positions is corrected, and upon the user action, the interventional device may be re-computed and an updated visualization may be shown.
The position may be used for performing an interventional procedure.
For example, the position is used as a target location for placing a biopsy needle. After placing the biopsy needle, the positioning is compared with the target location. Further, the biopsy needle is re-positioned based on the comparison of the positioning with the target location.
Further combinations of the features described in relation with Fig. 4 and the features described in relation with one or more of the Figs. 5, 6 and/or 7 are provided.
A still further example shall be described in the following with reference to Figs. 8A to 10B.
Fig. 8A shows a first image 300 acquired in a view at +15° as a first viewing direction. Fig. 8B shows a second image 302, acquired at an angle of -15° as a second viewing direction. As can be seen in both images, the region of interest shows a first finding 304, and a second finding 306, representing a micro-calcification cluster.
For example, to assess the depth extension of structures and target them optimally, a certain localization of the target structure is selected in one preferred view, e.g. a single calcification for a calcification cluster. Fig. 9A shows the first and second image in the form of a small portion of each respective image. Thus, Fig. 9A shows the calcification cluster 306 of the left or first image 300, and the calcification cluster of the second image 302.
Fig. 9B shows the selection by the user in the first image, indicated with an arrow 308. A second arrow 310, shown in dotted line, indicates the correspondence in the other view, as computed by the system.
Fig. 9C shows a first additional arrow 312 in the right image, indicating the location of another part of the target structure as provided by the user. A second additional arrow 314, again shown in a dotted manner, indicates the correspondence in the left image as computed by the system. Further, although not further shown, additional correspondences can be set. After setting the correspondences, the system determines the depth expansion of the target, for example by providing a comparison with a depth indicator representing an overlapping indicator of a device. This is indicated with a first depth expansion indicator 316 in Fig. 10A in the left image, and a second indicator 318 in the right image of Fig. 10A.
The overlapping indicator could be represented as relating to a certain biopsy device, for example. Thus, depending on the cannula width and the opening length, a certain volume could be extracted, which volume would be indicated for planning purposes. The biopsy device may be pre-determined by an operator.
The overlapping indicator could also be represented as a target volume that is definable by the operator, for example by adapting the frame on the screen until the desired matching is achieved. Thus, a certain volume would be indicated that is meant to be extracted. The system could then determine the matching biopsy device, or at least make a proposal for the best match between the target volume and the different extraction volumes of different biopsy needles.
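Such a proposal could, for instance, compare the operator-defined target extent with a table of available devices. The device names, dimensions and the simple covers-or-not criterion in the sketch below are made-up assumptions for illustration only.

```python
def propose_device(target_length, target_width, devices):
    """Sketch: pick from a hypothetical device table the needle whose extraction
    window best matches the target extent. Each device is a tuple
    (name, opening_length, cannula_width) in the same units as the target."""
    fitting = [d for d in devices if d[1] >= target_length and d[2] >= target_width]
    if fitting:
        # smallest extraction window that still covers the whole target
        return min(fitting, key=lambda d: d[1] * d[2])
    # nothing covers the target completely: propose the largest available window
    return max(devices, key=lambda d: d[1] * d[2])


# Example with made-up devices and a 14 mm x 3 mm target extent:
devices = [("needle A", 10.0, 2.0), ("needle B", 19.0, 2.5), ("needle C", 22.0, 4.0)]
print(propose_device(14.0, 3.0, devices))   # -> ('needle C', 22.0, 4.0)
```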
In a next step, as shown in Fig. 10B, the depth expansion is transferred to a needle placement visualization. A visualized biopsy needle 320 is shown with the respective opening 322 for collecting a specimen. The depth expansion is provided in the form of a rectangle 324 as depth information, in combination with the first arrow 308 and the first additional arrow 312, which represent the respectively selected target structures, which may also be visible (for figure-drafting reasons, the target structure is not shown in Fig. 10B).
Thus, by the first type of indicators, i.e. in Fig. 10A, it is possible to review what portion of the finding would be hit, and by the second type of indicator, i.e. in Fig. 10B, it is possible to review which of the annotations would be within the opening of the biopsy device.
In another example, instead of the rectangle, or cuboid, the depth expansion is shown as an ellipsoid.
Fig. 11A shows a photographic illustration of the drawing of Fig. 8A, and Fig. 11B is in accordance with Fig. 8B. The same applies to Figs. 9A to 9C, which are shown in a photographic representation in Figs. 12A to 12C, and to Figs. 10A and 10B, which are shown in a photographic representation in Figs. 13A and 13B.
According to a further example, an interactive refinement for targeting in stereotactic biopsy is provided. It is noted that targeting of a suspicious region requires a good understanding of the underlying three-dimensional object. However, from the stereo projections, it is often cumbersome and hard for untrained users to capture and derive the 3D extent from the 2D images only.
According to an embodiment, an improvement of the workflow is provided as follows:
i) The user selects the target in one preferred view.
ii) The system computes and visualizes the estimated correspondence in the other views and the target depth view port, without presenting a needle placement.
iii) The user locates the target in another different view.
iv) The system computes a new set of correspondences and a new target location.
v) The user refines the correspondences according to the selected target depth by reviewing the target depth visualization.
vi) The system presents an optimum needle placement maximizing the probability to hit the selected target.
In another embodiment, the features are used in the following way to assess depth extension of structures and target them optimally:
i) Selecting a certain localization of the target structure in one preferred view, e.g. a single calcification for a calcification cluster or a certain border for a circumscribed lesion.
ii) The system computes and visualizes the correspondence in the other views and the target visualization, without presenting a needle placement.
iii) The user optionally refines the target according to steps iii) to v) of the above-mentioned workflow.
iv) The user locates another part of the target structure in the same or a different view, e.g. a different calcification or another border.
v) The system computes a new set of correspondences and a new target location.
vi) After setting several correspondences, the system determines the depth expansion of the biopsy target and positions a new correspondence and a needle placement maximizing the amount of target tissue within the biopsy.
According to a further example, the correspondences are generated automatically in three views, with pixel to sub-pixel accuracy, only requiring the operator to confirm the positions. To achieve this, a registration technique is applied, where only the cursor positions are registered without actually deforming the images. Because of the calibrated epipolar geometry, the possible positions for each correspondence can be restricted to a straight line, and even to a certain fraction of this line, due to the limited (and known) depth of the region of interest, for example a breast, which makes this a better-posed problem.
According to a further embodiment, the operator selects the location of the biopsy target in any of the three views (for example minus 15 degrees, 0 degrees, and plus 15 degrees). The user-interface then proposes corresponding locations in the remaining two views. The operator confirms or corrects any of the positions. The user-interface then computes a 3D target position from the operator's input.
In an alternative embodiment, the user marks the biopsy target in all views. Based on these imprecise initial indications, the system suggests refined corresponding positions that the user has to either accept or reject.
According to a further embodiment, the algorithm retrieves a target provided by the operator. Next, feature points are extracted around the target location. Alternatively, a template matching approach can be applied. The search area is then restricted to the corresponding epipolar line in the two remaining views. Within the search area, congruence with the feature points of the original annotation is optimized. Where possible, grey value information is used to perform a sub-pixel accurate alignment. Retrieved correspondences are then passed to the user-interface for operator confirmation.
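The template-matching alternative mentioned above could, for example, look like the following sketch: normalised cross-correlation of a patch around the annotation against candidate positions sampled along the epipolar segment, with a parabolic fit over the correlation peak for sub-pixel refinement. The concrete feature-congruence measure of the described algorithm is not specified here, so the details below are assumptions for illustration.

```python
import numpy as np

def match_along_segment(img1, img2, x1, segment_points, patch=15):
    """Sketch: search the correspondence of a position x1 annotated in img1 by
    normalised cross-correlation of a small template against candidate
    positions sampled along the epipolar segment in img2. Returns the index
    of the best candidate plus a sub-pixel offset from a parabolic fit."""
    half = patch // 2
    u, v = int(round(x1[0])), int(round(x1[1]))
    template = img1[v - half:v + half + 1, u - half:u + half + 1].astype(float)
    template = (template - template.mean()) / (template.std() + 1e-9)

    scores = []
    for cu, cv in segment_points:
        cu, cv = int(round(cu)), int(round(cv))
        window = img2[cv - half:cv + half + 1, cu - half:cu + half + 1].astype(float)
        if window.shape != template.shape:          # candidate too close to the border
            scores.append(-np.inf)
            continue
        window = (window - window.mean()) / (window.std() + 1e-9)
        scores.append(float((template * window).mean()))

    k = int(np.argmax(scores))
    offset = 0.0
    if 0 < k < len(scores) - 1 and np.isfinite(scores[k - 1]) and np.isfinite(scores[k + 1]):
        denom = scores[k - 1] - 2.0 * scores[k] + scores[k + 1]
        if denom != 0.0:
            offset = 0.5 * (scores[k - 1] - scores[k + 1]) / denom
    return k, offset
```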
In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.
The computer program element might therefore be stored on a computing unit, which might also be part of an embodiment of the present invention. This computing unit may be adapted to perform or induce the performance of the steps of the method described above. Moreover, it may be adapted to operate the components of the above described apparatus. The computing unit can be adapted to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method of the invention.
This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
Further on, the computer program element might be able to provide all necessary steps to fulfil the procedure of an exemplary embodiment of the method as described above.
According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, is presented wherein the computer readable medium has a computer program element stored on it which computer program element is described by the preceding section.
A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims whereas other embodiments are described with reference to the device type claims. However, a person skilled in the art will gather from the above and the following description that, unless otherwise notified, in addition to any combination of features belonging to one type of subject matter also any combination between features relating to different subject matters is considered to be disclosed with this application.
However, all features can be combined providing synergetic effects that are more than the simple summation of the features.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the dependent claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

Claims

CLAIMS:
1. A graphical user-interface (10) for providing positioning data of an object, comprising:
a display unit (12);
an interface controller unit (14); and
a user-interface unit (16);
wherein the display unit comprises at least two display portions (18) configured to present at least two different images of an object from different directions with a known geometric relation;
wherein the user-interface unit is configured for a determination of a first 2D position in a first one of the at least two different images by interaction of a user;
wherein the interface controller unit is configured to compute a corresponding range of location for the first position in at least a second one of the at least two different images;
wherein the user-interface unit is configured for a determination of a second 2D position in the corresponding range of location for the first position in at least a second one of the at least two different images by interaction of a user; and
wherein the display unit is further configured to indicate the first position in the first image; and to indicate the range of location in the at least second image; and to indicate the second position in the second image.
2. A system (30) for providing positioning data of an object, comprising:
a data input unit (32);
a processing unit (34);
an interface unit (36); and
a display unit (38);
wherein the data input unit is configured to provide image data of at least two images of an object from different directions with a known geometric relation;
wherein the processing unit is configured to compute a corresponding range of location for a first position in at least a second one of the at least two different images;
wherein the display unit is configured to present the at least two different images; to indicate the first position in the first image; and to indicate the range of location in the at least second image; and to indicate the second position in the second image; and
wherein the interface unit is configured for an interaction of a user to determine a first 2D position in a first one of the at least two different images; and to determine a second 2D position in the corresponding range of location for the first position in at least a second one of the at least two different images.
3. System according to claim 2, wherein the interface unit is configured for an interaction of a user to determine a primary 2D position in each of the at least two images;
wherein the processing unit is further configured to compute a corresponding range of location for each position for all other images; and
wherein the processing unit is further configured to determine and provide a proposal for a refined corresponding secondary 2D position in the other images considering the determined primary 2D position in the respective image to the user.
4. An X-ray imaging arrangement (40), comprising:
an X-ray source (42);
an X-ray detector (44);
a processing unit (46); and
a graphical user-interface (10) according to claim 1; and/or a system (30) for providing positioning data of an object according to claim 2 or 3;
wherein the X-ray source and the X-ray detector are configured to acquire the X-ray images from at least two different directions with a known geometric relation.
5. X-ray imaging arrangement according to claim 4, further comprising a biopsy device for performing a collection of a specimen from a suspicious location;
wherein upon determination of a spatial location, based on the user interaction in the first image resulting in the first 2D position and in the second image, resulting in the second position, the spatial location is provided to the biopsy device; and
wherein the biopsy device is configured to collect the specimen from the determined spatial location.
6. A method (100) for providing navigation data of an object, comprising the following steps:
a) providing (112) image data of at least two images of an object from different directions with a known geometric relation; and presenting (114) the at least two different images;
b) determining (124) a first 2D position (122) in a first one of the at least two different images by interaction of a user; and indicating (126) the first position in the first image;
c) computing (134) a corresponding range of location (132) for the first position in at least a second one of the at least two different images; and indicating (136) the range of location in the at least second image;
d) determining (144) a second 2D position (142) in the corresponding range of location for the first position in at least a second one of the at least two different images by interaction of the user; and indicating (146) the second position in the second image.
7. Method according to claim 6, wherein the following steps are provided:
e) computing (154) a corresponding first location (152) for the second position in the first one of the at least two different images; computing (158) a proposal (156) for a correction of the determined first 2D position based on the computed corresponding first location; and indicating (159) the proposal for the corrected first 2D position in the at least first image;
f) refining (162) at least one of the corresponding first and/or second 2D positions in at least one of the at least two images by interaction of the user.
8. Method according to claim 6 or 7, wherein in step b) the determined 2D position is a first part of a target structure; and wherein the following steps are provided:
g) determining (172) a further 2D position (174) of a second part of the target structure in one of the at least two different images by interaction of a user; and indicating (176) the further position in the first image;
h) computing (184) a corresponding first location (182) for the second position in the first one of the at least two different images; computing (186) a proposal (188) for a correction of the determined first 2D position based on the computed corresponding first location; and indicating (189) the proposal for the corrected first 2D position in the at least first image; and
i) computing (192) the spatial extension of the target structure; and positioning (194) a computed device maximizing a cover of the target structure extension by the computed device.
9. Method according to claim 6, 7 or 8, wherein a spatial location is determined based on the first 2D position and the at least second 2D position; and
wherein the spatial location is provided for further steps.
10. Method according to claim 6, 7, 8 or 9, wherein the range of location is restricted to a line segment arranged inside an object.
11. Method according to claim 6, 7, 8, 9 or 10, wherein the user corrects (210) at least one of the indicated positions in at least one of the images;
wherein the corrected position is used as a secondary input (212) upon which the at least one corresponding position is re-computed (214); and
wherein updated positions are presented (216).
12. Method according to claim 6, 7, 8, 9, 10 or 11, wherein a preview of an interventional device is computed and positioned such that the determined target defined by the at least first and second positions is maximally covered by the device; and
wherein the position of the computed device is presented.
13. Method according to claim 6, 7, 8, 9, 10, 11 or 12, wherein the user determines a primary 2D position in each of the at least two images;
wherein a corresponding range of location for each position is computed for all other images; and
wherein a proposal for a refined corresponding secondary 2D position in the other images considering the determined primary 2D position in the respective image is provided to the user.
14. A computer program element for controlling an apparatus according to one of the claims 1 to 5, which, when being executed by a processing unit, is adapted to perform the method of one of the claims 6 to 13.
15. A computer readable medium having stored the program element of claim 14.
PCT/IB2013/051857 2012-03-12 2013-03-08 Interactive correspondence refinement in stereotactic x-ray imaging WO2013136239A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261609614P 2012-03-12 2012-03-12
US61/609,614 2012-03-12

Publications (1)

Publication Number Publication Date
WO2013136239A1 (en) 2013-09-19

Family

ID=48444436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/051857 WO2013136239A1 (en) 2012-03-12 2013-03-08 Interactive correspondence refinement in stereotactic x-ray imaging

Country Status (1)

Country Link
WO (1) WO2013136239A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5415169A (en) * 1989-11-21 1995-05-16 Fischer Imaging Corporation Motorized mammographic biopsy apparatus
US6022325A (en) 1989-11-21 2000-02-08 Fischer Imaging Corporation Mammographic biopsy apparatus
US5699446A (en) * 1993-05-13 1997-12-16 Ge Medical Systems S.A. Method for the acquisition of images of a body by the rotational positioning of a radiology device, notably an angiography device
US6050724A (en) * 1997-01-31 2000-04-18 U. S. Philips Corporation Method of and device for position detection in X-ray imaging
US20040171933A1 (en) * 2002-11-25 2004-09-02 Milton Stoller Mammography needle biopsy system and method

Similar Documents

Publication Publication Date Title
US11844635B2 (en) Alignment CT
US20230008465A1 (en) System and method for navigating x-ray guided breast biopsy
EP3157435B1 (en) Guiding system for positioning a patient for medical imaging
EP2854646B1 (en) Methods and apparatus for estimating the position and orientation of an implant using a mobile device
EP2963616A2 (en) Fluoroscopic pose estimation
CN106725851B (en) System and method for image acquisition for surgical instrument reconstruction
WO2016033065A1 (en) Image registration for ct or mr imagery and ultrasound imagery using mobile device
EP3629932B1 (en) Device and a corresponding method for providing spatial information of an interventional device in a live 2d x-ray image
WO2013136239A1 (en) Interactive correspondence refinement in stereotactic x-ray imaging
EP3712847A1 (en) Catheter tip detection in fluoroscopic video using deep learning
EP3525174B1 (en) System and method for displaying an alignment ct
WO2016046289A1 (en) Surgical guide-wire placement planning
JP6245801B2 (en) Image display apparatus and medical image diagnostic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13722818

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13722818

Country of ref document: EP

Kind code of ref document: A1