WO2019175865A1 - Large area orthopedic imaging method - Google Patents

Large area orthopedic imaging method

Info

Publication number
WO2019175865A1
WO2019175865A1 (PCT/IL2019/050269)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
dimensional
previous
adjacent
Prior art date
Application number
PCT/IL2019/050269
Other languages
English (en)
Inventor
Eli Zehavi
Leonid Kleyman
Original Assignee
Mazor Robotics Ltd.
Priority date
Filing date
Publication date
Application filed by Mazor Robotics Ltd.
Publication of WO2019175865A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/5235 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
    • A61B 6/5241 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT combining overlapping images of the same imaging modality, e.g. by stitching
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/45 For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B 5/4538 Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B 5/4566 Evaluating the spine
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/04 Positioning of patients; Tiltable beds or the like
    • A61B 6/0492 Positioning of patients; Tiltable beds or the like using markers or indicia for aiding patient positioning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/48 Diagnostic techniques
    • A61B 6/486 Diagnostic techniques involving generating temporal series of image data
    • A61B 6/487 Diagnostic techniques involving generating temporal series of image data involving fluoroscopy
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/505 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of bone

Definitions

  • the present application relates to the field of the medical imaging of large regions of a subject’s anatomy in a form that ensures optimal accuracy over the whole region, especially as applied to orthopaedic imaging of the whole or a large section of the spine.
  • the alignment of neighboring images with each other is generally performed by identifying common anatomic features in a pair of neighboring images, and mutually aligning the neighboring images laterally and angularly until those identified features are commonly located in the final stitched composite image. Attempts at equalizing intensities are also performed in order to provide as life-like an image as possible.
  • the present application describes exemplary systems and methods to generate more accurate reconstruction of a stitched, large-area 2-dimensional image of a subject, from a series of intra-operative projection images of more limited areal extent, each smaller than the extent of the large area image required.
  • the series of images can be conveniently generated as a video sequence.
  • the present system and method is adapted to be applicable to a manually moved imaging system, making it readily available in a standardly equipped operating or imaging room.
  • the position and orientation of each of the smaller area images are known using image processing analysis of the images of a series of three-dimensional registration targets, such that the position and orientation of each image are known independently of any knowledge of the position and orientation of the imaging system.
  • each of the smaller area images can then be virtually brought by well- known image processing methods, to the same orientation relative to the subject, and to the same relative size, making the stitching procedure more accurate.
  • the mutual position of adjacent images is then known by using the imaged position of a three dimensional target appearing in both of the adjacent images. The mutual positions of these smaller area images are thus known without the need to compare anatomical elements of the subject appearing in adjacent images, as is done in prior art systems (an illustrative sketch of this referencing step is given at the end of this section).
  • the term “stitching” is generally used to mean the connection of adjacent images by matching features of the anatomy in one image to the equivalent anatomical part in the neighboring image.
  • in the present disclosure, “stitching” is used more broadly, to indicate the connection of neighboring images by spatial matching of any relevant feature, in this case the three dimensional markers, including digital realignment of the image data to provide matching orientation of the neighboring images.
  • a conventional C-arm, or otherwise mounted dynamic imaging system having an image receptor and an X-ray source, is used intra-operatively to create a series of digital projection images of a patient on a surgical bed.
  • the series of images can be generated by any form of scanning motion of the imaging system across the region of the patient which it is desired to image.
  • the motion can be motorized, or it can be manually performed without any specific rate of scan, such as by manually advancing the system by a member of the medical staff, along a direction in which the large area image is to be generated.
  • the frame rate can be any suitable speed, provided that the speed does not prevent the imaging details from being sufficiently well resolved.
  • the motion can be continuous, or it can be performed stepwise, with the successive frames being taken in a stationary mode between each step.
  • the frame series can alternatively be generated by motion of the patient bed under a static beam.
  • the system for performing these methods includes a series of three dimensional (3D) registration targets, which, prior to beginning the imaging sequence, are placed on or near the subject to be imaged, most conveniently by attaching them to the subject, and along the direction in which the large scale image is to be generated.
  • Each of the registration targets is constructed of arrays of radiopaque marker points, such as small ball bearings, arranged in known positions in a known 3-dimensional pattern.
  • a commonly used target of this type typically comprises two parallel layers of radiopaque markers, with the geometry of the markers within each layer, and the spacing apart of the layers within the three-dimensional array being known.
  • the three-dimensional registration targets are positioned at such a distance apart that each frame of a fluoroscope image set generated of a scanned section of the subject should show at least a measurable part of two adjacent targets, such that it is possible to relate the relative positions and orientations of adjacent image frames by comparing the poses of a single marker (or a meaningful part of a marker) in two adjacent frames (a simple feasibility check for this target spacing is sketched at the end of this section).
  • One convenient way of providing such a spatial sequence of markers is to use a long, X-ray transparent element or sheet, laid along the longitudinal length of the subject to be imaged, with three-dimensional registration targets positioned therealong at intervals.
  • the stitching information does not have to be obtained only for small area images spaced apart by exactly the target spacing, but can use many more frames than the target spacing implies, thereby generating a redundancy of information and increasing the accuracy of the stitched large scale image.
  • the term “adjacent images” used in this disclosure is understood to apply not only to immediately neighboring images, but also to any pair of small area images whose mutual positions are related to each other by use of images of a target which appears in both.
  • the reconstructed large area image may be constructed for the entire scanned object, by stitching together image frames of adjacent parts of the patient’s anatomy, using the position and orientation data obtained from the three-dimensional targets to define the position and orientation of the features of each frame image.
  • the image of a 3-D registration target, or part thereof, in successive frames of the image sequence thus enables the position and orientation of the image frames to be defined relative to their neighboring frames.
  • the position and orientation of each successive frame can be determined by analyzing the images of the 3D targets, independently of the image features of the patient’s anatomy, which may not be clearly defined in the images. This provides a more accurate method of defining the location and orientation of each image frame to be stitched together to generate the large-scale image.
  • an image adaptor, formed from radiolucent plates containing a predetermined pattern of radiopaque beads, is attached to the sensor of the imaging system.
  • the purpose of the image adaptor is to enable localization of the X-ray source and image de-warping, as is known in the art. Every frame of the acquired video image may be de-warped using this image adaptor. De-warping using an image adaptor is a well-known procedure which attempts to correct distortions arising from the different orientational trajectories of the X-ray radiation emanating as a cone beam from the X-ray source, through the subject, to the imaging plane. However, while positional distortions may be reduced, they are not eliminated completely (a minimal de-warping sketch is given at the end of this section).
  • an image adaptor does not compensate for the reduced resolution and clarity of the various parts of the image, resulting from the use of the cone-beam imaging technique.
  • X-rays traversing the subject’s body at different angles will undergo different levels of absorption and different levels of distortion, depending on the angle of the X-ray trajectory through the body.
  • the shorter the trajectory traversed by rays passing through the subject the less the absorption for a given tissue radio-opacity, and hence the clearer and less distorted the image.
  • in order to select the highest resolution and least distorted image data for stitching to its neighboring images, the image processor is configured to use only that part of the imaged data arising from the pixels of the image centered around, and closest to, the normal from the source to the imaging plane of the patient.
  • the image distortions in that region are less than those of other orientations of the cone beam of the incident X-rays outside of that region.
  • the extent of the regions selected is determined as a compromise between the level of resolution improvement required, and the time taken to generate the number of frames required for the sequence. The narrower the region, the higher the clarity of the images to be stitched together, but the greater the number of images including that region required to cover the entire imaged region required.
  • the X-ray beam direction from the source is identified that is determined to be the most perpendicular to the plane of the subject being imaged.
  • a reference length of pixels around that perpendicular line is selected as being the selected range within which the image has a distortion sufficiently low, and a resolution sufficiently high that those pixels can be used in constructing the stitched image of the subject to the accuracy and clarity standards desired of the method.
  • the pixel values for a slice of the image laterally across the subject at that location, and having a longitudinal length determined by the image quality required, are then interpolated from the appropriate frame of the original images into a reconstructed large area image (a sketch of this band selection and accumulation step is given at the end of this section).
  • the lateral dimension of the regions or slices selected can be chosen to provide as much detail as is required in the lateral direction, with the understood trade-off between extent of the region chosen and the resolution and clarity of the image at the outer extremities.
  • a comparatively narrow lateral region is usually sufficient, unless the subject suffers from serious scoliosis, in which case a wider slice is required.
  • the longitudinal length of the region in the scanning direction will determine the number of image frames required to generate the desired reconstructed large area image.
  • Other parts of the anatomy will require regions or slices of image in accordance with the specific requirements of the organs being imaged.
  • the presently described apparatus therefore differs from prior art methods in two aspects. Firstly, it allows a composite image of a large scanned area of the patient’s anatomy to be obtained, using the most accurately obtainable images of separate longitudinal sections of the large area, by selection of only those pixels which are situated in a region in which the beam passes through the subject’s anatomy within a predetermined angle from the perpendicular direction. Secondly, it references adjacent image frames to each other by use of the 3D positioning targets, thus avoiding the disadvantage of prior art stitching methods, in which corresponding anatomical features of adjacent images have to be correlated in neighboring images in order to generate the large area composite image. In the presently described method, the comparative location and orientation of adjacent image frames are determined by comparing the location and orientation of a 3-dimensional target in adjacent frames, to provide the relative pose of those frames.
  • a method for generating a series of smaller area images whose mutual positions are known by image processing analysis of three- dimensional targets in adjacent images, but which utilizes, in constructing the composite large scale image, only a limited section of each frame of the series of images, namely that region close to the normal of the beam from the source to the imaged plane. Those regions of the image close to the normal have the least level of image distortion, and therefore contribute to a more accurate final image construction.
  • a method for generating a large area image of a region of interest of a subject, comprising: generating a sequence of two-dimensional images over the region of interest of the subject, the subject having a series of three-dimensional registration target elements disposed along the region of interest; spatially referencing at least one pair of adjacent two-dimensional images by use of at least part of a target element imaged in both images of the pair; and constructing the large area image using only those parts of the two-dimensional images situated within a predetermined distance around the normal from the beam source to the plane of the imaging array.
  • the spatial referencing may comprise at least one of the mutual position, size and orientation of the pair of image frames. Furthermore, the images of the at least one adjacent pair of images may be aligned relative to each other without the need to compare anatomical features in the images.
  • the generating of a sequence of two-dimensional images over the region of interest of the subject may be performed by use of an X-ray C-arm system.
  • the scanning can be performed manually.
  • the series of three-dimensional registration target elements may be mounted in a radio-transparent sheet adapted to be laid on the subject.
  • the extent of the predetermined distance around the normal from the beam source to the plane of the imaging array should be selected in accordance with the level of image distortion desired for the large area image.
  • the series of two-dimensional images may advantageously be formed as a video sequence.
  • the spatial referencing of at least one pair of adjacent two dimensional images may comprise determining the relative position and orientation of the adjacent images, by comparing features of a target which appears in both of the adjacent images.
  • At least one of the three-dimensional registration target elements may comprise two layers of radiopaque marker points, each layer arranged in a known three-dimensional pattern, the layers being spaced apart by a known distance.
  • the radiopaque marker points may be metallic balls.
  • Fig. 1 shows a prior art method of generating a large area fluoroscopic image of the spine of a patient using individual smaller area images.
  • Fig. 2 shows a method according to the present disclosure, in which three dimensional marker targets are used to correlate adjacent images by identifying the images of a target in the two adjacent images, so that their respective poses can be matched when the individual adjacent images are joined.
  • Fig. 1 is a view of a patient 1 lying in a prone position on a hospital bed 19, undergoing X-ray fluoroscopic imaging using a prior art method of constructing a large scale image, in this case, of the patient’s spine 2.
  • the large scale image is generated from individual images of smaller areas of the patient’s spine.
  • the imager could be a C-arm imager, having a source 10 and an imaging detector 13 which could be an image intensifier or a direct imaging array, with, as is conventionally used, dewarping image adaptors installed thereupon.
  • any other imaging arrangement that can take multiple images of a longitudinal area of the patient’s region of interest, could also be used.
  • taking the C-arm as an exemplary imaging system, the drawing shows three positions of the imaging system, obtained by moving the system longitudinally along the direction of the patient’s spine, as shown by the arrow at the top of the drawing; it is understood that this arrow could indicate motion in either direction, or successively in both directions, to increase data collection and hence accuracy.
  • the three positions of the source 10, 11, 12, and of its associated imaging detector 13, 14, 15 respectively, are deliberately not shown in Fig. 1 as lying on a single straight line, to illustrate that, since the motion of the C-arm can be performed manually, any arbitrarily shaped path in the scan direction can be used, not necessarily a straight line.
  • a curved path of the C-arm positions is shown for the prosaic reason of being able to illustrate the method more clearly in the drawing, by avoiding the need to show the C-arm positions aligned in one straight line and therefore falling partly on top of each other.
  • the location of the imaging components of the system at different imaging positions of the linear scan can thus be more clearly differentiated in the drawing.
  • the source emits a cone shaped beam 18, which covers a comparatively limited part of the whole spine 2.
  • a vertebra 16, taken as a typical feature which can be readily viewed and identified in the X-ray images, is shown near one edge of the imaged part of the cone beam 18, shown as the right-hand edge in Fig. 1.
  • when the imaging system is moved to the position at which the next image is to be taken, designated by the positions 11, 14 in Fig. 1, that same vertebra 16 which was seen in the first image is also visible in the second image.
  • One major disadvantage of this prior art procedure is that the vertebrae situated away from the center of the cone beam 18 have higher levels of distortion than those in the center region of each image, since the imaging rays at the edges of the cone pass through more tissue of the patient, and are therefore more prone to distortion during their passage through the patient.
  • Another major disadvantage is that the passage of the beam through any anatomical tissue, and especially through bone tissue, leads to a reduction of the energy transmitted to the corresponding pixels of the image and to increased scattering; the vertebra images therefore have lower resolution, and accurate matching between the two neighboring images becomes more difficult, due to differing absorption levels, or to distorted shapes or sizes between the two images. This effect is accentuated if the object of interest in the image is situated at the outer edges of the cone beam, since the beam then has to pass through a larger thickness of tissue.
  • FIG. 2 shows a view of the patient 1 of Fig. 1 lying in a prone position on a hospital bed 19 undergoing X-ray fluoroscopic imaging of the spine 2, using an exemplary implementation of the method of the present application for constructing the large area image.
  • This method differs from the prior art method of Fig. 1 in that a series of three-dimensional targets 21, 22, 23, incorporating radiopaque beads arranged in known three-dimensional positions, is attached to the subject.
  • An enlarged representation of one example of the construction of such a three-dimensional target is shown in the circled enlarged image 25.
  • the radio-opaque beads are mounted in two layers in a single block of transparent material, the top and bottom layers having different patterns so that they can be readily distinguished in the two-dimensional fluoroscope images.
  • An alternative construction is to use two layers of beads, each mounted in their predetermined pattern in separate transparent layers of material, the two layers being connected together at their predetermined distance apart.
  • Such a 3-dimensional target has the property that, by analyzing the positions of the radio-opaque beads in a two-dimensional image thereof, the pose of the imaging system can be determined relative to the target and, since the target is in a fixed position on the subject, also relative to the subject (an illustrative pose-estimation sketch is given at the end of this section).
  • any image taken of a target, or of a sufficiently large part of a target to enable successful analysis of its effective three-dimensional pose can be used to indicate the true pose of the imaging system, and hence of the image relative to the subject’s anatomy.
  • the C-arm, advantageously with dewarping image adaptors on the detector array, is moved down the length of the subject; but in this method, the successive image frames are generated at intervals such that a pair of selected frames, which, as explained above, can be truly adjacent or merely close to each other, both show an image of a three-dimensional target element 21, 22 or 23, instead of the anatomic feature 16, 17 as in the method of Fig. 1.
  • the poses of adjacent images can then be equalized using the targets, and adjacent image frames then joined to generate the desired large scale image of the whole or major part of the spine.
  • the image matching can be performed on images which can be brought by signal processing to the same orientation and/or size, using the data obtained from analysis of the three-dimensional targets.
  • the images can be stitched together to form the large scale image, using the features of the images of the three- dimensional targets, which have clearer and more decisive features than those of the anatomic features of the prior art methods of Fig. 1.
  • the pixels of the small area images having the least distortion, and hence being most representative of the anatomical picture, as derived from a fluoroscope image obtained using a cone-beam source, are those around the normal connecting the source with the imaging plane.
  • the image is therefore constructed using pixels only from within a predetermined range from that normal direction, as shown by the narrow band 24 on either side of the normal, between the source 10 and the detector array 13 of Fig. 2.
  • the width of the band 24 is determined by the extent from the normal that the image pixels are considered to represent the acceptable imaged view.
  • the X-ray beam from the source that is determined to be the most perpendicular to the image plane is identified.
  • the bands of pixel values around that perpendicular line are then selected from the appropriate frames in the original image sequence, and only those pixels from each successive image frame are used in reconstructing the large scale image, thus ensuring the most accurate reconstruction possible.
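
The sketches that follow are illustrative only; they show one way the steps referenced above might be implemented, under assumptions stated in each case, and do not reproduce the patent's own implementation. This first sketch assumes a hypothetical two-layer bead target and treats the cone-beam source and flat detector as an approximate pinhole camera with an assumed intrinsic matrix, so that the imager's pose relative to the target can be recovered with OpenCV's solvePnP. The bead layout and all names are invented for illustration, and bead detection in the frame is assumed to have been done already.

```python
# Illustrative sketch only (not from the patent): a hypothetical two-layer bead
# target and recovery of the imager's pose relative to it from one frame.
import numpy as np
import cv2


def two_layer_target(pitch_mm=10.0, layer_gap_mm=15.0):
    """3D bead coordinates, in the target's own frame, for a target made of two
    parallel layers of radiopaque beads arranged in distinguishable patterns."""
    top = [(x * pitch_mm, y * pitch_mm, 0.0)                      # 3x3 upper layer
           for x in range(3) for y in range(3)]
    bottom = [(x * pitch_mm + 0.5 * pitch_mm, y * pitch_mm, layer_gap_mm)
              for x in range(4) for y in range(2)]                # offset 4x2 lower layer
    return np.array(top + bottom, dtype=np.float64)


def estimate_frame_pose(bead_pixels, bead_model_mm, camera_matrix):
    """From the detected 2D centroids of the target's beads in one frame (in the
    same order as the model points), return the rotation and translation taking
    target coordinates into the imager's frame, i.e. the target's pose as seen
    by the imager (equivalently, the imager's pose relative to the target)."""
    ok, rvec, tvec = cv2.solvePnP(
        bead_model_mm.reshape(-1, 1, 3),
        np.asarray(bead_pixels, dtype=np.float64).reshape(-1, 1, 2),
        camera_matrix, None)
    if not ok:
        raise RuntimeError("pose estimation failed for this frame")
    rotation, _ = cv2.Rodrigues(rvec)
    return rotation, tvec.reshape(3)
```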
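
Continuing the same assumptions, the next sketch shows the spatial referencing of a pair of adjacent frames that both see (part of) the same target: each frame's pose relative to that common target is composed to give the rigid transform between the two frames, from which an in-plane shift and rotation can be read off once both frames have been rectified to a common magnification. The function names and the reduction to a planar offset are illustrative choices, not the patent's prescribed procedure.

```python
# Illustrative sketch only: relating two adjacent frames through a target that
# appears in both, without comparing any anatomical features.
import numpy as np


def relative_pose(rot_a, t_a, rot_b, t_b):
    """Inputs are the poses of frames A and B relative to the *same* target,
    as returned by estimate_frame_pose (x_frame = R @ x_target + t).
    Returns the rigid transform taking frame-B coordinates to frame-A coordinates."""
    rot_ab = rot_a @ rot_b.T
    t_ab = t_a - rot_ab @ t_b
    return rot_ab, t_ab


def planar_offset(rot_ab, t_ab, pixel_pitch_mm):
    """Reduce the relative pose to the in-plane rotation (about the beam axis)
    and pixel shift used when joining two already-rectified frames."""
    theta = np.arctan2(rot_ab[1, 0], rot_ab[0, 0])   # rotation in the image plane
    shift_px = t_ab[:2] / pixel_pitch_mm             # lateral/longitudinal offset
    return theta, shift_px
```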
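
The requirement that adjacent frames can be related through a shared target translates into a simple geometric condition on the target pitch, target length, field of view and frame step. The check below, using placeholder numbers rather than any values from the patent, captures that condition for a one-dimensional longitudinal layout of regularly spaced targets.

```python
# Illustrative feasibility check only; all numbers are placeholders.

def adjacent_frames_share_target(field_of_view_mm, frame_step_mm,
                                 target_pitch_mm, target_length_mm):
    """True if two consecutive fields of view always overlap on at least part of
    one target, wherever the regularly spaced targets happen to lie along the subject."""
    overlap = field_of_view_mm - frame_step_mm
    # Worst case: the overlap window sits in the gap between two targets; a gap is
    # (pitch - target length) long, so the window must be longer than that gap.
    return overlap > max(target_pitch_mm - target_length_mm, 0.0)


if __name__ == "__main__":
    # e.g. a 230 mm field of view, 60 mm step, 40 mm targets every 150 mm
    print(adjacent_frames_share_target(230.0, 60.0, 150.0, 40.0))   # True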
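
For the de-warping step, one simple approach (a sketch only, assuming SciPy and OpenCV are available and that the adaptor's bead centroids have already been detected and matched to their ideal, undistorted positions) is to fit a smooth thin-plate-spline mapping from undistorted coordinates back to distorted ones, which is the direction cv2.remap samples in, and then resample every frame through it. Commercial adaptors rely on vendor calibration; this is not the patent's specified procedure.

```python
# Illustrative de-warping sketch only: warp a frame so that the image adaptor's
# detected bead centroids land on their known ideal (undistorted) positions.
import numpy as np
import cv2
from scipy.interpolate import Rbf


def dewarp_frame(frame, detected_xy, ideal_xy):
    """frame: 2D image array; detected_xy / ideal_xy: (N, 2) arrays of matched
    bead positions in pixels. Returns the de-warped frame."""
    h, w = frame.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # Smooth maps from ideal coordinates back to distorted coordinates, so every
    # output pixel knows where to sample the distorted input frame.
    fx = Rbf(ideal_xy[:, 0], ideal_xy[:, 1], detected_xy[:, 0], function='thin_plate')
    fy = Rbf(ideal_xy[:, 0], ideal_xy[:, 1], detected_xy[:, 1], function='thin_plate')
    map_x = fx(grid_x, grid_y).astype(np.float32)   # in practice one would evaluate
    map_y = fy(grid_x, grid_y).astype(np.float32)   # on a coarse grid and upsample
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```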
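
Finally, a sketch of the narrow-band reconstruction: from each rectified frame only a band of rows within a chosen half-width of the row crossed by the source-to-detector normal is kept, and each band is written into the long composite at the longitudinal position given by the target-derived poses, averaging where bands overlap. The row-wise scan geometry, the argument names and the averaging rule are assumptions made for clarity, not values or choices taken from the patent.

```python
# Illustrative sketch only: assemble the large-area image from the low-distortion
# band of rows around the beam normal in each frame.
import numpy as np


def build_composite(frames, centre_rows, composite_rows, band_half_width, out_length):
    """frames: list of equally sized 2D arrays, already de-warped and rectified;
    centre_rows[i]: row of frame i crossed by the source-to-detector normal;
    composite_rows[i]: row of the composite corresponding to that normal, known
    from the 3D targets; band_half_width: accepted half-height of the band (pixels)."""
    width = frames[0].shape[1]
    composite = np.zeros((out_length, width), dtype=np.float64)
    hits = np.zeros((out_length, width), dtype=np.float64)

    for frame, centre, comp_row in zip(frames, centre_rows, composite_rows):
        lo = max(centre - band_half_width, 0)
        hi = min(centre + band_half_width + 1, frame.shape[0])
        band = frame[lo:hi].astype(np.float64)
        dst = comp_row - (centre - lo)              # composite row of the band's first row
        dst_lo, dst_hi = max(dst, 0), min(dst + band.shape[0], out_length)
        if dst_hi <= dst_lo:
            continue                                # band falls outside the composite
        src_lo = dst_lo - dst
        composite[dst_lo:dst_hi] += band[src_lo:src_lo + (dst_hi - dst_lo)]
        hits[dst_lo:dst_hi] += 1.0

    return composite / np.maximum(hits, 1.0)        # average where bands overlap
```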

Abstract

The invention concerns a method for accurately generating a stitched, large-area, two-dimensional image from multiple intra-operative digital projection images of more limited areal extent. The position and orientation of each of the smaller-area images are known by means of image-processing analysis of the images of a series of three-dimensional targets, such that the position of each image is known independently of any knowledge of the pose of the imaging system. The mutual positions of adjacent images are known by using the analyzed position of a target appearing in adjacent images. To increase the accuracy of the stitched image, only a limited section of each frame of the constituent images, close to the normal of the beam to the imaged plane, is used to construct the large-scale composite image, since those regions of the image close to the normal have the lowest level of image distortion.
PCT/IL2019/050269 2018-03-11 2019-03-11 Large area orthopedic imaging method WO2019175865A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862641346P 2018-03-11 2018-03-11
US62/641,346 2018-03-11

Publications (1)

Publication Number Publication Date
WO2019175865A1 (fr) 2019-09-19

Family

ID=67907548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2019/050269 WO2019175865A1 (fr) 2018-03-11 2019-03-11 Large area orthopedic imaging method

Country Status (1)

Country Link
WO (1) WO2019175865A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050104018A1 (en) * 2000-12-21 2005-05-19 Chang Yun C. Method and system for acquiring full spine and full leg images using flat panel digital radiography
US8699787B2 (en) * 2007-06-29 2014-04-15 Three Pixels Wide Pty Ltd. Method and system for generating a 3D model from images
US20110188726A1 (en) * 2008-06-18 2011-08-04 Ram Nathaniel Method and system for stitching multiple images into a panoramic image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230012344A * 2021-07-15 2023-01-26 Vieworks Co., Ltd. Multi-panel detector and imaging system including the same
KR102619944B1 * 2021-07-15 2024-01-02 Vieworks Co., Ltd. Multi-panel detector and imaging system including the same
US11986332B2 (en) 2021-07-15 2024-05-21 Vieworks Co., Ltd. Multi-panel detector and imaging system including the same
WO2023077367A1 (fr) * 2021-11-04 2023-05-11 Shenzhen Xpectvision Technology Co., Ltd. Procédés d'imagerie avec réduction des effets de caractéristiques dans un système d'imagerie

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19767428

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19767428

Country of ref document: EP

Kind code of ref document: A1