WO2016045574A1 - System and method for image composition - Google Patents

System and method for image composition

Info

Publication number
WO2016045574A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, images, sub, preliminary, overlapping
Prior art date
Application number
PCT/CN2015/090265
Other languages
French (fr)
Inventor
Wenjun Yu
Xiangcui JIN
Yang Hu
Haifeng Xiao
Hongwei Chen
Wei Wang
Original Assignee
Shanghai United Imaging Healthcare Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201410487252.1A external-priority patent/CN104268846B/en
Priority claimed from CN201410508290.0A external-priority patent/CN104287756B/en
Application filed by Shanghai United Imaging Healthcare Co., Ltd. filed Critical Shanghai United Imaging Healthcare Co., Ltd.
Priority to EP15843195.7A priority Critical patent/EP3161785B1/en
Priority to GB1704042.9A priority patent/GB2545588B/en
Priority to US15/081,892 priority patent/US9582940B2/en
Publication of WO2016045574A1 publication Critical patent/WO2016045574A1/en
Priority to US15/394,923 priority patent/US9824503B2/en
Priority to US15/662,285 priority patent/US10354454B2/en
Priority to US16/511,224 priority patent/US10614634B2/en

Classifications

    • A61B5/704 Means for positioning the patient: tables
    • A61B6/032 Transmission computed tomography [CT]
    • A61B6/037 Emission tomography
    • A61B6/4452 Source unit and detector unit able to move relative to each other
    • A61B6/4464 Source unit or detector unit mounted to ceiling
    • A61B6/4476 Motor-assisted motion of the source unit
    • A61B6/467 Interfacing with the operator or the patient by special input means
    • A61B6/504 Diagnosis of blood vessels, e.g. by angiography
    • A61B6/5205 Processing of raw data to produce diagnostic data
    • A61B6/5241 Combining overlapping images of the same imaging modality, e.g. by stitching
    • A61B8/5207 Ultrasound: processing of raw data to produce diagnostic data
    • A61B8/5238 Ultrasound: combining image data of a patient
    • G06F18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06T5/50 Image enhancement or restoration using two or more images
    • G06T7/30 Determination of transform parameters for the alignment of images (image registration)
    • G06T7/32 Image registration using correlation-based methods
    • G06T19/20 Editing of 3D images, e.g. aligning objects or positioning parts
    • G16H30/20 ICT for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H30/40 ICT for processing medical images, e.g. editing
    • H04N5/32 Transforming X-rays
    • H04N23/30 Generating image signals from X-rays
    • H04N23/69 Control of means for changing the angle of the field of view, e.g. optical zoom
    • G06T2207/10081 CT; G06T2207/10088 MRI; G06T2207/10104 PET; G06T2207/10108 SPECT; G06T2207/10116 X-ray image; G06T2207/10132 Ultrasound image; G06T2207/10136 3D ultrasound image
    • G06T2207/20221 Image fusion; image merging
    • G06T2211/404 Computed tomography: angiography

Definitions

  • the present disclosure generally relates to image processing, and more particularly, to a system and method for combining sub-images into a composite image.
  • Medical imaging techniques, such as X-ray, magnetic resonance imaging (MRI), and computed tomography (CT), are widely used for disease diagnosis.
  • a region of interest such as one or more blood vessels in a limb, a spinal column, or a portion thereof, may be visualized using one or more of the techniques mentioned above.
  • when performing an imaging operation on a region of interest whose size is larger than the field of view (FOV) of an imaging device (for example, a CT scanner, an X-ray scanner, an MRI scanner, or a MicroCT scanner), a single imaging operation may be inadequate to obtain an image of the entire region of interest; merely a portion of the region of interest may be included in an image. Under such a circumstance, multiple imaging operations may need to be performed on the region of interest to generate a series of sub-images, each of which covers only a portion of the region of interest. By combining the sub-images, a composite image covering the entire region of interest may be generated.
  • in a radiation-based imaging technique, such as X-ray or CT, radiation damage may occur due to extended exposure of a region of interest (for example, a human body or a portion thereof) to radiation.
  • an image composition system may include a parameter setting engine, an acquisition engine, an image processing engine, and a storage engine.
  • the parameter setting engine may be configured to set one or more parameters relating to, for example, image acquisition, image processing, or the like, or any combination thereof.
  • the acquisition engine may be configured to retrieve a first sub-image and a second sub-image; the first sub-image and the second sub-image may correspond to three-dimensional (3D) volume data.
  • the image processing engine may be configured to retrieve a first overlapping image from the first sub-image, retrieve a second overlapping image from the second sub-image, generate a first two-dimensional (2D) projection image and a first pixel map based on maximum intensity projection of the first overlapping image onto a plane, generate a second 2D projection image and a second pixel map based on maximum intensity projection of the second overlapping image onto the plane, perform 2D registration based on the first 2D projection image, the first pixel map, the second 2D projection image, and the second pixel map, perform 3D registration based on the 2D registration, the first pixel map and the second pixel map, identify a correlation between the first sub-image and the second sub-image based on the 2D registration or the 3D registration, and fuse the first overlapping image and the second overlapping image based on the correlation to provide a composite image.
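The final fusion step in the pipeline above can be illustrated with a simple linear blend of the two registered overlapping volumes. This is a sketch under an assumption: a distance-weighted average across the overlap is one common choice, not necessarily the exact fusion rule of the disclosure.

```python
import numpy as np

def fuse_overlap(overlap_a, overlap_b, axis=0):
    """Blend two registered overlapping volumes with weights that ramp
    linearly along `axis`, so the result transitions smoothly from
    sub-image A to sub-image B. Illustrative only."""
    n = overlap_a.shape[axis]
    # Weight for A falls from 1 to 0 across the overlap; B gets the rest.
    w = np.linspace(1.0, 0.0, n)
    shape = [1] * overlap_a.ndim
    shape[axis] = n
    w = w.reshape(shape)
    return w * overlap_a + (1.0 - w) * overlap_b

a = np.full((4, 2, 2), 100.0)  # overlap region from the first sub-image
b = np.full((4, 2, 2), 200.0)  # same region from the second sub-image
fused = fuse_overlap(a, b)
# The first slice is taken entirely from A, the last entirely from B.
```

The ramp direction (`axis=0`) would in practice be the direction along which the two sub-images abut.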
  • a method may include one or more of the following operations.
  • a first sub-image and a second sub-image may be retrieved. Both the first sub-image and the second sub-image may correspond to 3D volume data.
  • a first overlapping image may be retrieved from the first sub-image, and a second overlapping image may be retrieved from the second sub-image.
  • a first 2D projection image and a first pixel map may be generated based on maximum intensity projection of the first overlapping image onto a plane.
  • a second 2D projection image and a second pixel map may be generated based on maximum intensity projection of the second overlapping image onto the plane.
  • Two-dimensional registration may be performed based on the first 2D projection image, the first pixel map, the second 2D projection image, and the second pixel map.
  • Three-dimensional registration may be performed based on the 2D registration, the first pixel map, and the second pixel map.
  • a correlation between the first sub-image and the second sub-image may be identified based on the 2D registration or the 3D registration.
  • the first overlapping image and the second overlapping image may be fused based on the correlation to provide a composite image.
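The projection-and-pixel-map operations above can be sketched with NumPy: the maximum intensity projection is a maximum taken along the projection axis, and the pixel map records, for each projected pixel, the depth at which that maximum occurred. Function and array names are illustrative.

```python
import numpy as np

def mip_with_pixel_map(volume, axis=0):
    """Maximum intensity projection of a 3D volume onto the plane
    perpendicular to `axis`, plus a pixel map recording, for each
    projected pixel, the index (depth) at which the maximum occurred."""
    projection = volume.max(axis=axis)
    pixel_map = volume.argmax(axis=axis)  # depth of the maximum per pixel
    return projection, pixel_map

vol = np.zeros((5, 3, 3))
vol[2, 1, 1] = 7.0  # a bright voxel at depth index 2
proj, pmap = mip_with_pixel_map(vol, axis=0)
```

Projecting along axis 0 of a coronally-stacked volume would yield a 2D coronal projection; the same function applies to sagittal or transverse planes by changing `axis`.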
  • the first sub-image or the second sub-image may be, for example, a 3D image, a 2D image, or the like, or a combination thereof.
  • the 3D images may be 3D-DSA images.
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • the plane may include a coronal plane, a sagittal plane, or a transverse plane.
  • the first sub-image or the second sub-image may be retrieved by using DSA (digital subtraction angiography), CT (computed tomography), CTA (computed tomography angiography), PET (positron emission tomography), X-ray, MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), SPECT (single-photon emission computerized tomography), or US (ultrasound scanning).
  • the first overlapping image and the second overlapping image may be retrieved by using Digital Imaging and Communication in Medicine (DICOM) .
  • the DICOM tag (0020,0032) (Image Position (Patient)) may be used to retrieve the first overlapping image and the second overlapping image.
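In practice the tag would be read with a DICOM library such as pydicom; the sketch below skips file I/O and assumes the z positions, slice counts, and slice spacing have already been extracted (the numbers are made up). It shows how the positions from tag (0020,0032) determine which slices two stacks share.

```python
def overlap_slices(z0_a, n_a, z0_b, n_b, dz):
    """Given the starting z position (from DICOM tag (0020,0032),
    Image Position (Patient)), the slice count, and the slice spacing
    of two stacks, return the z range covered by both stacks, or None
    if they do not overlap."""
    end_a = z0_a + (n_a - 1) * dz
    end_b = z0_b + (n_b - 1) * dz
    lo, hi = max(z0_a, z0_b), min(end_a, end_b)
    return (lo, hi) if lo <= hi else None

# Stack A covers z = 0 .. 99 mm; stack B covers z = 80 .. 179 mm.
rng = overlap_slices(0.0, 100, 80.0, 100, 1.0)
```

The returned range identifies the overlapping images to extract from each sub-image before projection and registration.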
  • an offset may be generated by performing 2D registration; the offset may include, for example, an X offset, a Y offset, a Z offset, a coronal offset, a sagittal offset, or a transverse offset.
  • another offset may be generated by performing 3D registration.
  • the offset may be in the direction perpendicular to the plane onto which the overlapping images have been projected to generate the 2D projection images.
  • the offset may include, for example, an X offset, a Y offset, a Z offset, a coronal offset, a sagittal offset, or a transverse offset.
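One minimal way to obtain the in-plane offsets is a brute-force search over integer shifts of one projection against the other, consistent with the exhaustive method named among the registration algorithms. The sum-of-squared-differences similarity measure here is an illustrative assumption, not necessarily the measure used in the disclosure.

```python
import numpy as np

def register_2d(ref, mov, max_shift=3):
    """Exhaustively search integer (dy, dx) shifts of `mov` against
    `ref`, minimizing the mean squared difference over the region where
    the shifted images overlap. Returns the best (dy, dx)."""
    best, best_err = (0, 0), np.inf
    h, w = ref.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows of ref and of mov shifted by (dy, dx).
            ys = slice(max(0, dy), min(h, h + dy))
            xs = slice(max(0, dx), min(w, w + dx))
            ys_m = slice(max(0, -dy), min(h, h - dy))
            xs_m = slice(max(0, -dx), min(w, w - dx))
            diff = ref[ys, xs] - mov[ys_m, xs_m]
            err = np.mean(diff ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((9, 9))
ref[4, 4] = 1.0        # feature in the reference projection
mov = np.zeros((9, 9))
mov[3, 3] = 1.0        # same feature, shifted up-left by one pixel
shift = register_2d(ref, mov)  # mov must move down-right by (1, 1)
```

On real projections the search window would be larger and the images would first be coarsely aligned from the known table positions.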
  • a fine registration may be performed based on the 2D registration and/or the 3D registration discussed elsewhere.
  • the fine registration may be based on an algorithm including, for example, recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide-and-conquer algorithm, a dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or a combination thereof.
  • the second pixel map may include a calibrated pixel map based on the 2D registration. Specifically, the second pixel map may be calibrated based on the 2D offsets to generate the calibrated pixel map. In some embodiments, one of the first pixel map and the second pixel map may be a reference pixel map, and the other pixel map may be a floating pixel map.
  • the first pixel map may include information identifying the location of maximum intensity of the first overlapping image, and the location may be in a direction perpendicular to a plane onto which the first overlapping image is projected.
  • the second pixel map may include information identifying the location of maximum intensity of the second overlapping image, and the location may be in a direction perpendicular to the same plane.
  • the 3D registration may include: calculating a plurality of differences in the locations in the direction perpendicular to the plane between the first pixel map and the second pixel map, each of the plurality of differences corresponding to a pixel within the plane; comparing the plurality of differences to obtain a probability of each difference; and designating an offset in the direction perpendicular to the plane based on the probability.
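The three operations above can be sketched directly: subtract the two pixel maps, count how often each per-pixel difference occurs, and designate the most frequent difference as the offset perpendicular to the projection plane. The optional vessel mask is an assumption added for illustration.

```python
import numpy as np

def depth_offset(pixel_map_a, pixel_map_b, mask=None):
    """Estimate the offset perpendicular to the projection plane as the
    most frequent difference between the depths recorded in the two
    pixel maps, optionally restricted by a boolean `mask` (e.g. to
    vessel pixels only)."""
    diff = (pixel_map_a - pixel_map_b).ravel()
    if mask is not None:
        diff = diff[mask.ravel()]
    # Count occurrences of each difference; the mode is the most
    # probable perpendicular offset.
    values, counts = np.unique(diff, return_counts=True)
    return int(values[np.argmax(counts)])

a = np.array([[5, 5, 6], [5, 5, 5]])
b = np.array([[2, 2, 3], [2, 9, 2]])
off = depth_offset(a, b)  # most pixels differ by 3
```

Taking the mode rather than the mean makes the estimate robust to pixels where the maximum came from unrelated structures.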
  • the image composition system may include an imaging device and a processor.
  • the imaging device may include an X-radiation source and a radiation detector.
  • the processor may include a parameter setting engine, a control engine and an image processing engine.
  • the parameter setting engine may be configured to set a plurality of parameters relating to the X-radiation source or the radiation detector based on one or more preliminary parameters.
  • the control engine may be configured to control a motion of the X-radiation source or a motion of the radiation detector to capture a plurality of sub-images.
  • the image processing engine may be configured to combine the plurality of sub-images.
  • the method may include: setting a plurality of parameters relating to the X-radiation source or the radiation detector based on one or more preliminary parameters; controlling, based on at least one of the plurality of parameters, a motion of the X-radiation source or a motion of the radiation detector to capture a plurality of sub-images; and combining the plurality of sub-images.
  • the preliminary parameters may include at least one of a dimension of an exposure region, a number of exposures, an overlapping region between two adjacent exposures, a starting position of an effective light field, an ending position of the effective light field, or a height of the effective light field.
  • a plurality of secondary parameters may be obtained based on one or more preliminary parameters.
  • the secondary parameters may include at least one of a dimension of an exposure region, a number of exposures, an overlapping region between two adjacent exposures, a starting position of an effective light field, an ending position of the effective light field, or a height of the effective light field.
  • the difference between the secondary number of exposures and the preliminary number of exposures may be less than 1. In some embodiments, the secondary exposure region may be equal to or smaller than the preliminary exposure region.
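One plausible reading of the preliminary-to-secondary adjustment, consistent with the constraints stated above, is: round the exposure count up to an integer, then enlarge the overlap so that the integer number of effective light fields exactly covers the exposure region. The formula below is an assumption for illustration, not quoted from the disclosure.

```python
import math

def exposure_plan(region_len, field_height, min_overlap):
    """From a preliminary exposure-region length, effective light
    field height, and minimum overlap, derive a secondary plan: an
    integer exposure count and the overlap that makes the fields
    exactly cover the region. Assumed reading, not the patent's formula."""
    step = field_height - min_overlap
    n = math.ceil((region_len - min_overlap) / step)
    if n <= 1:
        return 1, 0.0  # one field already covers the whole region
    # Enlarging the overlap keeps n unchanged while the n fields
    # exactly span the region: n*h - (n-1)*overlap == region_len.
    overlap = (n * field_height - region_len) / (n - 1)
    return n, overlap

# 1000 mm region, 400 mm field, at least 50 mm overlap between fields.
n, ov = exposure_plan(region_len=1000.0, field_height=400.0, min_overlap=50.0)
```

Here three 400 mm fields with 100 mm overlaps cover exactly 1000 mm, and the secondary overlap (100 mm) is no smaller than the preliminary one (50 mm).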
  • the imaging device may be configured according to one or more of the preliminary parameters. In some embodiments, the imaging device may be configured according to one or more of the secondary parameters.
  • the X-radiation source may include a tube configured to generate a beam of one or more X-rays, and a beam limiting device mounted proximal to the X-radiation source.
  • the beam limiting device may function to define the beam of one or more X-rays generated by the tube.
  • the height of the effective light field may be equal to the product of the opening of the beam limiting device in the vertical direction and a constant k.
  • the secondary height of the effective light field may be equal to or smaller than the preliminary height of the effective light field.
  • the tube, the beam limiting device, and the radiation detector may be positioned according to the preliminary parameters at an exposure.
  • the tube, the beam limiting device, and the radiation detector may be positioned according to the secondary parameters at an exposure.
  • the tube (or an X-radiation source) and the radiation detector may move simultaneously and/or in a synchronized fashion. In some embodiments, the tube (or an X-radiation source) and the radiation detector may move one after another.
  • FIG. 1 is a block diagram depicting an image composition system according to some embodiments of the present disclosure
  • FIG. 2 is a block diagram depicting a processor according to some embodiments of the present disclosure
  • FIG. 3 is a flowchart illustrating a workflow for image processing according to some embodiments of the present disclosure
  • FIG. 4 is a block diagram illustrating an architecture of a parameter setting engine according to some embodiments of the present disclosure
  • FIG. 5 is a flowchart of an exemplary process for setting parameters according to some embodiments of the present disclosure
  • FIG. 6 is a flowchart of another exemplary process for setting parameters according to some embodiments of the present disclosure.
  • FIG. 7 illustrates a schematic view of the Left, Posterior, Superior (LPS) coordinate system used in connection with some embodiments of the present disclosure
  • FIGs. 8 to 10 illustrate exemplary imaging systems according to some embodiments of the present disclosure
  • FIG. 11 illustrates a process for determining the number of exposures according to some embodiments of the present disclosure
  • FIG. 12 illustrates a schematic view of the tube rotation angle corresponding to the nth exposure according to some embodiments of the present disclosure
  • FIG. 13 is a block diagram illustrating an image processing engine according to some embodiments of the present disclosure.
  • FIG. 14 is a flowchart illustrating a workflow of image processing according to some embodiments of the present disclosure.
  • FIG. 15 is a block diagram illustrating a registration module according to some embodiments of the present disclosure.
  • FIG. 16 is a flowchart illustrating a registration process according to some embodiments of the present disclosure.
  • FIG. 17 is a block diagram of a registration module according to some embodiments of the present disclosure.
  • FIG. 18 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure.
  • FIG. 19 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure.
  • FIG. 20 illustrates 2D images and corresponding pixel maps according to some embodiments of the present disclosure
  • FIG. 21 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure.
  • FIG. 22 is a flowchart illustrating a process of N sub-images registration according to some embodiments of the present disclosure
  • FIGs. 23A-23I illustrate exemplary 2D images of blood vessels according to some embodiments of the present disclosure
  • FIGs. 24A-24D illustrate 2D coronal images of blood vessels obtained by applying different methods according to some embodiments of the present disclosure
  • FIGs. 25A-25D illustrate 2D coronal images of blood vessels obtained by applying different methods according to some embodiments of the present disclosure
  • FIGs. 26A-26D illustrate 2D coronal images of blood vessels obtained by applying different methods according to some embodiments of the present disclosure
  • FIGs. 27A and 27B illustrate composed coronal images from different view angles according to some embodiments of the present disclosure
  • FIGs. 28A and 28B illustrate composed coronal images from different view angles according to some embodiments of the present disclosure.
  • FIGs. 29A and 29B illustrate composed coronal images from different view angles according to some embodiments of the present disclosure.
  • The terms “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, these terms may be replaced by other expressions if they achieve the same purpose.
  • FIG. 1 illustrates a block diagram of an image composition system 100 according to some embodiments of the present disclosure.
  • the image composition system 100 may include an imaging device 101, a processor 102, a terminal 103, a display 104, and a database 105.
  • the imaging device 101 may be configured to generate or provide one or more images of a region of interest.
  • the imaging device 101 may include an X-radiation source and a radiation detector.
  • the images may be three-dimensional (3D) images, two-dimensional (2D) images, or the like, or a combination thereof.
  • the 3D images may be three-dimensional digital subtraction angiography images (3D-DSA images) .
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • an overlapping image may depict an overlapping region of a number of sub-images regardless of whether they are successive or not.
  • an overlapping image may depict an overlapping region of two successive sub-images.
  • each of the sub-images may include an overlapping image.
  • An overlapping image may be part of a sub-image.
  • a sub-image may refer to an image of a portion of a region of interest.
  • a set of 3D volume data may correspond to a stack of 2D images.
  • a 2D image may be referred to as a slice.
  • a set of 3D volume data may correspond to a stack of 2D images in the coronal plane.
  • such a stack of 2D images may be referred to as a 3D coronal image.
  • a same set of 3D volume data may correspond to different stacks of 2D images in different planes.
  • a same set of 3D volume data may correspond to a stack of 2D images in the coronal plane, and also a stack of 2D images in the transverse plane.
  • a coronal plane and a transverse plane may be found elsewhere in the present disclosure. See, for example, FIG. 7 and the description thereof.
  • the overlapping region corresponding to a set of 3D volume data may be depicted by a stack of 2D overlapping images.
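The correspondence between one set of 3D volume data and different stacks of 2D slices can be sketched as follows. This is a hypothetical NumPy illustration, not part of the disclosure; it assumes the volume is indexed (Z, Y, X) following the LPS convention described in connection with FIG. 7.

```python
import numpy as np

# Hypothetical 3D volume indexed (z, y, x): z runs inferior->superior,
# y runs anterior->posterior, x runs right->left (LPS-style ordering).
volume = np.arange(4 * 3 * 2).reshape(4, 3, 2)

# Fixing z yields a transverse (axial) 2D slice in the X-Y plane.
transverse_stack = [volume[z, :, :] for z in range(volume.shape[0])]

# Fixing y yields a coronal 2D slice in the X-Z plane.
coronal_stack = [volume[:, y, :] for y in range(volume.shape[1])]

# The same volume data thus corresponds to different stacks of 2D images:
# four transverse slices of shape (3, 2), or three coronal slices of shape (4, 2).
```

The same indexing applied to only the overlapping rows of the volume would yield the stack of 2D overlapping images described above.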
  • the imaging device 101 may utilize a technique including, for example, digital subtraction angiography (DSA), computed tomography (CT), computed tomography angiography (CTA), positron emission tomography (PET), X-ray, digital radiography (DR), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), single-photon emission computed tomography (SPECT), ultrasound scanning (US), or the like, or a combination thereof.
  • the processor 102 may be configured to process the images acquired by the imaging device 101 or retrieved from another source (for example, an imaging device, a database or storage, or the like, or a combination thereof) .
  • the processor 102 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an Acorn reduced instruction set computing (RISC) machine (ARM), or the like, or any combination thereof.
  • the processor 102 may generate a control signal relating to the configuration of the imaging device 101.
  • the terminal 103 may communicate with the processor 102 and allow one or more operators to control the production and/or display of images on the display 104.
  • the terminal 103 may include an input device, a control panel (not shown in the figure) , etc.
  • the input device may be a keyboard, a touch screen, a mouse, a remote controller, or the like, or any combination thereof.
  • An input device may include alphanumeric and other keys, and input may be provided via a keyboard, a touch screen (for example, with haptics or tactile feedback), a speech input, an eye tracking input, a brain monitoring system, or any other comparable input mechanism.
  • the input information received through the input device may be communicated to the processor 102 via, for example, a bus, for further processing.
  • A cursor control device, such as a mouse, a trackball, or cursor direction keys, may communicate direction information and command selections to, for example, the processor 102, and control cursor movement on the display 104 or another display device.
  • the display 104 may be configured to display information. Exemplary information may include, for example, an image before and/or after image processing, a request for input or parameter relating to image acquisition and/or processing, or the like, or a combination thereof.
  • the display device may include a liquid crystal display (LCD) , a light emitting diode (LED) -based display, a flat panel display or curved screen (or television) , a cathode ray tube (CRT) , or the like, or a combination thereof.
  • the database 105 may be configured to store images and/or relevant information or parameters.
  • Exemplary parameters may include an exposure region, the number of exposures, the overlapping region between two adjacent (or successive) exposures, the starting position of the effective light field, the ending position of the effective light field, the height of the effective light field, or the like, or a combination thereof.
  • the imaging device 101, the processor 102, the terminal 103, the display 104, and the database 105 may communicate with each other via a network.
  • FIG. 2 is a block diagram of the processor 102 according to some embodiments of the present disclosure.
  • the processor 102 may include a parameter setting engine 201, an acquisition engine 202, an image processing engine 203, and a storage engine 204.
  • the parameter setting engine 201 may be configured to set one or more parameters relating to, for example, image acquisition, image processing, or the like, or a combination thereof. Exemplary parameters may include an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the ending position of the effective light field, the height of the effective light field, or the like, or a combination thereof.
  • the processor 102 may include a control engine (not shown in the figure) .
  • the control engine may control one or more components of the image composition system 100 based on one or more parameters provided by the parameter setting engine 201.
  • the control engine may control the motion of one or more components of the imaging device 101 including the X-radiation source and/or the radiation detector.
  • the acquisition engine 202 may be configured to acquire one or more images.
  • the image (s) may be obtained by the imaging device 101 or retrieved from another source (for example, an imaging device, a database or storage, or the like, or a combination thereof) .
  • Exemplary images may include a composite image, sub-images of a region of interest (acquired through, for example, a series of scans of a region of interest) , overlapping images of the sub-images, or the like, or a combination thereof.
  • a composite image may refer to an image of an entire region of interest.
  • a composite image may be constructed by way of combining a plurality of sub-images.
  • the combination may be achieved by fusing the overlapping images of two sub-images, for example, two adjacent sub-images. Two sub-images may be considered adjacent or successive if they depict adjoining portions of a region of interest.
  • the image processing engine 203 may be configured to process images acquired by the acquisition engine 202.
  • the processing may include, for example, calculating the number of exposures, performing registration of images (for example, registration of overlapping images) , fusing overlapping images, combining sub-images, or the like, or a combination thereof.
  • the registration may include 2D registration, 3D registration, or the like, or a combination thereof.
  • the storage engine 204 may be configured to store images and/or relevant information or parameters.
  • FIG. 3 is a flowchart illustrating a process of image processing according to some embodiments of the present disclosure.
  • one or more parameters may be set. Exemplary parameters may include an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the ending position of the effective light field, the height of the effective light field, or the like, or a combination thereof.
  • one or more images may be obtained. The images may include a composite image, sub-images of a region of interest (acquired through, for example, a series of scans of the region of interest) , overlapping images of the sub-images, or the like, or a combination thereof.
  • the images obtained in step 302 may be processed in step 303.
  • Exemplary processing may include calculating the number of exposures, performing registration of images (for example, overlapping images) , fusing the overlapping images of sub-images to generate a composite image, or the like, or a combination thereof.
  • the registration may include 2D registration, 3D registration, or the like, or a combination thereof.
  • a composite image may be generated by way of combining a plurality of sub-images.
  • the sub-images may be acquired using an imaging device or an imaging system according to one or more parameters.
  • at least some of the parameters relating to the configuration of the imaging device or imaging system may be adjusted for individual patients.
  • a user or operator (for example, a healthcare provider, an imaging specialist, etc.) may set or adjust parameters including, for example, an exposure region, the overlapping region between adjacent exposures, the number of exposures, or the like, or a combination thereof.
  • a composite image of blood vessels in a lower limb of a patient may be obtained by way of image processing.
  • An exemplary image processing procedure may include acquiring a plurality of DSA sub-images of the blood vessels of the lower limb, performing 2D registration and/or 3D registration of overlapping images of adjacent DSA sub-images, and fusing the overlapping images based on the 2D registration and/or 3D registration to combine the DSA sub-images.
  • a composite image may be obtained by adjusting one or more parameters relating to the configuration of an imaging device or imaging system, acquiring a plurality of sub-images, and processing the acquired sub-images based on 2D registration and/or 3D registration of overlapping images of sub-images.
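As an illustration only (not the registration or fusion method of the disclosure), the combine-by-fusing idea for two adjacent sub-images can be sketched as follows. The sketch assumes 2D sub-images whose rows run along the length of the patient, a known nominal overlap, an exhaustive shift search as a stand-in for registration, and a linear (feathered) blend as a stand-in for fusion.

```python
import numpy as np

def compose_two(sub_a, sub_b, nominal_overlap, search=2):
    """Combine two adjacent sub-images (2D arrays whose rows run along the
    length direction) by registering and fusing their overlapping regions.

    Hypothetical sketch: registration is an exhaustive search for the row
    shift minimizing the mean squared difference over the overlap; fusion
    is a linear (feathered) blend across the registered overlap.
    """
    best_shift, best_err = 0, np.inf
    for shift in range(-search, search + 1):
        ov = nominal_overlap + shift
        if ov <= 0:
            continue
        err = np.mean((sub_a[-ov:] - sub_b[:ov]) ** 2)
        if err < best_err:
            best_err, best_shift = err, shift
    ov = nominal_overlap + best_shift
    # Feathered fusion: the weight ramps from sub_a to sub_b over the overlap.
    w = np.linspace(0.0, 1.0, ov)[:, None]
    fused = (1.0 - w) * sub_a[-ov:] + w * sub_b[:ov]
    # The composite stitches the non-overlapping parts around the fused region.
    return np.vstack([sub_a[:-ov], fused, sub_b[ov:]])
```

For example, two 10-row sub-images whose last and first three rows depict the same region compose into a single 17-row image.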
  • FIG. 4 is a block diagram illustrating an architecture of a parameter setting engine 201 according to some embodiments of the present disclosure where a composite image may be acquired by adjusting the parameters relating to the configuration of an imaging device or an imaging system.
  • the parameter setting engine 201 may be connected to or otherwise communicate with, for example, the acquisition engine 202, the image processing engine 203, and the storage engine 204.
  • the parameter setting engine 201 may be connected to or communicate with the imaging device 101, the display 104, the terminal 103, the database 105, or the like, or a combination thereof. At least some of the connection or communication may be achieved via a wired connection, or wirelessly.
  • the parameter setting engine 201 may be configured to set or adjust one or more parameters relating to the configuration of the imaging device or the imaging system.
  • the parameter setting engine 201 may include a preliminary calculation module 401 and a secondary calculation module 402.
  • the parameter setting engine 201 may further include an acquisition module (not shown in the figure) configured to acquire information at least part of which may be used by another component of the parameter setting engine 201 or the image composition system 100.
  • the parameter setting engine 201 may further include a storage module (not shown in the figure) configured to store the parameters calculated by the preliminary calculation module 401 and the secondary calculation module 402, or used in the calculation.
  • The examples of the parameter setting engine 201 described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
  • the preliminary calculation module 401 may be configured to estimate or calculate one or more preliminary parameters.
  • the preliminary parameters may be used to configure the imaging device 101.
  • a non-exclusive list of preliminary parameters may include: an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc.
  • the effective light field may refer to the light field received by the detector that may be effective to generate image data.
  • the height of the effective light field may correlate to the opening of a beam limiting device in the length direction (for example, along the direction from the head to the feet of a patient, or vice versa) between a starting position and a stopping position.
  • the starting position of the effective light field may refer to the upper edge of the effective light field corresponding to the first exposure of a number of exposures;
  • the stopping position of the effective light field may refer to the lower edge of the effective light field corresponding to the last exposure of a number of exposures.
  • One or more preliminary parameters may be calculated using the preliminary calculation module 401.
  • the preliminary parameters may be calculated based on the initial parameters provided by the image composition system 100 during initialization.
  • a non-exclusive list of initial parameters that may be provided by the image composition system 100 during initialization may include: an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc.
  • the initial parameters may be stored in the imaging device 101 or the storage engine 204.
  • one or more preliminary parameters may be provided by a user.
  • the image composition system 100 may allow a user to designate a preliminary exposure region.
  • the preliminary exposure region may refer to the entire exposure region that a user may desire to image with respect to a target body (for example, a patient or a portion thereof) .
  • preliminary parameters may be calculated based on at least some of the user input or designation.
  • a preliminary number of exposures may be calculated based on the preliminary exposure region designated by the user.
  • the secondary calculation module 402 may be configured to estimate or calculate a secondary parameter according to at least some of the preliminary parameters calculated by the preliminary calculation module 401.
  • the secondary parameter may be used to configure the imaging device 101.
  • a non-exclusive list of secondary parameters may include: an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc.
  • the secondary parameters may include the same parameters as those included in the preliminary parameters.
  • a secondary parameter may be calculated based on the preliminary parameters from the preliminary calculation module 401.
  • the calculation of a secondary parameter may be performed based on a rule.
  • the rule may be that the number of exposures is an integer.
  • the preliminary number of exposures calculated by the preliminary calculation module 401 may be adjusted to provide a secondary number of exposures that is an integer. For instance, the preliminary number of exposures may be rounded to the preceding integer or the next integer, and other preliminary parameters may be adjusted accordingly to obtain one or more secondary parameters.
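A hypothetical numerical sketch of this adjustment follows. The formulas here are illustrative assumptions (a common way of tiling a region with overlapping fields), not the disclosure's actual calculation.

```python
import math

def secondary_number_of_exposures(region_length, field_height, min_overlap):
    """Illustrative sketch of the integer-number-of-exposures rule.

    The preliminary number of exposures needed to cover the region with at
    least `min_overlap` between adjacent exposures is generally fractional;
    rounding it up gives a secondary (integer) number, and the overlap is
    then enlarged accordingly so the same region remains covered.
    """
    # Preliminary (possibly fractional) number of exposures.
    preliminary = (region_length - min_overlap) / (field_height - min_overlap)
    secondary = max(1, math.ceil(preliminary))
    # Overlap implied by the integer exposure count covering the same region.
    if secondary > 1:
        overlap = (secondary * field_height - region_length) / (secondary - 1)
    else:
        overlap = 0.0
    return preliminary, secondary, overlap

# e.g. a 1000 mm region, a 430 mm effective light field, at least 50 mm overlap:
prelim, n, ov = secondary_number_of_exposures(1000, 430, 50)
```

With these example values, the preliminary number of exposures is 2.5, which rounds up to a secondary number of 3 exposures, and the overlapping region between adjacent exposures grows to 145 mm.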
  • FIG. 5 is a flowchart illustrating an exemplary process for setting parameters according to some embodiments of the present disclosure. It should be noted that the flowchart described below is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Consequently, for persons having ordinary skill in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the protection scope of the present disclosure.
  • one or more preliminary parameters may be acquired.
  • the acquisition of the preliminary parameter(s) may be performed by the preliminary calculation module 401.
  • the acquisition of the preliminary parameter(s) may be performed by the acquisition module of the parameter setting engine 201.
  • the preliminary parameter(s) may be acquired from the storage engine 204, or from the database 105.
  • the preliminary parameter(s) may be acquired from user input.
  • the acquired preliminary parameter(s) may be stored in the storage engine 204, or in the database 105.
  • a preliminary calculation may be performed to provide one or more preliminary parameters.
  • the preliminary calculation may be performed by the preliminary calculation module 401. In some embodiments, the preliminary calculation may be performed based on the preliminary parameter(s) acquired in step 501.
  • at least some of the preliminary parameter(s) may be outputted. At least some of the preliminary parameter(s) may be used to configure, for example, the imaging device 101, an imaging system, or a portion thereof.
  • at least some of the preliminary parameter(s) may be stored in the storage engine 204, or in the database 105.
  • a secondary calculation may be performed to provide one or more secondary parameters.
  • the step 503 of the secondary calculation may be optional.
  • the secondary calculation may be skipped, and the preliminary parameters may be outputted in step 504.
  • the secondary calculation may still be performed.
  • the secondary calculation may be performed in step 503 and one or more secondary parameters may be obtained.
  • the secondary calculation may be performed based on a rule. For instance, if the preliminary parameters do not satisfy a first rule, the secondary calculation may be performed according to a second rule.
  • the first rule and the second rule may be different or the same.
  • the second rule may be part of the first rule.
  • the secondary parameter(s) may be outputted. At least some of the secondary parameters may be used to configure, for example, the imaging device 101. In step 505, at least some of the secondary parameter(s) may be stored in the storage engine 204, or in the database 105.
  • FIG. 6 is a flowchart of an exemplary process for setting parameters according to some embodiments of the present disclosure. As illustrated, FIG. 6 shows an exemplary process for setting parameters according to some embodiments where an X-ray examination is desired. It should be noted that the flowchart described below is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Consequently, for persons having ordinary skill in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the protection scope of the present disclosure.
  • a preliminary device parameter may be calculated according to a preliminary exposure region.
  • Step 601 may be performed by the preliminary calculation module 401.
  • Exemplary preliminary device parameters may include the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc.
  • the preliminary exposure region may be designated by a user.
  • the preliminary exposure region may be provided during initialization of the imaging device.
  • a preliminary number of exposures may be obtained in step 602. The preliminary number of exposures may be obtained based on, for example, the preliminary exposure region.
  • the preliminary number of exposures may be obtained by the preliminary calculation module 401.
  • the obtained preliminary number of exposures may be compared to a rule.
  • the preliminary device parameters may be used to configure an imaging device, for example, the imaging device 101, as shown in step 606.
  • the rule specifies that the number of exposures is an integer. In some embodiments where the calculated preliminary number of exposures is an integer, one or more of the remaining steps illustrated in FIG. 6 may be skipped. In some embodiments where the calculated preliminary number of exposures is an integer, one or more of the remaining steps illustrated in FIG. 6 may still be performed. In some embodiments where the calculated preliminary number of exposures is an integer, one or more secondary parameters may be set equal to the preliminary parameters.
  • the preliminary number of exposures may be adjusted to provide a secondary number of exposures according to step 603.
  • the step 603 may be performed by the secondary calculation module 402.
  • the preliminary number of exposures may be rounded to the preceding integer or the next integer such that the difference between the secondary number of exposures and the preliminary number of exposures is less than 1.
  • the adjustment of the preliminary number of exposures may be performed in accordance with a threshold. See, for example, the description in connection with FIG. 11.
  • a secondary exposure region may be calculated based on the secondary number of exposures in step 604.
  • the secondary exposure region may be calculated such that the secondary exposure region is not larger than the preliminary exposure region. See, for example, the description in connection with FIG. 11.
  • a secondary device parameter may be calculated according to the secondary exposure region.
  • a non-exclusive list of secondary device parameters may include: the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc.
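One illustrative way such secondary device parameters could be derived is to space the effective light fields evenly across the secondary exposure region. The formula below is an assumption for illustration, not the disclosure's exact method.

```python
def light_field_positions(region_start, field_height, n_exposures, overlap):
    """Illustrative sketch: given the secondary number of exposures and the
    overlapping region between two adjacent exposures, compute the starting
    and stopping positions of the effective light field for each exposure."""
    step = field_height - overlap  # travel of the field between exposures
    positions = []
    for i in range(n_exposures):
        start = region_start + i * step
        positions.append((start, start + field_height))
    return positions

# Three 430 mm fields with a 145 mm overlap tile a 1000 mm exposure region:
fields = light_field_positions(0.0, 430.0, 3, 145.0)
```

Here the starting position of the effective light field (the upper edge for the first exposure) is 0 mm, and its stopping position (the lower edge for the last exposure) is 1000 mm, matching the secondary exposure region.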
  • the imaging device 101 may be configured according to the secondary device parameters and/or the secondary exposure region in step 606. In some embodiments, the imaging device 101 may be configured according to the preliminary parameters in step 606.
  • FIG. 7 illustrates a schematic view of the Left, Posterior, Superior (LPS) coordinate system used in connection with some embodiments of the present disclosure.
  • the X-Y-Z axes may define a three-dimensional space whose origin may be located within a target body, for example, within the chest of a patient.
  • One of the three axes may be perpendicular to the other two.
  • the positive direction of each one of the three axes is illustrated in FIG. 7.
  • the X-axis may point from the right towards the left with respect to the patient.
  • the Y-axis may point from the anterior towards the posterior, i.e., from the front towards the back, with respect to the patient.
  • the Z-axis may point from the inferior towards the superior, i.e., from the feet towards the head, with respect to the patient.
  • each two of the X axis, the Y axis, and the Z axis may define a plane perpendicular to the other planes defined by the other combinations of the three axes. Therefore, the three dimensional space defined by the X axis, the Y axis, and the Z axis may include three planes that may be used to describe an anatomical position of or within the patient. Particularly, the X axis and the Y axis may define a plane that may be referred to as the axial plane or transverse plane. The axial plane or transverse plane may separate the head (superior) from the feet (inferior) .
  • the axial plane or transverse plane may be substantially parallel to the ground when a patient is standing, for example, in front of an imaging device.
  • the axial plane or transverse plane may be substantially perpendicular to the ground when the patient is lying supine or prone on a table.
  • the X axis and the Z axis may define a plane that may be referred to as the coronal plane.
  • the coronal plane may separate the front (anterior) from the back (posterior) .
  • the coronal plane may be substantially perpendicular to the ground when a patient is standing, for example, in front of an imaging device.
  • the coronal plane may be substantially parallel to the ground when the patient is lying supine or prone on a table.
  • the Y axis and the Z axis may define a plane that may be referred to as the sagittal plane or longitudinal plane.
  • the sagittal plane or longitudinal plane may separate the left from the right.
  • the sagittal plane or longitudinal plane may be substantially perpendicular to the ground when a patient is standing, for example, in front of an imaging device.
  • the sagittal plane or longitudinal plane may be substantially perpendicular to the ground when the patient is lying supine or prone on a table.
  • the LPS coordinate system may be used in connection with the Digital Imaging and Communications in Medicine (DICOM) standard.
  • DICOM Digital Imaging and Communications in Medicine
  • The DICOM standard may include a file format definition and a network communication protocol. Some sections of the DICOM standard specify image plane module attributes and image plane attribute descriptions including, for example, image position and image orientation.
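For example, the DICOM Image Plane module records the position of a slice's first voxel (Image Position (Patient)) and six direction cosines of its rows and columns (Image Orientation (Patient)), both expressed in the LPS coordinate system. A short sketch (illustrative, with hypothetical values) of how a slice's normal direction can be derived from those cosines:

```python
import numpy as np

# Image Orientation (Patient): direction cosines of the first row and the
# first column of the image, expressed in the LPS coordinate system.
# For an axial (transverse) slice, rows run right->left (+X) and
# columns run anterior->posterior (+Y).
orientation = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0]

row_cosines = np.array(orientation[:3])
col_cosines = np.array(orientation[3:])

# The slice normal is the cross product of the row and column cosines;
# for this axial slice it is +Z (inferior->superior), so the stack of
# 2D slices is ordered from the feet towards the head.
normal = np.cross(row_cosines, col_cosines)
```

The same computation applied to coronal or sagittal orientation cosines yields the +Y or +X normals of the other two planes described above.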
  • a DICOM file may be exchanged between two entities that may receive images and/or patient data in DICOM format.
  • DICOM may enable the integration of scanners, servers, workstations, printers, and network hardware from multiple manufacturers into a picture archiving and communication system (PACS) .
  • DICOM is known as NEMA standard PS3, and as ISO standard 12052:2006, "Health informatics - Digital imaging and communication in medicine (DICOM) including workflow and data management." It should be noted that the standard described above is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Consequently, the teaching of the present disclosure may be used in connection with any standard with which it may comply, and for persons having ordinary skill in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the protection scope of the present disclosure.
  • FIG. 8 illustrates an imaging device 101 according to some embodiments of the present disclosure.
  • the imaging device 101 illustrated in FIG. 8 is an X-ray imaging device.
  • Non-exclusive examples of X-ray imaging devices that may be used in connection with some embodiments of the present disclosure include imaging devices used for computed tomography, fluoroscopy, radiography, etc.
  • the imaging device 101 may include a tube 801 that may generate a beam of X-ray used for imaging.
  • the tube 801 may constitute an X-radiation source.
  • the tube 801 may assume different configurations compatible with the present disclosure.
  • a non-exclusive list of exemplary tubes that may be used in connection with the present disclosure include a rotating anode tube, a solid-anode microfocus X-ray tube, a metal-jet-anode microfocus X-ray tube, etc.
  • The tubes that may be used in connection with the image composition system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
  • the tube 801 may be mounted proximal to a beam limiting device (not shown in the figure) .
  • the beam limiting device may function to define the beam of X-rays generated by the tube 801.
  • “to define” may mean either to cause the directions of at least some of the X-rays of the beam to align in a specific direction, or to define the spatial cross section of the beam.
  • a beam limiting device may filter a plurality of X-rays so that only those traveling in a specified direction may be allowed through the beam limiting device.
  • the width of the beam of X-rays generated by the tube 801 may be defined by the beam limiting device such that the height of the effective light field may equal the product of the opening of the beam limiting device in the vertical direction and a constant k.
  • a target body 802 may be placed on a table 803.
  • the target body 802 may be a patient.
  • the table 803 may slide or move in one and/or multiple directions.
  • the height of the table 803 may be adjusted.
  • the adjustment of the height of the table 803 may be realized by an upward and/or a downward movement of the table.
  • the height of the table 803 may be adjusted before an imaging operation commences.
  • the height of the table 803 may be adjusted before an exposure of an imaging operation is taken.
  • the adjustment of the height of the table 803 may accommodate target bodies of different sizes.
  • a detector 804 may be configured to detect an X-ray emitted from the tube 801 that passes through a target body.
  • the detector 804 may be placed underneath the table 803.
  • the detector 804 may be placed beneath the target body 802 and above the table 803.
  • the detector 804 may be placed inside the table 803 as long as it may receive X-ray signals.
  • the detector 804 may assume different configurations that may be compatible with the present disclosure.
  • a non-exclusive list of exemplary detectors that may be used in connection with the present disclosure includes: a gas ionization detector, a gas proportional detector, a multiwire and microstrip proportional chamber, a scintillation detector, an energy-resolving semiconductor detector, a current-mode semiconductor detector, a CCD detector, a microchannel plate detector, an image plate detector, an X-ray streak camera, a photographic film, and other X-ray detectors such as one operating at a superconducting temperature.
  • The detectors that may be used in connection with the image composition system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
  • the tube 801 may be placed at a first angle with reference to the axis perpendicular to the ground.
  • the detector 804 may be placed at a first position under the head (superior) of the human patient 802.
  • An X-ray beam generated by the tube 801 at the first angle may be received by the detector 804 placed at the first position under the head (superior) of the target body 802.
  • A first image may be generated for a first exposure area of the target body 802.
  • the tube 801 may rotate by a particular angle to be positioned at a second angle (as shown by the dashed line in the figure) with reference to the axis perpendicular to the ground.
  • the detector 804 may move a particular distance along the table towards the feet (inferior) of the target body 802 to be positioned at a second position (as shown by the dashed line in the figure), such that an X-ray beam generated by the tube 801 at the second angle may be received by the detector 804 in the second position.
  • A second image may be generated for a second exposure area of the target body 802.
  • the image composition system may repeat the process and generate a series of images.
  • two adjacent sub-images generated by the process described above may have an overlapping region.
  • the areas of the overlapping regions between pairs of adjacent sub-images may be substantially the same. Description regarding the determination of an overlapping region in two adjacent sub-images may be found elsewhere in the present disclosure. See, for example, FIG. 11 and the description thereof.
  • the tube 801 and the detector 804 may move simultaneously and/or in a synchronized fashion. In some embodiments, the tube 801 and the detector 804 move sequentially; either one may move prior to the other. In some embodiments, the position of the tube 801 may be adjusted manually. In some embodiments, the position of the tube 801 may be adjusted automatically. In some embodiments, the position of the detector 804 may be adjusted manually. In some embodiments, the position of the detector 804 may be adjusted automatically. In some embodiments, the position of the tube 801 and the position of the detector 804 may be adjusted in a similar manner, either manually or automatically. In some embodiments, the position of the tube 801 and the position of the detector 804 may be adjusted in different manners; one may be adjusted manually, and the other may be adjusted automatically.
  • the imaging device 101 may include a structure to facilitate the adjustment of, for example, the tube 801, the detector 804, etc.
  • the structure may include one or more components the movement of which may achieve the adjustment.
  • the structure may include, for example, a slidable handle, a rotatable handle, or the like, or a combination thereof, to allow manual adjustment.
  • the structure may be controlled by, for example, a control signal to allow automatic adjustment.
  • a user (for example, a healthcare provider, an imaging specialist, etc.) may provide an instruction. The instruction may include one or more preliminary device parameters as described elsewhere in the present disclosure.
  • FIG. 9 and FIG. 10 illustrate another imaging device according to some embodiments of the present disclosure.
  • the imaging device illustrated in FIG. 9 and FIG. 10 is an X-ray imaging device.
  • FIG. 9 illustrates the configuration of the imaging device at a first moment.
  • FIG. 10 illustrates the configuration of the imaging device at a second moment.
  • the imaging device includes a beam 901, a table 902, a detector 903, a vertical stand 904, a moving guide 905, a ceiling suspension 906 capable of extending and contracting in the vertical direction, and a tube 907.
  • the vertical stand 904 may be installed on the ground plane o1.
  • the XY-plane in the three dimensional coordinates may be parallel to the ground plane o1.
  • the detector 903 may be mounted on the vertical stand 904.
  • the tube 907 may be mounted proximal to a beam limiting device 911.
  • the detector 903 may move up and/or down along the vertical stand 904.
  • the tube 907 may be connected to the ceiling suspension 906 via a tube support 908.
  • via the tube support 908, the tube 907 may be rotated within the XY-plane and/or the XZ-plane.
  • via the tube support 908, the tube 907 may move in the vertical direction.
  • the ceiling suspension 906 may extend or contract in the vertical direction.
  • the tube support 908 may include a first support structure 909 and a second support structure 912.
  • the first support structure 909 may be at an angle with the second support structure 912.
  • the first support structure 909 may be perpendicular to the second support structure 912.
  • the central axis of the ceiling suspension 906 may be labeled as an RVA-axis.
  • the RVA-axis may be parallel to the Z-axis.
  • the central axis of the second support structure 912 may be labeled as an RHA-axis.
  • the RHA-axis may be parallel to the Y-axis.
  • the first support structure 909 may allow the tube support 908 and the tube 907 to rotate about the RVA-axis within the XY-plane.
  • the second support structure 912 may allow the tube to rotate about the RHA-axis within the XZ-plane.
  • the tube 907 may be tilted with reference to the Z-axis via the rotation of the second support structure 912 within the XZ-plane.
  • FIG. 11 illustrates a process for determining the number of exposures according to some embodiments of the present disclosure.
  • the dashed box 1101 may represent the position of the effective light field with respect to the first exposure.
  • the upper edge of the dashed box 1101 may represent the starting position of the preliminary effective light field.
  • the dashed box 110n may represent the position of the effective light field with respect to the last exposure.
  • the lower edge of the dashed box 110n may represent the preliminary stopping position of the effective light field.
  • the solid box 1102 may represent the position of the effective light field with respect to the second exposure.
  • a portion of the solid box 1102 may overlap with the dashed box 1101.
  • the height of the overlapping region may be denoted by L p .
  • the heights of the dashed boxes 1101 and 110n, and the height of the solid box 1102, may be equal to the height of the preliminary effective light field h_0.
  • the line 1190 may represent the ground plane.
  • the secondary number of exposures may be equal to the preliminary number of exposures, and the secondary device parameters may be the same as the preliminary device parameters.
  • the secondary exposure region may be the same as the preliminary exposure region.
  • the secondary number of exposures may be the largest integer not greater than the preliminary number of exposures, or that integer plus 1 (i.e., floor(Y) or floor(Y) + 1).
  • the starting position and stopping position of the effective light field may be adjusted such that the secondary exposure region between the secondary starting position and the secondary stopping position of the effective light field is not greater than the preliminary exposure region.
  • the secondary starting position and the secondary stopping position of the effective light field may be the same as the preliminary exposure region while the secondary height of the effective light field is not greater than the preliminary height of the effective light field. Details regarding the above description will be further explained below.
  • the distance of the preliminary starting position of the effective light field from the ground plane may be Z_start0, and the distance of the preliminary stopping position of the effective light field from the ground plane may be Z_stop0.
  • the preliminary composing length L_0 of the image composition may be equal to:
  • L_0 = Z_start0 - Z_stop0,  (001)
  • a preliminary number of exposures Y may be calculated according to the following equation:
  • Y = (L_0 - L_p) / (h_0 - L_p),  (002)
  • the preliminary number of exposures Y obtained via Equation (002) may be an integer. In such embodiments, the preliminary number of exposures Y may not need to be adjusted.
  • the secondary number of exposures may be set equal to the preliminary number of exposures Y.
  • the preliminary number of exposures Y obtained via Equation (002) may not be an integer. In such embodiments, the preliminary number of exposures Y may be adjusted and a secondary number of exposures that is an integer may be obtained.
  • Various methods may be used to adjust the preliminary number of exposures to an integer. For example, the integer part of the preliminary number of exposures Y may be designated as the secondary number of exposures.
  • the integer part of the preliminary number of exposures Y plus 1 may be designated as the secondary number of exposures.
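The computation described above can be sketched as follows, under the assumption (implied by the overlap of adjacent effective light fields) that a composing length L covered by N exposures satisfies L = N × (h_0 - L_p) + L_p; all names are illustrative:

```python
import math

def preliminary_exposures(z_start0, z_stop0, h0, lp):
    """Preliminary (possibly fractional) number of exposures for a
    composing length spanning z_stop0..z_start0, an effective-light-field
    height h0, and an overlap height lp between adjacent exposures."""
    l0 = z_start0 - z_stop0        # preliminary composing length
    return (l0 - lp) / (h0 - lp)   # solves l0 = Y*(h0 - lp) + lp for Y

# Example: a 1700 mm region with a 430 mm field and 50 mm overlap
y = preliminary_exposures(z_start0=1800.0, z_stop0=100.0, h0=430.0, lp=50.0)
candidates = (math.floor(y), math.floor(y) + 1)  # the two integer choices
```

Either candidate may serve as the secondary number of exposures; the choice between them is described in terms of a threshold on the rate of change in the composing length.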
  • the adjustment methods that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
  • the secondary number of exposures may be adjusted according to a rate of change in the composing length.
  • the rate of change in the composing length may depend on the preliminary composing length and a secondary composing length.
  • the rate of change R in the composing length may be calculated according to the following equation:
  • R = (L_0 - L_1) / L_0,  (003)
  • where L_0 represents the preliminary composing length and L_1 represents the secondary composing length:
  • L_1 = floor(Y) × (h_0 - L_p) + L_p,  (004)
  • the rate of change in the composing length may be compared with a threshold to determine the secondary number of exposures such that the absolute difference between the secondary number of exposures and the preliminary number of exposures is less than 1.
  • When the rate of change in the composing length is less than or equal to the threshold, the remnant composing length for the last exposure may be less than the height of the effective light field, and its impact on the image composition may be considered insignificant. The secondary number of exposures may then be set equal to floor(Y), and the fractional part of the preliminary number of exposures may be discarded. In some embodiments where the rate of change in the composing length is greater than the threshold, the impact on the image composition may be considered significant. The fractional part of the preliminary number of exposures may then be retained, and the secondary number of exposures may be equal to floor(Y) + 1.
  • the threshold may be set within the range of 3% to 7%. More particularly, in some embodiments, the threshold may be 5%. In some embodiments, the threshold may be 6% or 7%. Yet in other embodiments, the user may adjust the threshold based on clinical needs at his or her discretion.
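The threshold test above can be sketched as follows; the composing lengths follow Equation (004), the rate is (L_0 - L_1)/L_0, and the 5% default mirrors the exemplary threshold (names are illustrative):

```python
import math

def secondary_exposures(y, h0, lp, threshold=0.05):
    """Pick the secondary (integer) number of exposures from the
    preliminary number y: keep floor(y) when dropping the fractional
    part changes the composing length by no more than the threshold,
    otherwise use floor(y) + 1."""
    l0 = y * (h0 - lp) + lp                # preliminary composing length
    l1 = math.floor(y) * (h0 - lp) + lp    # composing length with floor(y) exposures
    rate = (l0 - l1) / l0                  # rate of change in the composing length
    return math.floor(y) if rate <= threshold else math.floor(y) + 1
```

If y is already an integer, the rate is zero and floor(y) is returned, matching the case in which no adjustment is needed.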
  • the preliminary number of exposures is an integer. In that case, it may be unnecessary to adjust one or more preliminary device parameters.
  • the imaging device may be configured according to those preliminary parameters.
  • the secondary device parameters may be set equal to the preliminary device parameters, and the imaging device may be configured according to the secondary device parameters.
  • if the preliminary number of exposures is not an integer, the preliminary device parameters may be adjusted to provide secondary device parameters. As discussed above, the secondary number of exposures may be set equal to floor(Y) or floor(Y) + 1, and the secondary device parameters may be adjusted according to the secondary number of exposures.
  • the secondary number of exposures may be less than the preliminary number of exposures.
  • the secondary composing length may not be equal to the preliminary composing length:
  • L_1 = floor(Y) × (h_0 - L_p) + L_p.  (005)
  • the secondary composing length corresponding to the secondary number of exposures may be shorter than the preliminary composing length corresponding to the preliminary number of exposures. Therefore, the preliminary starting position and preliminary stopping position of the effective light field may be adjusted to provide a secondary starting position and a secondary stopping position of the effective light field, respectively.
  • the distance between the starting position and the stopping position of the effective light field may be the secondary composing length.
  • the secondary starting position Z_start and the secondary stopping position Z_stop of the effective light field corresponding to the secondary device parameters may be obtained using the following equations, respectively:
  • Z_start = Z_start0 - (L_0 - L_1) / 2,  (006)
  • Z_stop = Z_stop0 + (L_0 - L_1) / 2.  (007)
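Equations (006) and (007) center the shortened composing length within the preliminary exposure region; a minimal sketch (illustrative names):

```python
def adjusted_field_positions(z_start0, z_stop0, l1):
    """Secondary starting and stopping positions of the effective light
    field when the secondary composing length l1 is shorter than the
    preliminary composing length, per Equations (006) and (007)."""
    l0 = z_start0 - z_stop0      # preliminary composing length
    shift = (l0 - l1) / 2.0      # split the excess evenly at both ends
    return z_start0 - shift, z_stop0 + shift

# Example: shrinking a 1700 mm region to 1570 mm moves each end in by 65 mm
start, stop = adjusted_field_positions(1800.0, 100.0, 1570.0)
```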
  • the height of the effective light field may be equal to the preliminary height of the effective light field h 0 .
  • the secondary exposure region may be set equal to the preliminary exposure region such that the secondary composing length equals the preliminary composing length. Therefore, the starting position and the stopping position of the effective light field do not need to be adjusted. Instead, the height of the effective light field may be adjusted to achieve a number of exposures of floor(Y) + 1. The secondary height of the effective light field may be less than the preliminary height of the effective light field.
  • the secondary height of the effective light field may be obtained using the following equation:
  • h = (L_0 - L_p) / (floor(Y) + 1) + L_p,  (008)
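Under the assumption that a composing length L covered by N exposures satisfies L = N × (h - L_p) + L_p, the secondary height for floor(Y) + 1 exposures follows by solving for h; a minimal sketch with illustrative names:

```python
import math

def secondary_field_height(l0, lp, y):
    """Secondary height of the effective light field when floor(y) + 1
    exposures must cover the unchanged composing length l0; obtained by
    solving l0 = n*(h - lp) + lp for h with n = floor(y) + 1."""
    n = math.floor(y) + 1
    return (l0 - lp) / n + lp

# Example: covering 1700 mm in 5 exposures with 50 mm overlap
h = secondary_field_height(l0=1700.0, lp=50.0, y=1650.0 / 380.0)
```

In this example h = 380 mm, smaller than the preliminary height of 430 mm, consistent with the statement above that the secondary height is less than the preliminary height.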
  • the secondary exposure region may be equal to or smaller than the preliminary exposure region.
  • the radiation dose received by the target body (for example, a human patient) may be reduced.
  • the determination of the secondary number of exposures may depend on a rate of change in the composing length.
  • the composite image may satisfy practical clinical demands regardless of whether the secondary number of exposures equals floor(Y) or floor(Y) + 1.
  • the secondary number of exposures may be the same as the preliminary number of exposures. In some embodiments, the secondary number of exposures may be the largest integer not greater than the preliminary number of exposures. In some embodiments, the secondary number of exposures may be that largest integer plus 1. Various secondary device parameters may be applied according to different secondary numbers of exposures. In some embodiments, the position of the detector and the tube rotation angle corresponding to an exposure may be obtained according to a secondary number of exposures and secondary device parameters.
  • the position of the detector at an exposure may be obtained.
  • the distance between the X-ray generator (or another type of radiation source) and the detector (or the image-receptor) may be referred to as the source to image-receptor distance (SID, or S) .
  • the change of position of the focus of the tube 907 along Z-axis may be significantly smaller than the SID, such that the position of the focus of the tube 907 may be treated as approximately fixed along the Z-axis.
  • the tube 907 may rotate about the RHA-axis within the XZ-plane.
  • the distance between the focus 910 of the tube and the ground plane may be approximately fixed; the tube 907 may rotate about the RHA-axis via the second support structure 912 within the XZ-plane (with reference to FIG. 9 and FIG. 10).
  • the detector 903 may move up and/or down along the vertical stand 904 in the Z-axis.
  • the imaging device may be configured according to the preliminary device parameters.
  • the secondary device parameters may be set to be the same as the preliminary device parameters, and the imaging device may be configured according to the secondary device parameters.
  • the position of the detector in the Z-axis may be obtained from the preliminary position of each exposure.
  • the height of the overlapping region between two adjacent exposures may be L p .
  • the preliminary position of the first exposure may correspond to the upper edge of the effective light field corresponding to the first exposure.
  • the preliminary position of the first exposure Z 1 along the Z-direction may be set to be Z start0 .
  • the preliminary position of the second exposure may be the upper edge of the effective light field corresponding to the second exposure.
  • the preliminary position of the second exposure Z_2 may be set to be:
  • Z_2 = Z_start0 - h_0 + L_p,  (009)
  • the preliminary position of the nth exposure may be the upper edge of the effective light field corresponding to the nth exposure.
  • the position of the nth exposure Z_n may be set to be:
  • Z_n = Z_start0 - (n-1) × h_0 + (n-1) × L_p,  (010)
  • n stands for the nth exposure.
  • the effective light field may be the light field received by the detector that may be effective to provide imaging information. Therefore, the position of the detector in the Z direction may be determined based on the effective light fields in the Z direction.
  • the center of the detector may be considered as the position of the detector, and the position of the center of the detector may be determined according to the upper edge of a corresponding effective light field.
  • the position of the center of the detector corresponding to the first exposure may be determined according to:
  • Z_FD1 = Z_start0 - h_0/2,  (011)
  • the position of the center of the detector corresponding to the second exposure may be determined according to:
  • Z_FD2 = Z_start0 - (3/2) × h_0 + L_p,  (012)
  • the position of the center of the detector corresponding to the nth exposure may be determined according to:
  • Z_FDn = Z_start0 - ((2n-1)/2) × h_0 + (n-1) × L_p,  (013)
  • n stands for the nth exposure.
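Taking the detector center to sit half a field height below the upper edge of the effective light field given by Equation (010), the detector position for the nth exposure can be sketched as follows (the half-height offset is an assumption consistent with Equation (024) for h = h_0; names are illustrative):

```python
def detector_center_position(n, z_start0, h0, lp):
    """Z-position of the detector center for the nth exposure: the upper
    edge of the nth effective light field (Equation (010)) minus half the
    field height."""
    z_n = z_start0 - (n - 1) * h0 + (n - 1) * lp  # upper edge, Eq. (010)
    return z_n - h0 / 2.0                          # center of the field

positions = [detector_center_position(n, 1800.0, 430.0, 50.0) for n in (1, 2, 3)]
```

Note that successive positions step down by exactly h_0 - L_p, the unoverlapped portion of each field.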
  • the methods of determining the position of detector that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
  • the tube rotation angle may refer to the difference of the angle between the axis of the tube corresponding to an exposure and the X-axis with respect to the XZ-plane, and the angle between the axis of the tube corresponding to the preceding exposure and the X-axis with respect to the XZ-plane. In some embodiments of the present disclosure, the tube rotation angle may refer to the angle that the tube rotates between an exposure and a preceding exposure.
  • the angle between the tube axis and the X-axis refers to the angle, within the XZ-plane, between the axis of the tube 907 rotating about the RHA-axis and the X-axis.
  • FIG. 12 illustrates a schematic view of the tube rotation angle corresponding to the nth exposure.
  • Point G may represent the point of the rotation axis of the tube.
  • Point M and point Q may represent the positions of the focus of the tube corresponding to two adjacent exposures, respectively.
  • the angle θ_RHA between GM and GQ may be the tube rotation angle corresponding to the nth exposure.
  • the point of the rotation axis of the tube G may intersect with the detector at point E in the X-axis direction.
  • the focus of the tube M may intersect with the detector at point A in the X-axis direction.
  • the effective light field that the tube projects to the detector may be between point D and point B.
  • Point C may be the intersection of the detector and the perpendicular bisector of the rays emitted by the tube G.
  • MN may intersect with the horizontal line perpendicularly at N.
  • the angle between MA and MD may be θ_2.
  • the angle between MA and MB may be θ_1.
  • the angle between MA and MC may be θ_3.
  • the distance between point Q and point E may be the SID, denoted S_SID.
  • the length of GM may be significantly greater than the length of QE, such that:
  • the position of point M along the Z-axis direction may be Z TCS .
  • Point D may be the upper edge of the effective light field corresponding to the nth exposure.
  • the position of point D along the Z-axis direction may be:
  • Z_n = Z_start0 - (n-1) × h_0 + (n-1) × L_p.  (018)
  • Point B may be the lower edge of effective light field corresponding to the nth exposure.
  • the position of point B along the Z-axis direction may be Z n -h 0 .
  • θ_RHA may be the difference between the angle formed by the axis of the tube corresponding to the nth exposure and the X-axis within the XZ-plane, and the angle formed by the axis of the tube corresponding to the (n-1)th exposure and the X-axis within the XZ-plane.
  • the imaging device may be configured according to the secondary device parameters.
  • the positions of the detector and the rotation angles of the tube may be determined in a process similar to that described above in connection with some embodiments of the present disclosure where the preliminary number of exposures is an integer.
  • the secondary starting position and secondary stopping position of the effective light field may be the same as the preliminary starting and stopping position of the effective light field.
  • the secondary height of the effective light field may be different from the preliminary height of the effective light field, and may be denoted as h.
  • in the relevant parameters, the preliminary height of the effective light field may be substituted with the secondary height of the effective light field. For instance, the position of the detector corresponding to each exposure may be:
  • Z_FDn = Z_start0 - ((2n-1)/2) × h + (n-1) × L_p,  (024)
  • Z_n = Z_start0 - (n-1) × h + (n-1) × L_p,  (026)
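Equations (024) and (026) can be sketched together as follows (illustrative names):

```python
def exposure_positions(n, z_start0, h, lp):
    """Upper edge Z_n (Eq. (026)) and detector-center Z_FDn (Eq. (024))
    for the nth exposure with effective-light-field height h."""
    z_n = z_start0 - (n - 1) * h + (n - 1) * lp
    z_fdn = z_start0 - ((2 * n - 1) / 2.0) * h + (n - 1) * lp
    return z_n, z_fdn

z_3, z_fd3 = exposure_positions(3, 1800.0, 380.0, 50.0)
```

Note that Z_FDn is always Z_n - h/2: the detector center sits half a field height below the field's upper edge.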
  • a user may input one or more preliminary device parameters and the height of the overlapping region based on a preliminary exposure region.
  • the preliminary device parameters may be provided by the image composition system during initialization.
  • the image composition system may calculate a preliminary number of exposures according to the inputs and obtain a secondary number of exposures and a secondary device parameter.
  • the user may designate a starting position and stopping position of the effective light field according to those secondary parameters.
  • the designation may be performed under the instruction of the user.
  • the image composition system may perform the designation automatically.
  • the position of the detector corresponding to an exposure may be obtained according to the position of the effective light field.
  • the image composition system may position the tubes, the detector, and/or the beam limiting devices accordingly and generate a series of images for composition.
  • a preliminary number of exposures may be obtained based on a preliminary exposure region.
  • a secondary number of exposures may be obtained based on the preliminary number of exposures, as well as a secondary device parameter, such that the secondary exposure region corresponding to the secondary number of exposures may be equal to or smaller than the preliminary exposure region, and that the absolute difference between the preliminary number of exposures and the secondary number of exposures may be less than 1.
  • the number of exposures may be an integer. Under such circumstances, the system and process according to some embodiments of the present disclosure may help avoid a secondary number of exposures greater than the preliminary number of exposures, and may reduce the radiation dose to which the patient is exposed.
  • the position of the tube along the z-axis may be approximately fixed, and the rotation of the tube may be confined within the XZ-plane.
  • the position of the detector may be adjusted along with the rotation of the tube.
  • the detector may move along the Z-axis simultaneously with the rotation of the tube.
  • the motion of the detector and the rotation of the tube may be performed one after another.
  • a composite image may be obtained.
  • the image composition system may obtain a series of sub-images through adjusting the parameters that may be used to configure an imaging device.
  • the image composition system may obtain the position of the effective light field corresponding to an exposure and the height of the overlapping region between two adjacent sub-images.
  • the image composition system may obtain a composite image by combining adjacent sub-images according to the position of the effective light field corresponding to an exposure and the height of the overlapping region between two adjacent sub-images.
  • FIG. 13 is a block diagram illustrating the image processing engine 203 according to some embodiments of the present disclosure.
  • the image processing engine 203 may include a segmentation module 1301, a registration module 1302, a merger module 1303, and a correction module 1304.
  • the segmentation module 1301 may be configured to segment one or more overlapping regions of 3D images received from the acquisition engine 202 (shown in FIG. 2) to produce overlapping images corresponding to 3D volume data.
  • the 3D images may be three-dimensional digital subtraction angiography images (3D-DSA images) .
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • the segmentation module 1301 may be configured to segment images stored in the storage module 204; the images may be, for example, 3D images, 2D images, or the like, or a combination thereof.
  • the 3D images may be 3D-DSA images.
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • a series of scans may generate a plurality of sub-images to be combined together so that a composite image may be acquired. Two adjacent sub-images may have an overlapping region, and the overlapping region of a sub-image may be segmented out by the segmentation module 1301.
  • the overlapping region may be termed an overlapping image corresponding to 3D volume data.
  • exemplary sub-images may be 3D images.
  • the 3D images may be 3D-DSA images.
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • the segmentation may be performed in accordance with digital imaging and communication in medicine (DICOM) .
  • label (0020, 0032) (Image Position (Patient)) of DICOM may be utilized to segment overlapping regions out of 3D images corresponding to 3D volume data.
  • Any number of overlapping regions of 3D images may be segmented by the segmentation module 1301. It should also be understood that overlapping images of any size may be segmented by the segmentation module 1301, for example, 351*67*73, or 352*512*96.
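As an illustrative sketch of how per-slice positions can drive the segmentation of an overlapping region (the disclosure uses DICOM label (0020, 0032) for this purpose; here a plain list of per-slice z-positions stands in for that tag, and all names are assumptions):

```python
def overlap_slice_indices(z_a, z_b):
    """Indices of the slices of two sub-image stacks that fall inside
    their mutual overlap along z, given each slice's z-position (as could
    be read from the DICOM Image Position (Patient) tag). Assumes
    axis-aligned stacks."""
    lo = max(min(z_a), min(z_b))   # highest lower bound of the two stacks
    hi = min(max(z_a), max(z_b))   # lowest upper bound
    in_a = [i for i, z in enumerate(z_a) if lo <= z <= hi]
    in_b = [i for i, z in enumerate(z_b) if lo <= z <= hi]
    return in_a, in_b

# Two stacks of slices 1 mm apart, overlapping between z = 1 and z = 4
idx_a, idx_b = overlap_slice_indices(
    [float(z) for z in range(10, 0, -1)],   # z = 10 .. 1
    [float(z) for z in range(4, -6, -1)],   # z = 4 .. -5
)
```

The returned index lists delimit the overlapping slabs that would be passed on for registration.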
  • the segmentation module 1301 may also be configured to segment 2D images.
  • the segmented overlapping images corresponding to 3D volume data may be sent to the registration module 1302 for one or more registrations.
  • the registration module 1302 may be configured to perform registrations including, for example, 2D registration, 3D registration, or the like, or a combination thereof.
  • the registration may be performed to bring two or more images into spatial alignment.
  • the images may be taken, for instance, at different times, from different viewpoints, or from different modalities.
  • the overlapping images corresponding to 3D volume data generated in the segmentation module 1301 may be registered by the registration module 1302. Afterwards, registered overlapping images corresponding to 3D volume data may be generated.
  • the registered overlapping images corresponding to 3D volume data may be sent to the correction module 1304 for a fine registration, or to the merger module 1303 for image fusion to generate a composite image.
  • the fine registration may include a process of optimization based on one or more algorithms. Exemplary algorithms may include recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide and conquer algorithm, dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or any combination thereof.
  • the merger module 1303 may be configured to calibrate the sub-images based on the 3D registration, and the calibrated sub-images may be fused together to generate a composite image.
  • the description of the image processing engine 203 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • numerous variations and modifications may be made under the teaching of the present disclosure. However, those variations and modifications do not depart from the protection scope of the present disclosure.
  • the registered overlapping images corresponding to 3D volume data may be sent to the correction module 1304 and the merger module 1303 simultaneously or sequentially at any order.
  • the correction module 1304 may be unnecessary.
  • Sub-images that are not adjacent may have one or more overlapping regions as well.
  • FIG. 14 is a flowchart illustrating a workflow of image processing according to some embodiments of the present disclosure.
  • sub-images of a region of interest (e.g., peripheral vessels of the lower limbs) may be acquired.
  • the sub-images may be acquired by techniques including DSA (digital subtraction angiography) , CT (computed tomography) , CTA (computed tomography angiography) , PET (positron emission tomography) , X-ray, MRI (magnetic resonance imaging) , MRA (magnetic resonance angiography) , SPECT (single-photon emission computerized tomography) , US (ultrasound scanning) , or the like, or a combination thereof.
  • the sub-images may be 3D images, 2D images, or the like, or a combination thereof.
  • the 3D images may be 3D-DSA images.
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • Overlapping images corresponding to 3D volume data of the sub-images may be generated in step 1402.
  • two adjacent sub-images acquired by two successive scans may have two overlapping images corresponding to the 3D volume data of the respective sub-images.
  • the two overlapping images corresponding to 3D volume data of the sub-images may be segmented and generated in step 1402.
  • the overlapping images corresponding to 3D volume data generated in step 1402 may be registered in step 1403.
  • exemplary reasons for the misalignment may include that the layout of the object being scanned is not perfectly parallel to the scanning plane of the imaging device; as a result, successive sub-images may be misaligned spatially.
  • Other reasons may include, for example, the motion of a patient during the imaging procedure, the motion of an internal organ of the patient during the imaging procedure, the motion of the imaging device during the imaging procedure, or the like, or a combination thereof.
  • the registration may include a 2D registration, a 3D registration, or the like, or a combination thereof.
  • the process of a registration may include calculating one or more offsets and applying the offsets to reduce misalignment.
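One simple way to calculate such an offset, shown purely as an illustration (the disclosure does not prescribe a particular registration algorithm), is to maximize the cross-correlation of 1-D intensity profiles taken along the composing direction of the two overlapping images:

```python
def profile_offset(pa, pb):
    """Integer shift of profile pb relative to pa that maximizes their
    cross-correlation; a positive result means the feature in pa sits at
    a higher index than in pb."""
    ma, mb = sum(pa) / len(pa), sum(pb) / len(pb)
    a = [x - ma for x in pa]   # remove the mean so a flat
    b = [x - mb for x in pb]   # background does not dominate
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(len(b) - 1), len(a)):
        # overlap of a[i] with b[i - lag], valid indices only
        score = sum(a[i] * b[i - lag]
                    for i in range(max(0, lag), min(len(a), lag + len(b))))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

The resulting lag could then be applied as one of the offsets used to calibrate the sub-images.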
  • the registered overlapping images corresponding to 3D volume data may be stored in the storage engine 204.
  • a fine registration may be performed on the registered overlapping images corresponding to 3D volume data.
  • the fine registration may include a process of optimization based on one or more algorithms. Exemplary algorithms may include recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide and conquer algorithm, a dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or any combination thereof. Results generated by the fine registration (e.g., one or more offsets) may be utilized to register the overlapping image corresponding to 3D volume data again. It should be noted that the optimization may be performed iteratively in step 1404 until a desirable result is obtained.
  • the sub-images may be calibrated in accordance with the results of the registration in step 1403 or the fine registration in step 1405.
  • the results may be one or more offsets including, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • the calibrated sub-images may be fused together to generate a composite image.
  • the overlapping images corresponding to 3D volume data representing the overlapping regions of the sub-images may be fused.
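The disclosure states that the overlapping regions may be fused but does not fix a particular blend. A minimal sketch, assuming a linear weight ramp across the overlap (a common but here assumed choice, not the patented fusion rule):

```python
import numpy as np

def fuse_overlap(top, bottom):
    """Fuse two registered overlapping volumes of identical shape
    (stacking along axis 0) with a linear weight ramp: full weight to
    the 'top' volume at its inner edge, fading to the 'bottom' volume
    at the other edge. The ramp weighting is an assumed choice."""
    n = top.shape[0]
    weights = np.linspace(1.0, 0.0, n)[:, None, None]
    return weights * top + (1.0 - weights) * bottom

overlap_a = np.full((5, 2, 2), 10.0)   # overlap region from sub-image A
overlap_b = np.full((5, 2, 2), 20.0)   # same region from sub-image B
fused = fuse_overlap(overlap_a, overlap_b)
```

The ramp keeps the composite continuous at both seams: the fused block matches sub-image A at one edge and sub-image B at the other.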
  • the composite image may be output for display in step 1406.
  • a composite image of the vasculature including one or more blood vessels of a region of interest may be obtained for, for example, diagnosis purposes.
  • the method described in the disclosure may be utilized to diagnose a region of interest such as head, thorax, abdomen, pelvis and perineum, limbs, spine and vertebrae, or the like, or a combination thereof.
  • the head may include brain or skull, eye, teeth, or the like, or a combination thereof.
  • the thorax may include cardiac, breast, or the like, or a combination thereof.
  • the abdomen may include kidney, liver, or the like, or a combination thereof.
  • the limbs may include an arm, a leg, a wing of a bird, or the like, or a combination thereof.
  • the region of interest may be a gastrointestinal tract.
  • Three or more overlapping images corresponding to 3D volume data may be registered in step 1403. Step 1404 may be unnecessary, and step 1403 may proceed to step 1405 directly.
  • FIG. 15 depicts a block diagram of the registration module 1302 according to some embodiments of the present disclosure.
  • the registration module 1302 may include an acquisition unit 1501, a 2D registration unit 1502, and a 3D registration unit 1503.
  • the acquisition unit 1501 may be configured to acquire images to be registered.
  • the images may be 3D images, 2D images, or the like, or a combination thereof.
  • the 3D images may be 3D-DSA images.
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • one or more overlapping images corresponding to 3D volume data of adjacent sub-images which have mutual overlapping regions may be acquired by the acquisition unit 1501.
  • the overlapping images corresponding to 3D volume data may include a stack of 2D images.
  • a 2D registration may be performed by the 2D registration unit 1502. Overlapping images corresponding to 3D volume data may be projected onto a coronal plane, a sagittal plane, a transverse plane, etc.
  • the projection may be based on maximum intensity projection (MIP) .
  • in alternative embodiments, the projection may be based on temporal maximum intensity projection (tMIP), minimum intensity projection (MiniP), virtual endoscopic display (VED), or the like, or a combination thereof.
  • a 2D projection image and a pixel map may be generated based on a set of 3D volume data.
  • An overlapping region may correspond to two sets of 3D volume data, one set relating to one of two adjacent 3D sub-images.
  • two 2D projection images and two corresponding pixel maps may be generated.
  • the 2D registration may include uncovering the correlation between the 2D projection images.
  • the correlation may be utilized to determine 2D offsets of the two corresponding pixel maps.
  • the correlation or the 2D offsets may be used to calibrate the pixel maps.
  • the calibration may be performed by the 2D registration unit 1502.
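One way to uncover the correlation between two 2D projection images and turn it into 2D offsets is phase correlation. The disclosure does not name a specific method, so the following is an illustrative sketch under that assumption, not the patented algorithm:

```python
import numpy as np

def estimate_2d_offset(reference, floating):
    """Estimate the integer (row, column) shift between two 2D
    projection images by phase correlation. Returns the shift that,
    applied to the floating image, aligns it with the reference."""
    f_ref = np.fft.fft2(reference)
    f_flt = np.fft.fft2(floating)
    cross = f_ref * np.conj(f_flt)
    cross /= np.abs(cross) + 1e-12       # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

ref = np.zeros((32, 32))
ref[10:14, 8:12] = 1.0                              # a bright structure
flt = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)  # shifted copy
offset = estimate_2d_offset(ref, flt)               # recovers (3, -2)
```

Applying the recovered offset to the floating projection (here with `np.roll`) restores alignment with the reference, which is exactly the calibration role the 2D offsets play in the text above.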
  • the overlapping images corresponding to 3D volume data may have a stack of 2D images in the coronal plane.
  • the overlapping images may be projected onto the coronal plane.
  • the overlapping images corresponding to 3D volume data may be projected on another plane, for example, a self-defined plane.
  • the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • the 3D registration unit 1503 may be configured to calculate 3D offsets and perform 3D registration on the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets.
  • the 3D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • the registration module is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • various variations and modifications may be made in light of the present disclosure; however, those variations and modifications do not depart from the scope of the present disclosure.
  • the 3D registration may be performed on the overlapping images corresponding to 3D volume data directly without performing a 2D registration.
  • the 2D registration may be solely performed without performing a 3D registration.
  • FIG. 16 is a flowchart illustrating a registration process according to some embodiments of the present disclosure.
  • Overlapping images corresponding to 3D volume data may be retrieved in step 1601.
  • the overlapping images corresponding to 3D volume data may be generated by segmenting sub-images, which is described elsewhere in the present disclosure.
  • the overlapping images corresponding to 3D volume data may include a stack of 2D images.
  • the overlapping images corresponding to 3D volume data may be projected onto any of the anatomical planes.
  • Exemplary anatomical planes may include a coronal plane, a sagittal plane, a transverse plane, etc.
  • the overlapping images corresponding to 3D volume data may include 3D digital coronal images
  • the overlapping images corresponding to 3D volume data may be projected onto the coronal plane.
  • the overlapping images corresponding to 3D volume data include 3D digital sagittal images
  • the overlapping images corresponding to 3D volume data may be projected onto the sagittal plane.
  • Two-dimensional projection images of the overlapping images corresponding to two sets of 3D volume data and the corresponding pixel maps may be generated after the projection is completed.
  • the projection may be based on maximum intensity projection (MIP) .
  • the projection may be based on temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or a combination thereof.
  • a 2D registration may be performed on the 2D images in step 1602.
  • the 2D registration may include calculating 2D offsets and applying 2D offsets.
  • the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • Any number of 2D images of the overlapping images corresponding to 3D volume data may be registered in step 1602.
  • a 3D registration may be performed on the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets generated therefrom.
  • the 3D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • the 3D offsets may be generated based on the 2D offsets in the plane onto which the overlapping images have been projected to generate the 2D projection images and the slice information of the overlapping images corresponding to 3D volume data.
  • the slice information may indicate the slice number of a pixel of a 2D projection image.
  • the overlapping images corresponding to 3D volume data may be output for further processing.
  • the 3D registration may be performed directly on the overlapping images corresponding to 3D volume data without performing a 2D registration.
  • the 2D registration may be performed solely on the overlapping images corresponding to 3D volume data without performing a 3D registration.
  • FIG. 17 is a block diagram of the registration module 1302 according to some embodiments of the present disclosure.
  • the registration module 1302 may include an acquisition unit 1501, a 2D registration unit 1502, and a 3D registration unit 1503.
  • the 2D registration unit may include a projection subunit 1701, a map generation subunit 1702, a 2D offsets calculation subunit 1703, and a map calibration subunit 1705.
  • the 3D registration unit may include a 3D offsets calculation subunit 1704 and an alignment subunit 1706.
  • the acquisition unit 1501 may be configured to acquire images to be registered.
  • the images may be 3D images, 2D images, or the like, or a combination thereof.
  • the 3D images may be 3D-DSA images.
  • the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
  • one or more overlapping images corresponding to 3D volume data of adjacent sub-images that have overlapping regions may be acquired by the acquisition unit 1501.
  • a 2D registration may be performed on 2D projection images generated by projecting the overlapping images corresponding to 3D volume data on an anatomical plane.
  • Exemplary anatomical planes may include a coronal plane, a sagittal plane, a transverse plane, etc.
  • the overlapping images corresponding to 3D volume data may be projected on any plane, for example, a self-defined plane.
  • the 2D registration may be performed by the 2D registration unit 1502.
  • the projection subunit 1701 may be configured to perform the projection.
  • the projection may be based on maximum intensity projection (MIP) .
  • the projection may be based on temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or a combination thereof.
  • the map generation subunit 1702 may be configured to generate pixel maps of the 2D projection images based on the 2D projection images generated by the projection subunit 1701.
  • the 2D projection images and the pixel maps may be generated simultaneously. In alternative embodiments, the 2D projection images and the pixel maps may be generated sequentially in any order.
  • the 2D offsets calculation subunit 1703 may be configured to generate 2D offsets based on the 2D projection images.
  • the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • the map generation subunit 1702 may generate one or more pixel maps of the overlapping images corresponding to 3D volume data.
  • the pixel maps may correspond to the 2D projection images generated by the 2D registration unit 1502.
  • a pixel map on the coronal plane may be generated.
  • the X-Z plane may indicate the coronal plane, and the overlapping images corresponding to 3D volume data may be projected onto the X-Z plane.
  • a pixel value of the pixel map may be the slice number corresponding to a slice of an overlapping image corresponding to 3D volume data that has the maximum intensity at a pixel (x, z) .
  • an overlapping image corresponding to 3D volume data may have 80 slices.
  • an anatomical plane e.g., a coronal plane, a sagittal plane, a transverse plane
  • each pixel of the 2D projection image may correspond to 80 pixels at the same (x, z) distributed among 80 slices of the overlapping image corresponding to 3D volume data.
  • the pixel value of the pixel map corresponding to the 2D image may be the slice number of the slice that has the maximum intensity at the pixel (x, z) among the 80 slices. Specifically, if the 55th slice of the overlapping image corresponding to 3D volume data has the maximum intensity at the pixel (x1, z1), then the pixel value at (x1, z1) of the pixel map corresponding to the 2D projection image may be assigned the value 55.
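The projection and pixel-map construction described above can be sketched in a few lines (NumPy is assumed; slice indices are 0-based here, whereas the disclosure numbers slices from 1):

```python
import numpy as np

def mip_and_pixel_map(volume):
    """Coronal MIP of a volume shaped (slices, x, z): the 2D projection
    keeps the maximum intensity across slices, and the pixel map keeps
    the index of the slice holding that maximum, as in FIG. 20."""
    mip = volume.max(axis=0)             # maximum intensity projection
    pixel_map = volume.argmax(axis=0)    # slice of the maximum per (x, z)
    return mip, pixel_map

# a toy 80-slice volume: slice index 55 holds the maximum at pixel (1, 1)
vol = np.zeros((80, 4, 4))
vol[55, 1, 1] = 9.0
mip, pmap = mip_and_pixel_map(vol)
```

Here `mip[1, 1]` carries the maximum intensity 9.0 while `pmap[1, 1]` records 55, the slice where that maximum occurred, mirroring the "assigned the value 55" example above.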
  • the 3D offsets calculation subunit 1704 may be configured to calculate 3D offsets. The calculation may be based on the pixel maps.
  • the pixel map generated by the map generation subunit 1702 may be calibrated by the map calibration subunit 1705. The calibration may be based on the 2D offsets. A calibrated pixel map may be generated after the calibration is completed. In alternative embodiments of the present disclosure, no calibration is performed on the pixel maps.
  • the 3D offsets may be obtained based on the 2D offsets. In alternative embodiments, the 3D offsets may be obtained based on the pixel maps, for example, two pixel maps, or a pixel map and a calibrated pixel map.
  • one of the two pixel maps may be designated as a reference pixel map, and the other may be designated as the floating pixel map.
  • the 3D offsets may be calculated through a comparison of the pixel values of corresponding pixels in the reference pixel map and in the floating pixel map. Furthermore, the 3D offsets may be calculated on the basis of the probability distribution of the differences.
  • the alignment subunit 1706 may be configured to perform 3D registration on the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets.
  • FIG. 18 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure.
  • Overlapping images corresponding to 3D volume data may be projected onto any of the anatomical planes in step 1801.
  • the anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc.
  • the projection may be based on maximum intensity projection (MIP) .
  • in alternative embodiments, the projection may be based on temporal maximum intensity projection (tMIP), minimum intensity projection (MiniP), virtual endoscopic display (VED), or the like, or a combination thereof.
  • 2D projection images of the overlapping images corresponding to 3D volume data and corresponding pixel maps may be obtained based on the projection performed in step 1801.
  • the 2D projection images may correlate with the overlapping images corresponding to 3D volume data, as well as the pixel maps.
  • a pixel value at a pixel (x, z) of a 2D projection image may be the maximum intensity at the corresponding pixel (x, z) among the multiple slices of the overlapping image corresponding to 3D volume data.
  • Two-dimensional offsets may be obtained based on the 2D projection images in step 1803.
  • the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • the 2D offsets may be utilized to perform further registration, e.g., a 3D registration or a pixel map calibration.
  • the pixel maps may be calibrated in step 1804.
  • the calibration may be based on the 2D offsets.
  • 3D offsets may be obtained based on the 2D offsets and the pixel maps in step 1805.
  • the 3D offsets may be obtained based on the 2D offsets.
  • the 3D offsets may be obtained based on the pixel maps, for example, two pixel maps, or a pixel map and a calibrated pixel map.
  • the alignment may be performed to register the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets.
  • step 1803 may proceed to step 1805 directly without performing step 1804.
  • FIG. 19 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure.
  • an LPS coordinate system may be employed herein.
  • the X-Y-Z axes define a three-dimensional space such that its origin is located within the chest of a target body.
  • a first overlapping image corresponding to 3D volume data and a second overlapping image corresponding to 3D volume data may be retrieved.
  • the two overlapping images may represent an overlapping region of two sub-images; the sub-images may be combined to generate a composite image.
  • the overlapping images corresponding to 3D volume data may be segmented from the sub-images by the segmentation module 1301 that is described elsewhere in the present disclosure.
  • the first overlapping image corresponding to 3D volume data and the second overlapping image corresponding to 3D volume data may be projected onto an anatomical plane.
  • the anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc.
  • the projection may be based on maximum intensity projection (MIP) .
  • the projection may be based on temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or a combination thereof.
  • a first 2D projection image corresponding to the first overlapping image and a second 2D projection image corresponding to the second overlapping image may be generated in step 1902 based on the projection.
  • a pixel (x, z) of a 2D projection image may be assigned the maximum intensity of the corresponding pixels (x, z) of the multiple slices constituting a 3D image. For instance, an overlapping image corresponding to 3D volume data may have 80 slices. The value of a pixel (x, z) of the 2D projection image may be equal to the maximum intensity at the corresponding pixel (x, z) among the 80 slices.
  • a first pixel map corresponding to the first 2D projection image and a second pixel map corresponding to the second 2D projection image may be generated.
  • a pixel value of the pixel map may be the slice number corresponding to a slice that has the maximum intensity among multiple slices of an overlapping image corresponding to 3D volume data.
  • the 2D projection images obtained in step 1902 may be utilized to calculate 2D offsets.
  • the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • a Cartesian coordinate system may be employed herein.
  • the LPS coordinate system may be employed herein.
  • the overlapping images corresponding to 3D volume data may be projected onto the X-Z plane corresponding to the coronal plane under an MIP. Therefore, the obtained 2D projection images are on the X-Z plane.
  • an X offset and a Z offset may be obtained based on the first 2D projection image and the second 2D projection image.
  • the X offset and the Z offset may be utilized to calibrate the second pixel map as described in step 1903.
  • a calibrated pixel map may be generated from the calibration.
  • a Y offset may be obtained by comparing the first pixel map and the calibrated pixel map.
  • the comparison may be performed by way of a subtraction for corresponding pixels on the first pixel map and the calibrated pixel map (or on the reference pixel map and the floating pixel map) .
  • the probability or frequency of the differences obtained from the comparison may be assessed to provide a difference range. For example, if differences between 15 and 17 occur most frequently, the difference range may be set to 15-17.
  • an average value of all the pixel-value differences between the first pixel map and the calibrated pixel map that are within the difference range may be calculated. The calculated average value may be determined as the Y offset.
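The Y-offset estimation described above (subtract the maps, find the most frequent difference, average the differences within a range around it) might be sketched as follows; the ±1 window around the modal difference is an assumption standing in for a range such as the 15-17 in the example:

```python
import numpy as np

def y_offset_from_maps(reference_map, calibrated_map):
    """Sketch of the Y-offset step: subtract the calibrated pixel map
    from the reference pixel map, take the most frequent difference,
    and average the differences within a window around that mode.
    The +/-1 window is an assumed stand-in for the difference range."""
    diff = reference_map.astype(int) - calibrated_map.astype(int)
    values, counts = np.unique(diff, return_counts=True)
    mode = values[np.argmax(counts)]
    in_range = (diff >= mode - 1) & (diff <= mode + 1)
    return float(diff[in_range].mean())

ref_map = np.full((8, 8), 57)        # slice numbers, as in FIG. 20
cal_map = ref_map - 16               # calibrated map lags by ~16 slices
cal_map[0, 0] = ref_map[0, 0] - 15   # one slightly noisy pixel
y_offset = y_offset_from_maps(ref_map, cal_map)
```

Averaging within the modal range rather than taking the raw mode makes the estimate sub-slice accurate while still rejecting outlier differences far from the mode.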
  • the first pixel map and the second pixel map are compared to determine the Y offset when the correlation between the first pixel map and the second pixel map is determined.
  • the correlation may be represented by the X offset and the Z offset between the first pixel map and the second pixel map.
  • the X offset, the Y offset, and the Z offset may be optimized in a fine registration by utilizing one or more algorithms.
  • Exemplary algorithms may include recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide and conquer algorithm, dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or any combination thereof.
  • the fine registration may be based on the 2D registration and/or 3D registration.
  • an optimized X offset, an optimized Y offset, and an optimized Z offset may be generated.
  • an alignment may be performed to register the overlapping images corresponding to 3D volume data based on the X offset, the Y offset, and the Z offset, or the optimized offsets.
  • step 1902 and step 1903 may be performed concurrently, or sequentially at any order.
  • the calibration in step 1905 may be performed on the first pixel map instead of the second pixel map.
  • the Y offset obtained in step 1906, together with the X offset and the Z offset obtained in step 1905, may be utilized directly to perform the 3D registration without performing step 1907 and step 1908.
  • FIG. 20 illustrates 2D images and corresponding pixel maps generated by MIP according to some embodiments of the present disclosure.
  • two overlapping images corresponding to 3D volume data may be projected onto the coronal plane; as a result, 2D projection images MIP1 and MIP2 may be generated.
  • two pixel maps corresponding to MIP1 and MIP2 respectively may be generated along with the 2D projection images.
  • MAP1 is a pixel map corresponding to the 2D projection image MIP1
  • MAP2 is another pixel map corresponding to the 2D projection image MIP2.
  • Region 2001 is a partial zoom of the 2D projection image MIP1.
  • every pixel of MIP1 may have a value of the maximum intensity among the multiple slices of the overlapping image corresponding to 3D volume data.
  • Region 2002 is a partial zoom of the pixel map MAP1. Every pixel value of MAP1 may be the slice number of the slice with the maximum intensity (for example, grey value). For instance, as shown in region 2002, the number 57 may indicate that the 57th slice of the corresponding overlapping image corresponding to 3D volume data has the maximum intensity; thus the number 57 is stored in the corresponding pixel of MAP1.
  • FIG. 21 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure.
  • In step 2101, two overlapping images corresponding to 3D volume data, I1 and I2, may be retrieved. Both I1 and I2 may represent overlapping regions of two sub-images and be segmented from the sub-images by the segmentation module 1301 as described elsewhere in the disclosure.
  • the overlapping images corresponding to 3D volume data may be obtained through techniques including DSA (digital subtraction angiography), CT (computed tomography), CTA (computed tomography angiography), PET (positron emission tomography), X-ray, MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), SPECT (single-photon emission computerized tomography), US (ultrasound scanning), or the like, or a combination thereof.
  • the overlapping images corresponding to 3D volume data may be obtained by performing a segmentation in accordance with DICOM (digital imaging and communications in medicine). Particularly, the DICOM label (0020, 0032) may be utilized to segment overlapping images corresponding to 3D volume data out of the sub-images.
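DICOM tag (0020, 0032) stores the Image Position (Patient) of each slice. A hedged sketch of how such positions could identify the mutually overlapping slices of two sub-image stacks, assuming uniform spacing along +z (the function name and the pure-number interface are hypothetical simplifications; a real implementation would read the tag from the DICOM headers):

```python
def overlapping_slices(z_first_a, n_a, z_first_b, n_b, spacing):
    """Given the z components of Image Position (Patient) -- DICOM tag
    (0020, 0032) -- for the first slices of two sub-image stacks, plus
    their slice counts and a shared slice spacing, return the 0-based
    index ranges of the mutually overlapping slices, or None if the
    stacks do not overlap. Uniform +z ordering is assumed."""
    lo = max(z_first_a, z_first_b)
    hi = min(z_first_a + (n_a - 1) * spacing,
             z_first_b + (n_b - 1) * spacing)
    if hi < lo:
        return None                      # no mutual overlap
    def to_index(z, z_first):
        return round((z - z_first) / spacing)
    return ((to_index(lo, z_first_a), to_index(hi, z_first_a)),
            (to_index(lo, z_first_b), to_index(hi, z_first_b)))

# two 88-slice stacks, 1 mm apart, the second starting 72 mm deeper:
# slices 72-87 of the first coincide with slices 0-15 of the second
ranges = overlapping_slices(0.0, 88, 72.0, 88, 1.0)
```

The returned index ranges are what a segmentation module would cut out of each sub-image to form the two overlapping images handed to the registration step.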
  • the overlapping images corresponding to 3D volume data may be projected onto an anatomical plane.
  • the anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc.
  • the projection may be based on maximum intensity projection (MIP) .
  • the projection may be based on temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or a combination thereof.
  • I1 and I2 may be projected onto the coronal plane, the sagittal plane, and the transverse plane respectively by performing an MIP.
  • two 2D projection images MIP1 and MIP2, corresponding to I1 and I2 respectively, may be acquired.
  • a 2D registration may be performed on MIP1 and MIP2 in each anatomical plane.
  • In step 2104, as the result of the 2D registration in step 2103, three groups of transformation parameters, (T_COR_X, T_COR_Z), (T_SAG_Y, T_SAG_Z), and (T_AXI_X, T_AXI_Y), may be generated.
  • In step 2105, the transformation parameters generated in step 2104 may be utilized to generate 3D transformation parameters in accordance with the following equations:
  • an alignment may be performed on the overlapping images corresponding to 3D volume data for a 3D registration based on the 3D transformation parameters (t_x, t_y, t_z).
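The equations referenced in step 2105 are not reproduced in this excerpt. Since each axis is observed in exactly two of the three anatomical planes, one plausible form, which is an assumption on our part rather than the disclosed equations, averages the two per-axis estimates:

```python
def combine_plane_offsets(cor, sag, axi):
    """Combine the three groups of 2D transformation parameters,
    (T_COR_X, T_COR_Z), (T_SAG_Y, T_SAG_Z), and (T_AXI_X, T_AXI_Y),
    into a 3D parameter (t_x, t_y, t_z). Averaging the two independent
    estimates per axis is an assumed reconstruction, not the patented
    equations, which are elided from this excerpt."""
    t_cor_x, t_cor_z = cor
    t_sag_y, t_sag_z = sag
    t_axi_x, t_axi_y = axi
    t_x = (t_cor_x + t_axi_x) / 2.0   # x seen in coronal and axial
    t_y = (t_sag_y + t_axi_y) / 2.0   # y seen in sagittal and axial
    t_z = (t_cor_z + t_sag_z) / 2.0   # z seen in coronal and sagittal
    return t_x, t_y, t_z

# e.g. coronal and axial registrations both see an x-shift near 3 voxels
t = combine_plane_offsets((3.0, 1.0), (2.0, 1.2), (3.2, 2.0))
```

Whatever the exact disclosed form, the structural point stands: each 3D component is constrained by two of the three 2D registrations, giving some redundancy against a poor registration in any single plane.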
  • FIG. 22 is a flowchart illustrating a process of N sub-images registration according to some embodiments of the present disclosure.
  • N sub-images of a series of scans may be retrieved, and every two successive sub-images may have one or more overlapping regions.
  • each sub-image may be segmented by the segmentation module 1301 to produce an overlapping image corresponding to 3D volume data that represents the overlapping region.
  • one or more registrations may be performed on each pair of successive overlapping images corresponding to 3D volume data, and 2D offsets and/or 3D offsets may be generated.
  • the 2D offsets or the 3D offsets may include offsets in different directions or different planes, for example, offsets in the X direction, offsets in the Y direction, offsets in the Z direction, offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
  • the registrations may include 2D registration, 3D registration, or the like, or a combination thereof.
  • the registrations of the overlapping images corresponding to 3D volume data are described elsewhere in the disclosure.
  • each pair of successive sub-images may be calibrated in accordance with the results of the registration in step 2203.
  • the calibrated sub-images may be fused to generate a composite image that may be used for disease diagnosis.
  • the overlapping images corresponding to 3D volume data representing the overlapping regions of the sub-images may be fused.
  • FIGs. 23A-23I illustrate 9 exemplary 2D images of blood vessels obtained based on the system and process according to some embodiments of the present disclosure.
  • Each 3D image was generated by combining three coronal sub-images as shown in the figure.
  • Each sub-image had a size of 384*512*88.
  • the size of the maximum overlapping region of two successive sub-images was 384*72*88.
  • the time for processing the sub-images according to the method provided in the disclosure was approximately 2.075s.
  • the hardware configuration was: an Intel i5-2400 processor at 3.10 GHz, 4 GB of memory, and a 64-bit OS (operating system).
  • FIGs. 23A-23I are 9 2D vascular images in the coronal plane of a region of interest from different view angles ranging from 0° to 360°. An MIP was obtained based on the 3D sub-images. The 2D vascular images as illustrated were generated from a counterclockwise rotation of the composite 3D image. The rotation angles of FIG. 23A, FIG. 23B, FIG. 23C, FIG. 23D, FIG. 23E, FIG. 23F, FIG. 23G, FIG. 23H, and FIG. 23I were 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°, and 360°, respectively.
  • FIGs. 24A-24D illustrate four 2D coronal images of a region of interest depicting vasculature including a plurality of blood vessels. As shown in the figures, each 2D image was generated by combining three sub-images.
  • FIG. 24A illustrates a 2D image that was generated without registration.
  • FIG. 24B illustrates a 2D image that was generated based solely on 3D registration.
  • FIG. 24C and FIG. 24D are 2D images of different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
  • FIGs. 25A-25D illustrate 4 2D coronal images of a region of interest depicting vasculature including a plurality of blood vessels. As shown in the figure, each 2D image was generated by combining three sub-images.
  • FIG. 25A illustrates a 2D image that was generated without registration.
  • FIG. 25B illustrates a 2D image that was generated based solely on 3D registration.
  • FIG. 25C and FIG. 25D are 2D images of different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
  • FIGs. 26A-26D illustrate four 2D coronal images of a region of interest depicting vasculature including a plurality of blood vessels. As shown in the figures, each 2D image was generated by combining three sub-images.
  • FIG. 26A illustrates a 2D image that was generated without registration.
  • FIG. 26B illustrates a 2D image of the same region of interest that was generated based solely on 3D registration.
  • FIG. 26C and FIG. 26D are 2D images of the same region of interest from different view angles. The images were generated based on 2D registration and 3D registration utilizing the system and process according to some embodiments of the present disclosure.
  • FIG. 27A and FIG. 27B are 2D coronal images of a same region of interest from different view angles. The images were generated based on 2D registration and 3D registration utilizing the system and process according to some embodiments of the present disclosure.
  • FIG. 28A and FIG. 28B are 2D coronal images of a same region of interest from different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
  • FIG. 29A and FIG. 29B are 2D coronal images of different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware that may all generally be referred to herein as a "block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .

Abstract

A system and method for obtaining a composite image by combining multiple sub-images are provided. In some embodiments, the method may include retrieving overlapping images corresponding to sub-images including 3D volume data, generating two-dimensional (2D) projection images and pixel maps based on the overlapping images, performing one or more registrations based on the 2D projection images and the pixel maps, calibrating the sub-images based on the results of the registration(s), and fusing the sub-images to produce a composite image. In some embodiments, the method may include setting a plurality of parameters relating to an X-radiation source or a radiation detector based on a preliminary number of exposures and a preliminary exposure region, controlling, based on at least one of the plurality of parameters, a motion of the X-radiation source or a motion of the radiation detector to capture a plurality of sub-images, and combining the plurality of sub-images.

Description

SYSTEM AND METHOD FOR IMAGE COMPOSITION
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority of Chinese Patent Application No. 201410487252.1 filed on September 22, 2014, and Chinese Patent Application No. 201410508290.0 filed on September 28, 2014, the entire contents of each of which are hereby incorporated by reference.
TECHNICAL FIELD
The present disclosure generally relates to image processing, and more particularly, to a system and method for combining sub-images into a composite image.
BACKGROUND
Medical imaging techniques, such as X-ray, magnetic resonance imaging (MRI), and computed tomography (CT), are widely used for disease diagnosis. A region of interest, such as one or more blood vessels in a limb, a spinal column, or a portion thereof, may be visualized using one or more of the techniques mentioned above.
However, when performing an imaging operation on a region of interest whose size is larger than the field of view (FOV) of an imaging device (for example, a CT scanner, an X-ray scanner, an MRI scanner, a MicroCT scanner), a single imaging operation may be inadequate to obtain an image of the entire region of interest; merely a portion of the region of interest may be included in an image. Under such a circumstance, multiple imaging operations may need to be performed on the region of interest to generate a series of sub-images, each of which covers only a portion of the region of interest. By combining the sub-images, a composite image covering the entire region of interest may be generated.
Meanwhile, in an imaging operation using a radiation-based imaging technique, such as X-ray or CT, radiation damage may occur due to extended exposure of a region of interest (for example, a human body or a portion thereof) to radiation. Thus, it may be desirable to develop a method and system that may reduce the radiation dose applied to a human patient and perform image composition on multiple successive sub-images of a region of interest.
SUMMARY
In a first aspect of the present disclosure, an image composition system is provided. In some embodiments, the image composition system may include a parameter setting engine, an acquisition engine, an image processing engine, and a storage engine. The parameter setting engine may be configured to set one or more parameters relating to, for example, image acquisition, image processing, or the like, or any combination thereof. The acquisition engine may be configured to retrieve a first sub-image and a second sub-image, each of which may correspond to three-dimensional (3D) volume data. The image processing engine may be configured to retrieve a first overlapping image from the first sub-image, retrieve a second overlapping image from the second sub-image, generate a first two-dimensional (2D) projection image and a first pixel map based on maximum intensity projection of the first overlapping image onto a plane, generate a second 2D projection image and a second pixel map based on maximum intensity projection of the second overlapping image onto the plane, perform 2D registration based on the first 2D projection image, the first pixel map, the second 2D projection image, and the second pixel map, perform 3D registration based on the 2D registration, the first pixel map, and the second pixel map, identify a correlation between the first sub-image and the second sub-image based on the 2D registration or the 3D registration, and fuse the first overlapping image and the second overlapping image based on the correlation to provide a composite image.
In a second aspect of the present disclosure, a method is provided. The method may include one or more of the following operations. A first sub-image and a second sub-image may be retrieved. Both the first sub-image and the second sub-image may correspond to 3D volume data. A first overlapping image may be retrieved from the first sub-image, and a second overlapping image may be retrieved from the second sub-image. A first 2D projection image and a first pixel map may be generated based on maximum intensity projection of the first overlapping image onto a plane. A second 2D projection image and a second pixel map may be generated based on maximum intensity projection of the second overlapping image onto the plane. Two-dimensional registration may be performed based on the first 2D projection image, the first pixel map, the second 2D projection image, and the second pixel map. Three-dimensional registration may be performed based on the 2D registration, the first pixel map, and the second pixel map. A correlation between the first sub-image and the second sub-image may be identified based on the 2D registration or the 3D registration. The first overlapping image and the second overlapping image may be fused based on the correlation to provide a composite image.
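Merely by way of example, the projection operation described above may be sketched as follows. The nested-list volume layout and the function name are illustrative assumptions for this sketch, not the disclosed implementation:

```python
def mip_with_pixel_map(volume):
    """Project a 3D volume (indexed volume[z][y][x]) onto a 2D plane by
    maximum intensity projection (MIP). Alongside the projection, record
    for each pixel the depth index z at which the maximum occurred; this
    per-pixel record is the pixel map used later for 3D registration."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    projection = [[0] * cols for _ in range(rows)]
    pixel_map = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Find the slice index holding the maximum intensity at (y, x).
            best_z = max(range(depth), key=lambda z: volume[z][y][x])
            projection[y][x] = volume[best_z][y][x]
            pixel_map[y][x] = best_z
    return projection, pixel_map
```

Projecting the same volume along a different axis would yield the coronal, sagittal, or transverse variants of the projection image in the same manner.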
In some embodiments, the first sub-image or the second sub-image may be, for example, a 3D image, a 2D image, or the like, or a combination thereof. The 3D images may be 3D-DSA images. Optionally and preferably, the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
In some embodiments, the plane may include a coronal plane, a sagittal plane, or a transverse plane.
In some embodiments, the first sub-image or the second sub-image may be retrieved by using DSA (digital subtraction angiography), CT (computed tomography), CTA (computed tomography angiography), PET (positron emission tomography), X-ray, MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), SPECT (single-photon emission computerized tomography), US (ultrasound scanning), or the like, or a combination thereof.
In some embodiments, the first overlapping image and the second overlapping image may be retrieved by using Digital Imaging and Communications in Medicine (DICOM). Specifically, the DICOM tag (0020, 0032) (Image Position (Patient)) may be used to retrieve the first overlapping image and the second overlapping image.
In some embodiments, an offset may be generated by performing the 2D registration. The offset may include, for example, an X offset, a Y offset, a Z offset, a coronal offset, a sagittal offset, or a transverse offset.
In some embodiments, another offset may be generated by performing the 3D registration. This offset may be in the direction perpendicular to the plane onto which the overlapping images have been projected to generate the 2D projection images. The offset may include, for example, an X offset, a Y offset, a Z offset, a coronal offset, a sagittal offset, or a transverse offset.
In some embodiments, a fine registration may be performed based on the 2D registration and/or the 3D registration described elsewhere in the present disclosure. The fine registration may be based on an algorithm including, for example, recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide and conquer algorithm, a dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or a combination thereof.
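As one concrete instance of the exhaustive method named above, an in-plane offset between two 2D projection images may be found by scoring every candidate integer shift. The search range and the mean-absolute-difference cost used here are illustrative assumptions rather than the disclosed algorithm:

```python
def exhaustive_2d_offset(reference, floating, search_range=2):
    """Try every integer (dy, dx) shift within the search range and keep
    the one minimizing the mean absolute difference over the region where
    the shifted floating image overlaps the reference image."""
    rows, cols = len(reference), len(reference[0])
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            total, count = 0, 0
            for y in range(rows):
                for x in range(cols):
                    fy, fx = y + dy, x + dx
                    if 0 <= fy < rows and 0 <= fx < cols:
                        total += abs(reference[y][x] - floating[fy][fx])
                        count += 1
            if count:
                cost = total / count
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]
```

In practice a coarse exhaustive pass of this kind may be followed by one of the other fine-registration strategies listed above to refine the result.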
In some embodiments, the second pixel map may include a calibrated pixel map based on the 2D registration. Specifically, the second pixel map may be calibrated based on the 2D offsets to generate the calibrated pixel map. In some embodiments, one of the first pixel map and the second pixel map may be a reference pixel map, and the other pixel map may be a floating pixel map.
In some embodiments, the first pixel map may include information identifying the location of maximum intensity of the first overlapping image, and the location may be in a direction perpendicular to a plane onto which the first overlapping image is projected. In some embodiments, the second pixel map may include information identifying the location of maximum intensity of the second overlapping image, and the location may be in a direction perpendicular to the same plane.
In some embodiments, the 3D registration may include calculating a plurality of differences in the locations in the direction perpendicular to the plane between the first pixel map and the second pixel map, each one of the plurality of differences corresponding to a pixel within the plane; comparing the plurality of differences to obtain the probability of each difference; and designating an offset in the direction perpendicular to the plane based on the probability.
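The out-of-plane step just described may be sketched by taking the most frequent per-pixel depth difference as the designated offset. The function name is an illustrative assumption:

```python
from collections import Counter

def perpendicular_offset(reference_map, floating_map):
    """Compute, pixel by pixel, the difference of the depth indices stored
    in the two pixel maps, then designate the difference with the highest
    probability of occurrence as the offset in the direction perpendicular
    to the projection plane."""
    differences = [
        ref - flo
        for ref_row, flo_row in zip(reference_map, floating_map)
        for ref, flo in zip(ref_row, flo_row)
    ]
    # The most common difference is the statistically dominant depth shift.
    return Counter(differences).most_common(1)[0][0]
```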
In a third aspect of the present disclosure, another image composition system is provided. The image composition system may include an imaging device and a processor. The imaging device may include an X-radiation source and a radiation detector. The processor may include a parameter setting engine, a control engine and an image processing engine. The parameter setting engine may be configured to set a plurality of parameters relating to the X-radiation source or the radiation detector based on one or more preliminary parameters. The control engine may be configured to control a motion of the X-radiation source or a motion of the radiation detector to capture a plurality of sub-images. The image processing engine may be configured to combine the plurality of sub-images.
In a fourth aspect of the present disclosure, another method for image composition is provided. The method may include: setting a plurality of parameters relating to an X-radiation source or a radiation detector based on one or more preliminary parameters; controlling, based on at least one of the plurality of parameters, a motion of the X-radiation source or a motion of the radiation detector to capture a plurality of sub-images; and combining the plurality of sub-images.
In some embodiments, the preliminary parameters may include at least one of a dimension of an exposure region, a number of exposures, an overlapping region between two adjacent exposures, a starting position of an effective light field, an ending position of the effective light field, or a height of the effective light field.
In some embodiments, a plurality of secondary parameters may be obtained based on one or more preliminary parameters. The secondary parameters may include at least one of a dimension of an exposure region, a number of exposures, an overlapping region between two adjacent exposures, a starting position of an effective light field, an ending position of the effective light field, or a height of the effective light field.
In some embodiments, the difference between the secondary number of exposures and the preliminary number of exposures may be less than 1. In some embodiments, the secondary exposure region may be equal to or smaller than the preliminary exposure region.
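One plausible way to derive such secondary parameters from the preliminary ones, sketched below purely as an illustration (the formula is an assumption, not the disclosed derivation): round the exposure count up to an integer, then shrink the effective light field so the exposures tile the exposure region exactly.

```python
import math

def secondary_parameters(region_height, field_height, overlap):
    """Illustrative derivation: each exposure after the first contributes
    (field_height - overlap) of new coverage, so round the required count
    up to the next integer (the difference from the preliminary count is
    then less than 1), and shrink the field height so that the exposures
    cover the region exactly (secondary height <= preliminary height)."""
    count = math.ceil((region_height - overlap) / (field_height - overlap))
    secondary_height = (region_height - overlap) / count + overlap
    return count, secondary_height
```

For example, a 95-unit region imaged with a 40-unit preliminary field and a 10-unit overlap would take three exposures with a secondary field height of about 38.3 units.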
In some embodiments, the imaging device may be configured according to at least one or more of the preliminary parameters. In some embodiments, the imaging device may be configured according to at least one or more of the secondary parameters.
In some embodiments, the X-radiation source may include a tube configured to generate a beam of one or more X-rays, and a beam limiting device mounted proximal to the X-radiation source. The beam limiting device may function to define the beam of one or more X-rays generated by the tube. In some embodiments, the height of the effective light field may be equal to a product of the opening of the beam limiting device in the vertical direction and a constant k. In some embodiments, the secondary height of the effective light field may be equal to or smaller than the preliminary height of the effective light field. In some embodiments, the tube, the beam limiting device, and the radiation detector may be positioned according to the preliminary parameters at an exposure. In some embodiments, the tube, the beam limiting device, and the radiation detector may be positioned according to the secondary parameters at an exposure.
In some embodiments, the tube (or an X-radiation source) and the radiation detector may move simultaneously and/or in a synchronized fashion. In some embodiments, the tube (or an X-radiation source) and the radiation detector may move one after another.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a block diagram depicting an image composition system according to some embodiments of the present disclosure;
FIG. 2 is a block diagram depicting a processor according to some embodiments of the present disclosure;
FIG. 3 is a flowchart illustrating a workflow for image processing according to some embodiments of the present disclosure;
FIG. 4 is a block diagram illustrating an architecture of a parameter setting engine according to some embodiments of the present disclosure;
FIG. 5 is a flowchart of an exemplary process for setting parameters according to some embodiments of the present disclosure;
FIG. 6 is a flowchart of another exemplary process for setting parameters according to some embodiments of the present disclosure;
FIG. 7 illustrates a schematic view of the Left, Posterior, Superior (LPS) coordinate system used in connection with some embodiments of the present disclosure;
FIGs. 8 to 10 illustrate exemplary imaging systems according to some embodiments of the present disclosure;
FIG. 11 illustrates a process for determining the number of exposures according to some embodiments of the present disclosure;
FIG. 12 illustrates a schematic view of the tube rotation angle corresponding to the nth exposure according to some embodiments of the present disclosure;
FIG. 13 is a block diagram illustrating an image processing engine according to some embodiments of the present disclosure;
FIG. 14 is a flowchart illustrating a workflow of image processing according to some embodiments of the present disclosure;
FIG. 15 is a block diagram illustrating a registration module according to some embodiments of the present disclosure;
FIG. 16 is a flowchart illustrating a registration process according to some embodiments of the present disclosure;
FIG. 17 is a block diagram of a registration module according to some embodiments of the present disclosure;
FIG. 18 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure;
FIG. 19 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure;
FIG. 20 illustrates 2D images and corresponding pixel maps according to some embodiments of the present disclosure;
FIG. 21 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure;
FIG. 22 is a flowchart illustrating a process of N sub-images registration according to some embodiments of the present disclosure;
FIGs. 23A-23I illustrate exemplary 2D images of blood vessels according to some embodiments of the present disclosure;
FIGs. 24A-24D illustrate 2D coronal images of vascular vessels applying different methods according to some embodiments of the present disclosure;
FIGs. 25A-25D illustrate 2D coronal images of vascular vessels applying different methods according to some embodiments of the present disclosure;
FIGs. 26A-26D illustrate 2D coronal images of vascular vessels applying different methods according to some embodiments of the present disclosure;
FIGs. 27A and 27B illustrate coronal images of different view angles that are composed according to some embodiments of the present disclosure;
FIGs. 28A and 28B illustrate coronal images of different view angles that are composed according to some embodiments of the present disclosure; and
FIGs. 29A and 29B illustrate coronal images of different view angles that are composed according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
It will be understood that the terms “system,” “engine,” “unit,” “module,” and/or “block” used herein are one way to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, these terms may be replaced by other expressions if they achieve the same purpose.
It will be understood that when a unit, engine, module or block is referred to as being “on, ” “connected to” or “coupled to” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purposes of describing particular examples and embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “include” and/or “comprising,” when used in this disclosure, specify the presence of integers, devices, behaviors, stated features, steps, elements, operations, and/or components, but do not exclude the presence or addition of one or more other integers, devices, behaviors, features, steps, elements, operations, components, and/or groups thereof.
FIG. 1 illustrates a block diagram of an image composition system 100 according to some embodiments of the present disclosure. The image composition system 100 may include an imaging device 101, a processor 102, a terminal 103, a display 104, and a database 105. The imaging device 101 may be configured to generate or provide one or more images of a region of interest. Merely by way of example, the imaging device 101 may include an X-radiation source and a radiation detector.
The images may be three-dimensional (3D) images, two-dimensional (2D) images, or the like, or a combination thereof. For instance, the 3D images may be three-dimensional digital subtraction angiography images (3D-DSA images) . Specifically, the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof. As used herein, an overlapping image may depict an overlapping region of a number of sub-images regardless of whether they are successive or not. For instance, an overlapping image may depict an overlapping region of two successive sub-images. In some embodiments, each of the sub-images may include an overlapping image. An overlapping image may be part of a sub-image. As used herein, a sub-image may refer to an image of a portion of a region of interest. A set of 3D volume data may correspond to a stack of 2D images. In some embodiments, a 2D image may be referred to as a slice. For instance, a set of 3D volume data may correspond to a stack of 2D images in the coronal plane. As used herein, such a stack of 2D images may be referred to as a 3D coronal image. A same set of 3D volume data may correspond to different stacks of 2D images in different planes. For instance, a same set of 3D volume data may correspond to a stack of 2D images in the coronal plane, and also a stack of 2D images in the transverse plane. Descriptions regarding a coronal plane and a transverse plane may be found elsewhere in the present disclosure. See, for example, FIG. 7 and the description thereof. The overlapping region corresponding to a set of 3D volume data may be depicted by a stack of 2D overlapping images.
The imaging device 101 may utilize a technique including, for example, digital subtraction angiography (DSA), computed tomography (CT), computed tomography angiography (CTA), positron emission tomography (PET), X-ray, digital radiography (DR), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), single-photon emission computerized tomography (SPECT), ultrasound scanning (US), or the like, or a combination thereof.
The processor 102 may be configured to process the images acquired by the imaging device 101 or retrieved from another source (for example, an imaging device, a database or storage, or the like, or a combination thereof) . The processor 102 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a microcontroller unit, a digital signal processor (DSP) , a field programmable gate array (FPGA) , an acorn reduced instruction set computing (RISC) machine (ARM) , or the like, or any combination thereof. The processor 102 may generate a control signal relating to the configuration of the imaging device 101.
The terminal 103 may communicate with the processor 102 and allow one or more operators to control the production and/or display of images on the display 104. The terminal 103 may include an input device, a control panel (not shown in the figure) , etc. The input device may be a keyboard, a touch screen, a mouse, a remote controller, or the like, or any combination thereof. An input device may include alphanumeric and other keys that may be inputted via a keyboard, a touch screen (for example, with haptics or tactile feedback) , a speech input, an eye tracking input, a brain monitoring system, or any other comparable input mechanism. The input information received through the input device may be communicated to the processor 102 via, for example, a bus, for further processing. Another type of the input device may include a cursor control device, such as a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to, for example, the processor 102 and to control cursor movement on the display 104 or another display device.
The display 104 may be configured to display information. Exemplary information may include, for example, an image before and/or after image processing, a request for input or parameters relating to image acquisition and/or processing, or the like, or a combination thereof. The display 104 may include a liquid crystal display (LCD), a light emitting diode (LED)-based display, a flat panel display or curved screen (or television), a cathode ray tube (CRT), or the like, or a combination thereof.
The database 105 may be configured to store images and/or relevant information or parameters. Exemplary parameters may include an exposure region, the number of exposures, the overlapping region between two adjacent (or successive) exposures, the starting position of the effective light field, the ending position of the effective light field, the height of the effective light field, or the like, or a combination thereof.
It should be noted that the description above is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the guidance of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the imaging device 101, the processor 102, the terminal 103, the display 104, and the database 105 may communicate with each other via a network.
FIG. 2 is a block diagram of the processor 102 according to some embodiments of the present disclosure. The processor 102 may include a parameter setting engine 201, an acquisition engine 202, an image processing engine 203, and a storage engine 204. The parameter setting engine 201 may be configured to set one or more parameters relating to, for example, image acquisition, image processing, or the like, or a combination thereof. Exemplary parameters may include an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the ending position of the effective light field, the height of the effective light field, or the like, or a combination thereof. The processor 102 may include a control engine (not shown in the figure) . The control engine may control one or more components of the image composition system 100 based on one or more parameters provided by the parameter setting engine 201. Merely by way of example, the control engine may control the motion of one or more components of the imaging device 101 including the X-radiation source and/or the radiation detector.
The acquisition engine 202 may be configured to acquire one or more images. The image(s) may be obtained by the imaging device 101 or retrieved from another source (for example, an imaging device, a database or storage, or the like, or a combination thereof). Exemplary images may include a composite image, sub-images of a region of interest (acquired through, for example, a series of scans of a region of interest), overlapping images of the sub-images, or the like, or a combination thereof. As used herein, a composite image may refer to an image of an entire region of interest. In some embodiments, a composite image may be constructed by way of combining a plurality of sub-images. In some embodiments, the combination may be achieved by fusing the overlapping images of two sub-images, for example, two adjacent sub-images. Two sub-images may be considered adjacent or successive if they depict adjoining portions of a region of interest.
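Once two adjacent sub-images are registered, their overlapping strips may be fused. A linear weight ramp between the two strips is a common choice, used below purely as an illustration, since the disclosure does not commit to a particular fusion rule:

```python
def fuse_overlap(first_strip, second_strip):
    """Blend two registered overlapping strips (same shape, indexed
    [row][col]) with weights that ramp linearly from the first sub-image
    at the top of the overlap to the second sub-image at the bottom."""
    rows = len(first_strip)
    fused = []
    for y in range(rows):
        w = (y + 0.5) / rows  # weight of the second strip grows with row index
        fused.append([
            (1 - w) * a + w * b
            for a, b in zip(first_strip[y], second_strip[y])
        ])
    return fused
```

The ramp avoids a visible seam at either boundary of the overlap: each fused row is dominated by whichever sub-image it lies closer to.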
The image processing engine 203 may be configured to process images acquired by the acquisition engine 202. The processing may include, for example, calculating the number of exposures, performing registration of images (for example, registration of overlapping images), fusing overlapping images, combining sub-images, or the like, or a combination thereof. The registration may include 2D registration, 3D registration, or the like, or a combination thereof. The storage engine 204 may be configured to store images and/or relevant information or parameters.
FIG. 3 is a flowchart illustrating a process of image processing according to some embodiments of the present disclosure. In step 301, one or more parameters may be set. Exemplary parameters may include an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the ending position of the effective light field, the height of the effective light field, or the like, or a combination thereof. In step 302, one or more images may be obtained. The images may include a composite image, sub-images of a region of interest (acquired through, for example, a series of scans of the region of interest) , overlapping images of the sub-images, or the like, or a combination thereof. The images obtained in step 302 may be processed in step 303. Exemplary processing may include calculating the number of exposures, performing registration of images (for example, overlapping images) , fusing the overlapping images of sub-images to generate a composite image, or the like, or a combination thereof. The registration may include 2D registration, 3D registration, or the like, or a combination thereof.
It should be noted that the flowchart described above is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the guidance of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
In some embodiments, a composite image may be generated by way of combining a plurality of sub-images. The sub-images may be acquired using an imaging device or an imaging system according to one or more parameters. To obtain a composite image and/or the corresponding sub-images, at least some of the parameters relating to the configuration of the imaging device or imaging system may be adjusted for individual patients. Merely by way of example, to obtain an X-ray image of an entire spinal column of a patient, a user or operator (for example, a healthcare provider, an imaging specialist, etc.) may designate or adjust parameters including, for example, an exposure region, the overlapping region between adjacent exposures, the number of exposures, or the like, or a combination thereof. As another example, a composite image of blood vessels in a lower limb of a patient may be obtained by way of image processing. An exemplary image processing procedure may include acquiring a plurality of DSA sub-images of the blood vessels of the lower limb, performing 2D registration and/or 3D registration of overlapping images of adjacent DSA sub-images, and fusing overlapping images based on the 2D registration and/or 3D registration to combine the DSA sub-images. As a further example, a composite image may be obtained by adjusting one or more parameters relating to the configuration of an imaging device or imaging system, acquiring a plurality of sub-images, and processing the acquired sub-images based on 2D registration and/or 3D registration of overlapping images of sub-images. It should be noted that the examples described above are provided for the purposes of illustration, and are not intended to limit the scope of the present disclosure. Apparently, for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure.
However, those variations and modifications do not depart from the scope of the present disclosure.
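To illustrate the fusing of overlapping images referred to above, the following sketch combines two vertically adjacent sub-images by linearly blending their overlapping rows. The linear weighting, array shapes, and values are illustrative assumptions; the disclosure does not prescribe this particular fusion method:

```python
import numpy as np

def fuse_overlap(top: np.ndarray, bottom: np.ndarray, overlap: int) -> np.ndarray:
    """Combine two vertically adjacent sub-images whose last/first
    `overlap` rows depict the same region, using linear weight blending."""
    a = top[-overlap:].astype(float)             # overlapping rows of the upper sub-image
    b = bottom[:overlap].astype(float)           # overlapping rows of the lower sub-image
    w = np.linspace(1.0, 0.0, overlap)[:, None]  # weight ramps from upper to lower image
    blended = w * a + (1.0 - w) * b
    return np.vstack([top[:-overlap], blended, bottom[overlap:]])

top = np.full((4, 3), 10.0)
bottom = np.full((4, 3), 20.0)
composite = fuse_overlap(top, bottom, overlap=2)
# composite has 4 + 4 - 2 = 6 rows
```

The weight ramps from 1 to 0 across the overlap, so the composite transitions smoothly from the upper sub-image to the lower one.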
FIG. 4 is a block diagram illustrating an architecture of a parameter setting engine 201 according to some embodiments of the present disclosure where a composite image may be acquired by adjusting the parameters relating to the configuration of an imaging device or an imaging system. The parameter setting engine 201 may be connected to or otherwise communicate with, for example, the acquisition engine 202, the image processing engine 203, and the storage engine 204. In some embodiments, the parameter setting engine 201 may be connected to or communicate with the imaging device 101, the display 104, the terminal 103, the database 105, or the like, or a combination thereof. At least some of the connection or  communication may be achieved via a wired connection, or wirelessly.
The parameter setting engine 201 may be configured to set or adjust one or more parameters relating to the configuration of the imaging device or the imaging system. The parameter setting engine 201 may include a preliminary calculation module 401 and a secondary calculation module 402. In some embodiments, the parameter setting engine 201 may further include an acquisition module (not shown in the figure) configured to acquire information at least part of which may be used by another component of the parameter setting engine 201 or the image composition system 100. In some embodiments, the parameter setting engine 201 may further include a storage module (not shown in the figure) configured to store the parameters calculated by the preliminary calculation module 401 and the secondary calculation module 402, or used in the calculation.
The above description regarding the parameter setting engine 201 is not exhaustive and is not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
The preliminary calculation module 401 may be configured to estimate or calculate one or more preliminary parameters. In some embodiments, the preliminary parameters may be used to configure the imaging device 101. For example, in some embodiments where an X-ray examination is desired, a non-exclusive list of preliminary parameters may include: an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc. The effective light field may refer to the light field received by the detector that may be effective to generate image data. The height of the effective light field may correlate to the opening of a beam limiting device in the length direction (for example, along the direction from the head to the feet of a patient, or vice versa) between a starting position and a stopping position. Merely by way of example, where a patient subject to an imaging operation is in a standing position, the starting position of the effective light field may refer to the upper edge of the effective light field corresponding to the first exposure of a number of exposures; the stopping position of the effective light field may refer to the lower edge of the effective light field corresponding to the last exposure of a number of exposures. The above description regarding the preliminary parameters is not exhaustive and is not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
One or more preliminary parameters may be calculated using the preliminary calculation module 401. In some embodiments of the present disclosure, the preliminary parameters may be calculated based on the initial parameters provided by the image composition system 100 during initialization. In some embodiments, a non-exclusive list of initial parameters that may be provided by the image composition system 100 during initialization may include: an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc. In some embodiments of the present disclosure, the initial parameters may be stored in the imaging device 101 or the storage engine 204. In some embodiments of the present disclosure, one or more preliminary parameters may be provided by a user. For example, in some embodiments where an X-ray examination is desired, the image composition system 100 may allow a user to designate a preliminary exposure region. The preliminary exposure region may refer to the entire exposure region that a user may desire to image with respect to a target body (for example, a patient or a portion thereof) . In some embodiments of the present disclosure, preliminary parameters may be calculated based on at least some of the user input or designation. For example, in some embodiments where an X-ray examination is desired, a preliminary number of exposures may be calculated based on the preliminary exposure region designated by the user.
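The disclosure does not state a formula for the preliminary number of exposures; a natural relation, assumed here for illustration, is that n exposures of effective light field height h0 with pairwise overlap Lp cover a length n*h0 - (n-1)*Lp:

```python
def preliminary_exposure_count(region_length: float,
                               field_height: float,
                               overlap: float) -> float:
    """Preliminary number of exposures n such that n exposures of
    height `field_height`, each overlapping its neighbor by `overlap`,
    cover `region_length`. Solving
    region_length = n*field_height - (n - 1)*overlap for n."""
    return (region_length - overlap) / (field_height - overlap)

# Hypothetical values (mm): a 1000 mm region, 430 mm light field, 50 mm overlap.
n_pre = preliminary_exposure_count(1000.0, 430.0, 50.0)
# n_pre = 950 / 380 = 2.5, which is not an integer, so a secondary
# calculation would be needed to adjust it.
```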
The secondary calculation module 402 may be configured to estimate or calculate a secondary parameter according to at least some of the preliminary parameters calculated by the preliminary calculation module 401. In some embodiments, the secondary parameter may be used to configure the imaging device 101. For example, in some embodiments where an X-ray examination is desired, a non-exclusive list of secondary parameters may include: an exposure region, the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the  height of the effective light field, etc. In some embodiments of the present disclosure, the secondary parameters may include the same parameters as those included in the preliminary parameters. In some embodiments, a secondary parameter may be calculated based on the preliminary parameters from the preliminary calculation module 401. The calculation of a secondary parameter may be performed based on a rule. Merely by way of example, in some embodiments where an X-ray examination is desired, the rule may be that the number of exposures is an integer. In some embodiments where the preliminary number of exposures calculated by the preliminary calculation module 401 is not an integer, the preliminary number of exposures may be adjusted to provide a secondary number of exposures that is an integer. For instance, the preliminary number of exposures may be rounded to the preceding integer or the next integer, and other preliminary parameters may be adjusted accordingly to obtain one or more secondary parameters.
FIG. 5 is a flowchart illustrating an exemplary process for setting parameters according to some embodiments of the present disclosure. It should be noted that the flowchart described below is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Apparently, for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
Beginning in step 501, one or more preliminary parameters may be acquired. In some embodiments, the acquisition of the preliminary parameter (s) may be performed by the preliminary calculation module 401. In some embodiments, the acquisition of the preliminary parameter (s) may be performed by the acquisition module of the parameter setting engine 201. In some embodiments, the preliminary parameter (s) may be acquired from the storage engine 204, or from the database 105. In some embodiments, the preliminary parameter (s) may be acquired from user input. In step 505, the acquired preliminary parameter (s) may be stored in the storage engine 204, or in the database 105.
In step 502, a preliminary calculation may be performed to provide one or more preliminary parameters. The preliminary calculation may be performed by the preliminary calculation module 401. In some embodiments, the preliminary calculation may be performed based on the preliminary parameter(s) acquired in step 501. In step 504, at least some of the preliminary parameter(s) may be outputted. At least some of the preliminary parameter(s) may be used to configure, for example, the imaging device 101, an imaging system, or a portion thereof. In step 505, at least some of the preliminary parameter(s) may be stored in the storage engine 204, or in the database 105.
In step 503, a secondary calculation may be performed to provide one or more secondary parameters. The step 503 of the secondary calculation may be optional. Merely by way of example, if the preliminary parameter(s) satisfy/satisfies a rule, the secondary calculation may be skipped, and the preliminary parameters may be outputted in step 504. In some embodiments, if the preliminary parameters satisfy a rule, the secondary calculation may still be performed. For instance, if the preliminary parameters satisfy a first rule, the secondary calculation may be performed according to a second rule. The first rule and the second rule may be different or the same. The second rule may be part of the first rule.
In some embodiments, if the preliminary parameters do not satisfy a rule, the secondary calculation may be performed in step 503 and one or more secondary parameters may be obtained. The secondary calculation may be performed based on a rule. For instance, if the preliminary parameters do not satisfy a first rule, the secondary calculation may be performed according to a second rule. The first rule and the second rule may be different or the same. The second rule may be part of the first rule.
In step 504, the secondary parameter (s) may be outputted. At least some of the secondary parameters may be used to configure, for example, the imaging device 101. In step 505, at least some of the secondary parameter (s) may be stored in the storage engine 204, or in the database 105.
FIG. 6 is a flowchart of an exemplary process for setting parameters according to some embodiments of the present disclosure where an X-ray examination is desired. It should be noted that the flowchart described below is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Apparently, for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
Beginning in step 601, a preliminary device parameter may be calculated according to a preliminary exposure region. Step 601 may be performed by the preliminary calculation module 401. Exemplary preliminary device parameters may include the number of exposures, the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc. In some embodiments, the preliminary exposure region may be designated by a user. In some embodiments, the preliminary exposure region may be provided during initialization of the imaging device. A preliminary number of exposures may be obtained in step 602. The preliminary number of exposures may be obtained based on, for example, the preliminary exposure region. The preliminary number of exposures may be obtained by the preliminary calculation module 401. The obtained preliminary number of exposures may be compared to a rule. If the preliminary number of exposures satisfies the rule, the remaining steps of the process may be skipped, and the preliminary device parameters, including the preliminary exposure region, may be outputted by the parameter setting engine 201. The preliminary device parameters may be used to configure an imaging device, for example, the imaging device 101, as shown in step 606.
In some embodiments, the rule specifies that the number of exposures is an integer. In some embodiments where the calculated preliminary number of exposures is an integer, one or more of the remaining steps illustrated in FIG. 6 may be skipped. In some embodiments where the calculated preliminary number of exposures is an integer, one or more of the remaining steps illustrated in FIG. 6 may still be performed. In some embodiments where the calculated preliminary number of exposures is an integer, one or more secondary parameters may be set equal to the preliminary parameters.
In some embodiments where the calculated preliminary number of exposures is not an integer, the preliminary number of exposures may be adjusted to provide a secondary number of exposures according to step 603. The step 603 may be performed by the secondary calculation module 402. For instance, the preliminary number of exposures may be rounded to the preceding integer or the next integer such that the difference between the secondary number of exposures and the preliminary number of exposures is less than 1. In some embodiments, the  adjustment of the preliminary number of exposures may be performed in accordance with a threshold. See, for example, the description in connection with FIG. 11.
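A threshold-based adjustment of the kind mentioned above might be sketched as follows; the threshold value (0.3) and the rounding policy are illustrative assumptions, not taken from the disclosure:

```python
import math

def secondary_exposure_count(n_pre: float, threshold: float = 0.3) -> int:
    """Round a preliminary (possibly non-integer) number of exposures to
    an integer. If the fractional part is below `threshold`, round down
    (slightly shrinking the exposure region); otherwise round up
    (slightly enlarging the overlap between adjacent exposures)."""
    frac = n_pre - math.floor(n_pre)
    if frac == 0.0:
        return int(n_pre)
    return math.floor(n_pre) if frac < threshold else math.ceil(n_pre)

n_sec = secondary_exposure_count(2.5)  # fractional part 0.5 >= 0.3, so round up to 3
```

Either way, the difference between the secondary and preliminary number of exposures stays below 1, as the text requires.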
After obtaining a secondary number of exposures, a secondary exposure region may be calculated based on the secondary number of exposures in step 604. The secondary exposure region may be calculated such that the secondary exposure region is not larger than the preliminary exposure region. See, for example, the description in connection with FIG. 11.
In step 605, a secondary device parameter may be calculated according to the secondary exposure region. A non-exclusive list of secondary device parameters may include: the overlapping region between two adjacent exposures, the starting position of the effective light field, the stopping position of the effective light field, the height of the effective light field, etc. In some embodiments, the imaging device 101 may be configured according to the secondary device parameters and/or the secondary exposure region in step 606. In some embodiments, the imaging device 101 may be configured according to the preliminary parameters in step 606.
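Once a secondary (integer) number of exposures is chosen by rounding up, the overlapping region can be recomputed so that the exposures exactly cover the region. The formula below assumes the conventional stitching relation in which n exposures of height h0 with overlap Lp cover n*h0 - (n-1)*Lp; it is a sketch, not quoted from the disclosure:

```python
def secondary_overlap(region_length: float, field_height: float, n: int) -> float:
    """Overlap Lp between adjacent exposures such that n exposures of
    height `field_height` exactly cover `region_length`, assuming
    n exposures cover n*field_height - (n - 1)*Lp."""
    return (n * field_height - region_length) / (n - 1)

# Hypothetical values (mm): 3 exposures of a 430 mm field covering 1000 mm.
lp = secondary_overlap(region_length=1000.0, field_height=430.0, n=3)
# lp = (1290 - 1000) / 2 = 145.0 mm of overlap per adjacent pair
```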
FIG. 7 illustrates a schematic view of the Left, Posterior, Superior (LPS) coordinate system used in connection with some embodiments of the present disclosure. As shown in FIG. 7, the X, Y, and Z axes may define a three-dimensional space whose origin may be located within a target body, for example, within the chest of a patient. Each of the three axes may be perpendicular to the other two. The positive direction of each of the three axes is illustrated in FIG. 7. The X-axis may point from the right towards the left with respect to the patient. The Y-axis may point from the anterior towards the posterior, i.e., from the front towards the back, with respect to the patient. The Z-axis may point from the inferior towards the superior, i.e., from the feet towards the head, with respect to the patient.
Particularly, each two of the X axis, the Y axis, and the Z axis may define a plane perpendicular to the planes defined by the other combinations of the three axes. Therefore, the three-dimensional space defined by the X axis, the Y axis, and the Z axis may include three planes that may be used to describe an anatomical position of or within the patient. Particularly, the X axis and the Y axis may define a plane that may be referred to as the axial plane or transverse plane. The axial plane or transverse plane may separate the head (superior) from the feet (inferior). The axial plane or transverse plane may be substantially parallel to the ground when a patient is standing, for example, in front of an imaging device. The axial plane or transverse plane may be substantially perpendicular to the ground when the patient is lying supine or prone on a table.
The X axis and the Z axis may define a plane that may be referred to as the coronal plane. The coronal plane may separate the front (anterior) from the back (posterior). The coronal plane may be substantially perpendicular to the ground when a patient is standing, for example, in front of an imaging device. The coronal plane may be substantially parallel to the ground when the patient is lying supine or prone on a table.
The Y axis and the Z axis may define a plane that may be referred to as the sagittal plane or longitudinal plane. The sagittal plane or longitudinal plane may separate the left from the right. The sagittal plane or longitudinal plane may be substantially perpendicular to the ground when a patient is standing, for example, in front of an imaging device. The sagittal plane or longitudinal plane may be substantially perpendicular to the ground when the patient is lying supine or prone on a table.
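The axis and plane conventions above can be checked numerically. The sketch below encodes the standard LPS unit vectors and verifies that the normal of each anatomical plane is the remaining axis:

```python
import numpy as np

# Unit vectors of the LPS coordinate system with respect to the patient:
X = np.array([1.0, 0.0, 0.0])  # right -> left
Y = np.array([0.0, 1.0, 0.0])  # anterior -> posterior
Z = np.array([0.0, 0.0, 1.0])  # inferior -> superior

# Each pair of axes spans one anatomical plane; the plane's normal is
# the remaining axis:
axial_normal = np.cross(X, Y)     # axial/transverse plane, normal along Z
coronal_normal = np.cross(Z, X)   # coronal plane, normal along Y
sagittal_normal = np.cross(Y, Z)  # sagittal plane, normal along X
```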
It should be noted that the coordinate system described above is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Apparently, for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
The LPS coordinate system may be used in connection with the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM is a standard for handling, storing, printing, and transmitting information in medical imaging. DICOM may include a file format definition and a network communication protocol. Some sections of the DICOM standard specify image plane module attributes and image plane attribute descriptions including, for example, image position and image orientation. A DICOM file may be exchanged between two entities that may receive images and/or patient data in DICOM format. DICOM may enable the integration of scanners, servers, workstations, printers, and network hardware from multiple manufacturers into a picture archiving and communication system (PACS). DICOM is known as NEMA standard PS3, and as ISO standard 12052:2006 "Health informatics - Digital imaging and communication in medicine (DICOM) including workflow and data management." It should be noted that the standard described above is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Apparently, the teaching of the present disclosure may be used in connection with any standard with which it may comply, and for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
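The image position and image orientation attributes mentioned above belong to the DICOM Image Plane Module, which defines a mapping from a pixel index to patient (LPS) coordinates. The sketch below implements that standard mapping; the attribute values in the example are hypothetical:

```python
import numpy as np

def pixel_to_patient(ipp, iop, pixel_spacing, row, col):
    """Map a pixel index (row, col) to patient (LPS) coordinates using
    the DICOM Image Plane Module attributes: Image Position (Patient),
    Image Orientation (Patient), and Pixel Spacing."""
    ipp = np.asarray(ipp, dtype=float)          # top-left pixel position (mm)
    row_dir = np.asarray(iop[:3], dtype=float)  # direction cosines along a row
    col_dir = np.asarray(iop[3:], dtype=float)  # direction cosines along a column
    dr, dc = pixel_spacing                      # row spacing, column spacing (mm)
    return ipp + row_dir * dc * col + col_dir * dr * row

# Hypothetical axial slice: identity orientation, 0.5 mm pixels, origin at (0, 0, 0).
p = pixel_to_patient([0, 0, 0], [1, 0, 0, 0, 1, 0], (0.5, 0.5), row=4, col=2)
# p = [1.0, 2.0, 0.0] in LPS millimeters
```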
FIG. 8 illustrates an imaging device 101 according to some embodiments of the present disclosure. The imaging device 101 illustrated in FIG. 8 is an X-ray imaging device. Non-exclusive examples of X-ray imaging devices that may be used in connection with some embodiments of the present disclosure include imaging devices used for computed tomography, fluoroscopy, radiography, etc.
As may be seen in the figure, the imaging device 101 may include a tube 801 that may generate a beam of X-rays used for imaging. The tube 801 may constitute an X-radiation source. The tube 801 may assume different configurations compatible with the present disclosure. A non-exclusive list of exemplary tubes that may be used in connection with the present disclosure includes a rotating anode tube, a solid-anode microfocus X-ray tube, a metal-jet-anode microfocus X-ray tube, etc. The tubes that may be used in connection with the image composition system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
In some embodiments of the present disclosure, the tube 801 may be mounted proximal to a beam limiting device (not shown in the figure). The beam limiting device may function to define the beam of X-rays generated by the tube 801. As used herein, “to define” may mean either to cause the directions of at least some of the X-rays of the beam to align in a specific direction, or to define the spatial cross section of the beam. In some embodiments, a beam limiting device may filter a plurality of X-rays so that only those traveling in a specified direction may be allowed through the beam limiting device. In some embodiments, the width of the beam of X-rays generated by the tube 801 may be defined by the beam limiting device such that the height of the effective light field may equal a product of the opening of the beam limiting device in the vertical direction and a constant k.
Particularly, as may be seen in the figure, a target body 802 may be placed on a table 803. The target body 802 may be a patient. In some embodiments, the table 803 may slide or move along in one and/or multiple directions. In some embodiments, the height of the table 803 may be adjusted. The adjustment of the height of the table 803 may be realized by an upward and/or a downward movement of the table. In some embodiments, the height of the table 803 may be adjusted before an imaging operation commences. In some embodiments, the height of the table 803 may be adjusted before an exposure of an imaging operation is taken. The adjustment of the height of the table 803 may accommodate target bodies of different sizes.
A detector (or referred to as radiation detector) 804 may be configured to detect an X-ray emitted from the tube 801 that passes through a target body. In some embodiments of the present disclosure, the detector 804 may be placed underneath the table 803. In alternative embodiments, the detector 804 may be placed beneath the target body 802 and above the table 803. Yet in other embodiments, the detector 804 may be placed inside the table 803 as long as it may receive X-ray signals. The detector 804 may assume different configurations that may be compatible with the present disclosure. A non-exclusive list of exemplary detectors that may be used in connection with the present disclosure includes: a gas ionization detector, a gas proportional detector, a multiwire and microstrip proportional chamber, a scintillation detector, an energy-resolving semiconductor detector, a current-mode semiconductor detector, a CCD detector, a microchannel plate detector, an image plate detector, an X-ray streak camera, a photographic film, and other X-ray detectors such as one operating at a superconducting temperature. The detectors that may be used in connection with the image composition system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
In some embodiments of the present disclosure where a long-length image and/or the scan of the whole body is desired, multiple exposures may be needed to generate a composite image. As shown in the figure in solid lines, the tube 801 may be placed at a first angle with reference to the axis perpendicular to the ground. The detector 804 may be placed in a first position under the head (superior) of the target body 802. An X-ray beam generated by the tube 801 at the first angle may be received by the detector 804 placed in the first position under the head (superior) of the target body 802. Hence the first image may be generated for a first exposure area of the target body 802. After the generation of the first image, the tube 801 may turn a particular angle to be positioned at a second angle (as shown by the dashed line in the figure) with reference to the axis perpendicular to the ground. The detector 804 may move a particular distance along the bed towards the feet (inferior) of the target body 802 to be positioned at a second position (as shown in the dashed line in the figure), such that an X-ray beam generated by the tube 801 at the second angle may be received by the detector 804 in the second position. The second image may be generated for a second exposure area of the target body 802. In some embodiments where more than two exposures are desired, the image composition system may repeat the process and generate a series of images. In some embodiments, two adjacent sub-images generated by the process described above may have an overlapping region. The areas of the overlapping regions between pairs of adjacent sub-images are substantially the same. Description regarding the determination of an overlapping region in the two adjacent sub-images may be found elsewhere in the present disclosure. See, for example, FIG. 11 and the description thereof.
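The coordinated tube rotation and detector translation described above admit a simple geometric sketch. Assuming, hypothetically, that the tube stays at a fixed position a distance `sid` from the detector plane while the detector steps a fixed distance between exposures, the tilt required to aim the beam at each detector position is:

```python
import math

def tube_tilt_angles(n_exposures: int, step: float, sid: float):
    """Tilt of the tube, in degrees from the first-exposure beam axis,
    needed to keep the beam aimed at a detector that moves `step` mm
    between successive exposures while the tube stays a fixed distance
    `sid` mm from the detector plane."""
    return [math.degrees(math.atan((k * step) / sid)) for k in range(n_exposures)]

# Hypothetical geometry: three exposures, 400 mm detector travel per
# exposure, 1800 mm source-to-detector distance.
angles = tube_tilt_angles(n_exposures=3, step=400.0, sid=1800.0)
# angles[0] is 0.0; subsequent angles increase as the detector moves
```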
In some embodiments, the tube 801 and the detector 804 may move simultaneously and/or in a synchronized fashion. In some embodiments, the tube 801 and the detector 804 move sequentially; either one may move prior to the other. In some embodiments, the position of the tube 801 may be adjusted manually. In some embodiments, the position of the tube 801 may be adjusted automatically. In some embodiments, the position of the detector 804 may be adjusted manually. In some embodiments, the position of the detector 804 may be adjusted automatically. In some embodiments, the position of the tube 801 and the position of the detector 804 may be adjusted in a similar manner, either manually or automatically. In some embodiments, the position of the tube 801 and the position of the detector 804 may be adjusted in different manners; one may be adjusted manually, and the other may be adjusted automatically.
The imaging device 101 may include a structure to facilitate the adjustment of, for example, the tube 801, the detector 804, etc. The structure may include one or more components the movement of which may achieve the adjustment. The structure may include, for example, a slidable handle, a rotatable handle, or the like, or a combination thereof, to allow manual adjustment. The structure may be controlled by, for example, a control signal to allow automatic adjustment. Merely by way of example, a user (for example, a healthcare provider, an imaging specialist, etc.) may provide an instruction via, for example, an input device; a control signal may be generated based on the instruction. In some embodiments, the instruction may include one or more preliminary device parameters as described elsewhere in the present disclosure.
It should be noted that the motion of the tube 801 and the detector 804 described above is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. Apparently, for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
FIG. 9 and FIG. 10 illustrate another imaging device according to some embodiments of the present disclosure. The imaging device illustrated in FIG. 9 and FIG. 10 is an X-ray imaging device. FIG. 9 illustrates the configuration of the imaging device at a first moment. FIG. 10 illustrates the configuration of the imaging device at a second moment. As shown in the figures, the imaging device includes a beam 901, a table 902, a detector 903, a vertical stand 904, a moving guide 905, a ceiling suspension 906 capable of extending and contracting in the vertical direction, and a tube 907. In some embodiments, the vertical stand 904 may be installed on the ground plane o1. As may be seen in the figures, the XY-plane in the three-dimensional coordinates may be parallel to the ground plane o1. The detector 903 may be mounted upon the vertical stand 904. In some embodiments, the tube 907 may be mounted proximal to a beam limiting device 911.
As may be seen in the figure, the detector 903 may move up and/or down along the vertical stand 904. The tube 907 may be connected to the ceiling suspension 906 via a tube support 908. In some embodiments, via the tube support 908, the tube 907 may be rotated within the XY-plane and/or the XZ-plane. In some embodiments, via the tube support 908, the tube 907 may move in the vertical direction. The ceiling suspension 906 may extend or contract in the vertical direction. The tube support 908 may include a first support structure 909 and a second support structure 912. The first support structure 909 may be at an angle with the second support structure 912. Merely by way of example, the first support structure 909 may be perpendicular to the second support structure 912.
As may be seen in the figures, the central axis of the ceiling suspension 906 may be  labeled as an RVA-axis. The RVA-axis may be parallel to the Z-axis. As may be seen in the figures, the central axis of the second support structure 912 may be labeled as an RHA-axis. The RHA-axis may be parallel to the Y-axis. In some embodiments, the first support structure 909 may allow the tube support 908 and the tube 907 to rotate about the RVA-axis within the XY-plane. In some embodiments, the second support structure 912 may allow the tube to rotate about the RHA-axis within the XZ-plane. As shown in FIG. 10 where the imaging device is configured in the second moment, the tube 907 may be tilted with reference to the Z-axis via the rotation of the second support structure 912 within the XZ-plane.
FIG. 11 illustrates a process for determining the number of exposures according to some embodiments of the present disclosure. As may be seen in FIG. 11, the dashed box 1101 may represent the position of the effective light field with respect to the first exposure. The upper edge of the dashed box 1101 may represent the starting position of the preliminary effective light field. The dashed box 110n may represent the position of the effective light field with respect to the last exposure. The lower edge of the dashed box 110n may represent the preliminary stopping position of the effective light field. The solid box 1102 may represent the position of the effective light field with respect to the second exposure. The solid box 1102 may at least partially overlap with the dashed box 1101. The height of the overlapping region may be denoted by Lp. The heights of the dashed boxes 1101 and 110n, and the height of the solid box 1102, may be equal to the height of the preliminary effective light field h0. The line 1190 may represent the ground plane.
According to some embodiments of the present disclosure where the preliminary number of exposures is an integer, the secondary number of exposures may be equal to the preliminary number of exposures, and the secondary device parameters may be the same as the preliminary device parameters. The secondary exposure region may be the same as the preliminary exposure region. According to some embodiments of the present disclosure where the preliminary number of exposures is not an integer, based on the rate of change in the composing length, the secondary number of exposures may be the largest integer not greater than the preliminary number of exposures, or may be the largest integer not greater than the preliminary number of exposures plus 1.
According to some embodiments of the present disclosure where the secondary number of exposures is the largest integer not greater than the preliminary number of exposures, the starting position and stopping position of the effective light field may be adjusted such that the secondary exposure region between the secondary starting position and the secondary stopping position of the effective light field is not greater than the preliminary exposure region.
According to some embodiments of the present disclosure where the secondary number of exposures is the largest integer not greater than the preliminary number of exposures plus 1, the secondary starting position and the secondary stopping position of the effective light field may be the same as the preliminary starting position and the preliminary stopping position, respectively, while the secondary height of the effective light field is not greater than the preliminary height of the effective light field. Details regarding the above description will be further explained below.
As may be seen in FIG. 11, the distance of the preliminary starting position of the effective light field from the ground plane may be Zstart0, and the distance of the preliminary stopping position of the effective light field from the ground plane may be Zstop0. Based on the preliminary starting position and the preliminary stopping position of the effective light field, the preliminary composing length L0 of the image composition may be equal to:
L0=Zstart0-Zstop0.              (001)
Based on the preliminary composing length L0 and the height of the overlapping region between two adjacent exposures Lp, a preliminary number of exposures Y may be calculated according to the following equation:
Y= (L0-Lp) / (h0-Lp) .            (002)
In some embodiments of the present disclosure, the preliminary number of exposures Y obtained via Equation (002) may be an integer. In such embodiments, the preliminary number of exposures Y may not need to be adjusted. The secondary number of exposures may be set equal to the preliminary number of exposures Y. In some embodiments of the present disclosure, the preliminary number of exposures Y obtained via Equation (002) may not be an integer. In such embodiments, the preliminary number of exposures Y may be adjusted and a secondary number of exposures that is an integer may be obtained. Various methods may be used to adjust the preliminary number of exposures to an integer. For example, the integer part of the preliminary number of exposures Y may be designated as the secondary number of exposures. As another example, the integer part of the preliminary number of exposures Y plus 1 may be designated as the secondary number of exposures. The adjustment methods that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
In some embodiments where the preliminary number of exposures Y is not an integer, the secondary number of exposures may be adjusted according to a rate of change in the composing length. For instance, the rate of change in the composing length may depend on the preliminary composing length and a secondary composing length. Merely by way of example, the rate of change in the composing length may be calculated according to the following equation:
P= (L0-L1) /L0,               (003)
where P represents the rate of change in the composing length, L0 represents the preliminary composing length, and L1 represents the secondary composing length. The secondary composing length may be calculated according to the following equation,
L1=floor (Y) × (h0-Lp) +Lp,            (004)
where the function floor (x) is the largest integer not greater than x.
The rate of change in the composing length may be compared with a threshold to determine the secondary number of exposures such that the difference of the secondary number of exposures and the preliminary number of exposures is less than 1.
In some embodiments of the present disclosure, when the rate of change in the composing length is less than or equal to the threshold, the remnant composing length for the last exposure may be less than the height of the effective light field, the impact of which on the image composition may be considered insignificant. Then the secondary number of exposures may be set equal to floor (Y) and the fractional part of the preliminary number of exposures may be discarded. In some embodiments where the rate of change in the composing length is greater than the threshold, the remnant composing length for the last exposure may be less than the height of the effective light field, but the impact of which on the image composition may be considered significant. Then the fractional part of the preliminary number of exposures may be retained. The secondary number of exposures may be equal to floor (Y) + 1.
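The determination of the secondary number of exposures described above, combining Equations (001) through (004) with the threshold test, can be sketched as follows; the function name and the 5% default threshold are illustrative assumptions rather than part of the disclosure:

```python
import math

def secondary_exposure_count(z_start0, z_stop0, h0, lp, threshold=0.05):
    """Decide the secondary number of exposures from the preliminary
    exposure region, following Equations (001)-(004)."""
    l0 = z_start0 - z_stop0                 # Equation (001): composing length
    y = (l0 - lp) / (h0 - lp)               # Equation (002): preliminary count
    if y == int(y):                         # already an integer: no adjustment
        return int(y)
    l1 = math.floor(y) * (h0 - lp) + lp     # Equation (004): secondary length
    p = (l0 - l1) / l0                      # Equation (003): rate of change
    # Discard the fractional part when the change is insignificant;
    # otherwise retain it and round up to floor(Y) + 1.
    return math.floor(y) if p <= threshold else math.floor(y) + 1
```

For instance, with Zstart0 = 180, Zstop0 = 30, h0 = 40, and Lp = 10, Y ≈ 4.67 and P ≈ 13%, so the fractional part is retained and five exposures are used.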
In some embodiments of the present disclosure, the threshold may be set within the range of 3% to 7%. More particularly, in some embodiments, the threshold may be 5%. In some embodiments, the threshold may be 6% or 7%. Yet in other embodiments, the user may adjust the threshold based on the clinical needs at his discretion.
The methods of determining a number of exposures that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
In some embodiments of the present disclosure, the preliminary number of exposures is an integer. In such an occasion, it may be unnecessary to adjust one or more preliminary device parameters. The imaging device may be configured according to those preliminary parameters. Alternatively, the secondary device parameters may be set equal to the preliminary parameters, and the imaging device may be configured according to such secondary device parameters. In some embodiments of the present disclosure where the preliminary number of exposures is not an integer, the preliminary device parameters may be adjusted to provide secondary device parameters. As discussed above, the secondary number of exposures may be set equal to floor (Y) or floor (Y) + 1, and the secondary device parameters may be adjusted according to the secondary number of exposures.
In some embodiments where the secondary number of exposures equals floor (Y) , the secondary number of exposures may be less than the preliminary number of exposures. The secondary composing length may be calculated as:
L1=floor (Y) × (h0-Lp) +Lp.           (005)
The secondary composing length corresponding to the secondary number of exposures may be shorter than the preliminary composing length corresponding to the preliminary number  of exposures. Therefore, the preliminary starting position and preliminary stopping position of the effective light field may be adjusted to provide a secondary starting position and a secondary stopping position of the effective light field, respectively. The distance between the starting position and the stopping position of the effective light field may be the secondary composing length.
In some embodiments of the present disclosure where the secondary number of exposures equals floor (Y) , the secondary starting position Zstart and the secondary stopping position Zstop of the effective light field corresponding to the secondary device parameters may be obtained using the following equations, respectively:
Zstart=Zstart0- (L0-L1) /2;            (006)
and,
Zstop=Zstop0+ (L0-L1) /2,             (007)
where the height of the effective light field may be equal to the preliminary height of the effective light field h0.
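For the floor (Y) case, Equations (005) through (007) shrink the exposure region symmetrically about its center; a minimal sketch (the function name is illustrative):

```python
import math

def adjusted_light_field_region(z_start0, z_stop0, h0, lp):
    """Secondary starting/stopping positions when the secondary number
    of exposures is floor(Y), per Equations (005)-(007)."""
    l0 = z_start0 - z_stop0
    y = (l0 - lp) / (h0 - lp)
    l1 = math.floor(y) * (h0 - lp) + lp     # Equation (005)
    z_start = z_start0 - (l0 - l1) / 2      # Equation (006)
    z_stop = z_stop0 + (l0 - l1) / 2        # Equation (007)
    return z_start, z_stop
```

The returned region has length L1, so the difference L0 - L1 is split evenly between the top and the bottom of the preliminary region.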
In some embodiments where the secondary number of exposures equals to floor (Y) + 1, the secondary exposure region may be set equal to the preliminary exposure region such that the secondary composing length equals to the preliminary composing length. Therefore, the starting position and the stopping position of the effective light field do not need to be adjusted. Instead, the height of the effective light field may be adjusted to achieve the number of exposures of floor (Y) + 1. The secondary height of the effective light field may be less than the preliminary height of the effective light field.
Merely by way of example where the secondary number of exposures equals to floor (Y) + 1, the secondary starting position of the effective light field is set to be the preliminary starting position of the effective light field Zstart0, and the secondary stopping position of the effective light field is set to be the preliminary stopping position of the effective light field Zstop0, the secondary height of the effective light field may be obtained using the following equation:
h=Lp+ (L0-Lp) / (floor (Y) +1)           (008)
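Equation (008) can be evaluated directly; a brief sketch with an illustrative function name:

```python
import math

def secondary_light_field_height(z_start0, z_stop0, h0, lp):
    """Reduced effective light field height for floor(Y) + 1 exposures
    over an unchanged exposure region, per Equation (008)."""
    l0 = z_start0 - z_stop0
    y = (l0 - lp) / (h0 - lp)
    return lp + (l0 - lp) / (math.floor(y) + 1)   # Equation (008)
```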
In some embodiments of the present disclosure where a series of X-ray images are generated, the secondary exposure region may be equal to or smaller than the preliminary exposure region. Hence, the radiation dose received by the target body, for example, a human patient, may be reduced. In some embodiments of the present disclosure, the determination of the secondary number of exposures may depend on a rate of change in the composing length. The composite image may satisfy practical clinical demands regardless of whether the secondary number of exposures equals floor (Y) or floor (Y) + 1.
The processes for determining a device parameter that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
In some embodiments of the present disclosure, the secondary number of exposures may be the same as the preliminary number of exposures. In some embodiments, the secondary number of exposures may be the largest integer not greater than the preliminary number of exposures. In some embodiments, the secondary number of exposures may be the largest integer not greater than the preliminary number of exposures plus 1. Various secondary device parameters may be applied according to different secondary numbers of exposures. In some embodiments, the position of the detector and the tube rotation angle corresponding to an exposure may be obtained according to a secondary number of exposures and secondary device parameters.
In some embodiments of the present disclosure, the position of the detector at an exposure may be obtained. The distance between the X-ray generator (or another type of radiation source) and the detector (or the image-receptor) may be referred to as the source to image-receptor distance (SID, or S) . The change of position of the focus of the tube 907 along the Z-axis may be significantly smaller than the SID, such that the position of the focus of the tube 907 may be treated as approximately fixed along the Z-axis. The tube 907 may rotate about the RHA-axis within the XZ-plane. In other words, during the imaging process, the distance between the focus of the tube 907 and the ground plane may be approximately fixed; the tube 907 may rotate about the RHA-axis via the second support structure 912 within the XZ-plane (with reference to FIG. 9 and FIG. 10) . The detector 903 may move up and/or down along the vertical stand 904 in the Z-axis.
In some embodiments of the present disclosure where the preliminary number of exposures is an integer, the imaging device may be configured according to the preliminary device parameters. Alternatively, the secondary device parameters may be set to be the same as the preliminary device parameters, and the imaging device may be configured according to the secondary device parameters. The position of the detector along the Z-axis may be obtained from the preliminary position of each exposure.
Returning to FIG. 11, the height of the overlapping region between two adjacent exposures may be Lp. The preliminary position of the first exposure may correspond to the upper edge of the effective light field corresponding to the first exposure. The preliminary position of the first exposure Z1 along the Z-direction may be set to be Zstart0. The preliminary position of the second exposure may be the upper edge of the effective light field corresponding to the second exposure. The preliminary position of the second exposure Z2 may be set to be:
Z2=Zstart0-h0+Lp.             (009)
Similarly, the preliminary position of the nth exposure may be the upper edge of the effective light field corresponding to the nth exposure. The position of the nth exposure Zn may be set to be:
Zn=Zstart0- (n-1) ×h0+ (n-1) ×Lp,          (010)
where n stands for the nth exposure.
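Equation (010) yields the upper-edge position of the effective light field for every exposure; a short sketch (illustrative name):

```python
def exposure_positions(z_start0, h0, lp, n_exposures):
    """Upper-edge positions Z1..Zn of the effective light field,
    per Equation (010): Zn = Zstart0 - (n-1)*h0 + (n-1)*Lp."""
    return [z_start0 - (n - 1) * h0 + (n - 1) * lp
            for n in range(1, n_exposures + 1)]
```

Each step moves the field down by h0 - Lp, so consecutive fields overlap by exactly Lp.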
As already described, the effective light field may be the light field received by the detector that may be effective to provide imaging information. Therefore, the position of the detector in the Z direction may be determined based on the effective light fields in the Z direction.
In some embodiments, the center of the detector may be considered as the position of the detector, and the position of the center of the detector may be determined according to the upper edge of a corresponding effective light field. With reference to FIG. 11, the position of the center of the detector corresponding to the first exposure may be determined according to:
ZFD1=Z1- (h0/2) =Zstart0- (h0/2) .          (011)
The position of the center of the detector corresponding to the second exposure may be  determined according to:
ZFD2=Z2- (h0/2) =Zstart0- (3/2) ×h0+Lp.        (012)
Similarly, the position of the center of the detector corresponding to the nth exposure may be determined according to:
ZFDn=Zn- (h0/2) =Zstart0- ( (2×n-1) /2) ×h0+ (n-1) ×Lp,     (013)
where n stands for the nth exposure. The methods of determining the position of detector that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
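Equation (013) can likewise be evaluated for every exposure; a minimal sketch (illustrative name):

```python
def detector_center_positions(z_start0, h0, lp, n_exposures):
    """Z positions of the detector center, per Equation (013):
    ZFDn = Zstart0 - ((2n-1)/2)*h0 + (n-1)*Lp."""
    return [z_start0 - ((2 * n - 1) / 2) * h0 + (n - 1) * lp
            for n in range(1, n_exposures + 1)]
```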
In some embodiments of the present disclosure, the tube rotation angle may refer to the difference between the angle between the axis of the tube corresponding to an exposure and the X-axis with respect to the XZ-plane, and the angle between the axis of the tube corresponding to the preceding exposure and the X-axis with respect to the XZ-plane. In some embodiments of the present disclosure, the tube rotation angle may refer to the angle that the tube rotates between an exposure and a preceding exposure. For example, if the angle between the axis of the tube and the X-axis within the XZ-plane corresponding to an exposure equals A, and the angle between the axis of the tube and the X-axis within the XZ-plane corresponding to the preceding exposure equals B, the tube rotation angle corresponding to the exposure may be α=A-B. With reference to FIG. 9 and FIG. 10, the angle between the tube axis and the X-axis is the angle between the tube 907 rotating about the RHA-axis within the XZ-plane and the X-axis.
FIG. 12 illustrates a schematic view of the tube rotation angle corresponding to the nth exposure. As may be seen in FIG. 12, point G may represent the point of the rotation axis of the tube. Point M and point Q may represent the positions of the focus of the tube corresponding to two adjacent exposures, respectively. The angle αRHA between GM and GQ may be the tube rotation angle corresponding to the nth exposure.
As illustrated in FIG. 12, the point of the rotation axis of the tube G may intersect with the detector at point E in the X-axis direction. The focus of the tube M may intersect with the detector at point A in the X-axis direction. The effective light field that the tube projects onto the detector may be between point D and point B. Point C may be the intersection of the detector and the bisector of the rays emitted from the focus of the tube. MN may intersect with the horizontal line perpendicularly at N. The angle between MA and MD may be α2, the angle between MA and MB may be α1, and the angle between MA and MC may be α3.
The distance between point Q and point E may be the SID, SSID. With reference to FIG. 12,
MA=QE+NQ=QE+GM× (1-cosαRHA) .        (014)
In some embodiments, the length of GM may be significantly greater than the length of QE, such that:
GM× (1-cosαRHA) ≈0,              (015)
and,
MA=QE.                (016)
In other words,
MA=SSID.                (017)
The position of point M along the Z-axis direction may be ZTCS. Point D may be the upper edge of the effective light field corresponding to the nth exposure. The position of point D along the Z-axis direction may be:
Zn=Zstart0- (n-1) ×h0+ (n-1) ×Lp.          (018)
Point B may be the lower edge of effective light field corresponding to the nth exposure. The position of point B along the Z-axis direction may be Zn-h0. The following equations may be obtained according to the above description:
DA=Zn-ZTCS,               (019)
and,
BA=DA-h0=Zn-ZTCS-h0.           (020)
With reference to FIG. 12,
αRHA=α3= (α1+α2) /2,             (021)
wherein
α1=arctan (BA/MA) ,
and
α2=arctan (DA/MA) .
Substituting α1 and α2 into Equation 021, then,
αRHA= (arctan (BA/MA) +arctan (DA/MA) ) /2.         (022)
Substituting MA=SSID, DA=Zn-ZTCS, and BA=DA-h0=Zn-ZTCS-h0 into Equation 022,
αRHA= (arctan ( (Zn-ZTCS-h0) /SSID) +arctan ( (Zn-ZTCS) /SSID) ) /2,     (023)
where αRHA may be the difference of the angle between the axis of the tube corresponding to the nth exposure and the X-axis within the XZ-plane, and the angle between the axis of the tube corresponding to the (n-1) th exposure and the X-axis within the XZ-plane.
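Under the approximation MA = SSID introduced in Equations (014) through (017), the angle of Equation (023) can be sketched as follows (illustrative function name; angles in radians):

```python
import math

def tube_axis_angle(z_n, z_tcs, h0, sid):
    """Angle of the tube axis relative to the X-axis for the exposure
    whose effective light field has its upper edge at z_n, per Equation
    (023): the bisector of the rays toward the upper and lower edges."""
    da = z_n - z_tcs            # Equation (019)
    ba = da - h0                # Equation (020)
    return (math.atan(da / sid) + math.atan(ba / sid)) / 2
```

When the field is centered on the focus height (DA = -BA), the angle is zero; fields below the focus give negative angles.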
In some embodiments of the present disclosure where the secondary number of exposures is less than the preliminary number of exposures, the imaging device may be configured according to the secondary device parameters. The positions of the detector and the rotation angles of the tube may be determined in a process similar to that described above in connection with some embodiments of the present disclosure where the preliminary number of exposures is an integer.
As discussed above, in some embodiments where the secondary number of exposures is larger than the preliminary number of exposures, the secondary starting position and secondary stopping position of the effective light field may be the same as the preliminary starting and stopping position of the effective light field. The secondary height of the effective light field may be different from the preliminary height of the effective light field, and may be denoted as h. When calculating the position of the center of the detector and the tube rotation angles, the  preliminary height of the effective light field in relevant parameters may be substituted by the secondary height of the effective light field. For instance, the position of the detector corresponding to each exposure may be:
ZFDn=Zstart0- ( (2n-1) /2) ×h+ (n-1) ×Lp,         (024)
and the tube rotation angle corresponding to each exposure may be:
αRHA= (arctan ( (Zn-ZTCS-h) /SSID) +arctan ( (Zn-ZTCS) /SSID) ) /2,     (025)
where,
Zn=Zstart0- (n-1) ×h+ (n-1) ×Lp,           (026)
h=Lp+ (L0-Lp) / (floor (Y) +1) ,           (027)
and,
L0=Zstart0-Zstop0.              (028)
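Equations (024) through (028) can be combined into one routine for the floor (Y) + 1 case; the function name is illustrative:

```python
import math

def device_parameters_extra_exposure(z_start0, z_stop0, h0, lp):
    """Secondary field height h and detector center positions when the
    secondary number of exposures is floor(Y) + 1, per Equations
    (024), (027), and (028)."""
    l0 = z_start0 - z_stop0                         # Equation (028)
    y = (l0 - lp) / (h0 - lp)
    n_exp = math.floor(y) + 1
    h = lp + (l0 - lp) / n_exp                      # Equation (027)
    detector = [z_start0 - ((2 * n - 1) / 2) * h + (n - 1) * lp
                for n in range(1, n_exp + 1)]       # Equation (024)
    return h, detector
```

With the reduced height h, the lower edge of the last field coincides with Zstop0, so the exposure region is unchanged.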
The methods of determining the positions of the detector and tube that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
In some embodiments, in clinical practice, a user may input one or more preliminary device parameters and the height of the overlapping region based on a preliminary exposure region. In some embodiments, the preliminary device parameters may be provided by the image composition system during initialization. The image composition system may calculate a preliminary number of exposures according to the inputs and obtain a secondary number of exposures and a secondary device parameter. During imaging, the user may designate a starting position and stopping position of the effective light field according to those secondary parameters. In some embodiments, the designation may be performed under the instruction of the user. In other embodiments, the image composition system may perform the designation automatically. The position of the detector corresponding to an exposure may be obtained according to the position of the effective light field. The image composition system may position the tube, the detector, and/or the beam limiting devices accordingly and generate a series of images for composition.
In some embodiments, a preliminary number of exposures may be obtained based on a preliminary exposure region. A secondary number of exposures may be obtained based on the preliminary number of exposures, as well as a secondary device parameter, such that the secondary exposure region corresponding to the secondary number of exposures may be equal to or smaller than the preliminary exposure region, and that the absolute difference between the preliminary number of exposures and the secondary number of exposures may be less than 1. In some embodiments, in clinical practice, the number of exposures may be an integer. Under such circumstances, the system and process according to some embodiments of the present disclosure may help ensure that the secondary number of exposures is not greater than the preliminary number of exposures, and may reduce the radiation dose the patient is exposed to.
In some embodiments of the present disclosure, the position of the tube along the z-axis may be approximately fixed, and the rotation of the tube may be confined within the XZ-plane. The position of the detector may be adjusted along with the rotation of the tube. In some embodiments, the detector may move along the Z-axis simultaneously with the rotation of the tube. In some embodiments, the motion of the detector and the rotation of the tube may be performed one after another.
In some embodiments of the present disclosure, a composite image may be obtained. The image composition system may obtain a series of sub-images through adjusting the parameters that may be used to configure an imaging device. The image composition system may obtain the position of the effective light field corresponding to an exposure and the height of the overlapping region between two adjacent sub-images. The image composition system may obtain a composite image by combining adjacent sub-images according to the position of the effective light field corresponding to an exposure and the height of the overlapping region between two adjacent sub-images. The methods of obtaining a composite image that may be used in connection with the present system described above are not exhaustive and are not limiting. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all  such changes, substitutions, variations, alterations, and modifications as falling within the scope of the present disclosure.
FIG. 13 is a block diagram illustrating the image processing engine 203 according to some embodiments of the present disclosure. The image processing engine 203 may include a segmentation module 1301, a registration module 1302, a merger module 1303, and a correction module 1304. The segmentation module 1301 may be configured to segment one or more overlapping regions of 3D images received from the acquisition engine 202 (shown in FIG. 2) to produce overlapping images corresponding to 3D volume data. Optionally and preferably, the 3D images may be three-dimensional digital subtraction angiography images (3D-DSA images) . Specifically, the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof. Alternatively, the segmentation module 1301 may be configured to segment images stored in the storage module 204; the images may be, for example, 3D images, 2D images, or the like, or a combination thereof. Particularly, the 3D images may be 3D-DSA images, including 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof. Merely by way of example, a series of scans may generate a plurality of sub-images to be combined together so that a composite image may be acquired. Two adjacent sub-images may have an overlapping region, and the overlapping region of a sub-image may be segmented out by the segmentation module 1301. The segmented overlapping region may be termed an overlapping image corresponding to 3D volume data. It should be noted that exemplary sub-images may be 3D images, such as the 3D-DSA images described above.
In some embodiments of the present disclosure, the segmentation may be performed in accordance with Digital Imaging and Communications in Medicine (DICOM) . For instance, the DICOM tag (0020, 0032) (Image Position (Patient) ) may be utilized to segment overlapping regions out of 3D images corresponding to 3D volume data. It should be noted that the example described here is provided merely for the purposes of illustration, and should not be deemed to limit the scope of the present disclosure. Any number of overlapping regions of 3D images may be segmented by the segmentation module 1301. It should still be understood that overlapping images of any size may be segmented by the segmentation module 1301, for example, 351*67*73 or 352*512*96. Furthermore, the segmentation module 1301 may also be configured to segment 2D images.
The segmented overlapping images corresponding to 3D volume data may be sent to the registration module 1302 for one or more registrations. The registration module 1302 may be configured to perform registrations including, for example, 2D registration, 3D registration, or the like, or a combination thereof. The registration may be performed to bring two or more images into spatial alignment. The images may be taken, for instance, at different times, from different viewpoints, or by different modalities. Merely by way of example, the overlapping images corresponding to 3D volume data generated in the segmentation module 1301 may be registered by the registration module 1302. Afterwards, registered overlapping images corresponding to 3D volume data may be generated. The registered overlapping images corresponding to 3D volume data may be sent to the correction module 1304 for a fine registration, or to the merger module 1303 for image fusion to generate a composite image. The fine registration may include a process of optimization based on one or more algorithms. Exemplary algorithms may include recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide-and-conquer algorithm, a dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or any combination thereof. The merger module 1303 may be configured to calibrate the sub-images based on the 3D registration, and the calibrated sub-images may be fused together to generate a composite image.
It should be understood that the description of the image processing engine 203 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, numerous variations and modifications may be made under the teaching of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure. For instance, the registered overlapping images corresponding to 3D volume data may be sent to the correction module 1304 and the merger module 1303 simultaneously or sequentially in any order. As another instance, the correction module 1304 may be unnecessary. Sub-images that are not adjacent may have one or more overlapping regions as well.
FIG. 14 is a flowchart illustrating a workflow of image processing according to some  embodiments of the present disclosure. In some embodiments of the present disclosure, sub-images of a region of interest (e.g., peripheral vessels of lower limbs) may be acquired in step 1401. The sub-images may be acquired by techniques including DSA (digital subtraction angiography) , CT (computed tomography) , CTA (computed tomography angiography) , PET (positron emission tomography) , X-ray, MRI (magnetic resonance imaging) , MRA (magnetic resonance angiography) , SPECT (single-photon emission computerized tomography) , US (ultrasound scanning) , or the like, or a combination thereof. It should be noted that the sub-images may be 3D images, 2D images, or the like, or a combination thereof. Particularly, the 3D images may be 3D-DSA images. Optionally and preferably the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
Overlapping images corresponding to 3D volume data of the sub-images may be generated in step 1402. Merely by way of example, two adjacent sub-images acquired by two successive scans may have two overlapping images corresponding to 3D volume data of each sub-image, respectively. The two overlapping images corresponding to 3D volume data of the sub-images may be segmented and generated in step 1402.
Subsequently, the overlapping images corresponding to 3D volume data generated in step 1402 may be registered in step 1403. The registration may be performed to reduce misalignment between adjacent sub-images. Exemplary reasons for the misalignment may include that the layout of the object being scanned is not perfectly parallel to the scanning plane of the imaging device; as a result, successive sub-images may be misaligned spatially. Other reasons may include, for example, the motion of a patient during the imaging procedure, the motion of an internal organ of the patient during the imaging procedure, the motion of the imaging device during the imaging procedure, or the like, or a combination thereof. The registration may include a 2D registration, a 3D registration, or the like, or a combination thereof. Specifically, the process of a registration may include calculating one or more offsets and applying the offsets to reduce the misalignment. The registered overlapping images corresponding to 3D volume data may be stored in the storage engine 204.
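The offset calculation at the core of such a registration can be sketched as follows. This is a minimal illustration only: the function name `estimate_2d_offset`, the exhaustive search over integer translations, and the sum-of-squared-differences score are assumptions for the sketch, not the implementation claimed in this disclosure.

```python
import numpy as np

def estimate_2d_offset(reference, floating, max_shift=5):
    """Estimate the integer shift (dx, dz) that best aligns `floating`
    to `reference`, by exhaustive search over a small shift range.
    Each candidate shift is scored with the mean squared difference on
    the region the two images still share after shifting; the returned
    offset satisfies floating[i + dx, j + dz] ~= reference[i, j]."""
    best_score, best_offset = np.inf, (0, 0)
    h, w = reference.shape
    for dx in range(-max_shift, max_shift + 1):
        for dz in range(-max_shift, max_shift + 1):
            # Crop both images to their common region under this shift.
            flo = floating[max(dx, 0):h + min(dx, 0), max(dz, 0):w + min(dz, 0)]
            ref = reference[max(-dx, 0):h + min(-dx, 0), max(-dz, 0):w + min(-dz, 0)]
            score = np.mean((ref - flo) ** 2)
            if score < best_score:
                best_score, best_offset = score, (dx, dz)
    return best_offset
```

Applying the negated offset to the floating image would then reduce the misalignment, which is the "calculating one or more offsets and applying the offsets" step described above.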
In step 1404, a fine registration may be performed on the registered overlapping images corresponding to 3D volume data. The fine registration may include a process of optimization based on one or more algorithms. Exemplary algorithms may include recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide and conquer algorithm, a dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or any combination thereof. Results generated by the fine registration (e.g., one or more offsets) may be utilized to register the overlapping images corresponding to 3D volume data again. It should be noted that the optimization may be performed iteratively in step 1404 until a desirable result is obtained.
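One of the listed optimization strategies, an iterative greedy search, can be illustrated as follows: an initial offset is nudged toward lower cost until no neighbouring offset improves it. The function `refine_offset` and its cost callback are hypothetical names for this sketch; the disclosure permits any of the algorithms enumerated above.

```python
def refine_offset(score_fn, initial, max_iter=50):
    """Greedy iterative fine registration over integer 2D offsets:
    at each step, move to the neighbouring offset with the lowest
    cost; stop when no neighbour improves on the current offset."""
    current = tuple(initial)
    best = score_fn(current)
    for _ in range(max_iter):
        neighbours = [(current[0] + dx, current[1] + dz)
                      for dx in (-1, 0, 1) for dz in (-1, 0, 1)
                      if (dx, dz) != (0, 0)]
        scores = [(score_fn(n), n) for n in neighbours]
        best_n_score, best_n = min(scores)
        if best_n_score >= best:
            break  # local optimum reached; a desirable result is obtained
        best, current = best_n_score, best_n
    return current
```

In practice `score_fn` would measure the residual misalignment of the overlapping images under a candidate offset, so the loop corresponds to the "performed iteratively until a desirable result is obtained" behaviour described above.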
In step 1405, the sub-images may be calibrated in accordance with the results of the registration in step 1403 or the fine registration in step 1404. The results may be one or more offsets including, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc. Afterwards, the calibrated sub-images may be fused together to generate a composite image. Particularly, the overlapping images corresponding to 3D volume data representing the overlapping regions of the sub-images may be fused.
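The fusion of two calibrated sub-images on their shared overlap can be sketched as a weighted blend. The row-wise overlap layout and the linear weight ramp are assumptions made for this illustration; the disclosure does not prescribe a particular blending scheme.

```python
import numpy as np

def fuse_sub_images(upper, lower, overlap):
    """Fuse two calibrated 2D sub-images that share `overlap` rows:
    the last `overlap` rows of `upper` cover the same anatomy as the
    first `overlap` rows of `lower`. The shared region is blended with
    linear weights ramping from the upper to the lower image."""
    top = upper[:-overlap]
    bottom = lower[overlap:]
    w = np.linspace(0.0, 1.0, overlap)[:, None]  # per-row blend weight
    blended = (1.0 - w) * upper[-overlap:] + w * lower[:overlap]
    return np.vstack([top, blended, bottom])
```

The same idea extends slice by slice to the 3D overlapping images corresponding to 3D volume data.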
The composite image may be output for display in step 1406. In the exemplary context of angiography, a composite image of the vasculature including one or more blood vessels of a region of interest may be obtained for, for example, diagnosis purposes. It should be noted that, apart from vascular diseases, the method described in the disclosure may be utilized to examine a region of interest such as the head, thorax, abdomen, pelvis and perineum, limbs, spine and vertebrae, or the like, or a combination thereof. Specifically, the head may include the brain or skull, an eye, teeth, or the like, or a combination thereof. The thorax may include the heart, a breast, or the like, or a combination thereof. The abdomen may include a kidney, the liver, or the like, or a combination thereof. The limbs may include an arm, a leg, a wing of a bird, or the like, or a combination thereof.
It should be noted that the flowchart described above is provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the guidance of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the region of interest may be a gastrointestinal tract. Three or more overlapping images corresponding to 3D volume data may be registered in step 1403. Step 1404 may be unnecessary, and step 1403 may proceed to step 1405 directly.
FIG. 15 depicts a block diagram of the registration module 1302 according to some embodiments of the present disclosure. The registration module 1302 may include an acquisition unit 1501, a 2D registration unit 1502, and a 3D registration unit 1503. The acquisition unit 1501 may be configured to acquire images to be registered. The images may be 3D images, 2D images, or the like, or a combination thereof. Particularly, the 3D images may be 3D-DSA images. For instance, the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof. In some embodiments of the present disclosure, one or more overlapping images corresponding to 3D volume data of adjacent sub-images which have mutual overlapping regions may be acquired by the acquisition unit 1501. The overlapping images corresponding to 3D volume data may include a stack of 2D images.
A 2D registration may be performed by the 2D registration unit 1502. Overlapping images corresponding to 3D volume data may be projected onto a coronal plane, a sagittal plane, a transverse plane, etc. The projection may be based on maximum intensity projection (MIP) . Optionally and preferably, the projection may be based on temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or a combination thereof. The following description is provided in the exemplary context of MIP. This is for illustration purposes only, and not intended to limit the scope of the present disclosure. Other types of projection may be used to generate a 2D projection map. By way of MIP, a 2D projection image and a pixel map may be generated based on a set of 3D volume data. An overlapping region may correspond to two sets of 3D volume data, one set relating to one of two adjacent 3D sub-images. By way of MIP, two 2D projection images and two corresponding pixel maps may be generated.
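As a minimal illustration of MIP, each pixel of the 2D projection keeps the brightest voxel encountered along the projection axis. Which array axis corresponds to the coronal, sagittal, or transverse plane depends on how the volume is stored, so the default `axis=0` here is an assumption for the sketch.

```python
import numpy as np

def project_mip(volume, axis=0):
    """Maximum intensity projection (MIP): collapse the 3D `volume`
    along `axis`, keeping the brightest voxel at each remaining
    (row, column) position of the 2D projection image."""
    return volume.max(axis=axis)
```

A tMIP or MinIP variant would differ only in the reduction applied along the axis (e.g. `min` instead of `max`).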
The 2D registration may include uncovering the correlation between the 2D projection images. The correlation may be utilized to determine 2D offsets of the two corresponding pixel maps. The correlation or the 2D offsets may be used to calibrate the pixel maps. The calibration may be performed by the 2D registration unit 1502. Merely by way of example, the overlapping images corresponding to 3D volume data may include a stack of 2D images in the coronal plane. The overlapping images may be projected onto the coronal plane. Alternatively, the overlapping images corresponding to 3D volume data may be projected onto another plane, for example, a self-defined plane. It should be noted that the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
The 3D registration unit 1503 may be configured to calculate 3D offsets and perform 3D registration on the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets. It should be noted that the 3D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets) , offsets in the Y direction (Y offsets) , offsets in the Z direction (Z offsets) , offsets in the coronal plane (coronal offsets) , offsets in the sagittal plane (sagittal offsets) , offsets in the transverse plane (transverse offset) , etc.
It should be understood that the preceding description of the registration module is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be made in the light of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure. For example, the 3D registration may be performed on the overlapping images corresponding to 3D volume data directly without performing a 2D registration. As another example, the 2D registration may be solely performed without performing a 3D registration.
FIG. 16 is a flowchart illustrating a registration process according to some embodiments of the present disclosure. Overlapping images corresponding to 3D volume data may be retrieved in step 1601. The overlapping images corresponding to 3D volume data may be generated by segmenting sub-images, which is described elsewhere in the present disclosure. The overlapping images corresponding to 3D volume data may include a stack of 2D images. Merely by way of example, the overlapping images corresponding to 3D volume data may be projected onto any of the anatomical planes. Exemplary anatomical planes may include a coronal plane, a sagittal plane, a transverse plane, etc. Specifically, when the overlapping images corresponding to 3D volume data include 3D digital coronal images, the overlapping images corresponding to 3D volume data may be projected onto the coronal plane. Likewise, when the overlapping images corresponding to 3D volume data include 3D digital sagittal images, the overlapping images corresponding to 3D volume data may be projected onto the sagittal plane. Two-dimensional projection images of the overlapping images corresponding to two sets of 3D volume data and the corresponding pixel maps may be generated after the projection is completed. The projection may be based on maximum intensity projection (MIP). Optionally and preferably, the projection may be based on temporal maximum intensity projection (tMIP), minimum intensity projection (MiniP), virtual endoscopic display (VED), or the like, or a combination thereof.
A 2D registration may be performed on the 2D images in step 1602. The 2D registration may include calculating 2D offsets and applying 2D offsets. It should be noted that the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets) , offsets in the Y direction (Y offsets) , offsets in the Z direction (Z offsets) , offsets in the coronal plane (coronal offsets) , offsets in the sagittal plane (sagittal offsets) , offsets in the transverse plane (transverse offset) , etc. Any number of 2D images of the overlapping images corresponding to 3D volume data may be registered in step 1602.
In step 1603, a 3D registration may be performed on the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets generated thereby. It should be noted that the 3D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc. Specifically, the 3D offsets may be generated based on the 2D offsets in the plane onto which the overlapping images have been projected to generate the 2D projection images, and on the slice information of the overlapping images corresponding to 3D volume data. The slice information may indicate the slice number of a pixel of a 2D projection image. In step 1604, the overlapping images corresponding to 3D volume data may be output for further processing.
It should be noted that the flowchart described above is provided for the purposes of illustration, and not necessarily intended to limit the scope of the present disclosure. Apparently for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure. For example, the 3D registration may be performed directly on the overlapping images corresponding to 3D volume data without performing a 2D registration. As another example, the 2D registration may be performed solely on the overlapping images corresponding to 3D volume data without performing a 3D registration.
FIG. 17 is a block diagram of the registration module 1302 according to some embodiments of the present disclosure. The registration module 1302 may include an acquisition unit 1501, a 2D registration unit 1502, and a 3D registration unit 1503. The 2D registration unit may include a projection subunit 1701, a map generation subunit 1702, a 2D offsets calculation subunit 1703, and a map calibration subunit 1705. The 3D registration unit may include a 3D offsets calculation subunit 1704 and an alignment subunit 1706. The acquisition unit 1501 may be configured to acquire images to be registered. The images may be 3D images, 2D images, or the like, or a combination thereof. Particularly, the 3D images may be 3D-DSA images. Optionally and preferably the 3D-DSA images may be 3D digital coronal images, 3D digital sagittal images, 3D digital transverse images, or the like, or a combination thereof.
In some embodiments of the present disclosure, one or more overlapping images corresponding to 3D volume data of adjacent sub-images that have overlapping regions may be acquired by the acquisition unit 1501. A 2D registration may be performed on 2D projection images generated by projecting the overlapping images corresponding to 3D volume data on an anatomical plane. Exemplary anatomical planes may include a coronal plane, a sagittal plane, a transverse plane, etc. Alternatively, the overlapping images corresponding to 3D volume data may be projected on any plane, for example, a self-defined plane. The 2D registration may be performed by the 2D registration unit 1502. The projection subunit 1701 may be configured to perform the projection. The projection may be based on maximum intensity projection (MIP) . Optionally and preferably, the projection may be based on temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or a combination thereof.
The map generation subunit 1702 may be configured to generate pixel maps of the 2D projection images based on the 2D projection images generated by the projection subunit 1701. The 2D projection images and the pixel maps may be generated simultaneously. In alternative embodiments, the 2D projection images and the pixel maps may be generated sequentially in any order. The 2D offsets calculation subunit 1703 may be configured to generate 2D offsets based on the 2D projection images. It should be noted that the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc.
In some embodiments of the present disclosure, the map generation subunit 1702 may generate one or more pixel maps of the overlapping images corresponding to 3D volume data. The pixel maps may correspond to the 2D projection images generated by the 2D registration unit 1502. Merely by way of example, when an overlapping image corresponding to 3D volume data is projected onto the coronal plane, a pixel map on the coronal plane may be generated. When an LPS coordinate system as described in FIG. 7 is employed, the X-Z plane may indicate the coronal plane, and the overlapping images corresponding to 3D volume data may be projected onto the X-Z plane. A pixel value of the pixel map may be the slice number corresponding to the slice of an overlapping image corresponding to 3D volume data that has the maximum intensity at a pixel (x, z). For example, an overlapping image corresponding to 3D volume data may have 80 slices. When the overlapping image corresponding to 3D volume data is projected onto an anatomical plane (e.g., a coronal plane, a sagittal plane, a transverse plane) to generate a 2D projection image, each pixel of the 2D projection image may correspond to 80 pixels at the same (x, z) distributed among the 80 slices of the overlapping image corresponding to 3D volume data. The pixel value of the pixel map corresponding to the 2D image may be the slice number of the slice that has the maximum intensity at the pixel (x, z) among the 80 slices. Specifically, if the 55th slice of the overlapping image corresponding to 3D volume data has the maximum intensity at the pixel (x1, z1), then the pixel of the pixel map at (x1, z1) may be assigned a value of 55.
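The pixel-map construction described above (recording, at each (x, z), the index of the slice holding the maximum intensity) is essentially an argmax along the slice axis. A sketch, under the assumption that the slice index is the first array axis and using zero-based slice indices:

```python
import numpy as np

def mip_pixel_map(volume):
    """Pixel map accompanying a MIP: at each remaining (x, z)
    position, record the index of the slice that holds the maximum
    intensity along the slice axis (axis 0 of `volume`)."""
    return volume.argmax(axis=0)
```

For a volume of 80 slices in which slice index 55 is brightest at some (x1, z1), the map stores 55 at that pixel, matching the example in the text.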
The 3D offsets calculation subunit 1704 may be configured to calculate 3D offsets. The calculation may be based on the pixel maps. The pixel map generated by the map generation subunit 1702 may be calibrated by the map calibration subunit 1705. The calibration may be based on the 2D offsets. A calibrated pixel map may be generated after the calibration is completed. In alternative embodiments of the present disclosure, no calibration is performed on the pixel maps. In some embodiments, the 3D offsets may be obtained based on the 2D offsets. In alternative embodiments, the 3D offsets may be obtained based on the pixel maps, for example, two pixel maps, or a pixel map and a calibrated pixel map. Merely by way of example, one of the two pixel maps may be designated as a reference pixel map, and the other may be designated as the floating pixel map. The 3D offsets may be calculated through a comparison of the pixel values of corresponding pixels in the reference pixel map and in the floating pixel map. Furthermore, the 3D offsets may be calculated on the basis of the probability distribution of the differences. The alignment subunit 1706 may be configured to perform 3D registration on the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets.
It should be noted that the description above is provided for the purposes of illustration, and not necessarily intended to limit the scope of the present disclosure. Apparently for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure. For example, the 2D offsets may not necessarily be utilized to calibrate a pixel map.
FIG. 18 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure. Overlapping images corresponding to 3D volume data may be projected onto any of the anatomical planes in step 1801. The anatomical planes may include a coronal plane, a sagittal plane, a transverse plane, etc. The projection may be based on maximum intensity projection (MIP). Optionally and preferably, the projection may be based on temporal maximum intensity projection (tMIP), minimum intensity projection (MiniP), virtual endoscopic display (VED), or the like, or a combination thereof.
In step 1802, 2D projection images of the overlapping images corresponding to 3D volume data and corresponding pixel maps may be obtained based on the projection performed in step 1801. The 2D projection images may correlate with the overlapping images corresponding to 3D volume data, as well as with the pixel maps. A pixel value at a pixel (x, z) of the 2D projection images may be the maximum intensity at the corresponding pixel (x, z) among the multiple slices of the overlapping image corresponding to 3D volume data. Two-dimensional offsets may be obtained based on the 2D projection images in step 1803. It should be noted that the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc. The 2D offsets may be utilized to perform further registration, e.g., a 3D registration or a pixel map calibration.
In some embodiments, the pixel maps may be calibrated in step 1804. The calibration may be based on the 2D offsets. 3D offsets may be obtained based on the 2D offsets and the pixel maps in step 1804. In some embodiments, the 3D offsets may be obtained based on the 2D offsets. In alternative embodiments, the 3D offsets may be obtained based on the pixel maps, for example, two pixel maps, or a pixel map and a calibrated pixel map. In step 1805, the alignment may be performed to register the overlapping images corresponding to 3D volume data based on the 2D offsets and the 3D offsets.
It should be noted that the flowchart described above is provided for the purposes of illustration, and not necessarily intended to limit the scope of the present disclosure. Apparently for persons having ordinary skills in the art, numerous variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure. For example, step 1803 may proceed to step 1805 directly without performing step 1804.
FIG. 19 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure. As described in FIG. 7, an LPS coordinate system may be employed herein. As shown in FIG. 7, the X-Y-Z axes define a three-dimensional space such that its origin is located within the chest of a target body. In step 1901, a first overlapping image corresponding to 3D volume data and a second overlapping image corresponding to 3D volume data may be retrieved. The two overlapping images may represent an overlapping region of two sub-images, and the sub-images may be combined to generate a composite image. The overlapping images corresponding to 3D volume data may be segmented from the sub-images by the segmentation module 1301 that is described elsewhere in the present disclosure.
In step 1902, the first overlapping image corresponding to 3D volume data and the second overlapping image corresponding to 3D volume data may be projected onto an anatomical plane. The anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc. The projection may be based on maximum intensity projection (MIP). Optionally and preferably, the projection may be based on temporal maximum intensity projection (tMIP), minimum intensity projection (MiniP), virtual endoscopic display (VED), or the like, or a combination thereof. A first 2D projection image corresponding to the first overlapping image and a second 2D projection image corresponding to the second overlapping image may be generated in step 1902 based on the projection. In some embodiments of the present disclosure, a pixel (x, z) of a 2D projection image may be assigned the maximum intensity of the corresponding pixels (x, z) of the multiple slices constituting a 3D image. For instance, an overlapping image corresponding to 3D volume data may have 80 slices. The value of a pixel (x, z) of the 2D projection image may equal the maximum intensity at the corresponding pixel (x, z) among the 80 slices.
In step 1903, a first pixel map corresponding to the first 2D projection image and a second pixel map corresponding to the second 2D projection image may be generated. According to the description of the pixel map provided elsewhere in the disclosure, a pixel value of the pixel map may be the slice number corresponding to the slice that has the maximum intensity among the multiple slices of an overlapping image corresponding to 3D volume data.
In step 1904, the 2D projection images obtained in step 1902 may be utilized to calculate 2D offsets. It should be noted that the 2D offsets may include offsets in different directions or different planes, for example, offsets in the X direction (X offsets), offsets in the Y direction (Y offsets), offsets in the Z direction (Z offsets), offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc. In some embodiments, a Cartesian coordinate system may be employed herein. In some embodiments, the LPS coordinate system may be employed herein. The overlapping images corresponding to 3D volume data may be projected onto the X-Z plane corresponding to the coronal plane under an MIP. Therefore, the obtained 2D projection images are on the X-Z plane. In step 1904, an X offset and a Z offset may be obtained based on the first 2D projection image and the second 2D projection image. In step 1905, the X offset and the Z offset may be utilized to calibrate the second pixel map as described in step 1903. A calibrated pixel map may be generated from the calibration. In step 1906, a Y offset may be obtained by comparing the first pixel map and the calibrated pixel map. In some embodiments of the present disclosure, the comparison may be performed by way of a subtraction for corresponding pixels on the first pixel map and the calibrated pixel map (or on the reference pixel map and the floating pixel map). The probability or frequency of the differences obtained from the comparison may be assessed to provide a difference range. For example, if differences between 15 and 17 occur most frequently, the difference range may be set to 15-17. After the difference range is determined, an average value of all the differences between the first pixel map and the calibrated pixel map that are within the difference range may be calculated. The calculated average value may be determined as the Y offset.
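The Y-offset estimation of step 1906 can be sketched as follows, under two assumptions made for this illustration: the "difference range" is taken to be the most frequent band of per-pixel slice-number differences of a fixed width, and the final offset is the mean of the differences inside that band. The function name and the band width parameter are hypothetical.

```python
import numpy as np

def y_offset_from_maps(reference_map, floating_map, range_width=3):
    """Estimate the slice (Y) offset between two pixel maps:
    subtract corresponding pixels, find the band of differences of
    width `range_width` that occurs most frequently, and average the
    differences falling inside that band."""
    diff = floating_map.astype(int) - reference_map.astype(int)
    values, counts = np.unique(diff, return_counts=True)
    # Slide a window over the distinct difference values to find the
    # most frequent band (the "difference range" of the text).
    best_count, best_lo = -1, values[0]
    for lo in values:
        mask = (values >= lo) & (values < lo + range_width)
        c = counts[mask].sum()
        if c > best_count:
            best_count, best_lo = c, lo
    in_band = (diff >= best_lo) & (diff < best_lo + range_width)
    return diff[in_band].mean()
```

Restricting the average to the dominant band makes the estimate robust to outlier pixels whose maximum-intensity slice differs for unrelated reasons (e.g. noise).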
In some embodiments, the first pixel map and the second pixel map are compared to determine the Y offset when the correlation between the first pixel map and the second pixel map is determined. The correlation may be represented by the X offset and the Z offset between the first pixel map and the second pixel map.
In step 1907, the X offset, the Y offset, and the Z offset may be optimized in a fine registration by utilizing one or more algorithms. Exemplary algorithms may include recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide and conquer algorithm, dynamic programming method, an iterative method, a branch-and-bound algorithm, a backtracking algorithm, or the like, or any combination thereof. The fine registration may be based on the 2D registration and/or 3D registration. In step 1908, an optimized X offset, an optimized Y offset, and an optimized Z offset may be generated. In step 1909, an alignment may be performed to register the overlapping images corresponding to 3D volume data based on the X offset, the Y offset, and the Z offset, or the optimized offsets.
It should be understood that the description of the flowchart above is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, numerous variations and modifications may be made under the teaching of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure. For instance, step 1902 and step 1903 may be performed concurrently, or sequentially in any order. The calibration in step 1905 may be performed on the first pixel map instead of the second pixel map. The Y offset obtained in step 1906, together with the X offset and the Z offset obtained in step 1904, may be utilized directly to perform the 3D registration without performing step 1907 and step 1908.
FIG. 20 illustrates 2D images and corresponding pixel maps generated by MIP according to some embodiments of the present disclosure. As shown in the figure, two overlapping images corresponding to 3D volume data may be projected onto the coronal plane; as a result, 2D projection images MIP1 and MIP2 may be generated. As the overlapping images corresponding to 3D volume data are projected onto the coronal plane, two pixel maps corresponding to MIP1 and MIP2 respectively may be generated along with the 2D projection images. As illustrated in the figure, MAP1 is a pixel map corresponding to the 2D projection image MIP1, while MAP2 is another pixel map corresponding to the 2D projection image MIP2. The pixel values of MAP1, as well as those of MAP2, may correlate with those of MIP1 and MIP2 respectively. Region 2001 is a partial zoom of the 2D projection image MIP1. By way of MIP, every pixel of MIP1 may have a value of the maximum intensity among the multiple slices of the overlapping image corresponding to 3D volume data. Region 2002 is a partial zoom of the pixel map MAP1. Every pixel value of MAP1 may be the slice number of the slice with the maximum intensity (for example, the maximum grey value). For instance, as shown in region 2002, the number 57 may indicate that the 57th slice of the corresponding overlapping image corresponding to 3D volume data has the maximum intensity; thus the number 57 is stored in the corresponding pixel of MAP1.
It should be understood that the 2D images and the corresponding pixel maps described above are provided for the purposes of illustration, and are not meant to limit the scope of the present disclosure. Apparently for persons having ordinary skills in the art, multiple variations and modifications may be made under the teaching of the disclosure. However, those variations and modifications do not depart from the spirit of the present disclosure.
FIG. 21 is a flowchart illustrating a process of image registration according to some embodiments of the present disclosure. In step 2101, two overlapping images corresponding to 3D volume data, I1 and I2, may be retrieved. Both I1 and I2 may represent overlapping regions of two sub-images and may be segmented from the sub-images by the segmentation module 1301 as described elsewhere in the disclosure. The overlapping images corresponding to 3D volume data may be obtained through techniques including DSA (digital subtraction angiography), CT (computed tomography), CTA (computed tomography angiography), PET (positron emission tomography), X-ray, MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), SPECT (single-photon emission computerized tomography), US (ultrasound scanning), or the like, or a combination thereof. As an example employing MRA, the overlapping images corresponding to 3D volume data may be obtained by performing a segmentation in accordance with DICOM (digital imaging and communications in medicine). Particularly, the label (0020, 0032) of DICOM may be utilized to segment overlapping images corresponding to 3D volume data out of the sub-images.
In step 2102, the overlapping images corresponding to 3D volume data may be projected onto an anatomical plane. The anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc. The projection may be based on maximum intensity projection (MIP). Alternatively, the projection may be based on temporal maximum intensity projection (tMIP), minimum intensity projection (MinIP), virtual endoscopic display (VED), or the like, or a combination thereof. In some embodiments of the present disclosure, I1 and I2 may be projected onto the coronal plane, the sagittal plane, and the transverse plane respectively by performing an MIP. Thus, in every anatomical plane, two 2D projection images, MIP1 and MIP2, corresponding to I1 and I2 are acquired. In step 2103, a 2D registration may be performed on MIP1 and MIP2 in each anatomical plane.
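The 2D registration of step 2103 can be sketched as follows. The disclosure does not mandate a particular similarity metric, so this minimal illustration estimates an in-plane translation by a brute-force search over small integer shifts scored by sum of squared differences; the function name, the shift range, and the metric are assumptions for the sketch only.

```python
import numpy as np

def register_2d(fixed, moving, max_shift=3):
    """Estimate the in-plane translation that best aligns `moving` to
    `fixed` (a simplified stand-in for registering MIP1 and MIP2)."""
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = np.sum((fixed.astype(float) - shifted) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best  # e.g. the pair (T_COR_Z, T_COR_X) for the coronal plane

# Toy example: `moving` is `fixed` shifted by (-2, +1); registration recovers it.
rng = np.random.default_rng(1)
fixed = rng.random((32, 32))
moving = np.roll(np.roll(fixed, -2, axis=0), 1, axis=1)
assert register_2d(fixed, moving) == (2, -1)
```

Running this once per anatomical plane yields the three groups of transformation parameters referred to in step 2104.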
In step 2104, as the result of the 2D registration in step 2103, three groups of transformation parameters, (TCOR_X, TCOR_Z), (TSAG_Y, TSAG_Z), and (TAXI_X, TAXI_Y), may be generated. In step 2105, the transformation parameters generated in step 2104 may be utilized to generate 3D transformation parameters in accordance with the following equations:
a. [Equation, presented as image PCTCN2015090265-appb-000010]
b. [Equation, presented as image PCTCN2015090265-appb-000011]
c. [Equation, presented as image PCTCN2015090265-appb-000012]
In step 2106, an alignment may be performed on the overlapping images corresponding to 3D volume data for a 3D registration based on the 3D transformation parameters (tx, ty, tz).
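The combination in step 2105 can be illustrated in code. The exact equations appear only as image-embedded formulas in the publication; the sketch below assumes one plausible reading, namely that each 3D component averages the two independent 2D measurements available for its axis (X from the coronal and transverse planes, Y from the sagittal and transverse planes, Z from the coronal and sagittal planes). This averaging rule is an assumption for illustration, not the patent's stated formula.

```python
def combine_offsets(cor, sag, axi):
    """Combine three groups of 2D transformation parameters into a 3D
    offset (tx, ty, tz), assuming a simple per-axis average.
    cor = (T_COR_X, T_COR_Z); sag = (T_SAG_Y, T_SAG_Z); axi = (T_AXI_X, T_AXI_Y)."""
    tx = (cor[0] + axi[0]) / 2.0  # X is measured in the coronal and transverse planes
    ty = (sag[0] + axi[1]) / 2.0  # Y is measured in the sagittal and transverse planes
    tz = (cor[1] + sag[1]) / 2.0  # Z is measured in the coronal and sagittal planes
    return tx, ty, tz

assert combine_offsets((4, 2), (6, 4), (2, 8)) == (3.0, 7.0, 3.0)
```

Whatever the precise formula, the structure is the same: each of the three axes is constrained by two of the three planar registrations, and the redundancy can be used to reconcile them.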
It should be understood that the flowchart described above is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, numerous variations and modifications may be made under the teaching of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure.
FIG. 22 is a flowchart illustrating a process of registering N sub-images according to some embodiments of the present disclosure. In step 2201, N sub-images of a series of scans may be retrieved, and every two successive sub-images may have one or more overlapping regions. In step 2202, each sub-image may be segmented by the segmentation module 1301 to produce an overlapping image corresponding to 3D volume data that represents the overlapping region. In step 2203, one or more registrations may be performed on each pair of successive overlapping images corresponding to 3D volume data, and 2D offsets and/or 3D offsets may be generated. It should be noted that the 2D offsets, or the 3D offsets, may include offsets in different directions or different planes, for example, offsets in the X direction, offsets in the Y direction, offsets in the Z direction, offsets in the coronal plane (coronal offsets), offsets in the sagittal plane (sagittal offsets), offsets in the transverse plane (transverse offsets), etc. The registrations may include 2D registration, 3D registration, or the like, or a combination thereof. The registrations of the overlapping images corresponding to 3D volume data are described elsewhere in the disclosure. In step 2204, each pair of successive sub-images may be calibrated in accordance with the results of the registration in step 2203. In step 2205, the calibrated sub-images may be fused to generate a composite image that may be used for disease diagnosis. Particularly, the overlapping images corresponding to 3D volume data representing the overlapping regions of the sub-images may be fused.
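The segment-register-calibrate-fuse loop of steps 2201-2205 can be sketched end to end on deliberately simplified 1D "sub-images" (a toy stand-in for 3D volumes): the shared samples of each successive pair play the role of the overlapping images, a mean intensity difference stands in for the 2D/3D registration result, and the overlap is fused by averaging. Everything here is an illustrative assumption, not the patent's full method.

```python
import numpy as np

def compose(sub_images, overlap):
    """Stitch successive 1D sub-images that share `overlap` samples:
    segment the overlap, register (estimate a scalar offset), calibrate
    the later sub-image, and fuse the overlap by averaging."""
    result = np.asarray(sub_images[0], dtype=float)
    for nxt in sub_images[1:]:
        nxt = np.asarray(nxt, dtype=float)
        a, b = result[-overlap:], nxt[:overlap]   # the overlapping "images"
        offset = (a - b).mean()                   # registration result (step 2203)
        nxt = nxt + offset                        # calibration (step 2204)
        fused = (result[-overlap:] + nxt[:overlap]) / 2.0  # fusion (step 2205)
        result = np.concatenate([result[:-overlap], fused, nxt[overlap:]])
    return result

# Two miscalibrated pieces of one scan, overlapping by 2 samples.
scan = np.linspace(0.0, 1.0, 10)
parts = [scan[:6], scan[4:] - 0.3]
out = compose(parts, overlap=2)
assert np.allclose(out, scan)
```

The same control flow generalizes to N sub-images, since each iteration folds the next sub-image into the running composite.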
It should be understood that the flowchart described above is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, numerous variations and modifications may be made under the teaching of the present disclosure. However, those variations and modifications do not depart from the protecting scope of the present disclosure.
EXAMPLES
FIGs. 23A-23I illustrate nine exemplary 2D images of blood vessels obtained based on the system and process according to some embodiments of the present disclosure. The underlying composite 3D image was generated by combining three coronal sub-images as shown in the figures. Each sub-image had a size of 384*512*88. According to the label (0020 0032) that is specified in DICOM, the size of the maximum overlapping region of two successive sub-images was 384*72*88. The time for processing the sub-images according to the method provided in the disclosure was approximately 2.075 s. The hardware configuration was an Intel i5-2400 processor, 3.10 GHz, 4 GB RAM (random-access memory), and a 64-bit OS (operating system).
FIGs. 23A-23I are nine 2D vascular images in the coronal plane of a region of interest from different view angles ranging from 0° to 360°. An MIP was obtained based on the 3D sub-images. The 2D vascular images as illustrated were generated by a counterclockwise rotation of the composite 3D image. The rotation angles of FIG. 23A, FIG. 23B, FIG. 23C, FIG. 23D, FIG. 23E, FIG. 23F, FIG. 23G, FIG. 23H, and FIG. 23I were 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°, and 360°, respectively.
FIGs. 24A-24D illustrate four 2D coronal images of a region of interest depicting vasculature including a plurality of blood vessels. As shown in the figures, each 2D image was generated by combining three sub-images. FIG. 24A illustrates a 2D image that was generated without registration. FIG. 24B illustrates a 2D image that was generated based solely on 3D registration. FIG. 24C and FIG. 24D are 2D images of different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
FIGs. 25A-25D illustrate four 2D coronal images of a region of interest depicting vasculature including a plurality of blood vessels. As shown in the figures, each 2D image was generated by combining three sub-images. FIG. 25A illustrates a 2D image that was generated without registration. FIG. 25B illustrates a 2D image that was generated based solely on 3D registration. FIG. 25C and FIG. 25D are 2D images of different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
FIGs. 26A-26D illustrate four 2D coronal images of a region of interest depicting vasculature including a plurality of blood vessels. As shown in the figures, each 2D image was generated by combining three sub-images. FIG. 26A illustrates a 2D image that was generated without registration. FIG. 26B illustrates a 2D image of the same region of interest that was generated based solely on 3D registration. FIG. 26C and FIG. 26D are 2D images of the same  region of interest from different view angles. The images were generated based on 2D registration and 3D registration utilizing the system and process according to some embodiments of the present disclosure.
FIG. 27A and FIG. 27B are 2D coronal images of a same region of interest from different view angles. The images were generated based on 2D registration and 3D registration utilizing the system and process according to some embodiments of the present disclosure.
FIG. 28A and FIG. 28B are 2D coronal images of a same region of interest from different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
FIG. 29A and FIG. 29B are 2D coronal images of different view angles that were generated based on 2D and 3D registration by utilizing the system and process according to some embodiments of the present disclosure.
It should be noted that the above description of the embodiments is provided for the purposes of comprehending the present disclosure, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted in the light of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware that may all generally be referred to herein as a “block,” “module,” “engine,” “unit,” “component,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS).
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution—e.g., an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment.

Claims (21)

  1. An image composition system comprising:
    an acquisition engine configured to retrieve a first sub-image and a second sub-image; the first sub-image corresponding to a first set of 3D volume data, and the second sub-image corresponding to a second set of 3D volume data;
    an image processing engine configured to perform operations comprising
    retrieving a first overlapping image from the first sub-image;
    retrieving a second overlapping image from the second sub-image;
    generating a first 2D projection image and a first pixel map based on maximum intensity projection of the first overlapping image onto a plane;
    generating a second 2D projection image and a second pixel map based on maximum intensity projection of the second overlapping image onto the plane;
    performing 2D registration based on the first 2D projection image, the first pixel map, the second 2D projection image, and the second pixel map;
    performing 3D registration based on the 2D registration, the first pixel map, or the second pixel map;
    identifying a correlation between the first sub-image including the first overlapping image and the second sub-image including the second overlapping image based on the 2D registration or the 3D registration; and
    fusing the first overlapping image and the second overlapping image based on the correlation to provide a composite image.
  2. The image composition system of claim 1, the first sub-image or the second sub-image comprising a coronal image, a sagittal image, or a transverse image.
  3. The image composition system of claim 1, the plane comprising a coronal plane, a sagittal plane, or a transverse plane.
  4. The image composition system of claim 1, retrieving the first sub-image or retrieving the second sub-image comprising using DSA (digital subtraction angiography), CT (computed tomography), CTA (computed tomography angiography), PET (positron emission tomography), X-ray, MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), SPECT (single-photon emission computerized tomography), or US (ultrasound scanning).
  5. The image composition system of claim 1, retrieving the first overlapping image or retrieving the second overlapping image comprising using Digital Imaging and Communication in Medicine.
  6. The image composition system of claim 1, performing 2D registration comprising calculating an offset in a direction within the plane, and
    the offset comprising X offsets, Y offsets, Z offsets, coronal offsets, sagittal offsets, or transverse offsets.
  7. The image composition system of claim 1, performing 3D registration comprising calculating an offset in a direction perpendicular to the plane, and
    the offset comprising X offsets, Y offsets, Z offsets, coronal offsets, sagittal offsets, or transverse offsets.
  8. The image composition system of claim 1 further comprising
    performing a fine registration based on recursion, a bisection method, an exhaustive method, a greedy algorithm, a divide and conquer algorithm, a dynamic programming method, an iterative method, a branch-and-bound algorithm, or a backtracking algorithm.
  9. The image composition system of claim 1, the first pixel map comprising information identifying the location of maximum intensity of the first overlapping image, the location being in a direction perpendicular to the plane.
  10. The image composition system of claim 3, the performing 3D registration comprising
    calculating a plurality of differences in the locations in the direction perpendicular to the plane between the first pixel map and the second pixel map, each one of the plurality of differences corresponding to a pixel within the plane;
    comparing the plurality of differences to obtain a probability of the differences; and
    designating an offset in the direction perpendicular to the plane based on the probability.
  11. The image composition system of claim 10, the second pixel map comprising a calibrated pixel map based on the 2D registration.
  12. A method comprising:
    retrieving a first sub-image and a second sub-image; the first sub-image corresponding to a first set of 3D volume data, and the second sub-image corresponding to a second set of 3D volume data;
    retrieving a first overlapping image from the first sub-image;
    retrieving a second overlapping image from the second sub-image;
    generating a first 2D projection image and a first pixel map based on maximum intensity projection of the first overlapping image onto a plane;
    generating a second 2D projection image and a second pixel map based on maximum intensity projection of the second overlapping image onto the plane;
    performing 2D registration based on the first 2D projection image, the first pixel map, the second 2D projection image, and the second pixel map;
    performing 3D registration based on the 2D registration, the first pixel map, or the second pixel map;
    identifying a correlation between the first sub-image including the first overlapping image and the second sub-image including the second overlapping image based on the 2D registration or the 3D registration; and
    fusing the first overlapping image and the second overlapping image based on the correlation to provide a composite image.
  13. The method of claim 12, the performing 3D registration comprising
    calculating a plurality of differences in the locations in the direction perpendicular to the plane between the first pixel map and the second pixel map, each one of the plurality of differences corresponding to a pixel within the plane;
    comparing the plurality of differences to obtain a probability of the differences; and
    designating an offset in the direction perpendicular to the plane based on the probability. 
  14. An image composition system comprising:
    an imaging device comprising an X-radiation source and a radiation detector; and
    a processor comprising
    a parameter setting engine configured to set a plurality of parameters relating to the X-radiation source or the radiation detector based on a preliminary number of exposures and a preliminary exposure region;
    a control engine configured to control, based on at least one of the plurality of parameters, a motion of the X-radiation source or a motion of the radiation detector to capture a plurality of sub-images, and
    an image processing engine configured to combine the plurality of sub-images.
  15. The image composition system of claim 14, the parameters comprising at least one of a dimension of an exposure region, a number of exposures, an overlapping region between two adjacent exposures, a starting position of an effective light field, an ending position of the effective light field, or a height of the effective light field.
  16. The image composition system of claim 15, the parameter setting engine being configured to adjust the preliminary number of exposures and the preliminary exposure region to obtain a secondary number of exposures and a secondary exposure region such that the difference between the preliminary number of exposures and the secondary number of exposures is less than 1, and that the secondary exposure region is equal to or smaller than the preliminary exposure region.
  17. The image composition system of claim 15, the parameter setting engine being configured to adjust the preliminary height of the effective light field to obtain a secondary height  of the effective light field such that the secondary height of the effective light field is equal to or smaller than the preliminary height of the effective light field.
  18. A method for image composition comprising:
    setting a plurality of parameters relating to the X-radiation source or the radiation detector based on a preliminary number of exposures and a preliminary exposure region;
    controlling, based on at least one of the plurality of parameters, a motion of the X-radiation source or a motion of the radiation detector to capture a plurality of sub-images;
    combining the plurality of sub-images.
  19. The method of claim 18, the parameters comprising at least one of a dimension of an exposure region, a number of exposures, an overlapping region between two adjacent exposures, a starting position of an effective light field, an ending position of the effective light field, or a height of the effective light field.
  20. The method of claim 19 further comprising adjusting the preliminary number of exposures and the preliminary exposure region to obtain a secondary number of exposures and a secondary exposure region such that the difference between the preliminary number of exposures and the secondary number of exposures is less than 1, and that the secondary exposure region is equal to or smaller than the preliminary exposure region.
  21. The method of claim 19 further comprising adjusting the preliminary height of the effective light field to obtain a secondary height of the effective light field such that the secondary height of the effective light field is equal to or smaller than the preliminary height of the effective light field.
PCT/CN2015/090265 2014-09-22 2015-09-22 System and method for image composition WO2016045574A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP15843195.7A EP3161785B1 (en) 2014-09-22 2015-09-22 System and method for image composition
GB1704042.9A GB2545588B (en) 2014-09-22 2015-09-22 System and method for image composition
US15/081,892 US9582940B2 (en) 2014-09-22 2016-03-27 System and method for image composition
US15/394,923 US9824503B2 (en) 2014-09-22 2016-12-30 System and method for image composition
US15/662,285 US10354454B2 (en) 2014-09-22 2017-07-28 System and method for image composition
US16/511,224 US10614634B2 (en) 2014-09-22 2019-07-15 System and method for image composition

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201410487252.1 2014-09-22
CN201410487252.1A CN104268846B (en) 2014-09-22 2014-09-22 Image split-joint method and device
CN201410508290.0 2014-09-28
CN201410508290.0A CN104287756B (en) 2014-09-28 2014-09-28 Radioscopic image acquisition methods and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/081,892 Continuation US9582940B2 (en) 2014-09-22 2016-03-27 System and method for image composition

Publications (1)

Publication Number Publication Date
WO2016045574A1 true WO2016045574A1 (en) 2016-03-31

Family

ID=55580314

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/090265 WO2016045574A1 (en) 2014-09-22 2015-09-22 System and method for image composition

Country Status (4)

Country Link
US (4) US9582940B2 (en)
EP (1) EP3161785B1 (en)
GB (2) GB2545588B (en)
WO (1) WO2016045574A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016207064A1 (en) * 2016-04-26 2017-10-26 Siemens Healthcare Gmbh Capture and reconstruct X-ray image data using elliptical cylinders
WO2023191795A1 (en) * 2022-03-31 2023-10-05 Magic Leap, Inc. Localized dimming at wearable optical system

Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
GB2545588B (en) * 2014-09-22 2018-08-15 Shanghai United Imaging Healthcare Co Ltd System and method for image composition
CN104697593B (en) * 2015-03-24 2017-12-08 合肥工业大学 A kind of gas ultrasonic flowmeter based on FPGA and DSP
CN107735814A (en) * 2015-06-30 2018-02-23 皇家飞利浦有限公司 For handling the apparatus and method of computed tomography imaging data
US10049449B2 (en) 2015-09-21 2018-08-14 Shanghai United Imaging Healthcare Co., Ltd. System and method for image reconstruction
WO2017100475A1 (en) * 2015-12-09 2017-06-15 Integrated-X, Inc. Systems and methods for inspection using electromagnetic radiation
US10307128B2 (en) * 2016-05-12 2019-06-04 Shimadzu Corporation X-ray imaging device
DE102016221220B4 (en) * 2016-10-27 2023-09-28 Siemens Healthcare Gmbh Method for determining an overlay image to be displayed, display device, computer program and data carrier
EP3599982A4 (en) 2017-03-20 2020-12-23 3dintegrated ApS A 3d reconstruction system
US10334141B2 (en) * 2017-05-25 2019-06-25 Denso International America, Inc. Vehicle camera system
EP3320843B1 (en) * 2017-06-22 2020-06-17 Siemens Healthcare GmbH Method and apparatus for generating a set of processed images
CN107665486B (en) * 2017-09-30 2020-04-17 深圳绰曦互动科技有限公司 Automatic splicing method and device applied to X-ray images and terminal equipment
US20190335166A1 (en) * 2018-04-25 2019-10-31 Imeve Inc. Deriving 3d volumetric level of interest data for 3d scenes from viewer consumption data
WO2019228359A1 (en) * 2018-05-28 2019-12-05 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for taking x-ray images
JP7224857B2 (en) * 2018-11-02 2023-02-20 キヤノン株式会社 Radiation imaging system, radiation imaging method, controller and program
US10957444B2 (en) * 2019-05-06 2021-03-23 Wisconsin Alumni Research Foundation Apparatus for tomography repeat rate/reject rate capture
IT201900006616A1 (en) * 2019-05-07 2020-11-07 Angiodroid S R L METHOD FOR IMPROVING RADIOLOGICAL IMAGES IN THE COURSE OF AN ANGIOGRAPHY
CN114463565A (en) * 2021-01-20 2022-05-10 赛维森(广州)医疗科技服务有限公司 Model growing method of identification model of cervical cell slide digital image

Citations (4)

Publication number Priority date Publication date Assignee Title
CN203424940U (en) * 2013-06-28 2014-02-12 上海联影医疗科技有限公司 X-ray photographing system
CN103871036A (en) * 2012-12-12 2014-06-18 上海联影医疗科技有限公司 Rapid registering and splicing method used for three-dimensional digital subtraction angiography image
CN104268846A (en) * 2014-09-22 2015-01-07 上海联影医疗科技有限公司 Image stitching method and device
CN104287756A (en) * 2014-09-28 2015-01-21 上海联影医疗科技有限公司 X-ray image acquisition method and device

Family Cites Families (35)

Publication number Priority date Publication date Assignee Title
FR2565053B1 (en) * 1984-05-25 1986-08-22 Thomson Cgr PROCESS FOR PROCESSING RADIOLOGICAL IMAGES
US5123056A (en) * 1990-02-02 1992-06-16 Siemens Medical Systems, Inc. Whole-leg x-ray image processing and display techniques
DE69425416T2 (en) 1993-11-26 2001-03-15 Koninklijke Philips Electronics N.V., Eindhoven Image composition method and image forming apparatus for carrying out this method
EP0655861B1 (en) * 1993-11-26 2000-08-02 Koninklijke Philips Electronics N.V. Image composition method and imaging apparatus for performing said method
US5958680A (en) * 1994-07-07 1999-09-28 Geron Corporation Mammalian telomerase
JP3285493B2 (en) 1996-07-05 2002-05-27 株式会社日立製作所 Lean-burn engine control apparatus and method and engine system
DE69738162T2 (en) 1996-08-21 2008-06-26 Koninklijke Philips Electronics N.V. COMPOSITION OF A PICTURE OF PARTICULAR PICTURES
FR2797978B1 (en) * 1999-08-30 2001-10-26 Ge Medical Syst Sa AUTOMATIC IMAGE RECORDING PROCESS
FR2802002B1 (en) * 1999-12-02 2002-03-01 Ge Medical Syst Sa METHOD FOR AUTOMATIC RECORDING OF THREE-DIMENSIONAL IMAGES
US6946836B2 (en) 2000-04-25 2005-09-20 Kabushiki Kaisha Toshiba Magnetic resonance imaging involving movement of patient's couch
JP2002210028A (en) * 2001-01-23 2002-07-30 Mitsubishi Electric Corp Radiation irradiating system and radiation irradiating method
US7006862B2 (en) * 2001-07-17 2006-02-28 Accuimage Diagnostics Corp. Graphical user interfaces and methods for retrospectively gating a set of images
US6823044B2 (en) * 2001-11-21 2004-11-23 Agilent Technologies, Inc. System for collecting multiple x-ray image exposures of a sample using a sparse configuration
US6895076B2 (en) 2003-06-03 2005-05-17 Ge Medical Systems Global Technology Company, Llc Methods and apparatus for multiple image acquisition on a digital detector
US8265354B2 (en) 2004-08-24 2012-09-11 Siemens Medical Solutions Usa, Inc. Feature-based composing for 3D MR angiography images
JP2008537190A (en) * 2005-01-07 2008-09-11 ジェスチャー テック,インコーポレイテッド Generation of three-dimensional image of object by irradiating with infrared pattern
US7522701B2 (en) 2005-12-20 2009-04-21 General Electric Company System and method for image composition using position sensors
US8090194B2 (en) * 2006-11-21 2012-01-03 Mantis Vision Ltd. 3D geometric modeling and motion capture using both single and dual imaging
US7555100B2 (en) 2006-12-20 2009-06-30 Carestream Health, Inc. Long length imaging using digital radiography
US9135706B2 (en) * 2007-12-18 2015-09-15 Koninklijke Philips N.V. Features-based 2D-3D image registration
EP2310839A4 (en) 2008-06-18 2011-08-03 Surgix Ltd A method and system for stitching multiple images into a panoramic image
JP2010042065A (en) * 2008-08-08 2010-02-25 Toshiba Corp Medical image processor, processing method
DE102008045278A1 (en) 2008-09-01 2010-03-25 Siemens Aktiengesellschaft Method for combining images and magnetic resonance apparatus
JP2010110544A (en) * 2008-11-10 2010-05-20 Fujifilm Corp Image processing device, method and program
US20100150418A1 (en) * 2008-12-15 2010-06-17 Fujifilm Corporation Image processing method, image processing apparatus, and image processing program
JP5438493B2 (en) 2009-12-22 2014-03-12 富士フイルム株式会社 Radiation imaging system and auxiliary device thereof
US9451924B2 (en) * 2009-12-30 2016-09-27 General Electric Company Single screen multi-modality imaging displays
CN101756707A (en) 2009-12-31 2010-06-30 苏州和君科技发展有限公司 Method for carrying out scanning reconstruction on long target object by using Micro-CT imaging system
US8462002B2 (en) * 2010-06-18 2013-06-11 The Invention Science Fund I, Llc Personal telecommunication device with target-based exposure control
WO2012045040A1 (en) * 2010-10-01 2012-04-05 Varian Medical Systems, Inc. Laser accelerator driven particle brachytherapy devices, systems, and methods
GB201020073D0 (en) * 2010-11-26 2011-01-12 Siemens Medical Solutions Anatomically-aware MIP shading
CN103150715B (en) 2013-03-13 2016-10-19 腾讯科技(深圳)有限公司 Image mosaic processing method and processing device
US20140267267A1 (en) 2013-03-15 2014-09-18 Toshiba Medical Systems Corporation Stitching of volume data sets
JP6012577B2 (en) * 2013-09-30 2016-10-25 富士フイルム株式会社 Image processing apparatus, radiographic imaging system, image processing program, and image processing method
GB2545588B (en) * 2014-09-22 2018-08-15 Shanghai United Imaging Healthcare Co Ltd System and method for image composition

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871036A (en) * 2012-12-12 2014-06-18 上海联影医疗科技有限公司 Rapid registering and splicing method used for three-dimensional digital subtraction angiography image
CN203424940U (en) * 2013-06-28 2014-02-12 上海联影医疗科技有限公司 X-ray photographing system
CN104268846A (en) * 2014-09-22 2015-01-07 上海联影医疗科技有限公司 Image stitching method and device
CN104287756A (en) * 2014-09-28 2015-01-21 上海联影医疗科技有限公司 X-ray image acquisition method and device

Non-Patent Citations (1)

Title
See also references of EP3161785A4 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
DE102016207064A1 (en) * 2016-04-26 2017-10-26 Siemens Healthcare Gmbh Capture and reconstruct X-ray image data using elliptical cylinders
US10319117B2 (en) 2016-04-26 2019-06-11 Siemens Healthcare Gmbh Record and reconstruct x-ray image data on the basis of elliptical cylinders
WO2023191795A1 (en) * 2022-03-31 2023-10-05 Magic Leap, Inc. Localized dimming at wearable optical system

Also Published As

Publication number Publication date
GB201708653D0 (en) 2017-07-12
EP3161785A1 (en) 2017-05-03
US9824503B2 (en) 2017-11-21
US10354454B2 (en) 2019-07-16
US20160247325A1 (en) 2016-08-25
EP3161785B1 (en) 2019-08-28
GB201704042D0 (en) 2017-04-26
US20170109941A1 (en) 2017-04-20
US10614634B2 (en) 2020-04-07
US20190340839A1 (en) 2019-11-07
GB2553022A (en) 2018-02-21
GB2545588A (en) 2017-06-21
US20170372528A1 (en) 2017-12-28
US9582940B2 (en) 2017-02-28
GB2545588B (en) 2018-08-15
EP3161785A4 (en) 2018-05-16
GB2553022B (en) 2018-12-05

Similar Documents

Publication Publication Date Title
US10614634B2 (en) System and method for image composition
US9427286B2 (en) Method of image registration in a multi-source/single detector radiographic imaging system, and image acquisition apparatus
JP7337556B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, X-RAY DIAGNOSTIC APPARATUS, AND MEDICAL IMAGE PROCESSING METHOD
US11877873B2 (en) Systems and methods for determining scanning parameter in imaging
US20160278724A1 (en) Robotic multi-mode radiological scanning system and method
US20220061781A1 (en) Systems and methods for positioning
US10032295B2 (en) Tomography apparatus and method of processing tomography image
EP2443614A1 (en) Imaging procedure planning
US9836861B2 (en) Tomography apparatus and method of reconstructing tomography image
US11950947B2 (en) Generation of composite images based on live images
JP6005359B2 (en) Device that determines the size change of an object
EP4301229B1 (en) Image-based planning of tomographic scan
CN108430376B (en) Providing a projection data set
US20220054862A1 (en) Medical image processing device, storage medium, medical device, and treatment system
JP7000795B2 (en) Radiation imaging device
JP7144129B2 (en) Medical image diagnosis device and medical information management device
JP6956514B2 (en) X-ray CT device and medical information management device
JP2021532903A (en) Determining the consensus plane for imaging medical devices
US11504083B2 (en) Systems and methods for determining examination parameters
CN108171764B (en) Imaging sequence control using automated determination of findings
US9734630B2 (en) Real-time three-dimensional visualization of interventional medical devices
Siewerdsen et al. (assignee: Siemens Aktiengesellschaft) Patent application: Method of image registration in a multi-source/single detector radiographic imaging system, and image acquisition apparatus. Inventors: Jeffrey Siewerdsen, Yoshito Otake, Ali Uneri, J. Webster Stayman (Baltimore, MD, US)

Legal Events

Date Code Title Description

121: EP: The EPO has been informed by WIPO that EP was designated in this application.
Ref document number: 15843195; Country of ref document: EP; Kind code of ref document: A1

REEP: Request for entry into the European phase.
Ref document number: 2015843195; Country of ref document: EP

WWE: WIPO information: entry into national phase.
Ref document number: 2015843195; Country of ref document: EP

ENP: Entry into the national phase.
Ref document number: 201704042; Country of ref document: GB; Kind code of ref document: A
Free format text: PCT FILING DATE = 20150922

NENP: Non-entry into the national phase.
Ref country code: DE