CN106251380A - Image reconstruction method
- Publication number
- CN106251380A CN106251380A CN201610617163.3A CN201610617163A CN106251380A CN 106251380 A CN106251380 A CN 106251380A CN 201610617163 A CN201610617163 A CN 201610617163A CN 106251380 A CN106251380 A CN 106251380A
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06T11/003—Reconstruction from projections, e.g. tomography (under G06T11/00—2D image generation; G06T—Image data processing or generation, in general)
- G06T11/006—Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
- G06T2211/412—Computed tomography: Dynamic
- G06T2211/421—Computed tomography: Filtered back projection [FBP]
- G06T2211/424—Computed tomography: Iterative
Abstract
The invention provides an image reconstruction method, including: acquiring data of a scanned object, the data including time information and/or spatial information of events; dividing the data into several data segments according to the time information and/or spatial information; obtaining statistical information of the data in the data segments; obtaining motion information of the scanned object according to the statistical information of the data in the data segments; and obtaining a motion-corrected reconstructed image according to the motion information of the scanned object. With this reconstruction method, the motion information of the patient can be obtained directly from the scan data, so that rigid motion of the scanned object can be detected and corrected quickly, accurately, and even in real time.
Description
Technical Field
The present invention relates to the field of image reconstruction technologies, and in particular, to an image reconstruction method for motion correction.
Background
Positron Emission Tomography (PET) employs a compound labeled with a positron-emitting nuclide as an imaging agent, and the functional metabolic state of a lesion is assessed by the degree to which the lesion takes up the imaging agent. The principle is based primarily on the ability of tissue cells to take up the radioactive imaging agent. PET is a functional imaging examination that provides functional image information for determining the nature of a disease and thereby making an accurate diagnosis. In a PET study, bioactive molecules carrying a radioactive tracer are first introduced into the human body; the PET system detects gamma rays emitted by the tracer and constructs a three-dimensional image of the tracer distribution in the body by analyzing the detected signals. The bioactive molecule used in PET studies is a natural metabolite of the target organ or tissue, so PET can assess the physiology (function) and anatomy (structure) of the target organ or tissue, as well as its biological characteristics. It is particularly useful for the early diagnosis of disease, the discovery of subclinical lesions, and the evaluation of therapeutic efficacy before morphological changes appear.
In PET imaging, because the scan time is relatively long, the scanned object inevitably moves during data acquisition. Motion of the scanned object can lead to severe image artifacts, including artifacts resulting from a mismatch between the attenuation map and the acquired signal, and artifacts resulting from motion of the signal source at different times.
Disclosure of Invention
The invention aims to provide an image reconstruction method that reduces artifacts caused by motion of the scanned object in the reconstructed image.
In order to solve the above problem, the present invention provides an image reconstruction method, including: acquiring data of a scanned object, wherein the data comprises time information and/or spatial information of events; dividing the data into a plurality of data segments according to the time information and/or the spatial information; acquiring statistical information of the data in the data segments; acquiring motion information of the scanned object according to the statistical information of the data in the data segments; and acquiring a motion-corrected reconstructed image according to the motion information of the scanned object.
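For illustration only, the five claimed steps can be sketched as follows; the event field names, the one-second segment length, and the use of the spatial centroid as the segment statistic are assumptions made for this sketch, not requirements of the patent:

```python
# Hypothetical sketch of the claimed pipeline; field names and the centroid
# statistic are illustrative assumptions, not the patent's prescribed method.
import numpy as np

def motion_from_segments(events, segment_duration=1.0):
    """events: structured array with time 't' (s) and coordinates 'x','y','z'."""
    # Divide the data into segments by time information.
    edges = np.arange(events['t'].min(),
                      events['t'].max() + segment_duration, segment_duration)
    segments = [events[(events['t'] >= a) & (events['t'] < b)]
                for a, b in zip(edges[:-1], edges[1:])]
    # Statistical information per segment (here: the spatial expectation).
    centroids = np.array([[s['x'].mean(), s['y'].mean(), s['z'].mean()]
                          for s in segments if len(s) > 0])
    # Motion information: displacement of each segment relative to the first.
    return centroids - centroids[0]
```

Each row of the returned array could then be used to motion-correct its segment before reconstruction.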
Preferably, the data of the scanned object includes data in list mode and/or data in sinogram (chord graph) mode.
Preferably, the motion information of the scanned object includes translation and/or rotation information.
Preferably, the statistical information of the data in the data segment comprises one or more of expectation, second moment, variance, and covariance information.
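As a concrete illustration, the four statistics listed above could be computed per data segment as follows, under the assumption that the events of a segment are given as an (N, 3) array of coordinates:

```python
import numpy as np

def segment_statistics(xyz):
    """xyz: (N, 3) array of event coordinates for one data segment."""
    mean = xyz.mean(axis=0)                  # expectation E[X]
    second_moment = (xyz ** 2).mean(axis=0)  # second moment E[X^2]
    variance = xyz.var(axis=0)               # E[X^2] - E[X]^2
    covariance = np.cov(xyz, rowvar=False)   # 3x3 covariance matrix
    return mean, second_moment, variance, covariance
```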
Preferably, obtaining the motion information of the scanned object according to the statistical information of the data in the data segment includes: acquiring the motion information of the scanned object according to the statistical information of the reconstruction space corresponding to the data in the data segment.
Preferably, obtaining the statistical information of the reconstruction space corresponding to the data in the data segment includes: acquiring statistical information of the data space corresponding to the data in the data segment; establishing a conversion relation between the data space and the reconstruction space; and acquiring the statistical information of the reconstruction space from the statistical information of the data space and the conversion relation between the two spaces.
Preferably, acquiring the motion-corrected reconstructed image according to the motion information of the scanned object includes: binning the data of the scanned object according to the motion information; and performing image reconstruction on the binned data to obtain a reconstructed image set.
Preferably, performing image reconstruction on the binned data to obtain a reconstructed image set includes: performing motion correction on the binned data according to the motion information of the scanned object; and obtaining a motion-corrected reconstructed image set from the motion-corrected binned data.
Preferably, the image reconstruction method further includes: acquiring an attenuation map of the scanned object; performing motion correction on the attenuation map according to the motion information of the scanned object, to obtain an attenuation map that follows the real-time motion of the scanned object; and performing attenuation correction on the data and/or the reconstructed image of the scanned object according to this motion-corrected attenuation map.
Preferably, acquiring the motion-corrected reconstructed image according to the motion information of the scanned object further includes: merging the reconstructed image set to obtain a final reconstructed image.
By using the reconstruction method of the invention, the motion information of the patient, such as the six degrees of freedom of translation and rotation, can be obtained directly from the data of the scanned object, so that rigid motion of the scanned object can be detected and corrected quickly, accurately, and even in real time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the invention; a person skilled in the art can, without inventive effort, apply the invention to other similar scenarios according to these drawings. Unless otherwise apparent from the context or otherwise indicated, like reference numerals in the figures refer to like structures and operations.
FIG. 1 is a schematic view of an imaging system according to some embodiments of the present application;
FIG. 2 is a schematic diagram of a processor shown in accordance with some embodiments of the present application;
FIG. 3 is a schematic diagram of a data processor, according to some embodiments of the present application;
FIG. 4 is an exemplary flow diagram of image reconstruction shown in accordance with some embodiments of the present application;
FIG. 5 is a schematic diagram of a model setting module according to some embodiments of the present application;
FIG. 6 is an exemplary flow diagram illustrating the acquisition of physiological or rigid body motion information from raw data according to some embodiments of the present application;
FIG. 7 is a schematic diagram of a physiological motion detector according to some embodiments of the present application;
FIG. 8 is an exemplary flow diagram illustrating partitioning of data according to physiological motion information and image reconstruction based on the grouped data according to some embodiments of the present application;
FIG. 9 is an exemplary flow diagram illustrating the acquisition of a VOI from raw data according to some embodiments of the present application;
FIG. 10A is a schematic diagram of a rigid body motion detector shown in accordance with some embodiments of the present application;
FIG. 10B is an exemplary flow diagram illustrating rigid body motion detection and correction according to some embodiments of the present application;
FIG. 11 is an exemplary flow diagram illustrating rigid body motion detection according to some embodiments of the present application;
FIG. 12A is an exemplary flow diagram illustrating obtaining rigid body motion information according to some embodiments of the present application;
FIG. 12B is a schematic illustration of sinogram coordinates as a function of 3D reconstruction coordinates, in accordance with some embodiments of the present application;
FIG. 13 is an exemplary flow diagram illustrating rigid body motion correction according to some embodiments of the present application;
FIG. 14 is an exemplary flow chart of motion correction shown in accordance with some embodiments of the present application;
FIG. 15 is a schematic diagram of frequency spectra of a respiratory signal and a cardiac signal, shown in accordance with some embodiments of the present application;
FIG. 16A is a schematic illustration of spectral analysis of weighted signals without a specified VOI, shown in accordance with some embodiments of the present application;
FIG. 16B is a schematic illustration of spectral analysis of weighted signals within a specified VOI, shown in accordance with some embodiments of the present application;
FIG. 17 is a schematic diagram of the temporal profile of the respiration waveform of a weighted signal without a specified VOI and the temporal profile of the respiration waveform within a specified VOI, according to some embodiments of the present application;
FIGS. 18A-18D are four exemplary gated images, with and/or without correction, according to some embodiments of the present application;
FIG. 19A is a schematic view of a motion profile shown in accordance with some embodiments of the present application;
FIG. 19B is a schematic diagram of a motion gating curve according to some embodiments of the present application;
FIG. 20A is a schematic illustration of two motion profiles according to some embodiments of the present application;
FIG. 20B is a schematic illustration of two motion profiles according to some embodiments of the present application;
FIG. 20C is a schematic illustration of two motion profiles according to some embodiments of the present application;
FIG. 20D is a schematic illustration of two motion profiles shown according to some embodiments of the present application;
FIG. 20E is a schematic illustration of two motion profiles according to some embodiments of the present application;
FIG. 20F is a schematic illustration of two motion profiles according to some embodiments of the present application;
FIG. 21 is a schematic illustration of a head image reconstructed based on raw data, according to some embodiments of the present application;
FIG. 22 is a comparative schematic illustration of translations along the X-axis acquired based on different methods, shown in accordance with some embodiments of the present application;
FIG. 23 is an exemplary flow diagram illustrating motion detection based on translation and rotation information according to some embodiments of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described herein, and thus the present invention is not limited to the specific embodiments disclosed below.
The terms "system", "unit", "module" and/or "block" are used to distinguish different and/or different levels of components, elements, parts or sections, and may be replaced by other terms indicating the same meaning.
As used in this application and the claims, the connection between a "system," "unit," "module" and/or "block" and other "systems," "units," "modules" and/or "blocks" may be a physical connection or an electrical connection, either a direct connection or an indirect connection, unless the context clearly dictates otherwise. In this application, the term "and/or" includes any one or combination of more than one of the listed items.
As used in this application and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, various steps may be processed in reverse order or simultaneously. Other operations may be added to the processes, or one or several steps may be removed from them.
FIG. 1 is a schematic view of an imaging system according to some embodiments of the present application. It should be noted that the following description of the imaging system 100 is merely exemplary in nature and is not intended to limit the scope of the present application. The radiation described herein may include one or a combination of particle rays, photon rays, and the like. The particle beam may include one or more of neutrons, protons, alpha rays, electrons, muons, heavy ions, and the like. The photon beam may comprise one or a combination of X-rays, gamma rays, ultraviolet rays, laser light, and the like. The imaging system 100 may be applied in different fields, such as the medical field or the industrial field. In some embodiments, the imaging system 100 may be used for internal inspection, including, for example, a combination of one or more of crack detection, safety scanning, failure analysis, metrology, assembly analysis, porosity analysis, wall thickness analysis, and the like. In some embodiments, the imaging system 100 may include one or more modalities. The modality includes, but is not limited to, a combination of one or more of a Positron Emission Tomography (PET) system, a Single Photon Emission Computed Tomography (SPECT) system, a Computed Tomography (CT) system, a Digital Radiography (DR) system, a Magnetic Resonance Imaging (MRI) system, and the like. In some embodiments, the imaging system 100 may be a multi-modality imaging system, such as a PET-CT system, a SPECT-MRI system, a PET-MRI system, a SPECT-CT system, or the like.
As shown in fig. 1, the imaging system 100 may include one or more scanners 110, one or more networks 120, and one or more processors 130. In some embodiments, the scanner 110 and the processor 130 may be connected by a network 120. In some embodiments, the scanner 110 and the processor 130 may be directly connected.
The scanner 110 may detect radiation rays. In some embodiments, the scanner 110 may include a SPECT scanner or a PET scanner, among others. Taking PET as an example, PET is a medical imaging technology that displays biological activities of living bodies by using a tracing principle. Biologically active molecules carrying a radioactive tracer are first introduced into the body of a subject, and the PET system detects gamma rays emitted by the tracer and constructs an image of the tracer distribution within the body by analyzing the detected signals. In some embodiments, the processor 130 may be used to store or access imaging programs for various types of nuclear medical diagnostic devices. The nuclear medical diagnostic device includes, but is not limited to, a combination of one or more of PET, SPECT, MRI, and the like. It should be noted that the gamma rays mentioned above are only exemplary of the radiation rays, and are not used to limit the scope of the present application.
For example only, the radiation ray may be a gamma ray. The PET scanner detects the gamma rays generated by the annihilation event and coincidence detects the detected gamma rays. In other embodiments, the radiation ray may be an X-ray. The CT scanner detects X-rays penetrating an object to be measured (e.g., a patient), and acquires the transmitted rays by a computer and performs three-dimensional reconstruction imaging according to the difference in absorption and transmittance of the object to be measured for the X-rays.
In some embodiments, the processor 130 may be used to process data from the scanner 110 or other devices, including, but not limited to, a combination of one or more of a scanner, a database, a memory, and the like. In some embodiments, the processor 130 may reconstruct an image using the acquired data. In some embodiments, the processor 130 may be used to set one or more parameters of the scanner 110. The processor 130 may be a general-purpose processor including, but not limited to, one or more of a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Reduced Instruction Set Computer (RISC), and the like.
In some embodiments, processor 130 may control whether a signal is acquired, or the time of occurrence of the next signal. In some embodiments, the processor 130 may select data for iterative reconstruction. In some embodiments, the processor 130 may select an algorithm to process the raw data, and/or determine the number of iterations of the iterative projection process, and/or determine the location of the radiation. In some embodiments, the processor 130 may receive real-time or preset instructions from an operator, which may include an imaging technician or physician, etc., and the processor 130 sets the scanner 110 according to the received instructions. In some embodiments, the processor 130 may communicate with other portions of the imaging system 100, such as a scanner, to exchange information.
The network 120 may be a single network or a combination of multiple different networks. For example, network 120 may be a Local Area Network (LAN), a Wide Area Network (WAN), a public network, a private network, a Public Switched Telephone Network (PSTN), the Internet, a wireless network, a virtual network, or any combination of the above. Network 120 may also include one or more network access points and may be a wireless network or a wired network. A wired network may utilize one or more combinations of metallic cables, hybrid cables, one or more interfaces, and the like. A wireless network may utilize one or more combinations of Bluetooth, Local Area Network (LAN), Wide Area Network (WAN), Wireless Personal Area Network (WPAN), Near Field Communication (NFC), and the like. The networks described here are illustrative only; network 120 is not limited to these examples.
The above description of imaging system 100 is intended to be illustrative only and is not intended to limit the scope of the present application. It will be apparent to those skilled in the art having the benefit of this disclosure that various modifications and changes in the form and details of the applications for which the above-described system is implemented may be made without departing from the principles of the system, with any combination of the individual modules or sub-systems being constructed to interface with other modules. In some embodiments, some other components may be incorporated into the imaging system 100, such as a patient positioning module, a gradient amplification module, or other devices or modules.
FIG. 2 is a schematic diagram of a processor shown in accordance with some embodiments of the present application. The processor 130 may include a data memory 201, an acquisition circuit 202, a data processor 204, an image reconstruction processor 207, and/or a visualization processor 208.
The data storage 201 may be used to store acquired data or signals, control parameters, and the like. For example, the data storage 201 may store signals acquired from the scanner 110. In other embodiments, the data storage 201 may store control parameters. The parameters may be parameters required by the acquisition circuitry 202, the data processor 204, the image reconstruction processor 207, and/or the visualization processor 208 when processing data, including, but not limited to, a combination of one or more of acquisition parameters, processing parameters (e.g., gating parameters, data cutting parameters, etc.), reconstruction algorithms, visualization parameters, and the like. In some embodiments, the data storage 201 includes, but is not limited to, one or a combination of hard disks, floppy disks, Random Access Memories (RAMs), Dynamic Random Access Memories (DRAMs), Static Random Access Memories (SRAMs), bubble memories, thin film memories, magnetic plated wire memories, phase change memories, flash memories, cloud storage, and the like. The data storage 201 may be any other similar means for loading programs or instructions into a computer or processor.
The acquisition circuitry 202 may acquire data or signals. In some embodiments, the acquisition circuitry 202 may detect radiation rays, as described above, the acquisition circuitry 202 may be integrated into the scanner 110. In some embodiments, the acquisition circuitry 202 may convert the analog signal to a digital signal. For example, an analog signal generated by the scanner 110 is sent to the acquisition circuitry 202, and the analog signal is converted to a corresponding digital signal. The acquisition circuitry 202 may include one or a combination of amplifiers, filters, analog-to-digital converters, and the like. In some embodiments, the acquisition circuitry 202 may receive data or signals from other devices, such as a combination of one or more of the data storage 201, the data processor 204, an external database, and the like. In some embodiments, the data or signals received by the acquisition circuitry 202 may be sent to the data storage 201 for storage and may be loaded as needed.
The data processor 204 may process data. In some embodiments, the acquired data may be sent to the data processor 204 for further processing. In some embodiments, a volume of interest (VOI) may be selected before the data processor 204 performs the data processing. The volume of interest refers to a subset of the data selected for a particular purpose; in different cases, different volumes of interest may be selected. In some embodiments, a geometric space is determined based on the selected VOI. In some embodiments, tissue or tumor volume is measured based on the selected VOI. In some embodiments, the VOI is selected to reduce background noise. In some embodiments, the VOI includes, but is not limited to, a three-dimensional volume, such as a combination of one or more of a sphere, a column, a block, and the like. In some embodiments, the data processor 204 may analyze the data; for example, body motion information (e.g., physiological motion or rigid body motion) is obtained from the data by analysis. In some embodiments, the data processor 204 may process the data based on instructions from an operator (e.g., a physician).
In some embodiments, the data processor 204 may include a data partitioning processor, which may be used to partition data or signals. The data partitioning processor may load data from the acquisition circuitry 202 or from the data memory 201. In some embodiments, the data partitioning processor may partition the data into a plurality of sub data sets. In some embodiments, the time interval of each sub data set may be fixed while the amount of data contained within each sub data set varies. The time interval for each sub data set may be set by default settings of the imaging system, or may be set by an operator (e.g., a physician) in real time or in advance. In general, the time interval and/or the amount of data for each sub data set may each be fixed or variable.
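Both variants mentioned here, a fixed time interval with a varying event count and, conversely, a fixed event count with a varying interval, might be sketched as follows; the function names are illustrative only:

```python
import numpy as np

def partition_fixed_interval(timestamps, dt):
    """Fixed time interval per sub data set; the event count per set varies.
    Returns a sub-data-set label for every event."""
    edges = np.arange(timestamps.min(), timestamps.max() + dt, dt)
    return np.digitize(timestamps, edges[1:-1])

def partition_fixed_count(timestamps, n):
    """Fixed event count per sub data set; the time interval per set varies."""
    order = np.argsort(timestamps)
    labels = np.empty(len(timestamps), dtype=int)
    labels[order] = np.arange(len(timestamps)) // n
    return labels
```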
In some embodiments, the data processor 204 may include a filter. The filter may remove signals in a particular frequency range from the data. In some embodiments, the filter may remove interfering signals within a predetermined frequency range (e.g., 10-100 Hz). In some embodiments, the filter may remove background noise. The filter may include, but is not limited to, a combination of one or more of a high-pass filter, a low-pass filter, and the like. A low-pass filter passes signals below a cutoff frequency and blocks signals above it. A high-pass filter passes frequencies above a certain cutoff frequency while significantly attenuating lower frequencies, thereby removing unwanted low-frequency components or low-frequency interference from the signal. In some embodiments, the filter may be integrated into any one of the modules or units of the processor 130. In some embodiments, an internal filter may be unnecessary, and filtering may be performed by an external filter.
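For example, a Butterworth design (one common choice; the text does not mandate a particular filter design) could implement the low-pass and high-pass filters for a uniformly sampled signal:

```python
from scipy.signal import butter, filtfilt

def lowpass(signal, cutoff_hz, fs_hz, order=4):
    """Pass components below cutoff_hz; fs_hz is the sampling rate."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype='low')
    return filtfilt(b, a, signal)   # zero-phase filtering

def highpass(signal, cutoff_hz, fs_hz, order=4):
    """Suppress components below cutoff_hz (e.g., baseline drift)."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype='high')
    return filtfilt(b, a, signal)
```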
The image reconstruction processor 207 may reconstruct an image using the acquired raw data or the processed data. In some embodiments, the image reconstruction processor 207 may include, but is not limited to, a combination of one or more of a microcontroller, a Reduced Instruction Set Computer (RISC), an Application Specific Integrated Circuit (ASIC), an application-specific instruction set processor (ASIP), a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a microcontroller unit, a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), and the like. In some embodiments, the image reconstruction processor 207 may employ general imaging reconstruction techniques, which may include, but are not limited to, one or a combination of Fourier reconstruction, constrained image reconstruction, regularized image reconstruction in parallel MRI, and the like. In some embodiments, the image reconstruction processor 207 may employ different types of reconstruction algorithms, e.g., analytical or iterative. Common analytical methods include, but are not limited to, one or a combination of filtered backprojection (FBP), backprojection filtering (BPF), ρ-filtering, and the like. Common iterative methods include, but are not limited to, a combination of one or more of maximum-likelihood expectation maximization (MLEM), ordered-subsets expectation maximization (OSEM), the row-action maximum likelihood algorithm (RAMLA), the dynamic row-action maximum likelihood algorithm (DRAMLA), and the like.
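Of the iterative methods listed, MLEM has the simplest update rule. A bare-bones sketch with an explicit dense system matrix follows; production systems compute projections on the fly rather than storing the matrix:

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """MLEM for the Poisson model y ~ Poisson(A @ x).
    A: (n_bins, n_voxels) system matrix; y: (n_bins,) measured counts."""
    sensitivity = A.sum(axis=0)          # A^T 1, per-voxel sensitivity
    x = np.ones(A.shape[1])              # uniform initial image
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float),
                          where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x
```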
The visualization processor 208 may display the imaging results, for example, an image reconstructed by the image reconstruction processor 207. In some embodiments, visualization processor 208 may include a display device and/or a user interface, such as a combination of one or more of a Liquid Crystal Display (LCD), a Light Emitting Diode (LED), a flat panel display, a curved screen (or television), or a cathode ray tube, among others. In some embodiments, the visualization processor 208 may include one or more input devices, such as one or more of a keyboard, a touch screen, a touch pad, a mouse, a remote control, and so forth.
The foregoing disclosure is by way of example only and is not intended as limiting the scope of the invention. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application. For example, visualization processor 208 may be unnecessary and the imaging results may be displayed using an external device (e.g., a monitor). In some embodiments, the visualization processor 208 may be integrated in the image reconstruction processor 207, and the imaging results or intermediate images may be displayed in real time.
FIG. 3 is a schematic diagram of a data processor, shown in accordance with some embodiments of the present application. The data processor 204 may include an I/O module 301, a model setting module 302, a gating module 303, and a storage module 305.
I/O module 301 may be used to input or output data or information. In some embodiments, as shown in FIG. 2, the acquired data may be obtained through input to I/O module 301. In some embodiments, the processed data may be output through the I/O module 301. In some embodiments, the I/O module 301 may include a board having one or more connection ports and one or more general purpose circuits mounted on the board. The connection port may be connected to one or more external devices. The general-purpose circuit may input or output an analog signal or a discrete signal to an external device.
The model setting module 302 may be used to build a model and set one or more parameters for building the model. The model may include a physiological motion model or a rigid body motion model. In some embodiments, the data is analyzed according to a model, and different types of data (e.g., data related to physiological motion or data related to rigid body motion) are obtained. For example, the data related to physiological motion corresponds to a physiological motion model. In some embodiments, the one or more parameters used to construct the model include, but are not limited to, a combination of one or more of a volume of interest (VOI), a type of probability distribution, a signal-to-noise ratio calculation method, a target frequency analysis of weighted signals, and the like.
In some embodiments, the model setting module 302 may include a computational unit or other unit that may perform computations. The model setup module 302 may calculate or analyze different types of data or information. In some embodiments, the data may be analyzed according to a model and the data related to the motion may be classified. By way of example only, data relating to physiological motion and data relating to rigid body motion are obtained from data based on a physiological motion model and a rigid body motion model. In some embodiments, the calculation unit may generate a body motion curve, which in the present application may show physiological motion and rigid body motion.
The gating module 303 may divide the data. Partitioning refers to dividing the data into a plurality of groups, any one of which may be selected for further processing as needed. By way of example only, the data may be divided into two groups: one corresponding to the pi/2 phase or "peak" interval of the physiological motion curve, and the other corresponding to the 3 pi/2 phase or "valley" interval of the physiological motion curve.
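A toy two-gate split consistent with this description might look as follows, using the amplitude of the motion curve as a stand-in for its phase (an assumption made to keep the sketch short):

```python
import numpy as np

def two_gate_split(event_times, curve_times, curve_values):
    """Assign each event to the 'peak' or 'valley' group of the motion curve
    sampled at (curve_times, curve_values)."""
    amplitude = np.interp(event_times, curve_times, curve_values)
    return np.where(amplitude >= np.median(curve_values), 'peak', 'valley')
```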
The storage module 305 may be used to store data or information, for example data or information from the I/O module 301, the model setting module 302, the gating module 303, or the control module. The data or information may include a combination of one or more of raw data, models, model factors, control parameters, calculation results, algorithms, and the like. The storage module 305 may be unnecessary, and any memory disclosed anywhere in this application may be used instead to store the data or information described above. For example, the data processor 204 may share a memory with the imaging system 100.
In some embodiments, the data processor 204 may include a control module. The control module may be used to set parameters. The parameters may be parameters associated with the operation of the modules, such as model setting module 302, storage module 305, etc. Including, but not limited to, one or a combination of parameters used to construct the model, parameters used to analyze the data, and the like.
The foregoing disclosure is by way of example only and is not intended to limit the scope of the invention. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application. In some embodiments, the storage module 305 may be integrated into any one of the modules of the data processor 204. In some embodiments, the data processor 204 may not include the storage module 305, but may access a storage module of the imaging system 100, or a storage module external to the imaging system 100. In some embodiments, the I/O module 301 in the data processor 204 may be unnecessary, and I/O ports between any two components shown in FIG. 2 may be used instead.
FIG. 4 is an exemplary flow diagram of image reconstruction shown in accordance with some embodiments of the present application. Step 401: data is acquired. The data may be from the I/O module 301. The data may also come from data acquisition circuitry 202, data storage 201, or other storage disclosed in the present application. Step 402: a motion model is selected. The motion model may be used to describe body motion. The model setting module 302 performs the operation of selecting a motion model. Here, the body motion may include physiological motion (e.g., respiratory motion, cardiac motion, gastrointestinal motion, etc.) and rigid body motion (e.g., movement of the body, etc.). The motion models include, but are not limited to, combinations of one or more of physiological motion models corresponding to one or more types or sources of physiological motion, rigid body motion models corresponding to one or more types or sources of rigid body motion, and the like. In some embodiments, the motion model may be selected according to default settings of the imaging system 100, or according to instructions of an operator (e.g., a physician).
Step 403: motion information is obtained based on the motion model. The motion information may be obtained by the model setting module 302. The selected motion model may be used to define a corresponding motion, as shown in step 402. The motion information may include a body motion profile generated based on the acquired data. As can be seen from the body motion curve, the amplitude of the motion varies with time (as shown in fig. 19).
Step 404: the data is partitioned based on the motion information. The gating module 303 performs the partitioned data operation. Motion may occur within a particular time interval, and the data obtained during that time interval contains motion information. In some embodiments, the motion information includes, but is not limited to, a combination of one or more of physiological motion information, rigid body motion information, and the like. In some embodiments, a decision threshold may be set, and when the magnitude of motion between different sub data sets is greater than the threshold, the time interval may be considered to have moved. In some embodiments, the decision threshold may be set according to a default setting of the imaging system 100 or according to an instruction of an operator (e.g., a doctor). In some embodiments, the decision threshold may be adjusted according to different circumstances. For example, the amplitude of the respiratory motion when the scanning subject is at rest is adjusted to be different from the amplitude of the respiratory motion when the scanning subject is at anxiety.
In some embodiments, the data is partitioned into a plurality of sub data sets that may be further processed, e.g., used to reconstruct an image. The image reconstruction processor 207 performs the image reconstruction operation. In some embodiments, the data includes, but is not limited to, a combination of one or more of data containing physiological motion information, data containing rigid body motion information, and the like. In some embodiments, the partitioned sub data sets may be motion corrected prior to image reconstruction.
The foregoing disclosure is by way of example only and is not intended as limiting the scope of the invention. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application. For example, a storing step or a buffering step may be added between any two steps, wherein signals or intermediate data may be stored or buffered.
FIG. 5 is a schematic diagram of a model setting module according to some embodiments of the present application. The model setting module 302 may include a motion classifier 501, a physiological motion detector 502, and a rigid body motion detector 503.
The motion classifier 501 may be used to determine a type of motion. In some embodiments, the motion classifier 501 may classify motion as physiological motion, rigid body motion, or compound motion, among others. In some embodiments, the type of motion may be determined based on information of the scanned object. For example, if the information of the scanning object indicates that the scanning object is in an excited state, an epileptic state, or the like, and data is acquired at this time, the motion classifier 501 may determine that rigid motion occurs. In other embodiments, the motion classifier 501 may determine that physiological motion has occurred if the scan object information indicates that a region of interest of the scan object is located near the heart or lungs. The classification results may be sent to the rigid body motion detector 503 and/or the physiological motion detector 502. The physiological motion detector 502 may process data to detect physiological motion. The rigid body motion detector 503 may process data to detect rigid body motion.
The rigid body motion detector 503 and the physiological motion detector 502 may be general purpose motion detectors including, but not limited to, a combination of one or more of an electrocardiograph detector, a respiration detector, a light sensor, and the like. The above description is intended to be illustrative only and is not intended to limit the scope of the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
FIG. 6 is an exemplary flow diagram illustrating the acquisition of physiological motion information or rigid body motion information from raw data according to some embodiments of the present application. The process flow 600 may be performed by processing logic, including, but not limited to, one or more combinations of hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), and the like. In some embodiments, the process flow 600 may be performed by one or more processing devices (e.g., the model setting module 302) and/or other devices disclosed herein.
Step 601: raw data is acquired. The raw data may be scan data from one or more scanners (e.g., scanner 110 shown in fig. 1), or data from one or more storage modules 305 or other storage devices (e.g., floppy disks, hard disks, wireless terminals, cloud storage, etc.), and so forth.
Step 602: the type of motion is determined. In some embodiments, the type of motion may be determined based on information of the scanned object. For example, if the information of the scanning object indicates that the scanning object is in an excited state, an epileptic state, or the like, and data is acquired at this time, the motion classifier 501 may determine that rigid motion occurs. In other embodiments, the motion classifier 501 may determine that physiological motion has occurred if the scan object information indicates that a region of interest of the scan object is located near the heart or lungs. If it is determined that the physiological motion has occurred, go to step 603; if it is determined that rigid body motion has occurred, step 604 is performed.
Step 603: physiological motion information is obtained based on the raw data. The physiological motion information may include a physiological motion profile. The physiological motion profile may show the position of the scanned subject as a function of time. In some embodiments, the physiological motion profile may show the change in position of a particular organ, such as the heart, over time. In some embodiments, the physiological motion curve may show the change in the position of the centroid of the region of interest of the scanned subject over time. In some embodiments, the center of mass of the region of interest moves in a particular direction, for example, in the X-axis direction. The physiological motion information and/or physiological motion profile may include a combination of one or more of heartbeat frequency, respiratory frequency information, and the like.
In some embodiments, a region of interest is first constructed. For example, the region of interest may be a region located near the heart. In other embodiments, the region of interest may be a region located near or containing a lung. In some embodiments, the shape of the region of interest may be a simple geometric volume, e.g., a sphere, cube, column, block, etc. A physiological motion profile within the region of interest is then acquired.
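For instance, a physiological motion curve can be formed from the time-varying center of mass of the events inside the VOI; in this sketch the frame length and the choice of a single axis are assumptions:

```python
import numpy as np

def centroid_motion_curve(times, positions, frame_duration=0.5):
    """positions: per-event coordinate along one axis (e.g., X) inside the VOI.
    Returns frame centers and the center-of-mass position per time frame."""
    edges = np.arange(times.min(), times.max() + frame_duration, frame_duration)
    labels = np.digitize(times, edges[1:-1])
    curve = np.array([positions[labels == k].mean() if np.any(labels == k)
                      else np.nan for k in range(len(edges) - 1)])
    return 0.5 * (edges[:-1] + edges[1:]), curve
```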
Step 604: rigid body motion information is obtained from the raw data. The rigid body motion information may include a rigid body motion curve, which may show the motion of the scanned object over time, e.g., the motion of the head of the scanned object over time. Rigid body motion includes, but is not limited to, a combination of one or more of translation, rotation, and the like, for example, translation along the X-axis. Rigid body motion may be described using a motion field matrix, which can be represented by the following formula:
T = Rx * Ry * Rz * S (1)

where Rx, Ry, and Rz are rotation matrices about the X, Y, and Z axes, respectively, and S is a translation matrix. The motion field matrix may be used to quantify the rigid body motion. For example, the amount of translation along the X-axis, Y-axis, and/or Z-axis may be obtained from the translation matrix.
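Written out in homogeneous coordinates, formula (1) composes three rotation matrices and a translation matrix into a single 4x4 motion field matrix; the following sketch is one direct transcription:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def motion_field(ax, ay, az, tx, ty, tz):
    """T = Rx * Ry * Rz * S for the six rigid degrees of freedom
    (three rotation angles, three translations)."""
    S = np.eye(4)
    S[:3, 3] = [tx, ty, tz]
    return rot_x(ax) @ rot_y(ay) @ rot_z(az) @ S
```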
Rigid body motion information may be obtained based on a threshold. In some embodiments, a threshold may be used to determine whether rigid body motion has occurred. For example, the raw data is divided into a plurality of sub data sets and the similarity between different sub data sets is evaluated. If the similarity between two different sub data sets exceeds the threshold, it can be considered that no rigid motion occurred between the times at which those sub data sets were obtained; otherwise, rigid motion is considered to have occurred. In some embodiments, one sub data set is selected from the plurality of sub data sets as the reference data set. In other embodiments, one data set is selected from at least one pre-stored data set as the reference data set. The similarity between each remaining sub data set and the reference data set is then evaluated, and if the similarity between a sub data set and the reference data set falls below the threshold, rigid body motion is considered to have occurred at the moment that sub data set was acquired.
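One way to realize this comparison is sketched below, with normalized cross-correlation standing in for the similarity measure (an assumption; the text does not fix a particular measure):

```python
import numpy as np

def rigid_motion_flags(subset_images, reference, threshold=0.95):
    """Flag sub data sets whose similarity to the reference falls below
    `threshold`, i.e., sub data sets acquired after rigid motion."""
    ref = (reference - reference.mean()) / reference.std()
    flags = []
    for img in subset_images:
        z = (img - img.mean()) / img.std()
        flags.append((ref * z).mean() < threshold)   # NCC below threshold
    return np.array(flags)
```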
In some embodiments, after step 603 or step 604, other operations may also be performed. For example, the raw data may be corrected based on the rigid body motion curve, or the position of a line of response (LOR) may be corrected by a spatial transformation based on the motion field matrix. FIG. 13 illustrates an exemplary process of correcting list-mode raw data.
FIG. 7 is a schematic diagram of a physiological motion detector according to some embodiments of the present application. The physiological motion detector 502 may include one or a combination of a VOI setting unit 701, a TOF probability distribution setting unit 702, a physiological spectrum setting unit 703, an SQI setting unit 704, a storage unit 705, and the like.
The VOI setting unit 701 may be used to set parameters that define the shape of a volume of interest (VOI). The shape of the VOI includes, but is not limited to, a combination of one or more of a sphere, cube, column, block, and the like. For example only, if the shape of the VOI is a sphere, the VOI may be defined by the parameters (X1, X2, X3) and X4, where (X1, X2, X3) are the coordinates of the center of the sphere and X4 is its radius. If the shape of the VOI is a block, the VOI may be defined by the parameters (X1, X2, X3), X4, X5, and X6, which are the coordinates of the center of the block and the length, width, and height of the block, respectively.
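These parameterizations translate directly into voxel masks; the following sketch works in voxel units, and the array layout is an assumption:

```python
import numpy as np

def sphere_voi(shape, center, radius):
    """Spherical VOI: center (X1, X2, X3) and radius X4, in voxel units."""
    zz, yy, xx = np.indices(shape)
    cz, cy, cx = center
    return (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

def block_voi(shape, center, size):
    """Block VOI: center (X1, X2, X3) and edge lengths (X4, X5, X6)."""
    idx = np.indices(shape)
    c = np.asarray(center).reshape(3, 1, 1, 1)
    half = np.asarray(size).reshape(3, 1, 1, 1) / 2
    return np.all(np.abs(idx - c) <= half, axis=0)
```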
The TOF probability distribution setting unit 702 may set a probability model for estimating the probability distribution of list-mode data, for example, the probability distribution of the TOF coordinate λe. The probability model may be based on a combination of one or more of the Bernoulli distribution, Poisson distribution, uniform distribution, exponential distribution, normal distribution, and the like.
The physiological spectrum setting unit 703 may be used to set a physiological spectrum for the target signal. The target signal may be a signal containing body motion information, for example, a signal describing the motion of the centroid of a volume of interest of the scanned object. The physiological spectrum includes, but is not limited to, the frequency range of the respiratory signal and/or the heartbeat signal, etc. For example, fig. 15 shows a spectrum containing a respiratory signal and a cardiac signal. By way of example only, the human breathing rate is typically between 10 and 20 breaths per minute, i.e., 0.16-0.3 Hz, so the physiological spectrum set for the breathing signal may be 0-0.5 Hz. Likewise, the human heart rate is typically between 50 and 150 beats per minute, i.e., 0.8-2.5 Hz, so the physiological spectrum set for the heartbeat signal may be 0.6-1.5 Hz. By setting a physiological spectrum for the target signal, the physiological signal contained in the target signal can be extracted using spectral analysis, e.g., Fourier analysis.
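As an illustrative sketch only (the sampling rate and the synthetic signal are assumptions, not from the patent), the physiological bands quoted above can be isolated from a sampled target signal by Fourier analysis:

```python
import numpy as np

fs = 10.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)       # 60 s of samples
# Synthetic target signal: 0.25 Hz breathing + 1.2 Hz heartbeat + noise.
signal = (np.sin(2*np.pi*0.25*t) + 0.3*np.sin(2*np.pi*1.2*t)
          + 0.1*np.random.randn(t.size))

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1/fs)

def band_pass(lo, hi):
    """Zero all Fourier coefficients outside [lo, hi] Hz and invert."""
    kept = np.where((freqs >= lo) & (freqs <= hi), spectrum, 0)
    return np.fft.irfft(kept, n=t.size)

respiratory = band_pass(0.0, 0.5)   # physiological spectrum for breathing
cardiac = band_pass(0.6, 1.5)       # physiological spectrum for heartbeat
```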
The SQI setting unit 704 may be configured to calculate a signal quality indicator (SQI) of the target signal with respect to the physiological spectrum. The signal quality indicator may be a signal-to-noise ratio: the ratio of the energy of the target signal within the physiological spectrum to its energy outside the physiological spectrum. The energy of the signal may be measured from its spectral behavior.
The storage unit 705 may be used to store data including, but not limited to, one or a combination of data from the VOI setting unit 701, the TOF probability distribution setting unit 702, the physiological spectrum setting unit 703, the SQI setting unit 704, and the like. The stored data types include, but are not limited to, parameters defining the VOI, TOF probability distributions, physiological spectrum settings, SQI calculation methods, and the like. The storage unit 705 may include a plurality of components. In some embodiments, the storage unit 705 may include a hard disk drive; in some embodiments, a solid state drive; in some embodiments, a removable storage drive. By way of example only, the removable storage drive includes, but is not limited to, one or a combination of a flash drive, an optical drive, and the like.
Fig. 8 is an exemplary flow diagram illustrating the partitioning of data according to physiological motion information and image reconstruction based on the grouped data, according to some embodiments of the present application. Process flow 800 may be performed by processing logic, which includes, but is not limited to, one or a combination of hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device), and the like. In some embodiments, process flow 800 may be performed by one or more processing devices.
Step 801: raw data is acquired. The raw data may be data from a Single Photon Emission Computed Tomography (SPECT) or data from Positron Emission Tomography (PET), etc. The raw data may be data obtained by scanning a scanning object, or data from a storage unit or database. The raw data includes temporal and spatial information of the occurrence of annihilation events. In some embodiments, the raw data may be list-mode data. In some embodiments, the raw data may be data in a chord graph mode. The data of the chord pattern contains temporal and/or spatial information of the occurrence of annihilation events, or other information known to those of ordinary skill in the art.
Step 802: a model is selected. The model may be a method of processing raw data to obtain physiological motion information. For example, the model includes, but is not limited to, a combination of one or more of constructing the shape of the VOI, setting the spatial coordinates of the VOI, specifying the type of TOF probability distribution, the method of computing the SQI, the physiological spectrum of the target signal, and the like. The operation of selecting a model may be performed by the model setting module 302.
Step 803: the VOI is constructed based on the model. The optimal VOI may be found based on the model according to some criteria. For example only, the SQI value may be used as an index to find the optimal VOI. Fig. 9 illustrates a process of constructing a VOI. Step 804: physiological motion information is obtained based on the VOI. In some embodiments, the physiological motion information may include a physiological motion profile. Comparing fig. 16A and 16B, fig. 16A shows a spectral analysis of the weighted signals of an unspecified VOI as shown in accordance with some embodiments of the present application, and fig. 16B shows a spectral analysis of the weighted signals within the specified VOI as shown in accordance with some embodiments of the present application. Fig. 17 shows a time plot (solid line) of the respiration waveforms of the weighting signals for an unspecified VOI and a time plot (dashed line) of the respiration waveforms of the weighting signals within a specified VOI, shown in accordance with some embodiments of the present application.
Step 805: and performing binning processing on the original data based on the physiological motion information. In some embodiments, the raw data may be binned according to the phase of the physiological motion curve. For example, raw data located in the same phase interval of the physiological motion curve belong to the same bin. In some embodiments, the raw data may be binned according to the amplitude of the physiological motion curve. Step 806: one or more reconstructed images are obtained based on the binned data.
Fig. 9 is an exemplary flow diagram illustrating the acquisition of a VOI from raw data according to some embodiments of the application. Process flow 900 may be performed by processing logic, which includes, but is not limited to, one or a combination of hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device), and the like. In some embodiments, process flow 900 may be performed by one or more processing devices.
Step 901: the original data is divided into a plurality of portions. The raw data may be divided into data segments belonging to several time intervals, which may be, for example, 100 milliseconds, 5 seconds, 10 seconds, etc. Step 902: a parameterized region of interest (VOI) is constructed. A parameterized VOI may include a number of parameters defining its shape, volume, and position. The shape of the VOI includes, but is not limited to, a combination of one or more of a sphere, a cube, a block, and the like. The VOI, for example, is spherical in shape and may be defined by parameters (X1, X2, X3) and X4. Wherein the parameters (X1, X2, X3) represent the sphere center coordinates and the parameter X4 represents the sphere radius. In another embodiment, the shape of the VOI is a block, which may be defined by parameters (X1, X2, X3), X4, X5, X6. The parameters (X1, X2, and X3) represent the coordinates of the center of the block, and the parameters X4, X5, and X6 represent the length, width, and height of the block, respectively. Defining the shape of a complex geometry requires more parameters than defining the shape of a simple geometry. In some embodiments, the shape of the VOI may be defined by within 10 parameters in step 902. In some embodiments, in step 902, the shape of the VOI is defined by using parameters within 20.
Step 903: a weighted signal of the raw data contained within the VOI is calculated. By way of example only, coordinate information for each annihilation event (or event for short) is acquired using raw data in list modeWherein z iseRepresenting the coordinates of the event on the Z-axis, which is the foot-to-head direction along the scanned object (e.g., patient), Se,Two axes constituting the chord graph: radial distance and angle of the projected line of the event, theta is the angle of inclination of the event in the Z direction, lambdaeIs the position of the event from the center point and Time information t recorded using a Time of flight (TOF) method.
Using the coordinate information of each event, the most likely annihilation point (x, y, z) of each event can be calculated as described by formula 2.
In some embodiments, to optimize the parameter λ_e, the TOF coordinate Λ of an event can be viewed as a random variable with a probability distribution, expressed in formula 3.
where λ_e is the measured TOF position about which the distribution is centered. Since the TOF coordinate Λ is viewed as a random variable, the calculated annihilation point of the event is not a fixed point but a random variable varying with Λ; the relationship between these random variables is still determined by formula 2.
Thus, the probability distribution of the events detected within time t can be obtained (formula 4).
After obtaining the probability distribution of the events detected within time t, the physiological motion information of the scanned object can be obtained by calculating a weighted sum over the detected events and their coordinate information, as expressed in formula 5.
When w_1(x, y, z) = z (z being the Z-direction coordinate of each event) and w_2(x, y, z) = 1, the center-of-gravity signal of the data in the Z direction is obtained, as expressed in formula 6.
In formulas 5 and 6, dv is the spatial integration element, dv = dx dy dz; dτ is the event integration element, and the integration over the scan field of view (FOV) indicates that the spatial position of the event lies within the entire FOV of the scan. [T, T + ΔT] is the time interval over which the event probability distribution F(x, y, z, t) is collected, with ΔT the interval length. In some embodiments, ΔT may be between 50 and 100 milliseconds. The value of ΔT should be chosen to provide enough sampling points, per the requirements of the sampling theorem; in some embodiments, ΔT is set once a sufficient number of sampling points is guaranteed. Com(t) represents the distribution of events in the z direction over the entire scan interval.
The tracer distribution at some spatial positions does not reflect the physiological motion over the entire scan; such positions only add noise. A volume of interest (VOI) can therefore be selected to obtain better physiological motion information. That is, updating formula 6 to formula 7 yields the center-of-gravity signal of the data in the Z direction within the VOI.
In formula 7, V(x, y, z) is the indicator function of the VOI: V(x, y, z) = 1 if (x, y, z) falls within the VOI, and V(x, y, z) = 0 otherwise.
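A discrete sketch of formulas 6 and 7, assuming each event has already been reduced to a most-likely annihilation point (x, y, z) plus a timestamp; the spherical indicator stands in for the parameterized VOI of step 902:

```python
import numpy as np

def voi_indicator_sphere(points, center, radius):
    """V(x, y, z): 1 inside the sphere, 0 outside (formula 7)."""
    return (np.linalg.norm(points - center, axis=1) <= radius).astype(float)

def center_of_gravity_signal(points, times, T0, dT, n_frames, voi=None):
    """Discrete analogue of formulas 6 and 7: per time interval
    [T, T + dT], the z center of gravity of the events, optionally
    restricted to a VOI (dT on the order of 50-100 ms, per the text)."""
    com = np.full(n_frames, np.nan)
    for k in range(n_frames):
        in_frame = (times >= T0 + k*dT) & (times < T0 + (k+1)*dT)
        w = voi(points[in_frame]) if voi else np.ones(int(in_frame.sum()))
        if w.sum() > 0:
            com[k] = (w * points[in_frame, 2]).sum() / w.sum()
    return com
```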
Step 904: Fourier spectral analysis is performed on the weighted signal to obtain a signal quality indicator (SQI). Here, the SQI may be a signal-to-noise ratio. By way of example only, Fourier spectral analysis of the weighted signal yields the signal-to-noise ratio of the weighted signal contained within the VOI, calculated as shown in formula 8.
where FT(signal(t)) denotes the Fourier transform of signal(t), f ∈ signal_space means the frequency f lies within the signal interval, and f ∉ signal_space means it does not. For detecting respiratory motion, the respiratory-motion frequency range is taken as the signal interval; for detecting cardiac motion, the cardiac-signal frequency range is taken. G_1 and G_2 are two functions measuring the energy of g(f), expressed in formula 9:
G_1(g(f)) = G_2(g(f)) = ‖g(f)‖²  (9)
where g(f) is an arbitrary function of f, and ‖g(f)‖² denotes its squared magnitude, i.e., its energy in the frequency domain.
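A minimal sketch of the SQI of formulas 8 and 9, assuming the weighted signal is available as a uniformly sampled array; the energy measure is the squared magnitude ‖g(f)‖²:

```python
import numpy as np

def sqi(weighted_signal, fs, band):
    """Signal quality indicator: in-band energy over out-of-band energy.

    weighted_signal: samples of signal(t); fs: sampling rate (Hz);
    band: (f_lo, f_hi), the physiological spectrum interval.
    """
    g = np.fft.rfft(weighted_signal)
    f = np.fft.rfftfreq(len(weighted_signal), d=1/fs)
    energy = np.abs(g) ** 2                     # ||g(f)||^2, formula 9
    in_band = (f >= band[0]) & (f <= band[1])
    return energy[in_band].sum() / energy[~in_band].sum()
```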
Step 905: the weighted signal whose corresponding signal quality indicator is largest is selected. For example only, the signal quality indicator may be the signal-to-noise ratio. The VOI is defined by N parameters X_1, X_2, X_3, ..., X_N, where N is an integer. At least one parameter is selected as a variable (the others held fixed), and the weighted signal of the raw data contained in the VOI corresponding to each value of the variable parameter is calculated. Fourier analysis is performed on each weighted signal to obtain the corresponding signal-to-noise ratio. The different values of the variable parameters defining the VOI are traversed, yielding the signal-to-noise ratio of the weighted signal for each candidate VOI.
Step 906: the VOI whose weighted signal has the largest SQI is selected as the optimal VOI. In some embodiments, the optimal VOI may be determined by an indicator of the signal, such as the signal-to-noise ratio; for example, the VOI corresponding to the weighted signal with the largest signal-to-noise ratio is taken as the optimal VOI. A brute-force sketch of this search is given below.
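For illustration only, steps 903 to 906 amount to maximizing the SQI over the VOI parameters. This sketch reuses the helper functions from the previous sketches (voi_indicator_sphere, center_of_gravity_signal, sqi) and a coarse hypothetical parameter grid:

```python
import itertools
import numpy as np

def find_optimal_voi(points, times, frame_rate, centers, radii,
                     band=(0.0, 0.5)):
    """Traverse candidate VOI parameters; keep the VOI whose weighted
    signal has the largest SQI (steps 905-906)."""
    dT = 1.0 / frame_rate
    n_frames = int((times.max() - times.min()) / dT)
    best_voi, best_score = None, -np.inf
    for center, radius in itertools.product(centers, radii):
        voi = lambda p, c=np.asarray(center), r=radius: \
            voi_indicator_sphere(p, c, r)
        sig = center_of_gravity_signal(points, times, times.min(),
                                       dT, n_frames, voi)
        sig = np.nan_to_num(sig - np.nanmean(sig))  # de-mean before the FFT
        score = sqi(sig, frame_rate, band)
        if score > best_score:
            best_voi, best_score = (center, radius), score
    return best_voi, best_score
```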
FIG. 10A is a schematic diagram of a rigid body motion processor according to some embodiments of the present application. The rigid body motion processor 503 may include a rigid body motion detector 1001 and a rigid body motion corrector 1002. As shown in fig. 10A, the rigid body motion detector 1001 may obtain rigid body motion information, including the occurrence and/or amplitude of rigid body motion, by analyzing list-mode data. List mode is a format in which the acquired data are stored event by event. In some embodiments, the occurrence and/or amplitude of rigid body motion may be determined by a thresholding method (see fig. 11 and its description for details). In some embodiments, the rigid body motion detector 1001 may include a dividing unit, which divides the data into a plurality of sub data sets; motion information of the scanned object may be derived by analyzing these sub data sets. The rigid body motion corrector 1002 may correct the list-mode data according to the acquired motion information.
FIG. 10B is an exemplary flow diagram illustrating rigid body motion detection and correction according to some embodiments of the present application. The process of rigid body motion detection and correction may be performed by the rigid body motion processor 503. The processes shown in the flowchart may be performed by one or more processing and/or computer-controlled electronic components. Although specific steps are disclosed in the flowchart, these steps are exemplary only; the embodiment is equally suited to performing various other steps, or variations of the steps shown. In various embodiments, the steps in the flowchart may be performed by software, hardware, or a combination of both.
As shown in fig. 10B, step 1020: list-mode data of a scanned object is acquired. The list-mode data may be obtained from the acquisition circuitry 202, from the data storage 201, or from any of the memories disclosed in the present application. Step 1040: rigid body motion information is obtained from the list-mode data. The rigid body motion information of the scanned object may be obtained based on a similarity measure between sub data sets of the list-mode data, or based on the motion field matrix. In some embodiments, the motion field matrix information of the frames in an image set is obtained by an image registration technique. Image registration techniques include, but are not limited to, the LBFS method and the Demons method.
Step 1080: the list-mode data is corrected according to the obtained rigid body motion information. In some embodiments, the correction may instead be applied to an image generated from the list-mode data.
FIG. 11 is an exemplary flow diagram illustrating rigid body motion detection according to some embodiments of the present application. The rigid body motion detection process may be performed by the rigid body motion detector 1001. The processes shown in the flowchart may be performed by one or more processors and/or computer-controlled electronic components. Although specific steps are disclosed in the flowchart, these steps are exemplary only; the embodiment is equally suited to performing various other steps, or variations of the steps shown. In various embodiments, the steps in the flowchart may be performed by software, hardware, or a combination of both.
As shown in fig. 11, step 1101: data of a scanned object is acquired. The data may be list-mode data, sinogram data, or data in other modes, and may be acquired from the acquisition circuitry 202, from the data storage 201, or from any of the memories disclosed herein. Step 1102: the data of the scanned object is divided into a plurality of sub data sets. The division may be implemented by a dividing unit. In some embodiments, the collected list-mode data may be partitioned by time interval; for example, one sub data set corresponds to the data acquired within a certain time interval. The time interval may be fixed or variable, e.g., ranging from 1 second to 10 seconds, and the time intervals of the sub data sets may be the same or differ from one another. The sub data sets obtained by the division can be expressed as formula 10:
S = {s_i, 1 ≤ i ≤ T/Δt}  (10)
where s_i denotes the i-th sub data set obtained by the division, T denotes the total acquisition time of the list-mode data, and Δt denotes the time interval.
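A sketch of the partition in formula 10, assuming the list-mode data has been reduced to an array of event timestamps sorted in acquisition order:

```python
import numpy as np

def split_into_subsets(timestamps, dt):
    """S = {s_i, 1 <= i <= T/dt}: group events into consecutive
    time intervals of length dt (seconds)."""
    t0 = timestamps[0]
    idx = ((timestamps - t0) // dt).astype(int)  # interval index per event
    n = int(idx.max()) + 1
    return [np.flatnonzero(idx == i) for i in range(n)]

# Hypothetical usage: 5-second intervals, within the 1-10 s range above.
ts = np.sort(np.random.uniform(0, 300, size=50000))
subsets = split_into_subsets(ts, dt=5.0)
```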
Step 1103: one of the sub data sets is selected as the reference data set. In one embodiment, one sub data set is selected from all of the sub data sets as the reference data set; in other embodiments, one data set may be selected from other data sets as the reference. The selection may follow the default settings of the imaging system 100 or the instructions of an operator (e.g., a physician). In some embodiments, the first sub data set s_1, obtained during the first time interval, may be selected as the reference data set, denoted s_ref.
A similarity measure is computed between the reference data set and each of the other sub data sets. In one embodiment, the data of the scanned object is divided into i sub data sets at equal time intervals; the similarity measure I(n) between the reference data set and the sub data set obtained at the n-th time interval is calculated, with 1 ≤ n ≤ i; the similarity measure I(n+1) for the (n+1)-th time interval is calculated; the gradient between I(n+1) and I(n) is calculated; and the sub data sets are binned according to these gradient values. For example, the similarity measure between the reference data set and each sub data set may be obtained through steps 1104 to 1106, as follows.
Step 1104: the sub data sets are transformed into images or histograms. Each sub data set is transformed into a corresponding image or histogram, and the sub data set serving as the reference is transformed into a reference image or reference histogram. In some embodiments, for a sub data set s_i and the reference data set s_ref, each line of response in the sub data set can be converted into a two-dimensional image according to a fixed rule, yielding the two-dimensional image img_i corresponding to s_i and the reference image img_ref corresponding to s_ref. The resulting image can be expressed as formula 11.
where crystal_x and crystal_y are the crystal numbers and ring_z the crystal layer of the pair of scintillation crystals corresponding to the line of response. In some embodiments, the sub data set s_i and the reference data set s_ref may instead be converted to histograms: the reference data set s_ref corresponds to a reference histogram, denoted histo_ref, while the histograms corresponding to the other sub data sets are denoted histo_i.
Step 1105: a similarity measure is calculated between the reference image or reference histogram and the other images or histograms. In some embodiments, the similarity measure between the reference image and the other images is calculated as expressed in formula 12:
I = {I_i | I_i = similarity(img_ref, img_i), 1 ≤ i ≤ T/Δt}  (12)
where similarity(a, b) denotes a similarity measure function and I_i denotes the similarity measure between the reference image img_ref and the image img_i corresponding to sub data set i.
In some embodiments, a similarity measure between the reference histogram and the other histograms may be calculated, the calculation of which may be expressed as equation 13. The similarity measure includes, but is not limited to, a combination of one or more of a similarity measure of shapes, a similarity measure of distributions, and the like.
I = {I_i | I_i = similarity(histo_ref, histo_i), 1 ≤ i ≤ T/Δt}  (13)
where similarity(a, b) denotes a similarity measure function and I_i is the similarity measure between the reference histogram histo_ref and the histogram histo_i corresponding to subset i.
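A sketch of steps 1104 and 1105, assuming each sub data set has been reduced to arrays of crystal indices; normalized cross-correlation is used as one hypothetical choice of similarity function:

```python
import numpy as np

def to_histogram(crystal_x, crystal_y, n_crystals):
    """Accumulate each line of response into a 2D crystal-pair histogram."""
    h, _, _ = np.histogram2d(crystal_x, crystal_y, bins=n_crystals,
                             range=[[0, n_crystals], [0, n_crystals]])
    return h

def similarity(a, b):
    """Normalized cross-correlation between two histograms (one choice
    among similarity measures of shape or distribution)."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Per formula 13: I_i = similarity(histo_ref, histo_i) for each subset i.
```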
Step 1106: the motion amplitude is obtained from the similarity measure and a threshold. The threshold may be set by the default settings of the imaging system 100 or by the instructions of an operator (e.g., a physician). In some embodiments, the occurrence of motion may be determined according to formula 14.
where 0 means no motion occurred and I_i represents the magnitude of the motion occurring in the i-th time interval.
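A hedged sketch of formula 14, under the convention used earlier in this document that a measure exceeding the threshold marks the occurrence of motion; the threshold value itself is an assumption of the caller:

```python
import numpy as np

def motion_amplitude(similarities, threshold):
    """Formula 14 (assumed convention): a measure exceeding the threshold
    marks motion and I_i is kept as its amplitude; otherwise the interval
    is marked 0 (no motion)."""
    I = np.asarray(similarities, dtype=float)
    return np.where(I > threshold, I, 0.0)
```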
Fig. 12A is an exemplary flow diagram illustrating obtaining rigid body motion information according to some embodiments of the present application. Process flow 1200 may be implemented by the rigid body motion detector 1001 and performed by one or more processors and/or computer-controlled electronic components. Although specific steps are disclosed in the flowchart, these steps are exemplary only; the embodiment is equally suited to performing various other steps, or variations of the steps shown. In various embodiments, the steps in the flowchart may be performed by software, hardware, or a combination of both.
Step 1202: data of a scanned object is acquired. The data may be list-mode data and/or sinogram data; the sinogram data can be obtained by converting the list-mode data. Either may be obtained from the acquisition circuitry 202, from the data storage 201, or from any of the memories disclosed herein. The list-mode and/or sinogram data include, but are not limited to, one or a combination of temporal information, spatial information, etc. of the events. In some embodiments, the data may have undergone pre-processing corrections, such as correction of random events or correction of the TOF information.
Step 1204: the data is divided into several data segments according to the temporal and/or spatial information. For example, the data may be divided at time intervals of 5 seconds or less to obtain a plurality of data segments; in other embodiments, at intervals of 10 seconds or less. In some embodiments, the list-mode data may be divided into data segments belonging to several time intervals according to the time information. In some embodiments, the data within each segment may be further classified according to the spatial information provided by the list-mode data.
Step 1206: statistical information of the data in each data segment is acquired. The statistical information may describe the data space corresponding to the data in the segment or the reconstruction space corresponding to it, and includes, but is not limited to, one or a combination of the expectation (first moment), second moment, variance, and covariance. In some embodiments, the statistics of the reconstruction-space coordinates corresponding to the data in a segment may be derived from the statistics of the data-space coordinates: a transformation between the data space and the reconstruction space is established first, and the reconstruction-space statistics are then obtained from the data-space statistics and this transformation. In some embodiments, in the data space, chord coordinates (z, s, φ, θ) define an event, where z is the intersection of the event with the z-axis, s and φ are the two axes of the sinogram, representing respectively the radial distance and angle of the projection line corresponding to the event, and θ is the tilt angle of the event in the z-axis direction. In the reconstruction space, 3D reconstruction coordinates (x, y, z) are used. The conversion relationship between the chord coordinates and the 3D reconstruction coordinates (illustrated in fig. 12B) can be expressed as formula 15.
where X is the X-axis coordinate and Y is the Y-axis coordinate.
From the above functional relationships, the functional relations between the expectation, second moment, or variance of the reconstruction coordinates and the expectation, second moment, variance, and covariance of the chord coordinates can be derived.
Step 1208: the motion information of the scanned object is acquired from the statistical information of the data in the data segments. The motion information includes, but is not limited to, one or a combination of translation, rotation, and the like. By way of example only, translation information of the scanned object may be obtained from the expectation of the reconstruction-space coordinates corresponding to the data in a segment, and rotation information from the second moment and/or the variance/covariance matrix information; for example, the rotation information may be obtained via the covariance matrix.
The mathematical expectation of s at a given projection angle φ can be calculated as expressed in formula 16. E(X(n)) and E(Y(n)), the mathematical expectations of the translation of the scanned object's center of mass along the X-axis and Y-axis during the n-th time interval, can then be calculated as expressed in formula 17, using the discretized values of φ.
The covariance matrix can be computed as expressed in formula 18. E(X²(n)), E(Y²(n)), and E(X(n)Y(n)) are calculated as expressed in formula 19, with the related second-moment terms given by formula 20. E(Z(n)X(n)) and E(Y(n)Z(n)) are calculated as expressed in formula 21, and a further term as expressed in formula 22. E(Z²(n)) is calculated as expressed in formula 23:

E(Z²(n)) = E(Z²(0, n))  (23)

and E(Z²(0, n)) is in turn calculated as expressed in formula 24.
in this application, the translation information may refer to a translation motion of the scan object along an X-axis, a Y-axis, and/or a Z-axis. For example, the translational motion of the center of mass of the scanned object along the X-axis, Y-axis and/or Z-axis is obtained and defined as U (n), V (n), and W (n), respectively.
U(n), V(n), and W(n) are calculated as expressed in formula 25.
where E(X(n)) and E(Y(n)) are the mathematical expectations of the translation of the scanned object's center of mass along the X-axis and Y-axis during the n-th time interval.
In some embodiments, the mathematical expectation of the translation of the scanned object's center of mass along the Z-axis during the n-th time interval is defined analogously and calculated as expressed in formula 26.
in this application, the rotation information may refer to a rotation angle of the scan object about an X-axis, a Y-axis, and/or a Z-axis. For example, the rotation angles of the centroid of the scanned object around the X-axis, the Y-axis and/or the Z-axis, which are defined as α (n), β (n) and γ (n), respectively, may be calculated by the following formula 27:
r_32(n), r_31(n), and r_21(n) are calculated as expressed in formula 28.
where R(n) is the rotation matrix of the scanned object and can be calculated by formula 1; the eigenvector ("feature vector") matrix is calculated as expressed in formula 29.
where the former factor is a 3 × 3 diagonal matrix and the latter is the covariance matrix.
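The chain from formula 16 through formula 29 can be illustrated, under simplifying assumptions, by estimating translation from first moments and rotation from the eigendecomposition of the covariance matrix; this sketch works directly on annihilation-point clouds in reconstruction space and skips the chord-to-3D conversion of formula 15:

```python
import numpy as np

def principal_axes(points):
    """Eigenvector matrix of the covariance of an event cloud
    (cf. formulas 18 and 29), columns sorted by descending eigenvalue."""
    cov = np.cov((points - points.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)          # eigh: covariance is symmetric
    return v[:, np.argsort(w)[::-1]]

def segment_motion(points_ref, points_n):
    """Translation and rotation of segment n relative to a reference
    segment, from first and second moments of the annihilation points."""
    translation = points_n.mean(axis=0) - points_ref.mean(axis=0)  # U, V, W

    # Rotation estimate from the two sets of principal axes.
    R = principal_axes(points_n) @ principal_axes(points_ref).T

    # Euler angles about X, Y, Z under the Z-Y-X convention (one common
    # choice; the exact convention of formula 27 may differ).
    beta = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    alpha = np.arctan2(R[2, 1], R[2, 2])
    gamma = np.arctan2(R[1, 0], R[0, 0])
    return translation, np.degrees([alpha, beta, gamma])
```

In practice the sign ambiguity of the eigenvectors must be resolved, for example by enforcing a consistent orientation against the reference axes, before the angles are read off; the sketch above ignores this.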
Step 1210: a motion-corrected reconstructed image is acquired according to the motion information of the scanned object. The data of the scanned object may be binned according to the translation and/or rotation information. In some embodiments, image reconstruction is performed on the binned data to obtain a set of reconstructed images. In some embodiments, the binned data is first motion-corrected based on the translation and/or rotation information, and the motion-corrected reconstructed image set is obtained from the corrected binned data.
In some embodiments, more accurate motion information may be obtained through image registration: one reconstructed image is selected from the image set obtained from the binned data as the reference image, the remaining reconstructed images are registered to the reference using the obtained translation and/or rotation information, and the registered images are merged to obtain the final reconstructed image. In some embodiments, the data of the scanned object may be motion-compensated in real time during image reconstruction using the obtained translation/rotation information, yielding a motion-corrected reconstructed image.
In some embodiments, motion correction may be applied to the data or the reconstructed images of the scanned object. For example, an attenuation map of the scanned object may be acquired; the attenuation map is motion-corrected according to the translation and/or rotation information to obtain an attenuation map tracking the real-time motion of the scanned object; and attenuation correction is then performed on the data and/or reconstructed images according to this attenuation map. In other embodiments, after the reconstructed image set is obtained from the binned data, motion field data is obtained through image registration and used to motion-correct the attenuation map of the scanned object, again yielding an attenuation map of its real-time motion.
FIG. 12B is a schematic illustration of the relationship between the chord coordinates and the 3D reconstruction coordinates, in accordance with some embodiments of the present application. As shown in FIG. 12B, z represents the intersection of the event with the z-axis; s and φ, the two axes of the sinogram, represent respectively the radial distance and angle of the projection line corresponding to the event; θ is the tilt angle of the event in the z-axis direction; and t is time. a(x_a, y_a) and b(x_b, y_b) are the two end points of the projection line. The chord coordinates are shown on the left and the 3D reconstruction coordinates on the right, where a and b are a detector pair, e is an event, x_e is the x coordinate of event e, t is the TOF coordinate, and z is the axial position.
FIG. 13 is an exemplary flow diagram illustrating rigid body motion correction according to some embodiments of the present application. The motion correction process may be performed by the rigid body motion corrector 1002, based on the motion information determined by process 1100 (fig. 11). The processes shown in the flowchart may be performed by one or more processors and/or computer-controlled electronic components. Although specific steps are disclosed in the flowchart, these steps are exemplary only; the embodiment is equally suited to performing various other steps, or variations of the steps shown. In various embodiments, the steps in the flowchart may be performed by software, hardware, or a combination of both.
In step 1301, data (e.g., raw data and/or motion amplitudes) is loaded. As shown in fig. 11, the rigid body motion amplitude may be determined from the similarity measure and a threshold. In some embodiments, by setting a threshold, the motion amplitude information I may be obtained as described by formula 30.
where 0 means no motion occurred and I_i denotes the motion amplitude in the i-th time interval.
Step 1302: the binned data is acquired according to the motion information. The list-mode data can be binned according to the gradient values and the threshold. In some embodiments, a gradient is computed over the obtained motion amplitude information I, as described by formula 31.
where I(i) denotes the motion amplitude of the i-th time interval, i = 1, 2, .... The threshold may be set based on the default settings of the imaging system 100 or the instructions of an operator (e.g., a physician). In some embodiments, if the gradient value between the i-th and (i−1)-th time intervals does not exceed the threshold, the i-th sub data set belongs to the same bin as the (i−1)-th; otherwise, the i-th sub data set is assigned to a new bin. The binned data obtained from the variation of the gradient values can be expressed as formula 32:
D = {D_i, 1 ≤ i ≤ n}  (32)
where n is the number of list-mode sub data sets and D_i is the i-th list-mode sub data set.
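A sketch of the gradient-based binning of formulas 31 and 32, assuming the motion amplitude series and the threshold are given:

```python
import numpy as np

def bin_by_gradient(amplitudes, threshold):
    """Group consecutive time intervals into bins D = {D_i}; a jump in
    the motion amplitude larger than the threshold opens a new bin."""
    grad = np.diff(np.asarray(amplitudes, dtype=float))
    bins, current = [], [0]
    for i, g in enumerate(grad, start=1):
        if abs(g) <= threshold:
            current.append(i)        # same bin as interval i-1
        else:
            bins.append(current)     # close the bin, start a new one
            current = [i]
    bins.append(current)
    return bins                      # lists of interval indices per bin
```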
Step 1303: a set of reconstructed images is obtained based on the binned data. Dynamic reconstruction is performed on the binned data to obtain a dynamic reconstruction set, which may be expressed as formula 33. The reconstruction may be performed by the image reconstruction processor 207.
f = {f(D_i), 1 ≤ i ≤ n}  (33)
where f(D_i) is the image reconstructed from the data of the i-th bin.
Step 1304: a reconstructed image is selected as the reference image. The selection may be made according to the default settings of the imaging system 100 or the instructions of an operator (e.g., a physician). In some embodiments, the image reconstructed from the data of the first bin is selected as the reference image.
Step 1305: the motion field matrices of the reconstructed image set are obtained by image registration, performed between the reference image and each of the other reconstructed images. The image registration method may include, but is not limited to, one or a combination of a cross-correlation algorithm (CC), a normalized cross-correlation algorithm, a sequential similarity detection algorithm (SSDA), a mutual information algorithm (MI), and the like. In some embodiments, the motion field matrices may be expressed as formula 34:
T = {T_i, 1 ≤ i ≤ n}  (34)
Each motion field matrix can be calculated by formula 1 above.
Step 1306: the list-mode data is corrected according to the motion field matrix. In some embodiments, the imaging system 100 may correct the spatial information of the list-mode data based on the motion field matrix. In this application, the spatial information may refer to the spatial coordinates of the pair of scintillation crystals corresponding to a line of response in a PET system, and may be expressed as formula 35.
where (x_a, y_a, z_a) and (x_b, y_b, z_b) are the spatial coordinates (X-, Y-, and Z-axis) of scintillation crystals a and b, respectively.
The spatial coordinates of the pair of scintillation crystals are corrected as expressed in formula 36.
where the corrected spatial coordinates of scintillation crystals a and b appear in place of the original ones.
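A sketch of formulas 35 and 36, applying a 4 × 4 motion field matrix such as the one from formula 1 to both crystal coordinates of a line of response; the example endpoints are hypothetical:

```python
import numpy as np

def correct_lor_endpoints(T, p_a, p_b):
    """Apply a 4x4 motion field matrix to both scintillation-crystal
    coordinates of an LOR, returning the corrected endpoints."""
    def apply(p):
        q = T @ np.append(p, 1.0)   # to homogeneous coordinates and back
        return q[:3] / q[3]
    return apply(np.asarray(p_a, float)), apply(np.asarray(p_b, float))

# Hypothetical usage with the motion_field_matrix sketch from earlier:
# pa_c, pb_c = correct_lor_endpoints(T, (10.0, 0.0, 5.0), (-10.0, 2.0, 5.0))
```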
In some embodiments, the imaging system 100 may correct both the spatial and the temporal information of the list-mode data based on the motion field matrix. In some embodiments, the format of the list-mode data may be expressed as formula 37.
where t_a and t_b are the times of flight of the photons arriving at scintillation crystal a and scintillation crystal b, respectively.
The spatial coordinates of the annihilation event are located from the list-mode data. Taking the timing resolution of the imaging system 100 into account, the maximum likelihood point (MLP) and the probability distribution of the MLP along the line of response (LOR) are obtained, as calculated by formula 38.
where c is the speed of light.
In some embodiments, the format of the list-mode data may be expressed as formula 39.
The probability distribution is symmetric along the line of response (LOR), centered on the MLP. A new probability distribution is obtained using the corrected spatial coordinates of the MLP. The correction of the list-mode data can be expressed as formula 40.
where the coordinates appearing in formula 40 are the spatial coordinates of the corrected MLP.
Optional step 1307: a reconstructed image is obtained using the corrected list mode data. The reconstruction may be performed by an image reconstruction processor 207.
It should be noted that the above-described embodiments are provided for illustrative purposes and are not intended to limit the scope of the present disclosure. Numerous other changes, substitutions, variations, alterations, and modifications may be devised by those skilled in the art after reviewing the present disclosure without departing from the inventive concept, and it is intended that the present application encompass all such changes, substitutions, variations, alterations, and modifications as fall within its scope.
FIG. 14 is an exemplary flow chart of motion correction shown in accordance with some embodiments of the present application. Process 1400 may be performed by the rigid body motion processor 503. The processes shown in the flowchart may be performed by one or more processors and/or computer-controlled electronic components. Although specific steps are disclosed in the flowchart, these steps are exemplary only; the embodiment is equally suited to performing various other steps, or variations of the steps shown. In various embodiments, the steps in the flowchart may be performed by software, hardware, or a combination of both.
Step 1402: list-mode data is acquired. The list-mode data may be obtained from the acquisition circuitry 202, from the data storage 201, or from any of the memories disclosed herein. Step 1404: the list-mode data is partitioned into a plurality of sub data sets at fixed time intervals; the time interval may range from 1 second to 10 seconds. Step 1406: a reference data set is selected. In some embodiments, the sub data set S_1, corresponding to the first time interval, is selected as the reference data set. In some embodiments, a plurality of sub data sets is obtained; in this application, the plurality of sub data sets refers to the subsets of the data other than the reference data set. In some embodiments, the reference data set and the plurality of sub data sets may be obtained simultaneously or sequentially.
Step 1408: a reconstructed image is obtained from the reference data set. The reconstructed image may be used as a reference image in step 1420, as will be further described below.
Step 1410: the similarity measure between the reference data set and each of the other sub data sets is computed (see the description of fig. 11 for details). Based on the similarity measure information obtained in step 1410, rigid body motion information of the list-mode data is obtained in step 1412 (see the description of fig. 11 for details). In step 1414, the gradient values between consecutive similarity measures are obtained. In step 1416, the list-mode data is divided according to the gradient values. In step 1418, images are reconstructed from the divided list-mode data. In step 1420, one image is selected from the reconstructed images as the reference image. The reference image and the other images may be acquired simultaneously or sequentially. In step 1422, image registration is performed between the reference image and the other reconstructed images. The image registration method may include, but is not limited to, one or a combination of a cross-correlation algorithm (CC), a normalized cross-correlation algorithm, a sequential similarity detection algorithm (SSDA), a mutual information algorithm (MI), and the like. From the registration results between the reference image and the other reconstructed images, the motion parameters can be obtained. The motion parameters may include a rotation matrix about the X-axis, a rotation matrix about the Y-axis, a rotation matrix about the Z-axis, a translation matrix, a motion field matrix, and the like. In step 1424, the list-mode data is corrected based on the motion parameters; in some embodiments, the spatial information and/or temporal information of the list-mode data is corrected.
Figs. 18A-18D are four exemplary gated images, with and without correction, according to some embodiments of the present application. Figs. 18A and 18B show two exemplary gated images without correction; figs. 18C and 18D show two exemplary gated images corrected by the correction process shown in fig. 11. Comparing fig. 18A with 18C and fig. 18B with 18D, artifacts in the corrected gated images are visibly reduced.
Fig. 19A is a schematic diagram of a motion curve according to some embodiments of the present application; it shows the motion amplitude over time, whose values represent the similarity measure between the reference data set and the other sub data sets. Fig. 19B is a schematic diagram of a motion gating curve according to some embodiments of the present application; it illustrates dividing the list-mode data and acquiring the rigid body motion according to the motion information. Images are reconstructed based on the divided data, a reference image is selected, and image registration is performed to obtain the motion parameters, as shown in figs. 20A-20F. Each of figs. 20A-20F compares two motion curves, one for non-gated and one for gated list-mode data: translation along the Y-axis (fig. 20A), along the X-axis (fig. 20B), and along the Z-axis (fig. 20C); and rotation about the X-axis (fig. 20D), the Y-axis (fig. 20E), and the Z-axis (fig. 20F).
Fig. 21 is a schematic diagram of head images reconstructed based on raw data, according to some embodiments of the present application. As shown in fig. 21, the list-mode data is corrected based on the motion information. At the top of fig. 21 are three uncorrected head images. In the middle are two sets of corrected reconstructed images (TOF): on the left, images corrected based on motion information; on the right, images corrected based on the motion information obtained in the present application. At the bottom are two sets of corrected images (non-TOF), arranged in the same way.
FIG. 22 is a comparative schematic illustration of the translation along the X-axis acquired by different methods, according to some embodiments of the present application. As shown, the solid line is the translation of the scanned object along the X-axis acquired using the method disclosed herein; the dashed line is the translation acquired by image registration after image reconstruction.
FIG. 23 is an exemplary flow diagram illustrating motion detection based on translation and rotation information according to some embodiments of the present application. As shown in the figure, motion can be detected based on the gradient values ΔE(x) of the translation information and the gradient values ΔE(xy) of the rotation information. The arrows indicate the position of the scanned object after rigid body motion, e.g., translation and/or rotation.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only, and is not intended to limit the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
A computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable signal medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as requiring more features than are expressly recited in the claims. Indeed, the claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.
Claims (10)
1. An image reconstruction method, comprising:
acquiring data of a scanned object, wherein the data comprises time information and/or space information of an event;
dividing the data into a plurality of data segments according to the time information and/or the space information;
acquiring statistical information of data in the data segment;
acquiring motion information of the scanning object according to the statistical information of the data in the data segment;
and acquiring a reconstructed image after motion correction according to the motion information of the scanning object.
2. The method of claim 1, wherein the data of the scanned object comprises data in a list mode and/or data in a chord graph mode.
3. The method of claim 1, wherein the motion information of the scanned object comprises translation and/or rotation information.
4. The method of claim 1, wherein the statistical information of the data in the data segment includes one or more of expectation, second moment, variance, and covariance information.
5. The method of claim 1, wherein the obtaining motion information of the scanned object according to the statistical information of the data in the data segments comprises: and acquiring the motion information of the scanning object according to the statistical information of the reconstruction space corresponding to the data in the data segment.
6. The method of claim 5, wherein acquiring the statistical information of the reconstruction space corresponding to the data in the data segments comprises:
acquiring statistical information of a data space corresponding to the data in the data segments;
establishing a conversion relationship between the data space and the reconstruction space;
and acquiring the statistical information of the reconstruction space corresponding to the data in the data segments according to the statistical information of the data space corresponding to the data in the data segments and the conversion relationship between the data space and the reconstruction space.
7. The method of claim 1, wherein acquiring the motion-corrected reconstructed image according to the motion information of the scanned object comprises:
binning the data of the scanned object according to the motion information of the scanned object;
and performing image reconstruction on the binned data to obtain a reconstructed image set.
8. The method of claim 7, wherein performing image reconstruction on the binned data to obtain the reconstructed image set comprises: performing motion correction on the binned data according to the motion information of the scanned object; and obtaining a motion-corrected reconstructed image set according to the motion-corrected binned data.
9. The method of claim 7, further comprising:
acquiring an attenuation map of the scanned object;
performing motion correction on the attenuation map according to the motion information of the scanned object to obtain an attenuation map reflecting the real-time motion of the scanned object;
and performing attenuation correction on the data and/or the reconstructed image of the scanned object according to the attenuation map reflecting the real-time motion of the scanned object.
10. The method of claim 7, wherein acquiring the motion-corrected reconstructed image according to the motion information of the scanned object further comprises: merging the reconstructed image set to obtain a final reconstructed image.
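For illustration purposes only, and without limiting the claims, the following minimal Python sketch shows one possible reading of claims 1 and 4 to 6: list-mode events are divided into temporal data segments, the expectation (first moment) and covariance (a second-moment statistic) of the event positions serve as the statistical information of each segment, and rigid motion (rotation and translation) is estimated by comparing those statistics between segments. The event layout (t, x, y, z), the synthetic data, and all names such as `segment_events` and `estimate_rigid_motion` are assumptions of this sketch, not taken from the present application.

```python
import numpy as np

def segment_events(events, t_edges):
    """Split list-mode events (t, x, y, z) into temporal data segments."""
    segments = []
    for t0, t1 in zip(t_edges[:-1], t_edges[1:]):
        mask = (events[:, 0] >= t0) & (events[:, 0] < t1)
        segments.append(events[mask, 1:])  # keep only spatial coordinates
    return segments

def principal_axes(cov):
    """Eigenvectors of a covariance matrix with a fixed sign convention."""
    _, vecs = np.linalg.eigh(cov)
    for k in range(vecs.shape[1]):
        j = np.argmax(np.abs(vecs[:, k]))
        if vecs[j, k] < 0:
            vecs[:, k] *= -1  # resolve the eigenvector sign ambiguity
    return vecs

def estimate_rigid_motion(ref_points, seg_points):
    """Rotation/translation of a segment relative to a reference segment,
    derived only from first and second moments of the event positions."""
    mean_ref, mean_seg = ref_points.mean(axis=0), seg_points.mean(axis=0)
    axes_ref = principal_axes(np.cov(ref_points, rowvar=False))
    axes_seg = principal_axes(np.cov(seg_points, rowvar=False))
    rotation = axes_ref @ axes_seg.T               # align covariance ellipsoids
    translation = mean_ref - rotation @ mean_seg   # centroid shift
    return rotation, translation

# Synthetic check: an anisotropic point cloud drifting 0.05 mm/s along x.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 60.0, (20000, 1))
xyz = rng.normal(0.0, 1.0, (20000, 3)) * np.array([20.0, 10.0, 5.0])
xyz[:, 0] += 0.05 * t[:, 0]
events = np.hstack([t, xyz])
segments = segment_events(events, np.linspace(0.0, 60.0, 7))
R, d = estimate_rigid_motion(segments[0], segments[-1])
print(np.round(d, 1))  # ~[-2.5, 0, 0]: the cloud moved about +2.5 mm in x
```

Aligning the principal axes of the two covariance ellipsoids only behaves well when the eigenvalues are clearly separated; a practical implementation would need to regularize or constrain this step.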
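Claims 7, 8, and 10 recite binning the data according to the motion information, motion-correcting and reconstructing each bin, and merging the resulting image set. The sketch below illustrates that pipeline under the same caveats: the 2-D histogram is a deliberate placeholder for a real reconstruction algorithm (for example an iterative maximum-likelihood method), and the bin count, the motion signal, and the helper names are hypothetical.

```python
import numpy as np

def bin_events_by_motion(points, motion_signal, n_bins):
    """Gate events into amplitude bins of a motion signal (e.g. respiration)."""
    edges = np.quantile(motion_signal, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, motion_signal, side="right") - 1,
                  0, n_bins - 1)
    return [points[idx == b] for b in range(n_bins)]

def reconstruct(points, grid=64, extent=50.0):
    """Placeholder reconstruction: a 2D event histogram. A real scanner
    would run an iterative algorithm such as ML-EM at this step instead."""
    img, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                               bins=grid, range=[[-extent, extent]] * 2)
    return img

def motion_corrected_reconstruction(bins, translations):
    """Correct each bin by its own translation, reconstruct it, and merge
    the resulting image set into a single final image."""
    images = [reconstruct(pts - shift) for pts, shift in zip(bins, translations)]
    return np.sum(images, axis=0)

# Usage: events blurred by a breathing-like x displacement of +/- 5 mm.
rng = np.random.default_rng(1)
drift = 5.0 * np.sin(rng.uniform(0.0, 2.0 * np.pi, 50000))
pts = rng.normal(0.0, 8.0, (50000, 2))
pts[:, 0] += drift
bins = bin_events_by_motion(pts, drift, n_bins=4)
shifts = [np.array([b[:, 0].mean(), 0.0]) for b in bins]  # per-bin x offset
final_image = motion_corrected_reconstruction(bins, shifts)
```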
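Claim 9 additionally moves the attenuation map with the estimated motion before attenuation correction. A minimal sketch, assuming a voxelized attenuation map, per-bin translations expressed in the same units as the voxel size, and SciPy's `ndimage.shift` for interpolation; the rotation component of the motion is omitted here for brevity.

```python
import numpy as np
from scipy.ndimage import shift as translate_image

def per_bin_attenuation_maps(mu_map, bin_translations, voxel_size=1.0):
    """Translate a static attenuation map by each bin's motion so that
    attenuation correction matches the object's position in that bin."""
    return [translate_image(mu_map, np.asarray(t) / voxel_size, order=1)
            for t in bin_translations]

# Usage: a toy 2D attenuation map shifted by two hypothetical bin offsets.
mu = np.zeros((64, 64))
mu[24:40, 24:40] = 0.0096  # water-like attenuation block (illustrative value)
maps = per_bin_attenuation_maps(mu, [np.array([2.5, 0.0]),
                                     np.array([-2.5, 0.0])])
```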
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610617163.3A CN106251380B (en) | 2016-07-29 | 2016-07-29 | Image reconstruction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610617163.3A CN106251380B (en) | 2016-07-29 | 2016-07-29 | Image reconstruction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106251380A (en) | 2016-12-21 |
CN106251380B (en) | 2022-07-15 |
Family
ID=57606100
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610617163.3A (granted as CN106251380B, status Active) | 2016-07-29 | 2016-07-29 | Image reconstruction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106251380B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060034545A1 (en) * | 2001-03-08 | 2006-02-16 | Universite Joseph Fourier | Quantitative analysis, visualization and movement correction in dynamic processes |
CN101017196A (en) * | 2006-02-06 | 2007-08-15 | 株式会社东芝 | MRI device and method |
CN102781331A (en) * | 2009-12-10 | 2012-11-14 | 皇家飞利浦电子股份有限公司 | Method and apparatus for using time of flight information to detect and correct for motion in imaging scans |
CN102005031A (en) * | 2010-11-03 | 2011-04-06 | 宁波鑫高益磁材有限公司 | Method and device for eliminating motion artifact of k-space sampled data in MRI system |
CN103534730A (en) * | 2011-05-12 | 2014-01-22 | 皇家飞利浦有限公司 | List mode dynamic image reconstruction |
CN103054605A (en) * | 2012-12-25 | 2013-04-24 | 沈阳东软医疗系统有限公司 | Attenuation correction method and system |
CN104183012A (en) * | 2013-10-31 | 2014-12-03 | 上海联影医疗科技有限公司 | PET three-dimensional image reconstruction method and device |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107133996A (en) * | 2017-03-21 | 2017-09-05 | 上海联影医疗科技有限公司 | Method for generating an attenuation map for PET data reconstruction and PET/CT system |
CN107133996B (en) * | 2017-03-21 | 2020-08-04 | 上海联影医疗科技有限公司 | Method for generating an attenuation map for PET data reconstruction and PET/CT system |
CN107016661A (en) * | 2017-06-19 | 2017-08-04 | 上海联影医疗科技有限公司 | Medical imaging method and device |
CN107016661B (en) * | 2017-06-19 | 2019-10-15 | 上海联影医疗科技有限公司 | Medical imaging method and device |
CN109077745A (en) * | 2017-07-31 | 2018-12-25 | 上海联影医疗科技有限公司 | System and method for motion signal correction |
CN107610083A (en) * | 2017-09-26 | 2018-01-19 | 上海联影医疗科技有限公司 | Data processing method and equipment and medical image acquisition method |
CN107610083B (en) * | 2017-09-26 | 2020-12-04 | 上海联影医疗科技股份有限公司 | Data processing method and equipment and medical image acquisition method |
CN108876730A (en) * | 2018-05-24 | 2018-11-23 | 沈阳东软医疗系统有限公司 | Method, device and equipment for correcting motion artifact and storage medium |
CN108876730B (en) * | 2018-05-24 | 2022-03-04 | 东软医疗系统股份有限公司 | Method, device and equipment for correcting motion artifact and storage medium |
CN109544657A (en) * | 2018-12-05 | 2019-03-29 | 上海联影医疗科技有限公司 | Medical image iterative reconstruction method, device, computer equipment and storage medium |
CN109544657B (en) * | 2018-12-05 | 2023-11-28 | 上海联影医疗科技股份有限公司 | Medical image iterative reconstruction method, device, computer equipment and storage medium |
CN110009709A (en) * | 2019-05-08 | 2019-07-12 | 上海联影医疗科技有限公司 | Medical image imaging method and system |
US11567156B2 (en) | 2019-05-08 | 2023-01-31 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for magnetic resonance imaging |
US11707235B2 (en) | 2019-05-08 | 2023-07-25 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for controlling imaging |
US12042669B2 (en) | 2019-05-08 | 2024-07-23 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for motion tracking in radiation therapy |
US12115006B2 (en) | 2019-05-08 | 2024-10-15 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for controlling imaging |
CN110187356A (en) * | 2019-06-14 | 2019-08-30 | 中国科学技术大学 | Remote super-resolution single photon imaging reconstruction method |
CN110187356B (en) * | 2019-06-14 | 2021-07-09 | 中国科学技术大学 | Remote super-resolution single photon imaging reconstruction method |
CN110443866A (en) * | 2019-07-31 | 2019-11-12 | 东软医疗系统股份有限公司 | Image reconstruction method, device, terminal device and PET system |
Also Published As
Publication number | Publication date |
---|---|
CN106251380B (en) | 2022-07-15 |
Similar Documents
Publication | Title |
---|---|
CN106251380B (en) | Image reconstruction method |
CN106251381B (en) | Image reconstruction method |
US11557067B2 (en) | System and method for reconstructing ECT image |
CN109035355B (en) | System and method for PET image reconstruction |
CN109009200B (en) | System and method for positron emission tomography image reconstruction |
JP6440230B2 (en) | Multi-modality imaging system and method |
CN111540025B (en) | Predicting images for image processing |
US10529130B2 (en) | Methods and systems for emission computed tomography image reconstruction |
CN108209954B (en) | Emission computed tomography image reconstruction method and system |
US10695009B2 (en) | System and method for detecting organ motion |
CN108742679B (en) | Nodule detection apparatus and method |
CN106691487A (en) | Imaging method and imaging system |
US20110275933A1 (en) | Motion compensation in nuclear imaging |
JP7122301B2 (en) | Noise reduction in image data |
CN113674377B (en) | System and method for positron emission tomography image reconstruction |
US20230045406A1 (en) | System and method for hybrid imaging |
He et al. | Initial Investigation of Geometric Sensitivity by GATE for Oscillation Motion Correction in 3D PET |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | Address after: 201807 No. 2258 Chengbei Road, Jiading Industrial Zone, Jiading District, Shanghai. Applicant after: Shanghai Lianying Medical Technology Co.,Ltd. Address before: 201807 No. 2258 Chengbei Road, Jiading Industrial Zone, Jiading District, Shanghai. Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd. |
GR01 | Patent grant | |