WO2018023380A1 - Image reconstruction method and system (图像重建方法及系统) - Google Patents

Image reconstruction method and system

Info

Publication number
WO2018023380A1
WO2018023380A1 (PCT/CN2016/092881; CN2016092881W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
image matrix
voxel
region
Prior art date
Application number
PCT/CN2016/092881
Other languages
English (en)
French (fr)
Inventor
吕杨
丁喻
Original Assignee
Shanghai United Imaging Healthcare Co., Ltd. (上海联影医疗科技有限公司)
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co., Ltd. (上海联影医疗科技有限公司)
Priority to PCT/CN2016/092881 (WO2018023380A1)
Priority to US15/394,633 (US10347014B2)
Publication of WO2018023380A1
Priority to US16/448,052 (US11308662B2)
Priority to US17/659,660 (US11869120B2)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02: Arrangements for diagnosis sequentially in different planes; stereoscopic radiation diagnosis
    • A61B6/03: Computed tomography [CT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The present application relates to a method and system for image reconstruction, and in particular to multi-resolution reconstruction of medical images.
  • An ultra-long axial positron emission tomography (PET) system (which can be assembled from several short-axis PET modules) has an ultra-long axial field of view and can image multiple body parts, or even the whole body, in a single bed scan.
  • Image reconstruction is a key technology in PET research. Although mature PET image reconstruction methods exist, such as regional reconstruction of spatial distribution functions, two problems remain in the image reconstruction process for ultra-long axial PET systems: how to reconstruct different body parts with different reconstruction parameters in a single pass, and how to reduce the amount of computation during reconstruction. Therefore, a new image reconstruction method and system is needed to solve the above problems.
  • Some embodiments of the present application provide an image reconstruction method.
  • The method includes one or more of the following operations: determining a first region of the object and setting a size of the first voxel corresponding to the first region; determining a second region of the object and setting a size of the second voxel corresponding to the second region; acquiring scan data of the object; and reconstructing the first region image based on the scan data.
  • The reconstruction of the first region image may include a forward projection of the first voxel and the second voxel, and a back projection of the first voxel.
  • The second region image is reconstructed according to the scan data, which includes a forward projection of the first voxel and the second voxel, and a back projection of the second voxel.
  • the second area and the first area may be spatially continuous.
  • the second area and the first area may be discontinuous in space.
  • reconstructing the first area image may include performing a first filtering process on the first area image.
  • Reconstructing the second region image may include performing a second filtering process on the second region image.
  • reconstructing the first area image may include iteratively reconstructing the first area according to the scan data.
  • Reconstructing the second region image may include iteratively reconstructing the second region image based on the scan data.
  • the number of iterations for reconstructing the image of the first region may be different from the number of iterations for reconstructing the image of the second region.
  • the number of iterations for reconstructing the image of the first region may be more than the number of iterations for reconstructing the image of the second region.
  • Iteratively reconstructing the first region image or the second region image may be based on the ordered subset expectation maximization (OSEM) method.
  • The forward projection of the first voxel and the second voxel may include forward projecting the first voxel and the second voxel along a line of response.
  • the method may further comprise correcting the first area image and the second area image.
  • the method may further comprise acquiring structural information of the object.
  • the method can further include determining the first region and the second region based on the structural information.
  • the method may further comprise determining a first image matrix and storing the first voxel in the first image matrix.
  • Reconstructing the first region image can include reconstructing the first image matrix.
  • the method can further include determining a second image matrix and storing the second voxel in the second image matrix.
  • Reconstructing the second region image can include reconstructing the second image matrix.
  • the method may further comprise generating a lookup table.
  • The lookup table may record a correspondence between the first image matrix and the first voxel, and/or a correspondence between the second image matrix and the second voxel, and the like.
  • the correspondence between the first image matrix and the first voxel may include rearranging the first voxel and storing the first voxel in the first image matrix.
  • the correspondence between the first image matrix and the first voxel may include storing the first voxel in the first image matrix after compressing and rearranging the first voxel.
  • the method may further comprise generating a merge matrix.
  • the voxel size corresponding to the merged matrix may be the smaller of the first voxel and the second voxel.
  • The method can further include populating the merge matrix with the first image matrix and the second image matrix, respectively, to generate a final image matrix corresponding to the final image.
  • an image reconstruction method can include determining an image matrix.
  • The image matrix can correspond to a scan area and to a certain number of voxels.
  • the method can further include dividing the image matrix into a plurality of sub-image matrices.
  • At least one of the plurality of sub-image matrices may correspond to a sub-scanning area of the scan area and include a portion of the voxels.
  • the method can further include transforming at least one of the sub-image matrices to generate at least one transform matrix.
  • the method can further include reconstructing the at least one sub-image matrix based on the transformation matrix.
  • the method can further include reconstructing the image matrix based on the reconstructed at least one sub-image matrix.
  • the transforming may include translating at least some of the elements in the sub-image matrix, and the like.
  • Transforming can include compressing or rearranging sub-image matrices and the like.
  • the method may further comprise establishing a lookup table of the image matrix and the sub-image matrix.
  • the lookup table can record the manner in which the sub-image matrix is compressed or rearranged.
  • Transforming can include decompressing the sub-image matrix according to the lookup table.
  • an image reconstruction system can include an imaging device.
  • the imaging device can be configured to generate scan data of the object.
  • the system can further include an image processor.
  • the image processor can include a receiving module.
  • The receiving module may be configured to acquire a first region of the object and a size of the first voxel corresponding to the first region.
  • The receiving module may be further configured to acquire a second region of the object and a size of the second voxel corresponding to the second region.
  • the image processor can further include a reconstruction module.
  • the reconstruction module can be configured to reconstruct the first region image.
  • The reconstruction of the first region image may include a forward projection of the first voxel and the second voxel, and a back projection of the first voxel.
  • The reconstruction module is further configured to reconstruct the second region image, which includes a forward projection of the first voxel and the second voxel, and a back projection of the second voxel.
  • The system includes a post-processing module configured to post-process the first region image and the second region image.
  • This post-processing may include filtering processing, noise reduction processing, merging processing, or division processing, and the like.
  • the reconstruction module may further include an image matrix generating unit.
  • the image matrix generation unit may be configured to determine a first image matrix.
  • the first voxel can be stored in the first image matrix.
  • Reconstructing the first region image can include reconstructing the first image matrix.
  • the image matrix generation unit may be configured to determine a second image matrix.
  • the reconstruction module may further comprise an image matrix processing unit.
  • The image matrix processing unit may be configured to rotate, compress and decompress, rearrange and inversely rearrange, fill, decompose, and merge at least one of the first image matrix and the second image matrix.
  • the image matrix processing unit may further include a lookup table generating unit.
  • the lookup table generation unit can be configured to generate a lookup table.
  • the lookup table records a correspondence between the first image matrix and the first voxel, and/or a correspondence between the second image matrix and the second voxel, and the like.
  • the correspondence between the first image matrix and the first voxel may include storing the first voxel in the first image matrix after compressing or rearranging the first voxel.
  • the post-processing module may include a merging unit.
  • the merging unit may be configured to generate a merging matrix, and fill the first image matrix and the second image matrix in the merging matrix, respectively, to generate a final image matrix corresponding to the final image.
  • the voxel size corresponding to the merged matrix may be the smaller of the first voxel and the second voxel.
  • an image reconstruction system can include an image matrix generation unit.
  • the image matrix generation unit may be configured to generate an image matrix.
  • the image matrix can correspond to a scan area and contains a certain number of voxels.
  • the system can further include an image matrix processing unit.
  • The image matrix processing unit may be configured to divide the image matrix into a plurality of sub-image matrices, compress at least one sub-image matrix of the plurality of sub-image matrices, reconstruct the at least one sub-image matrix based on the compressed sub-image matrix, and reconstruct the image matrix based on the reconstructed at least one sub-image matrix.
  • FIG. 1 is a schematic diagram of a multi-resolution image reconstruction and storage system, in accordance with some embodiments of the present application.
  • FIG. 2 is a schematic diagram of a processor shown in accordance with some embodiments of the present application.
  • FIG. 3 is a schematic diagram of a reconstruction module shown in accordance with some embodiments of the present application.
  • FIG. 4 is a flow diagram of multi-resolution image reconstruction shown in accordance with some embodiments of the present application.
  • FIG. 5 is a schematic diagram of a post-processing module shown in accordance with some embodiments of the present application.
  • FIGS. 6-A and 6-B are flowcharts of post-processing shown in accordance with some embodiments of the present application.
  • FIG. 7 is a schematic diagram of a voxel correspondence matrix shown in accordance with some embodiments of the present application.
  • FIG. 8 is a schematic illustration of module pairing in accordance with some embodiments of the present application.
  • FIG. 9 is a schematic diagram of an image matrix processing unit shown in accordance with some embodiments of the present application.
  • FIG. 10 is a schematic diagram of image matrix processing illustrated in accordance with some embodiments of the present application.
  • FIG. 11 is a flow diagram of image matrix reconstruction shown in accordance with some embodiments of the present application.
  • FIG. 12 is a flow diagram of image matrix processing illustrated in accordance with some embodiments of the present application.
  • the "scanning area” represents the actual area in which the scanning is performed, corresponding to the image matrix, which represents the actual area corresponding to the reconstruction of the image matrix. Unless the context clearly indicates an exception, in this application "scanned area”, “reconstructed area”, “actual area” may mean the same meaning and may be replaced.
  • the "element” represents the smallest component in the image matrix
  • the "voxel” represents the smallest component in the actual region. Unless the context clearly indicates an exception, the "element” in the image matrix and the “voxel” in the actual region corresponding to the image matrix in this application may mean the same meaning and may be replaced.
  • The multi-resolution image reconstruction and storage method described herein includes reconstructing and storing object images at different resolutions (i.e., different voxel sizes) in different regions of the object.
  • an aspect of the present application relates to a multi-resolution image reconstruction and storage system.
  • the multi-resolution image reconstruction and storage system can include a receiving module, a storage module, a reconstruction module, a post-processing module, and a display module.
  • Another aspect of the present application relates to an image matrix processing method that can be applied in the multi-resolution image reconstruction and storage system.
  • The image matrix processing method may include compressing and decompressing, rearranging and inversely rearranging the image matrix.
  • Embodiments of the present application can be applied to different image processing systems.
  • Different image processing systems may include positron emission tomography (PET) systems, computed tomography-positron emission tomography hybrid systems (CT-PET systems), magnetic resonance-positron emission tomography hybrid systems (MR-PET systems), and the like.
  • System 100 can include an image processor 120 (referred to simply as processor 120), a network 130, and an imaging device 110.
  • The processor 120 performs multi-resolution image reconstruction and storage on the collected information (e.g., data). The system can be a physical electronic device or a server.
  • the electronic device may include a portable computer, a tablet, a mobile phone, a smart terminal device, and the like.
  • the processor 120 can be centralized, such as a data center; or it can be distributed, such as a distributed system.
  • Processor 120 can be local or remote.
  • the information may be image information of one or more objects obtained by scanning or otherwise.
  • The processor 120 may include a combination of one or more of a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a processor, a microprocessor, a controller, a microcontroller, and the like.
  • Network 130 can be a single network or a combination of multiple different networks.
  • The network 130 may be a local area network (LAN), a wide area network (WAN), a public network, a private network, a public switched telephone network (PSTN), the Internet, a wireless network, a virtual network, or any combination of the above.
  • Network 130 may also include multiple network access points.
  • A wired network may include one or a combination of a metal cable, a hybrid cable, one or more interfaces, and the like.
  • The wireless network may include one or a combination of Bluetooth, a local area network (LAN), a wide area network (WAN), a wireless personal area network (WPAN), near-field communication (NFC), and the like.
  • The network 130 described above is suitable for use within the scope of the present application, but is not limited to this description.
  • Imaging device 110 may include one or more devices that scan one or more targets. Further, the devices for scanning may be used in, but not limited to, applications in the medical field, such as medical testing.
  • Medical testing may include magnetic resonance imaging (MRI), X-ray computed tomography (X-ray CT), positron emission computed tomography (PET), single-photon emission computed tomography (SPECT), or a combination of one or more of the above medical tests.
  • the target can be a combination of one or more of an organ, a body, an object, a dysfunction, a tumor, and the like.
  • The target can be one of, or a combination of, a head, a chest, an organ, a bone, a blood vessel, and the like.
  • The imaging device 110 can be assembled from one or more imaging modules. Further, the detectors of the one or more imaging modules may be placed continuously around the target.
  • The imaging device 110 and the processor 120 may be integrated. In some embodiments, the imaging device 110 can transmit information to the processor 120 over the network 130. In some embodiments, the imaging device 110 may also send information directly to the processor 120. In some embodiments, the processor 120 may also process information stored in itself.
  • Processor 120 may include one or more receiving modules 210, one or more reconstruction modules 220, one or more post-processing modules 230, one or more display modules 240, and one or more storage modules 250.
  • the receiving module 210 can collect the required information in one or more ways.
  • The manner of collecting information may include scanning an object (for example, acquiring information of the object via the imaging device 110), collecting pre-stored information (for example, reading information from the storage module 250, or obtaining remote information through the network 130), and the like.
  • the types of information may include voxel data, counts, matrices, images, vectors, vector libraries, and the like.
  • The reconstruction module 220 can reconstruct the information collected by the receiving module 210. Reconstructing the information may include generating an image matrix corresponding to the scanned object as a whole, or to one or more portions of the scanned object, based on the collected information. In some embodiments, the reconstruction of the information can include determining one or more scan regions and one or more voxels corresponding to the one or more scan regions, respectively. The one or more voxels may correspond to one or more elements in one or more image matrices. The one or more image matrices may be iteratively reconstructed based on the collected information. In some embodiments, the iterative reconstruction may include performing one or more forward projection and back projection operations on the image matrix. In some embodiments, the reconstruction of the information may also include removing portions of the information to improve the computational and storage efficiency of the system. In some embodiments, the information can be converted into image-matrix form, which can involve compression and/or rearrangement of the image matrix.
  • the post-processing module 230 may perform a post-processing operation on the reconstructed information generated by the reconstruction module.
  • The post-processing operation can include post-processing the iteratively reconstructed matrix based on the one or more voxels to produce an image of the scanned object as a whole, or of one or more portions of the scanned object, or a matrix corresponding to such an image.
  • The post-processing may include filtering, noise reduction, merging, division, and the like of the iteratively reconstructed matrix.
  • the display module 240 can display images generated by the post-processing module.
  • display module 240 can include a display device such as a display screen or the like.
  • The display module 240 can render, scale, rotate, or apply maximum density projection to the image, etc., as needed prior to displaying the final image.
  • display module 240 can further include one or more input devices such as a keyboard, touch screen, touch pad, mouse, remote control, and the like.
  • a user may input some raw parameters and/or set initialization conditions for corresponding image display and/or processing through the one or more input devices.
  • The user may perform settings and/or operations on the images displayed by the display module 240, such as displaying a two-dimensional image, displaying a three-dimensional image, displaying an image corresponding to the scan data, displaying a control interface, displaying an input interface, displaying images of different regions, displaying the process of image reconstruction, displaying the result of image reconstruction, receiving user input, enlarging or reducing a displayed image, displaying a plurality of images simultaneously, and the like, or a combination of such settings and/or operations.
  • the storage module 250 can store data.
  • The stored data may be from the imaging device 110, the network 130, and/or other modules/units in the processor 120 (the receiving module 210, the reconstruction module 220, the post-processing module 230, the display module 240, or other related modules (not shown)).
  • the storage module 250 may be a device that stores information by using an electric energy method, such as various memories, such as a random access memory (RAM), a read only memory (ROM), and the like.
  • The random access memory may include a decimal counter, a selectron tube, a delay line memory, a Williams tube, a dynamic random access memory (DRAM), a static random access memory (SRAM), a thyristor random access memory (T-RAM), a zero-capacitance random access memory, and the like.
  • The read-only memory may include a bubble memory, a magnetic button line memory, a thin-film memory, a magnetic plate line memory, a magnetic core memory, a drum memory, an optical disk drive, a hard disk, a magnetic tape, an early nonvolatile memory (NVRAM), a phase-change memory, a magnetoresistive random access memory, a ferroelectric random access memory, a nonvolatile SRAM, a flash memory, an electronically erasable rewritable read-only memory, an erasable programmable read-only memory, a programmable read-only memory, a masked read-only memory, a floating-gate random access memory, a nano random access memory, a racetrack memory, a resistive random access memory, a programmable metallization cell, and the like, or a combination of one or more thereof.
  • the storage module 250 may be a device that stores information using magnetic energy, such as a hard disk, a floppy disk, a magnetic tape, a magnetic core memory, a magnetic bubble memory, a USB flash drive, a flash memory, or the like.
  • The storage module 250 may be a device that stores information optically, such as a CD, a DVD, or the like.
  • the storage module 250 may be a device that stores information by magneto-optical means, such as a magneto-optical disk or the like.
  • The access mode of the storage module 250 may be random access, sequential access, read-only access, or a combination thereof.
  • the storage module 250 can be a non-permanent memory, or a permanent memory.
  • The storage module 250 can be associated with one or more of the receiving module 210, the reconstruction module 220, the post-processing module 230, the display module 240, or other related modules (not shown). In some embodiments, the storage module 250 can selectively associate with one or more virtual storage resources over the network 130, such as cloud storage, a virtual private network, and/or other virtual storage resources.
  • the stored data may be in various forms of data, such as values, signals, images, related information of a given target, commands, algorithms, programs, and the like.
  • The above modules may be distinct modules in one system, or a single module may implement the functions of two or more of the modules described above.
  • storage module 250 can be included in any one or more of the modules.
  • the receiving module 210 and the display module 240 can be combined into one input/output module.
  • the reconstruction module 220 and the post-processing module 230 can be combined into one image generation module.
  • The reconstruction module 220 can include one or more parameter setting units 310, one or more region selection units 320, one or more image matrix generation units 340, one or more image matrix processing units 350, one or more computing units 360, and one or more allocation units 370.
  • the parameter setting unit 310 can perform parameter setting in the process of reconstruction.
  • the parameter may include one or a combination of two or more of the size of the reconstruction region, the location of the reconstruction region, the size of the voxel in the reconstruction region, the algorithm of the iteration, the number of iterations, or the termination condition.
  • the parameters may be obtained from storage module 250.
  • the user can make settings of the parameters through the receiving module 210 or the display module 240.
  • parameter setting unit 310 can store default values for one or more parameters that can be used when settings for the parameters are not available.
  • the region selection unit 320 can select an area in which reconstruction is to be performed.
  • the selection of the reconstruction region may include selecting a size and location of the reconstruction region.
  • the region selection unit 320 may obtain the settings of the reconstructed region size and location from the parameter setting unit 310.
  • The region selection unit 320 can store default regions for a plurality of scan sites, such as the cranial cavity, chest cavity, abdominal cavity, etc., which can be recalled or adjusted at any time.
  • The region selection unit 320 can be combined with the display module 240. Further, the user may select one or more regions for scanning and/or reconstruction in the image displayed by the display module 240, and the region selection unit 320 may scan and/or reconstruct the corresponding region after receiving the user's selection.
  • Image matrix generation unit 340 can generate one or more image matrices.
  • the one or more image matrices may correspond to one or more scan regions.
  • the image matrix and the scan area may be in one-to-one correspondence.
  • each element in the image matrix corresponds to the value of each voxel in the scan region.
  • the numerical values include one or more values such as an X-ray attenuation coefficient, a gamma ray attenuation coefficient, a hydrogen atom density, and a density of a voxel.
  • the value of the voxel corresponding to the element in the image matrix may be modified and/or updated in the reconstruction of the iteration.
  • the values of the voxels corresponding to the elements in the image matrix can be converted to grayscale or RGB chrominance of the image. Further, the image matrix may correspond to one image and/or be converted into one image.
  • Image matrix processing unit 350 may process the resulting image matrix.
  • The processing may include dividing one image matrix into a plurality of sub-image matrices, or performing one or a combination of operations on an image matrix such as rotating, compressing and decompressing, rearranging and inversely rearranging, filling, decomposing, and merging.
  • the rotation of the image matrix can include rotating the image matrix clockwise or counterclockwise.
  • the compression of the image matrix may include removing a portion of the elements in the image matrix.
  • The voxels corresponding to the removed elements are not traversed by any ray (e.g., lines of response in a PET system, or X-rays in a CT system) during image reconstruction.
  • The value of a removed element may be set to zero or another fixed value.
  • The removed elements may be selected by condition, such as having values less than a threshold or lying at certain locations in the matrix. Accordingly, decompression of the matrix may include adding elements back to portions of the image matrix.
  • Decompression of the matrix may include adding the elements that were removed during compression back to their original locations. In some embodiments, the values of the elements removed from and added back to the image matrix remain unchanged during compression and decompression.
  • rearranging the matrix may include translating a portion or all of the elements in the matrix from the first location to a second location in the image matrix.
  • rearrangement of a matrix can translate elements of a certain category or feature to a particular location. Accordingly, the inverse rearrangement of the image matrix can include translating some or all of the translated elements from the second position back to the first position. In some embodiments, the values of the elements that are rearranged or inversely rearranged in the image matrix remain unchanged.
  • The filling of the image matrix may include filling corresponding numerical values into certain empty elements of the image matrix according to certain rules or algorithms.
  • In a PET-related system, the filling may include filling the elements of the image matrix that correspond to the voxels through which a line of response (LOR) passes, according to the positions of those voxels.
  • The filling may be based on the counts of the detectors corresponding to the response line and the contribution of the voxels through which the response line passes to those counts (also referred to as sensitivity).
  • Decomposition of the image matrix may include decomposing the image matrix into a plurality of sub-image matrices.
  • the sub-image matrices may each cover a portion of the elements of the original image matrix.
  • The sub-image matrix may correspond to a scan area through which one or more lines of response pass. Similarly, a line of response can pass through an area corresponding to one or more sub-image matrices.
  • the merging of the image matrices may include merging a plurality of sub-image matrices into one image matrix. In some embodiments, a plurality of sub-image matrices after image matrix decomposition may be merged back into the image matrix.
  • The computing unit 360 may calculate the values of the elements in the image matrix as well as other values. In some embodiments, the computing unit 360 may calculate the values of the elements in the image matrix corresponding to the portions of the scanned object through which one or more response lines pass, according to the counts of the detectors corresponding to the one or more response lines.
  • computing unit 360 can include one primary computing node and one or more secondary computing nodes. In some embodiments, the one or more secondary computing nodes respectively calculate a sub-image matrix, and the sub-image matrix may correspond to one sub-scanning region. In some embodiments, the sub-scanning area can be formed by one or more detector scans.
  • the secondary computing node may calculate the value of the voxel in the sub-image matrix corresponding to the sub-scanning region according to the count of the detector corresponding to the sub-scanning region.
  • The primary computing node may combine and superimpose the values of the voxels of the sub-image matrices corresponding to the sub-scanning regions calculated by the secondary computing nodes. For example, if a voxel lies in a plurality of sub-image matrices, the primary computing node may add together the values of that voxel calculated by the respective secondary computing nodes (see the sketch below).
  • Allocation unit 370 can assign the computing tasks to different computing nodes of the computing unit, which can include one or more primary computing nodes and one or more secondary computing nodes.
  • the allocation unit 370 can pair or group the detectors and determine the size and location of the sub-scanning regions corresponding to the paired or grouped detectors.
  • the allocating unit 370 can allocate reconstruction and calculation tasks of the sub-image matrix corresponding to the sub-scanning region to the secondary computing node.
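  • As an illustration of this division of labor, the following Python sketch shows a hypothetical primary-node merge step: each secondary computing node returns the sub-image matrix it computed along with that matrix's offset inside the full image matrix, and the primary node adds overlapping voxel values together. All names and shapes are illustrative assumptions, not taken from the application.

```python
import numpy as np

def merge_subimage_results(full_shape, sub_results):
    """Hypothetical primary-node step: sum the sub-image matrices computed
    by the secondary nodes back into the full image matrix.

    sub_results: iterable of ((z0, y0, x0), sub_matrix) pairs, where the
    offset gives the sub-matrix origin inside the full matrix.
    """
    full = np.zeros(full_shape)
    for (z0, y0, x0), sub in sub_results:
        dz, dy, dx = sub.shape
        # A voxel covered by several sub-image matrices gets the values
        # from the corresponding sub-matrices added together.
        full[z0:z0 + dz, y0:y0 + dy, x0:x0 + dx] += sub
    return full
```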
  • image matrix generation unit 340 and image matrix processing unit 350 may be combined into one image matrix unit.
  • the multi-resolution image reconstruction may be implemented by processor 120.
  • processor 120 may first obtain structural information for an object in step 402.
  • the structural information refers to contour information or appearance information of the object.
  • step 402 can be implemented by receiving module 210.
  • the structural information can be obtained by scanning the object. Further, the structural information can be obtained by scanning by CT, MRI, PET, or the like. Alternatively, the structural information may also be obtained by other means.
  • Step 404 can include determining a size of the first region and its corresponding first voxel based on the structural information of the scanned object.
  • step 404 can be implemented by receiving module 210.
  • the first area may correspond to the entirety of the scanned object.
  • The value of the first voxel may be stored in the first image matrix M0 to form a first element.
  • the receiving module 210 may determine the second region and its corresponding second voxel size according to the structural information of the scanned object.
  • the second region may correspond to a portion of the scanned object.
  • The value of the second voxel may be stored in the second image matrix M1 to form a second element.
  • the second voxel is smaller than the first voxel.
  • the smaller the voxel the higher the resolution of the corresponding image.
  • the second region corresponds to a region that needs to be imaged at a high resolution.
  • the processor 120 can obtain scan information for an object.
  • the processor 120 can acquire the scan information through the imaging device 110.
  • the imaging device 110 may include a PET imaging device.
  • the scan information can be obtained from the storage module 250.
  • the scan information may also be obtained from a remote storage module (eg, a cloud disk) over the network 130.
  • The processor 120 may reconstruct the first image matrix M0 and the second image matrix M1 corresponding to the first region and the second region in steps 410 and 412, respectively.
  • The reconstruction of the first image matrix M0 and the second image matrix M1 may be through an iterative reconstruction algorithm.
  • The reconstruction of the first image matrix M0 and the second image matrix M1 may be implemented by ordered subset expectation maximization (OSEM):

    x_{jm}^{(n+1)} = \frac{x_{jm}^{(n)}}{\sum_i c_{ijm}} \sum_i c_{ijm} \, B(y_i, F) \quad (1)

    where the sums run over the response lines i in the current ordered subset, and:
  • i is the number of the response line (detector pair);
  • m is the number of the reconstructed image matrix;
  • j is the number of the element in matrix m, and x_{jm}^{(n)} is the value of element j in the reconstructed image matrix m at the n-th iteration;
  • y_i is the actual count measured on response line i;
  • F is the forward projection operator;
  • B(y_i, F) is the back projection operator.
  • The ordered subset expectation maximization method needs to perform, for example, forward projection of an image matrix (i.e., forward projection of the voxels corresponding to the elements in the image matrix), calculation of a correction coefficient, and back projection of the image matrix (i.e., back projection of the voxels corresponding to the elements in the image matrix).
  • The first image matrix M0 is reconstructed to obtain a first region image; the reconstruction of M0 may include forward projecting the first voxel and the second voxel, and back projecting the first voxel.
  • The second image matrix M1 is reconstructed to obtain a second region image; the reconstruction of M1 may include forward projecting the first voxel and the second voxel, and back projecting the second voxel.
  • the first voxel and the second voxel may be different in size.
  • The image matrix is forward projected to obtain the expected detector counts, where the forward projection operator can be expressed as:

    F = \sum_m \sum_k c_{ikm} \, x_{km}^{(n)} \quad (2)

  • where k runs over all elements of image matrix m associated with response line i,
  • and c_{ikm} is the sensitivity of response line i to element k in image matrix m.
  • different image matrices correspond to different sized voxels.
  • A line of response may pass through the first region (corresponding to the first voxel) and the second region (corresponding to the second voxel); according to formula (2), the forward projection of the image matrix then includes the forward projection of both the first voxel and the second voxel.
  • The correction coefficient is the ratio of the count measured on a response line to the forward projection of the reconstructed image along that response line, that is, y_i / F \quad (3).
  • The number of iterations required differs between image matrices. For example, an image matrix of the body may need two iterations, while an image matrix of the brain may need four.
  • Taking the preset per-matrix iteration counts into account, equation (1) can be written as:

    x_{jm}^{(n+1)} = \begin{cases} \frac{x_{jm}^{(n)}}{\sum_i c_{ijm}} \sum_i c_{ijm} \, B(y_i, F), & d(m) > n \\ x_{jm}^{(n)}, & d(m) \le n \end{cases} \quad (4)

  • where n is the sequence number of the current iteration and d(m) is the preset iteration count of image matrix m. If d(m) is greater than n, the image matrix is further iterated to update the image; if d(m) is less than or equal to n, the iteration of that image matrix stops, and the image corresponding to the current image matrix is obtained.
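  • To make the update concrete, the following is a minimal Python sketch of the OSEM update of equations (1)-(3) for a single image matrix, assuming a dense sensitivity matrix c with entries c_ij and a precomputed partition of the response lines into ordered subsets; the per-matrix iteration bookkeeping of equation (4) is omitted. This illustrates the standard algorithm, not the application's specific implementation.

```python
import numpy as np

def osem(y, c, subsets, n_iter, eps=1e-12):
    """Minimal OSEM sketch for one image matrix.

    y       : measured counts per response line, shape (I,)
    c       : sensitivity of response line i to element j, shape (I, J)
    subsets : list of index arrays partitioning the response lines
    n_iter  : number of full iterations, i.e. d(m) for this matrix
    """
    x = np.ones(c.shape[1])                 # initial image estimate
    for _ in range(n_iter):
        for s in subsets:
            cs = c[s]                       # sensitivities of this subset
            f = cs @ x                      # forward projection, eq. (2)
            r = y[s] / np.maximum(f, eps)   # correction coefficients, eq. (3)
            # back-project the corrections and normalize, eq. (1)
            x *= (cs.T @ r) / np.maximum(cs.sum(axis=0), eps)
    return x
```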
  • the processor 120 may convert the image matrix into a first region image and a second region image, respectively, according to values of elements in the image matrix.
  • the values of the elements in the image matrix can be represented as grayscale or RGB chrominance of the voxels in the image.
  • The processor 120 may perform post-processing operations on the first region image and the second region image according to the methods mentioned in other embodiments of the present application.
  • FIG. 5 is a schematic diagram of a post-processing module shown in accordance with some embodiments of the present application.
  • Post-processing module 230 may include one or more filter processing units 510, one or more partition units 520, one or more merge units 530.
  • the filtering processing unit 510 may perform filtering processing on data or images corresponding to the image matrix or the image matrix.
  • the filtering process may include one or a combination of Gaussian filtering, Metz filtering, Butterworth filtering, Hamming filtering, Hanning filtering, Parzen filtering, Ramp filtering, Shepp-logan filtering, and Wiener filtering.
  • Different scan regions or different portions of the scanned object may use different filtering processes. For example, Metz filtering can be used for brain scans, and Gaussian filtering can be used for body scans, as sketched below.
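  • As a small illustration, a body image might be post-filtered with a Gaussian kernel as below. This is a sketch using SciPy's generic gaussian_filter with an arbitrary sigma, not the application's specific filter parameters; a Metz filter has no off-the-shelf SciPy equivalent and is omitted here.

```python
from scipy.ndimage import gaussian_filter

def postfilter(image, region):
    """Hypothetical region-dependent post-filter."""
    if region == "body":
        # Gaussian smoothing; sigma is in voxels and purely illustrative.
        return gaussian_filter(image, sigma=1.5)
    # Other regions would get the filter chosen for them (e.g. Metz for brain).
    return image
```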
  • the dividing unit 520 may separately store the one or more image matrices into different matrices according to the size of the corresponding voxels of the filtered one or more image matrices.
  • the filtered image matrices placed in different matrices have the same or similar voxel sizes.
  • the merging unit 530 can merge the image matrices corresponding to the actual regions of different voxel sizes.
  • merging includes creating a merging matrix, the region corresponding to the merging matrix being the largest region corresponding to the image matrix to be merged.
  • the voxel size corresponding to the merged matrix is the minimum voxel size corresponding to the image matrix to be merged.
  • the smaller the voxel the higher the resolution.
  • the merging includes interpolating the image matrix to be merged.
  • The interpolation process may refer to predicting, by a specific algorithm, the values of voxels that have no value in the high-resolution image when a low-resolution image is converted into a high-resolution one.
  • The algorithms and processes may include one or a combination of several of bilinear interpolation, bicubic interpolation, fractal interpolation, natural neighbor interpolation, nearest neighbor interpolation, minimum curvature, local polynomial methods, and the like.
  • FIGS. 6-A and 6-B are flow diagrams of post-processing shown in accordance with some embodiments of the present application.
  • the post processing may be implemented by post processing module 230.
  • Filtering may first be performed in step 602, which may include one or a combination of Gaussian filtering, Metz filtering, Butterworth filtering, Hamming filtering, Hanning filtering, Parzen filtering, Ramp filtering, Shepp-Logan filtering, Wiener filtering, and the like.
  • different scanning regions or different portions of the scanned object may use different filtering processes. For example, Metz filtering can be used for brain scanning, and Gaussian filtering can be used for body scanning.
  • the image may be divided into different levels. For example, the image may be divided according to the voxel size corresponding to the image matrix.
  • the image matrix can be written into a Dicom file after being divided into different levels.
  • the Dicom file can record hierarchical information of the image, as well as image matrices and their corresponding voxel size information.
  • the voxel size information shown may also refer to hierarchical information, where a larger voxel represents a lower level.
  • In step 606, the image matrices corresponding to different image levels are merged.
  • the hierarchical information of the different images may be included in one of the Dicom files mentioned above.
  • the merging includes storing images of different levels in separate matrices and populating the images into a final image matrix according to the level of the image.
  • the step of merging can be as shown in Figure 6-B.
  • the post-processing module 230 can store images of different voxel sizes in different matrices to be merged according to the hierarchical information.
  • a certain actual area may correspond to a plurality of matrices to be merged, wherein a plurality of voxels corresponding to the matrix to be merged have different sizes.
  • the images corresponding to the two or more matrices to be merged may have overlapping regions or may not overlap each other.
  • post-processing module 230 can establish a merge matrix M.
  • the actual area corresponding to the merge matrix M is the largest actual area corresponding to the matrix to be merged.
  • the voxel size corresponding to the merge matrix M is the minimum voxel size corresponding to the matrix to be merged. In some embodiments, the smaller the voxel, the higher the resolution.
  • The post-processing module 230 may zero-fill the matrices to be merged whose actual regions are smaller than the largest actual region, to generate a final image matrix.
  • post-processing module 230 may interpolate the matrices to be merged with voxel sizes greater than the minimum voxel size. The interpolation process may refer to predicting a part of a voxel having no value in a high resolution image by some algorithm or process in the process of converting a low resolution image into a high resolution.
  • The algorithms and processes can include one or a combination of several of bilinear interpolation, bicubic interpolation, fractal interpolation, natural neighbor interpolation, nearest neighbor interpolation, minimum curvature, local polynomial methods, and the like.
  • the post-processing module 230 may merge the image matrix to be merged through the zero-filling process and the interpolation process into a final matrix M.
  • image matrices of different levels may be sequentially populated into the merge matrix M. For example, an image matrix with a lower level (eg, a larger voxel) may be filled first, followed by an image matrix with a higher level (eg, a smaller voxel).
  • the element values of the lower-level image matrix and the element values of the higher-level image matrix may be respectively filled into the final matrix.
  • The element values of the higher-level image matrix cover the element values of the lower-level image matrix; that is, in the overlapping image regions, the voxel values corresponding to the higher-level image matrix are the ones filled in, in accordance with the hierarchy. A sketch of this merge follows below.
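  • The following sketch illustrates this merge for two levels whose voxel sizes differ by a factor of two. The coarse matrix is upsampled to the fine grid (nearest-neighbor repetition stands in for the interpolation methods listed above), the lower level is filled in first, and the higher level then overwrites the overlapping region. Shapes, offsets, and the scale factor are illustrative assumptions.

```python
import numpy as np

def merge_levels(m0, m1, m1_offset, scale=2):
    """Merge a coarse matrix m0 and a fine matrix m1 into a final matrix
    that uses the smallest voxel size and covers the largest region.

    m1_offset: (z, y, x) origin of m1 inside the fine-voxel grid of m0.
    """
    # Upsample the coarse matrix to the fine voxel grid (nearest neighbor).
    merged = m0.repeat(scale, axis=0).repeat(scale, axis=1).repeat(scale, axis=2)
    # Fill the lower level first, then overwrite the overlap with the
    # higher level, matching the hierarchy rule described above.
    z0, y0, x0 = m1_offset
    dz, dy, dx = m1.shape
    merged[z0:z0 + dz, y0:y0 + dy, x0:x0 + dx] = m1
    return merged
```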
  • the lookup table can record the correspondence of image matrices and voxels.
  • M0 and M1 respectively represent two image matrices, and the voxels corresponding to M1 are smaller than the voxels corresponding to M0.
  • Region 730 can be simultaneously covered by the regions corresponding to M0 and M1.
  • The contribution of the corresponding voxel 740 in M0 to the count on response line i can be obtained by calculating the contributions of the 8 voxels 720 corresponding to M1 to the count on response line i, respectively.
  • the lookup table includes a correspondence between voxels of one or more image matrices.
  • The lookup table may record that voxel 740 of matrix M0, i.e. M0(X, Y, Z), corresponds to the 8 voxels 720 of M1: M1(X1, Y1, Z1), M1(X1, Y2, Z1), M1(X2, Y1, Z1), M1(X2, Y2, Z1), M1(X1, Y1, Z2), M1(X1, Y2, Z2), M1(X2, Y1, Z2), and M1(X2, Y2, Z2).
  • the correspondence of the different hierarchical image matrices in the lookup table is determined by the positional relationship of the image regions corresponding to the respective image matrices.
  • The lookup table may also include the positions, directions, and the like by which elements need to be translated when the image matrix is rearranged. For example, the correspondence between the compressed and/or rearranged voxels and the elements in the image matrix M0 can be recorded in the lookup table, as sketched below.
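  • A minimal sketch of such a lookup table, assuming M1's voxels are exactly half the size of M0's in each dimension so that each M0 voxel maps to 8 M1 voxels; the index layout is an illustrative assumption.

```python
def build_lookup_table(m0_shape, scale=2):
    """Map each (z, y, x) voxel of the coarse matrix M0 to the fine-matrix
    M1 voxels it covers (8 voxels when scale == 2)."""
    table = {}
    for z in range(m0_shape[0]):
        for y in range(m0_shape[1]):
            for x in range(m0_shape[2]):
                table[(z, y, x)] = [
                    (scale * z + dz, scale * y + dy, scale * x + dx)
                    for dz in range(scale)
                    for dy in range(scale)
                    for dx in range(scale)
                ]
    return table
```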
  • The imaging device 110 may include one or more imaging modules. Further, the detectors of the one or more imaging modules are placed continuously around the target. As an example, an imaging module here can correspond to a PET detector, with the positional relationship between the detectors as shown in FIG. 8. As shown in FIG. 8, the imaging device 110 can be composed of six imaging modules, which can be paired pairwise to form 21 module pairs (including module pair 810, module pair 820, and module pair 830 shown in FIG. 8).
  • Module pair 810 may represent the pairing of the sixth imaging module with itself, that is, the response line is received only by the detectors on the left and right sides of the sixth imaging module; module pair 820 may represent the pairing of the first imaging module and the sixth imaging module, that is, the response line is received by the detectors corresponding to the first imaging module and the sixth imaging module; module pair 830 may represent the pairing of the first imaging module and the fourth imaging module, that is, the response line is received by the detectors corresponding to the first imaging module and the fourth imaging module.
  • the calculation of each module pairing can be calculated by the secondary computing nodes described in other embodiments; the primary computing node can integrate and count the results of all of the secondary computing nodes.
  • The black line portions of the figure represent the elements of the image matrix that need to be modified in the corresponding module pair's calculation; the details are described below in connection with FIG. 10.
  • each module pairing can be matrix compressed and rearranged according to the elements of the image matrix that need to be modified to reduce the amount of storage and the amount of computation.
  • Module pair 810 can achieve matrix compression by retaining the black line portion located below and removing the remaining elements;
  • module pair 820 can achieve matrix compression by first translating and aggregating the black line portions together and then removing the elements other than the black line portions.
  • The image matrix processing unit 350 may include one or more image matrix compression sub-units 910, one or more image matrix rearrangement sub-units 920, one or more image matrix inverse rearrangement sub-units 930, one or more image matrix decompression sub-units 940, and one or more lookup table generation units 950.
  • the image matrix compression sub-unit 910 can compress the image matrix.
  • the compression of the image matrix may include removing a portion of the elements in the image matrix, and in some embodiments, the removed elements may be empty.
  • The empty elements referred to here may correspond to voxels that are not traversed by a response line, or voxels that contribute nothing to the detector counts during image reconstruction (e.g., forward projection, back projection) or part of that process.
  • the removed elements may be conditional, such as less than a threshold or at certain locations in a matrix, such as locations that do not affect image reconstruction and subsequent steps.
  • The image matrix rearrangement sub-unit 920 can translate some or all of the elements in the image matrix from a first position to a second position in the image matrix.
  • the element in the second position prior to translation will be removed after translation.
  • The translation may include exchanging the positions of some or all of the elements between the first position and the second position.
  • rearranging the matrix may include panning an element of a certain category or feature to a particular location.
  • rearranging the matrix may include translating and grouping together non-zero elements in the matrix.
  • the image matrix inverse rearrangement sub-unit 930 can translate some or all of the translated elements from the second position back to the first position. In some embodiments, the values of the elements that are rearranged or inversely rearranged in the image matrix may remain unchanged.
  • Image matrix decompression sub-unit 940 may add some elements to portions of the image matrix.
  • decompression of the matrix may include adding elements that were removed when the image matrix was compressed back to the original location of the element.
  • the values of the elements that are removed and added back to the image matrix in the matrix may not change during compression and decompression.
  • the lookup table generation unit 950 can generate a lookup table.
  • the lookup table may include locations and directions, etc., that need to be translated when the image matrix is rearranged.
  • the lookup table may include a conversion relationship between elements of one or more image matrices.
  • the lookup table may include image matrices of different levels as described in FIG. 7, and positional relationships of image regions corresponding to elements included in image matrices of different levels.
  • the lookup table generation unit 950 can be combined with the image matrix rearrangement subunit 920 into one subunit, which can implement the functions of the lookup table generation unit 950 and the image matrix rearrangement subunit 920 described above.
  • the image matrix 1010 corresponds to a scanning area that is jointly determined by the first imaging module 1011, the second imaging module 1012, the third imaging module 1013, and the fourth imaging module 1014.
  • the first imaging module 1011 can be paired with the fourth imaging module 1014.
  • the first imaging module 1011 and the fourth imaging module 1014 correspond to the first detector and the fourth detector, respectively.
  • A secondary computing node is assigned to the first imaging module and the fourth imaging module; this secondary computing node calculates the response lines receivable by the detectors of the first imaging module and the fourth imaging module, as shown in FIG. 10.
  • the shaded portion in the image matrix 1010 is an element that needs to be updated and calculated in the reconstruction after the first imaging module 1011 and the fourth imaging module 1014 are paired.
  • the values of the elements corresponding to other portions of the image matrix 1010 may not change during the reconstruction process.
  • The image matrix 1010 can be compressed into the image matrix 1020, i.e., the elements at the top and bottom of the image matrix 1010 whose values do not change in this reconstruction can be removed.
  • elements in the image matrix 1010 whose coordinates are located in Z1, Z2, Z3, Z18, Z19, Z20 may be removed to be compressed into an image matrix 1020.
  • image matrix 1020 can be rearranged and compressed into image matrix 1030. That is, elements in the image matrix 1020 that may change in value during reconstruction are translated and aggregated. More specifically, each T dimension in the image matrix 1020 can be translated, such as removing elements Z9, Z10, Z11, Z12 in T1 coordinates, and translating the remaining elements in T1 coordinates.
  • the location of the removed element and the position and orientation of the element translation can be obtained by querying a lookup table.
  • the image matrix 1010 (20x10) is compressed and rearranged into an image matrix 1030 (10x10) without affecting the reconstruction, thereby reducing the storage space and the amount of calculation.
  • In some embodiments, the compressed and rearranged image matrix is stored in the storage module 250.
  • The lookup table may record the information used to compress and rearrange the image matrix; this information may also be stored in the storage module 250.
  • In some embodiments, the image matrix reconstruction can be implemented by the reconstruction module 220.
  • As shown in FIG. 11, the reconstruction module may first determine the primary computing node and the secondary computing nodes in step 1102.
  • According to the descriptions in other embodiments of the present application, a secondary computing node can calculate a sub-image matrix.
  • The sub-image matrix corresponds to one sub-scanning area.
  • In some embodiments, the sub-scanning area is formed by one or more detectors.
  • In some embodiments, the secondary computing node may calculate the values of the elements of the sub-image matrix corresponding to the sub-scanning area according to the counts of the detectors corresponding to that sub-scanning area.
  • In some embodiments, the primary computing node may combine and integrate the computation results of the secondary computing nodes.
  • As shown in step 1104, image matrices can be assigned to the secondary computing nodes.
  • In some embodiments, each secondary computing node handles the calculation of the image matrix corresponding to one pair of paired imaging modules.
  • In step 1106 and step 1108, the image matrix corresponding to the paired imaging modules is compressed and rearranged.
  • For the methods of compression and rearrangement, refer to the descriptions in other embodiments of the present application. It is worth noting that the paired imaging modules corresponding to different secondary computing nodes may differ, so the required compression and rearrangement methods may also differ.
  • For example, the secondary computing node in FIG. 10 calculates the image matrix corresponding to the first imaging module 1011 and the fourth imaging module 1014; this image matrix needs to be compressed and rearranged, and the method of compression and rearrangement is determined by the shaded portion between the first imaging module 1011 and the fourth imaging module 1014.
  • In some embodiments, a secondary computing node may calculate the image matrix corresponding to the first imaging module 1011 alone (i.e., the region delimited by the first detector, which appears as a rectangle, not marked in FIG. 10); that secondary computing node only needs to compress the image matrix, i.e., it only calculates the values of the voxels corresponding to the region within the first imaging module 1011. A sketch of how module pairings might be distributed to secondary nodes follows.
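By way of illustration only, module pairings might be enumerated and assigned to secondary computing nodes as below; the round-robin policy and the node count are assumptions, not part of the disclosed method.

```python
from itertools import combinations_with_replacement

# Enumerate the pairings of 6 imaging modules, including self-pairings;
# this yields the 21 pairings of FIG. 8.
n_modules = 6
pairs = list(combinations_with_replacement(range(1, n_modules + 1), 2))
assert len(pairs) == 21

# Hypothetical round-robin distribution over 4 secondary computing nodes.
n_secondary = 4
assignment = {node: [] for node in range(n_secondary)}
for k, pair in enumerate(pairs):
    assignment[k % n_secondary].append(pair)

for node, pair_list in assignment.items():
    print(f"secondary node {node}: {pair_list}")
```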
  • In step 1110, the forward/back projection results under a single subset are calculated.
  • In some embodiments, the forward projection result refers to the counts of the detectors of a pair of paired imaging modules corresponding to the response lines, calculated from the image matrix being reconstructed.
  • In some embodiments, the back projection result refers to the values of the elements of the image matrix, calculated and reconstructed from the counts of the detectors of the pair of paired imaging modules corresponding to the response lines.
  • During forward/back projection, coordinate transformation can be performed on the rearranged image matrix through a lookup table.
  • In some embodiments, all of the projection data can be divided into groups, and one or more groups can form a subset; for example, the projection data can be grouped by projection direction, as in the sketch below.
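For illustration, such a grouping of projection angles into ordered subsets might look like the following; the angle count and subset count are assumptions.

```python
import numpy as np

# Interleaved grouping by projection direction: subset s gets angles
# s, s + 6, s + 12, ... so each subset spans the full angular range.
n_angles, n_subsets = 120, 6
angles = np.arange(n_angles)
subsets = [angles[s::n_subsets] for s in range(n_subsets)]
```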
  • In some embodiments, the image to be reconstructed contains image matrices of different levels; as described in FIG. 4, image matrices of different levels correspond to different voxel sizes. Since a response line can pass through the regions corresponding to the image matrices of one or more levels, the forward/back projection can, using the information about the different matrix levels recorded in a lookup table, calculate separately the contributions of one or more element sizes to that response line.
  • In some embodiments, the lookup table contains the conversion relationships between the elements of one or more image matrices. After the image matrix has been reconstructed, i.e., the values of its elements calculated, the reconstruction module 220 may inversely rearrange the image matrix in step 1112. The inverse rearrangement restores the rearranged image matrix to the image matrix corresponding to the actual image region.
  • In step 1114, it is determined whether the back projection results for all angles under a single subset have been accumulated, i.e., whether the back projection results for all angles between the paired imaging modules have been calculated and accumulated. If not, the matrix needs to be re-compressed, rearranged, back-projected, and inversely rearranged for the next angle (i.e., steps 1106-1112). If so, the reconstruction module can decompress the image matrix in step 1116. The size of the decompressed image matrix is the same as its size before compression.
  • In step 1118, the primary computing node accumulates the back projection results of all of the secondary computing nodes.
  • In some embodiments, the decompressed image matrices obtained at different angles in step 1116 may be of the same size, and the primary computing node may add, position by position, the element values of the decompressed image matrices at the different angles to obtain an accumulated image matrix.
  • After the accumulation is completed, in step 1120 the primary computing node may update the image matrix based on the accumulated results and process the next subset. Once the update is completed, one subset of the reconstruction of the image matrix is considered done.
  • In some embodiments, the reconstruction module 220 may perform the reconstruction of the next subset of the image matrix and update the image matrix based on the reconstruction results until all subsets have been traversed. If all subsets have been traversed, the subsequent steps are carried out. If other subsets remain, the flow returns to step 1110 to recalculate the forward/back projection results of the secondary nodes under a single subset.
  • According to the descriptions of other embodiments of the present application, the image matrix can be reconstructed by the ordered subset expectation maximization (OSEM) method. After traversing all of the above subsets, a reconstructed image matrix is obtained, completing one iteration.
  • In step 1124, it is determined whether the iteration stop condition is satisfied; if so, the reconstruction process ends. If not, the flow returns to step 1104 to start the next iteration and re-assign the image matrices to the secondary computing nodes.
  • The iteration stop condition can be related to the image matrix reconstructed in the current iteration, or it can be set manually.
  • In some embodiments, the stop condition may be that the difference between the image matrix reconstructed in this iteration and that of the previous iteration is smaller than a threshold, or that the image matrix reconstructed in this iteration directly satisfies certain conditions. In still other embodiments, the stop condition may be the completion of a certain number of iterations. A minimal sketch of such a check follows.
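The stop check might be sketched as follows; the tolerance and the maximum iteration count are placeholders, not values from the disclosure.

```python
import numpy as np

def should_stop(current, previous, iteration, tol=1e-4, max_iter=10):
    """Illustrative stop check: a preset iteration budget, or a relative
    change between successive image matrices below a threshold."""
    if iteration >= max_iter:
        return True
    change = np.linalg.norm(current - previous)
    scale = max(np.linalg.norm(previous), 1e-12)
    return change / scale < tol
```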
  • The image matrix processing shown in FIG. 12 can be implemented by the image matrix processing unit 350.
  • In some embodiments, the process 1200 can correspond to steps 1106 through 1114 shown in FIG. 11.
  • According to the descriptions in other embodiments of the present application, the imaging device may be assembled from one or more imaging modules.
  • For example, the detectors of the one or more imaging modules are placed continuously around the target.
  • In some embodiments, the calculation for each module pairing can be performed by the secondary computing nodes described in other embodiments; the primary computing node can integrate and tally the results of all of the secondary computing nodes.
  • In some embodiments, each pair of paired imaging modules can correspond to one image matrix.
  • In step 1202, the image matrix corresponding to the module pairing may be module-compressed by the image matrix processing unit 350 to form a third image matrix.
  • The third image matrix can be rotated in the forward direction in step 1204.
  • Based on the information about the paired imaging modules corresponding to the third image matrix, the image matrix processing unit 350 can calculate a reference layer position and an effective matrix range.
  • The reference layer position and the effective matrix range may indicate the positions and directions by which the elements of the third image matrix need to be translated in the subsequent rearrangement step.
  • In some embodiments, the image matrix processing unit 350 may store, in a lookup table, the positions and directions by which the respective elements need to be translated in the subsequent rearrangement step.
  • As shown in step 1210, the image matrix processing unit 350 may generate a fourth image matrix based on the lookup table and the third image matrix.
  • In some embodiments, the fourth image matrix can be obtained from the third image matrix by the matrix rearrangement method described in other embodiments of the present application.
  • The fourth image matrix is forward-projected in step 1212, generating a projection matrix.
  • Based on the result of the forward projection, the image matrix processing unit 350 can calculate a correction coefficient.
  • In some embodiments, the correction coefficient may be the ratio of the counts measured on a response line to the forward projection of the reconstructed image along that response line.
  • As shown in step 1216, the projection matrix can be back-projected to produce a fifth image matrix.
  • In some embodiments, the generation of the fifth image matrix may be based on the correction coefficients.
  • The image matrix processing unit 350 may inversely rearrange the fifth image matrix in step 1218 to generate a sixth image matrix. Further, the sixth image matrix may be reversely rotated in step 1220. In some embodiments, the orientation and size of the third image matrix are consistent with those of the sixth image matrix. A sketch of this pipeline appears below.
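The following outline traces the data flow of process 1200 (steps 1202-1220). Every helper here is a hypothetical stand-in chosen so the control flow runs end to end; none of them is the patent's actual implementation.

```python
import numpy as np

def module_compress(image, rows):       # step 1202: keep the active rows
    return image[rows, :]

def rearrange(m, shift):                # step 1210: pack per lookup table
    return np.roll(m, shift, axis=0)

def inverse_rearrange(m, shift):        # step 1218: undo the packing
    return np.roll(m, -shift, axis=0)

def forward_project(m):                 # step 1212: toy projector (column sums)
    return m.sum(axis=0)

def back_project(corr, shape):          # step 1216: smear ratios back
    return np.tile(corr, (shape[0], 1))

image = np.random.rand(20, 10) + 1.0
measured = np.random.rand(10) + 1.0
rows, shift = np.arange(5, 15), 2

m3 = np.rot90(module_compress(image, rows))                 # steps 1202-1204
m4 = rearrange(m3, shift)                                   # step 1210
corr = measured / np.maximum(forward_project(m4), 1e-12)    # steps 1212-1214
m5 = back_project(corr, m4.shape)                           # step 1216
m6 = np.rot90(inverse_rearrange(m5, shift), k=-1)           # steps 1218-1220
assert m6.shape == module_compress(image, rows).shape       # size preserved
```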
  • The present application uses specific words to describe its embodiments.
  • Terms such as "one embodiment," "an embodiment," and/or "some embodiments" mean a feature, structure, or characteristic related to at least one embodiment of the present application. Therefore, it should be emphasized and noted that "an embodiment" or "one embodiment" or "an alternative embodiment" mentioned two or more times in different places in this specification does not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application can be combined as appropriate.
  • Moreover, aspects of the present application can be illustrated and described through a number of patentable categories or contexts, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, the various aspects of the present application can be executed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
  • The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system."
  • Furthermore, aspects of the present application may take the form of a computer product located in one or more computer-readable media, the product including computer-readable program code.
  • A computer-readable signal medium may contain a propagated data signal with computer program code embodied therein, for example, in baseband or as part of a carrier wave.
  • The propagated signal may take any of a variety of forms, including electromagnetic form, optical form, and the like, or any suitable combination.
  • The computer-readable signal medium may be any computer-readable medium other than a computer-readable storage medium that can communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code located on a computer-readable signal medium can be propagated through any suitable medium, including radio, cable, fiber-optic cable, RF, or similar media, or any combination of the above.
  • The computer program code required for the operation of each part of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
  • The program code can run entirely on the user's computer, as a stand-alone software package on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the latter case, the remote computer can be connected to the user's computer via any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (e.g., via the Internet), or used in a cloud computing environment, or as a service, such as software as a service (SaaS).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Software Systems (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

A method and system for image reconstruction. During image reconstruction, different regions are assigned voxels of different sizes, and the image of each region is reconstructed according to its voxel size. The voxels of different sizes corresponding to the different regions are stored in different image matrices; filling the different image matrices into one merged matrix yields the final image matrix, and thus the final image.

Description

Image reconstruction method and system

Technical Field

The present application relates to a method and system for image reconstruction, and in particular to the reconstruction of multi-resolution medical images.

Background

After many years of development, positron emission tomography (PET) has found wide application in clinical examination and disease diagnosis. Among PET systems, an ultra-long axial PET system (which may be composed of several short-axis PET units) has an ultra-long axial field of view and can acquire images of multiple body parts, or even the whole body, in a single bed scan. Image reconstruction is a key technology in PET research. Although mature PET image reconstruction methods exist, such as regional reconstruction of spatial distribution functions, image reconstruction for an ultra-long axial PET system still faces the problems of how to reconstruct different body parts with different reconstruction parameters in a single pass and how to reduce the amount of computation during reconstruction. A new image reconstruction method and system is therefore needed to solve these problems.
Summary

Some embodiments of the present application provide an image reconstruction method. The method includes one or more of the following operations: determining a first region of an object and setting the size of the first voxels corresponding to the first region; determining a second region of the object and setting the size of the second voxels corresponding to the second region; acquiring scan data of the object; and reconstructing a first region image based on the scan data. Reconstructing the first region image may include forward projection of the first voxels and the second voxels, and back projection of the first voxels.

Optionally, a second region image is reconstructed from the scan data, including forward projection of the first voxels and the second voxels, and back projection of the second voxels.

Optionally, the second region may be spatially contiguous with the first region. Optionally, the second region may be spatially non-contiguous with the first region.

Optionally, reconstructing the first region image may include applying a first filtering process to the first region image, and reconstructing the second region image may include applying a second filtering process to the second region image.

Optionally, reconstructing the first region image may include iteratively reconstructing the first region image from the scan data, and reconstructing the second region image may include iteratively reconstructing the second region image from the scan data.

Optionally, the number of iterations used to reconstruct the first region image may differ from the number used to reconstruct the second region image. For example, the first region image may be reconstructed with more iterations than the second region image.

Optionally, the iterative reconstruction of the first region image or of the second region image may be based on the ordered subset expectation maximization method.

Optionally, the forward projection of the first voxels and the second voxels may include forward-projecting the first voxels and the second voxels along one response line.

Optionally, the method may further include correcting the first region image and the second region image.

Optionally, the method may further include acquiring structural information of the object, and determining the first region and the second region according to that structural information.

Optionally, the method may further include determining a first image matrix in which the first voxels are stored; reconstructing the first region image may include reconstructing the first image matrix. The method may further include determining a second image matrix in which the second voxels are stored; reconstructing the second region image may include reconstructing the second image matrix.

Optionally, the method may further include generating a lookup table. The lookup table may record the correspondence between the first image matrix and the first voxels, and/or the correspondence between the second image matrix and the second voxels.

Optionally, the correspondence between the first image matrix and the first voxels may include storing the first voxels in the first image matrix after rearrangement.

Optionally, the correspondence between the first image matrix and the first voxels may include storing the first voxels in the first image matrix after compression and rearrangement.

Optionally, the method may further include generating a merged matrix. The voxel size corresponding to the merged matrix may be the smaller of the first voxel and the second voxel. The method may further include filling the first image matrix and the second image matrix into the merged matrix, generating the final image matrix corresponding to the final image.

According to some embodiments of the present application, an image reconstruction method is provided. The method may include determining an image matrix that corresponds to a scanning area and to a certain number of voxels. The method may further include dividing the image matrix into multiple sub-image matrices, at least one of which corresponds to a sub-scanning area of the scanning area and contains part of the voxels. The method may further include transforming at least one sub-image matrix to generate at least one transformed matrix, reconstructing the at least one sub-image matrix based on the transformed matrix, and reconstructing the image matrix based on the reconstructed at least one sub-image matrix.

Optionally, the transformation may include translating at least some of the elements of the sub-image matrix, and may include compressing or rearranging the sub-image matrix.

Optionally, the method may further include building a lookup table of the image matrix and the sub-image matrices. The lookup table may record how the sub-image matrices are compressed or rearranged. The transformation may include decompressing the sub-image matrices according to the lookup table.

According to some embodiments of the present application, an image reconstruction system is provided. The system may include an imaging device configured to generate scan data of an object. The system may further include an image processor. The image processor may include a receiving module configured to acquire a first region of the object and the size of the first voxels corresponding to the first region, and to acquire a second region of the object and the size of the second voxels corresponding to the second region. The image processor may further include a reconstruction module configured to reconstruct a first region image, including forward projection of the first voxels and the second voxels and back projection of the first voxels.

The reconstruction module may be further configured to reconstruct a second region image, including forward projection of the first voxels and the second voxels and back projection of the second voxels.

Optionally, the system includes a post-processing module configured to obtain the first region image and the second region image through post-processing, which may include filtering, noise reduction, merging, or partitioning.

Optionally, the reconstruction module may further include an image matrix generation unit. The image matrix generation unit may be configured to determine a first image matrix, in which the first voxels may be stored; reconstructing the first region image may include reconstructing the first image matrix. The image matrix generation unit may be configured to determine a second image matrix, in which the second voxels may be stored; reconstructing the second region image may include reconstructing the second image matrix.

Optionally, the reconstruction module may further include an image matrix processing unit configured to perform at least one of rotation, compression and decompression, rearrangement and inverse rearrangement, filling, decomposition, and merging on the first image matrix and the second image matrix.

Optionally, the image matrix processing unit may further include a lookup table generation unit configured to generate a lookup table. The lookup table may record the correspondence between the first image matrix and the first voxels, and/or between the second image matrix and the second voxels.

Optionally, the correspondence between the first image matrix and the first voxels may include storing the first voxels in the first image matrix after compression or rearrangement.

Optionally, the post-processing module may include a merging unit configured to generate a merged matrix, fill the first image matrix and the second image matrix into the merged matrix, and generate the final image matrix corresponding to the final image. The voxel size corresponding to the merged matrix may be the smaller of the first voxel and the second voxel.

According to some embodiments of the present application, an image reconstruction system is provided. The system may include an image matrix generation unit configured to generate an image matrix that corresponds to a scanning area and contains a certain number of voxels. The system may further include an image matrix processing unit configured to divide the image matrix into multiple sub-image matrices, compress at least one of the sub-image matrices, reconstruct the at least one sub-image matrix based on the compressed sub-image matrix, and reconstruct the image matrix based on the reconstructed at least one sub-image matrix.

Some additional features of the present application are described below. Upon examination of the following description and drawings, or upon learning about the production or operation of the embodiments, some additional features of the present application will be apparent to those skilled in the art. The features of the present disclosure can be achieved through the practice or use of the methods, means, and combinations of the various aspects of the specific embodiments described below.
Description of the Drawings

In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, the present invention may also be applied to other similar scenarios based on these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numerals in the figures represent the same structures and operations.

FIG. 1 is a schematic diagram of a multi-resolution image reconstruction and storage system according to some embodiments of the present application;

FIG. 2 is a schematic diagram of a processor according to some embodiments of the present application;

FIG. 3 is a schematic diagram of a reconstruction module according to some embodiments of the present application;

FIG. 4 is a flowchart of multi-resolution image reconstruction according to some embodiments of the present application;

FIG. 5 is a schematic diagram of a post-processing module according to some embodiments of the present application;

FIGS. 6-A and 6-B are flowcharts of post-processing according to some embodiments of the present application;

FIG. 7 is a schematic diagram of voxel-to-matrix correspondence according to some embodiments of the present application;

FIG. 8 is a schematic diagram of module pairing according to some embodiments of the present application;

FIG. 9 is a schematic diagram of an image matrix processing unit according to some embodiments of the present application;

FIG. 10 is a schematic diagram of image matrix processing according to some embodiments of the present application;

FIG. 11 is a flowchart of image matrix reconstruction according to some embodiments of the present application; and

FIG. 12 is a flowchart of image matrix processing according to some embodiments of the present application.
Detailed Description

In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of the present application; for those of ordinary skill in the art, the present application may also be applied to other similar scenarios based on these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numerals in the figures represent the same structures or operations.

As used in this application and the claims, unless the context clearly indicates otherwise, the words "a," "an," "one," and/or "the" do not specifically refer to the singular and may include the plural. In general, the terms "comprise" and "include" only indicate that the explicitly identified steps and elements are included; these steps and elements do not constitute an exclusive list, and the method or device may also include other steps or elements.

Although the present application makes various references to certain modules of the system according to its embodiments, any number of different modules may be used and run on a client and/or a server. The modules are merely illustrative, and different aspects of the system and method may use different modules.

Flowcharts are used in the present application to illustrate the operations performed by the system according to the embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed precisely in order. Instead, the various steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.

The "scanning area" represents the actual area being scanned and corresponds to the image matrix; the "reconstruction area" represents the actual area corresponding to the reconstruction of the image matrix. Unless the context clearly indicates otherwise, in this application "scanning area," "reconstruction area," and "actual area" may have the same meaning and may be used interchangeably.

An "element" represents the smallest component of an image matrix, and a "voxel" represents the smallest component of the actual area. Unless the context clearly indicates otherwise, in this application an "element" of an image matrix and a "voxel" of the actual area corresponding to that image matrix may have the same meaning and may be used interchangeably.

The multi-resolution image reconstruction and storage method described in this application includes reconstructing and storing an image of an object using different resolutions (i.e., different voxel sizes) for different regions of the object. In some embodiments, one aspect of the present application relates to a multi-resolution image reconstruction and storage system, which may include a receiving module, a storage module, a reconstruction module, a post-processing module, and a display module. Another aspect of the present application relates to an image matrix processing method that can be applied in the multi-resolution image reconstruction and storage system. The image matrix processing method may include compressing and decompressing, and rearranging and inversely rearranging, the image matrices.

The embodiments of the present application can be applied to different image processing systems, including a positron emission tomography system (PET system), a hybrid computed tomography-positron emission tomography system (CT-PET system), a hybrid magnetic resonance-positron emission tomography system (MR-PET system), and the like.
FIG. 1 is a schematic diagram of a multi-resolution image reconstruction and storage system according to some embodiments of the present application. The system 100 may include an image processor 120 (processor 120 for short), a network 130, and an imaging device 110. The processor 120 performs multi-resolution image reconstruction and storage on the collected information (for example, data). The processor 120 may be a physical electronic device or a server. The electronic device may include a portable computer, a tablet, a mobile phone, a smart terminal device, and the like. The processor 120 may be centralized, such as a data center, or distributed, such as a distributed system, and may be local or remote. In some embodiments, the information may be image information of one or more objects obtained by scanning or other means.

In some embodiments, the processor 120 may include one or a combination of a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction set processor (ASIP), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a processor, a microprocessor, a controller, a microcontroller, and the like.

The network 130 may be a single network or a combination of multiple different networks. For example, the network 130 may be a local area network (LAN), a wide area network (WAN), a public network, a private network, a proprietary network, a public switched telephone network (PSTN), the Internet, a wireless network, a virtual network, or any combination of the above. The network 130 may also include multiple network access points. A wired network may use metal cables, hybrid cables, one or more interfaces, or any combination thereof. A wireless network may use Bluetooth, a local area network (LAN), a wide area network (WAN), a wireless personal area network (WPAN), near field communication (NFC), or any combination thereof. The network 130 is applicable within the scope described in this application, but is not limited to this description.

The imaging device 110 may include one or more devices that scan one or more targets; further, the devices used for scanning may be used in, but are not limited to, applications in the medical field, such as medical examination. In some embodiments, medical examination may include magnetic resonance imaging (MRI), X-ray computed tomography (X-ray CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), or a combination of one or more of these. In some embodiments, the target may be one or a combination of an organ, an organism, an object, a dysfunction, a tumor, and the like. In some embodiments, the target may be one or a combination of a head, a chest, an organ, bones, blood vessels, and the like. In some embodiments, the imaging device 110 may be assembled from one or more imaging modules. Further, the detectors of the one or more imaging modules may be placed continuously around the target.

In some embodiments, the imaging device 110 and the processor 120 may be integrated. In some embodiments, the imaging device 110 may send information to the processor 120 through the network 130, or directly. In some embodiments, the processor 120 may also process information stored within itself.
FIG. 2 is a schematic diagram of a processor according to some embodiments of the present application. The processor 120 may include one or more receiving modules 210, one or more reconstruction modules 220, one or more post-processing modules 230, one or more display modules 240, and one or more storage modules 250.

The receiving module 210 may collect the required information in one or more ways, including scanning an object (for example, acquiring information about an object through the imaging device 110) or collecting pre-stored information (for example, collecting information in the storage module 250 or remote information obtained through the network 130). The types of information may include voxel data, counts, matrices, images, vectors, vector libraries, and the like.

The reconstruction module 220 may reconstruct the information collected by the receiving module 210. The reconstruction of the information may include generating, from the collected information, image matrices corresponding to the scanned object as a whole or to one or more parts of it. In some embodiments, the reconstruction may include determining one or more scanned regions and the one or more voxels corresponding to each of them. The one or more voxels may correspond to one or more elements in one or more image matrices, and the one or more image matrices may be iteratively reconstructed from the collected information. In some embodiments, the iterative reconstruction may include performing forward projection and back projection on the image matrices one or more times. In some embodiments, the reconstruction may also include removing part of the information to improve the computational and storage efficiency of the system. In some embodiments, the information may be converted into the form of image matrices, and the efficiency improvement may include compressing and/or rearranging the image matrices.

The post-processing module 230 may post-process the reconstructed information produced by the reconstruction module. In some embodiments, post-processing may include processing the iteratively reconstructed matrices according to the one or more voxels to produce images, or matrices corresponding to images, of the scanned object as a whole or of one or more parts of it. The post-processing may include filtering, noise reduction, merging, partitioning, and the like of the iteratively reconstructed matrices.

The display module 240 may display the images produced by the post-processing module. In some embodiments, the display module 240 may include a display device, such as a display screen. In some embodiments, the display module 240 may render, scale, rotate, or apply maximum intensity projection to the image as required before displaying the final image. In some embodiments, the display module 240 may further include one or more input devices, such as a keyboard, a touch screen, a touch pad, a mouse, or a remote control. In some embodiments, a user may input original parameters and/or set the initialization conditions for image display and/or processing through the one or more input devices. In some embodiments, the user may configure and/or operate based on the images shown by the display module 240, such as: setting two-dimensional or three-dimensional image display, displaying the images corresponding to the scan data, displaying a control interface or an input interface, displaying images of different regions, displaying the image reconstruction process or its results, and, upon user input, zooming in, zooming out, or displaying multiple images simultaneously, or any combination of such settings and/or operations.

The storage module 250 may store data. The stored data may come from the imaging device 110, the network 130, and/or other modules/units of the processor 120 (the receiving module 210, the reconstruction module 220, the post-processing module 230, the display module 240, or other related modules (not shown)). The storage module 250 may be a device that stores information electrically, such as various memories, for example random access memory (RAM) and read-only memory (ROM). RAM may include one or a combination of a dekatron, a selectron, delay-line memory, a Williams tube, dynamic random access memory (DRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), and the like. ROM may include one or a combination of bubble memory, twistor memory, thin-film memory, plated-wire memory, magnetic-core memory, drum memory, optical disc drives, hard disks, magnetic tape, early non-volatile memory (NVRAM), phase-change memory, magnetoresistive random access memory, ferroelectric random access memory, non-volatile SRAM, flash memory, electrically erasable programmable read-only memory, erasable programmable read-only memory, programmable read-only memory, mask ROM, floating-gate random access memory, nano-RAM, racetrack memory, resistive random access memory, programmable metallization cells, and the like. The storage module 250 may be a device that stores information magnetically, such as a hard disk, a floppy disk, magnetic tape, magnetic-core memory, bubble memory, a USB drive, or flash memory; a device that stores information optically, such as a CD or DVD; or a device that stores information magneto-optically, such as a magneto-optical disc. The access mode of the storage module 250 may be one or a combination of random access, serial access, read-only access, and the like. The storage module 250 may be non-permanent memory or permanent memory.

The storage module 250 may be associated with one or more of the receiving module 210, the reconstruction module 220, the post-processing module 230, the display module 240, or other related modules (not shown). In some embodiments, the storage module 250 may selectively associate, through the network 130, with one or more virtual storage resources, such as cloud storage, a virtual private network, and/or other virtual storage resources. The stored data may take various forms, such as numerical values, signals, images, information about a given target, commands, algorithms, programs, or any combination thereof.

For those skilled in the art, after understanding the principle of the multi-resolution image reconstruction and storage system and method, it is possible, without departing from this principle, to combine the modules arbitrarily or to form subsystems connected to other modules, and to make various modifications and changes in the form and details of the application fields in which the above method and system are implemented; such modifications and changes remain within the scope described above. For example, the above modules may be different modules embodied in one system, or one module may implement the functions of two or more of the above modules. For instance, in some embodiments of the present application, the storage module 250 may be included in any one or more of the described modules. In some embodiments, the receiving module 210 and the display module 240 may be combined into one input/output module. In some embodiments, the reconstruction module 220 and the post-processing module 230 may be combined into one image generation module.
FIG. 3 is a schematic diagram of a reconstruction module according to some embodiments of the present application. The reconstruction module 220 may include one or more parameter setting units 310, one or more region selection units 320, one or more image matrix generation units 340, one or more image matrix processing units 350, one or more computing units 360, and one or more allocation units 370.

The parameter setting unit 310 may set parameters during the reconstruction process. The parameters may include one or a combination of two or more of the size of the reconstruction region, the position of the reconstruction region, the voxel size within the reconstruction region, the iterative algorithm, the number of iterations, the termination condition, and the like. In some embodiments, the parameters may be obtained from the storage module 250. In some embodiments, a user may set the parameters through the receiving module 210 or the display module 240. In some embodiments, the parameter setting unit 310 may store default values for one or more parameters, to be used when parameter settings cannot be obtained.

The region selection unit 320 may select the region to be reconstructed, including selecting its size and position. In some embodiments, the region selection unit 320 may obtain the size and position settings of the reconstruction region from the parameter setting unit 310. In some embodiments, the region selection unit 320 may store default region settings for multiple scan sites such as the head, the chest, and the abdomen, and these default settings can be recalled or adjusted at any time. In some embodiments, the region selection unit 320 may be combined with the display module 240; further, a user may select one or more regions for scanning and/or reconstruction in the images shown by the display module 240, and the region selection unit 320 may scan and/or reconstruct the corresponding regions upon receiving the user's selection.

The image matrix generation unit 340 may generate one or more image matrices, which may correspond to one or more scanning areas. In some embodiments, the image matrices and the scanning areas may correspond one to one. In some embodiments, each element of an image matrix corresponds to the value of one voxel in the scanning area. The value may include one or more of an X-ray attenuation coefficient, a gamma-ray attenuation coefficient, hydrogen atom density, voxel density, and the like. In some embodiments, the voxel values corresponding to the elements of an image matrix may be modified and/or updated during the iterative reconstruction. In some embodiments, the voxel values corresponding to the elements of an image matrix may be converted into the grayscale or RGB chromaticity of an image; further, an image matrix may correspond to an image and/or be converted into an image. A minimal sketch of such a conversion is given below.
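The following sketch shows one way an image matrix of voxel values might be converted into an 8-bit grayscale image; min-max windowing is an assumption for the example, not the mapping prescribed by the disclosure.

```python
import numpy as np

def matrix_to_grayscale(matrix):
    """Map voxel values linearly onto 0..255 grayscale."""
    lo, hi = float(matrix.min()), float(matrix.max())
    if hi == lo:                      # flat matrix: render mid-gray
        return np.full(matrix.shape, 128, dtype=np.uint8)
    scaled = (matrix - lo) / (hi - lo)
    return (scaled * 255).astype(np.uint8)

image = matrix_to_grayscale(np.random.rand(64, 64))
```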
The image matrix processing unit 350 may process the generated image matrices. The processing may include dividing an image matrix into multiple sub-image matrices, or performing one or more combined operations of rotation, compression and decompression, rearrangement and inverse rearrangement, filling, decomposition, and merging on an image matrix. In some embodiments, rotating an image matrix may include rotating it clockwise or counterclockwise. Compressing an image matrix may include removing some of its elements. In some embodiments, the voxels corresponding to the removed elements are not penetrated by one or more rays (for example, response lines in a PET system, or X-rays in a CT system); during image reconstruction, the values of the removed elements may be set to zero or to another fixed value. In some embodiments, the removed elements may be those satisfying certain conditions, such as having values below a threshold or occupying certain positions in a matrix. Correspondingly, decompressing a matrix may include adding elements back into parts of the image matrix. In some embodiments, decompression may include adding the elements removed during compression back to their original positions. In some embodiments, the values of the elements removed from and re-added to the image matrix remain unchanged during compression and decompression. In some embodiments, rearranging a matrix may include translating some or all of its elements from a first position to a second position in the image matrix, and may include translating elements of a certain category or characteristic to a particular position. Correspondingly, inversely rearranging the image matrix may include translating some or all of the translated elements from the second position back to the first position. In some embodiments, the values of the elements that are rearranged or inversely rearranged in the image matrix remain unchanged.

Filling an image matrix may include filling certain empty parts of the image matrix with corresponding values according to certain rules or algorithms. In some embodiments, in PET-related systems, filling may include filling the elements of the image matrix corresponding to the voxels traversed by a line of response (LOR) according to the positions of those voxels. In some embodiments, the filling may be based on the counts of the detectors corresponding to the response line and on the influence of the traversed voxels on those counts (also called sensitivity). Decomposing an image matrix may include decomposing it into multiple sub-image matrices. In some embodiments, the sub-image matrices may each cover part of the elements of the original image matrix. In some embodiments, a sub-image matrix may be formed from the scanning area traversed by one or more response lines; similarly, one response line may traverse the regions corresponding to one or more sub-image matrices. Merging image matrices may include merging multiple sub-image matrices into one image matrix. In some embodiments, the sub-image matrices obtained by decomposing an image matrix may be merged back into that image matrix.

The computing unit 360 may calculate the values of the elements of the image matrices as well as other values. In some embodiments, the computing unit 360 may calculate, from the readings of the detectors corresponding to one or more response lines, the values of the elements of the image matrix corresponding to the scanned object traversed by those response lines. In some embodiments, the computing unit 360 may include one primary computing node and one or more secondary computing nodes. In some embodiments, each of the one or more secondary computing nodes calculates one sub-image matrix, which may correspond to one sub-scanning area. In some embodiments, a sub-scanning area may be formed by the scanning of one or more detectors. In some embodiments, a secondary computing node may calculate the voxel values of the sub-image matrix corresponding to a sub-scanning area according to the counts of the detectors corresponding to that sub-scanning area. In some embodiments, the primary computing node may merge and superimpose the voxel values of the sub-image matrices calculated by the secondary computing nodes. For example, if a voxel lies in multiple sub-image matrices, the primary computing node may add up the corresponding values of that voxel in the sub-image matrices calculated by the multiple secondary computing nodes.

The allocation unit 370 may distribute the computation tasks to the different computing nodes of the computing unit, which may include one or more primary computing nodes and one or more secondary computing nodes. In some embodiments, the allocation unit 370 may pair or group the detectors and determine the size and position of the sub-scanning areas corresponding to the paired or grouped detectors. In some embodiments, the allocation unit 370 may assign the reconstruction and computation tasks of the sub-image matrices corresponding to the sub-scanning areas to the secondary computing nodes.

For those skilled in the art, after understanding the principle of the multi-resolution image reconstruction and storage system and method, various modifications and changes in form and/or detail may be made to the above reconstruction module 220 without departing from this principle, and such modifications and changes remain within the scope disclosed by this application. For example, in some embodiments of the present application, the image matrix generation unit 340 and the image matrix processing unit 350 may be combined into one image matrix unit. In some embodiments, the reconstruction module 220 may have no computing unit 360, whose functions may be implemented in other units.
FIG. 4 is a flowchart of multi-resolution image reconstruction according to some embodiments of the present application. In some embodiments, the multi-resolution image reconstruction may be implemented by the processor 120. As shown in FIG. 4, the processor 120 may first obtain structural information of an object in step 402. In some embodiments, the structural information refers to the contour or appearance information of the object. In some embodiments, step 402 may be implemented by the receiving module 210. In some embodiments, the structural information may be obtained by scanning the object; further, it may be obtained through CT, MRI, PET, or other scans. Optionally, the structural information may also be obtained in other ways.

Step 404 may include determining the first region and the size of its corresponding first voxels according to the structural information of the scanned object. In some embodiments, step 404 may be implemented by the receiving module 210. In some embodiments, the first region may correspond to the scanned object as a whole. In some embodiments, the values of the first voxels may be stored in the first image matrix M0, forming the first elements.

As shown in step 406, the receiving module 210 may determine the second region and the size of its corresponding second voxels according to the structural information of the scanned object. In some embodiments, the second region may correspond to a part of the scanned object. In some embodiments, the values of the second voxels may be stored in the second image matrix M1, forming the second elements. In some embodiments, the second voxels are smaller than the first voxels. In some embodiments, the smaller the voxel, the higher the corresponding image resolution. In some embodiments, the second region corresponds to the region that needs to be imaged at high resolution.

As shown in step 408, the processor 120 may acquire scan information of an object. In some embodiments, the processor 120 may acquire the scan information through the imaging device 110; further, the imaging device 110 may include a PET imaging device. In some embodiments, the scan information may be obtained from the storage module 250. In some embodiments, the scan information may also be obtained through the network 130 from a remote storage module (such as a cloud disk).

After obtaining the scan information of the object, the processor 120 may, in step 410 and step 412, reconstruct the first image matrix M0 and the second image matrix M1 corresponding to the first region and the second region, respectively, obtaining the first region image and the second region image. In some embodiments, the reconstruction of the first image matrix M0 and the second image matrix M1 may use an iterative reconstruction algorithm.

Merely by way of example, the reconstruction of the first image matrix M0 and the second image matrix M1 may be implemented through the ordered subset expectation maximization (OSEM) method:

$$\lambda_j^{m,n+1}=\frac{\lambda_j^{m,n}}{\sum_i c_{ijm}}\sum_i c_{ijm}\,B(y_i,F) \tag{1}$$

where i is the index of a response line (detector pair), m is the index of the image matrix being reconstructed, j is the index of an element of matrix m, $\lambda_j^{m,n}$ is the value of element j of the reconstructed image matrix m at iteration n, $y_i$ is the actual count measured on response line i, F is the forward projection operator, and $B(y_i,F)$ is the back projection operator.

The OSEM method requires steps such as forward-projecting the image matrices (i.e., forward-projecting the voxels corresponding to the elements of the image matrices), calculating correction coefficients, back-projecting the image matrices (i.e., back-projecting the voxels corresponding to the elements of the image matrices), and updating the image matrices, as described in detail below.

In some embodiments, reconstructing the first image matrix M0 yields the first region image; the reconstruction of the first image matrix M0 may include forward-projecting the first voxels and the second voxels, followed by back-projecting the first voxels. Reconstructing the second image matrix M1 yields the second region image; the reconstruction of the second image matrix M1 may include forward-projecting the first voxels and the second voxels, and back-projecting the second voxels. In some embodiments, the sizes of the first voxels and the second voxels may differ.

The image matrices are forward-projected to obtain the detector results, where the forward projection operator can be expressed as:

$$F=\sum_m\sum_k c_{ikm}\,\lambda_k^{m,n} \tag{2}$$

where k indexes all elements of image matrix m related to response line i, and $c_{ikm}$ is the sensitivity of response line i to element k of image matrix m. In some embodiments, different image matrices correspond to different voxel sizes. For example, a response line may pass through the first region (corresponding to the first voxels) and the second region (corresponding to the second voxels); according to equation (2), forward-projecting the image matrices includes the forward projection of both the first voxels and the second voxels.

Calculating the correction coefficient: the correction coefficient is the ratio of the count measured on a response line to the forward projection of the reconstructed image along that response line, i.e.,

$$B(y_i,F)=\frac{y_i}{F} \tag{3}$$

The correction coefficients are back-projected to update the image matrix:

$$\lambda_j^{m,n+1}=\frac{\lambda_j^{m,n}}{\sum_i c_{ijm}}\sum_i c_{ijm}\,\frac{y_i}{F} \tag{4}$$

In some embodiments, the images corresponding to different image matrices require different numbers of iterations. For example, the image matrix of the body may require two iterations, while the image of the brain may require four.

The preset number of iterations of each image matrix can be denoted d(m), where m = 0, 1, 2, ... is the index of the image matrix. Equation (3) can then be written, with each matrix contributing at most d(m) iterations of updates to the forward projection, as

$$B(y_i,F)=\frac{y_i}{\sum_m\sum_k c_{ikm}\,\lambda_k^{m,\min(n,\,d(m))}} \tag{5}$$

and equation (1) can be written as

$$\lambda_j^{m,n+1}=\begin{cases}\dfrac{\lambda_j^{m,n}}{\sum_i c_{ijm}}\displaystyle\sum_i c_{ijm}\,B(y_i,F), & n<d(m)\\[2ex]\lambda_j^{m,n}, & n\ge d(m)\end{cases} \tag{6}$$

where n is the index of the current iteration. If the preset number of iterations d(m) of an image matrix is greater than the current iteration index n, the iterative processing of that image matrix continues and the image is updated; if d(m) is less than or equal to n, the iteration of that image matrix stops, and the image corresponding to the current image matrix is obtained. A minimal numerical sketch of this scheme follows.
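As a concrete illustration of equations (1) through (6), the toy loop below reconstructs two image matrices with different voxel counts and different preset iteration counts d(m). The system matrix and counts are random stand-ins, and a single subset (all response lines at once, i.e., plain MLEM) is used for brevity; this shows the structure of the update, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 2 image matrices (m=0 coarse, m=1 fine) seen by 40 LORs.
# c[m] has shape (n_lors, n_elements_m): sensitivity c_{ikm}.
n_lors = 40
c = [rng.random((n_lors, 16)) * 0.1,    # M0: 16 coarse elements
     rng.random((n_lors, 64)) * 0.1]    # M1: 64 fine elements
truth = [rng.random(16) + 0.5, rng.random(64) + 0.5]
y = sum(c[m] @ truth[m] for m in range(2))     # measured counts y_i

lam = [np.ones(16), np.ones(64)]               # lambda^{m,0}
d = [2, 4]                                     # preset iterations d(m)

for n in range(max(d)):
    # Equations (2)/(5): forward-project both voxel sizes along each LOR.
    F = sum(c[m] @ lam[m] for m in range(2))
    # Equation (3): per-LOR correction coefficients y_i / F.
    corr = y / np.maximum(F, 1e-12)
    for m in range(2):
        if n < d[m]:                           # equation (6): gate on d(m)
            # Equations (1)/(4): back-project corrections and normalize.
            lam[m] *= (c[m].T @ corr) / np.maximum(c[m].sum(axis=0), 1e-12)
```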
After obtaining the first image matrix M0 and the second image matrix M1, the processor 120 may convert the image matrices into the first region image and the second region image, respectively, according to the values of their elements. The values of the elements of the image matrices may be expressed as the grayscale or RGB chromaticity of the voxels in the images. After obtaining the first image matrix M0 and the second image matrix M1 and their corresponding first region image and second region image, the processor 120 may post-process the first region image and the second region image according to the methods mentioned in other embodiments of this application.
FIG. 5 is a schematic diagram of a post-processing module according to some embodiments of the present application. The post-processing module 230 may include one or more filtering units 510, one or more partitioning units 520, and one or more merging units 530.

The filtering unit 510 may filter an image matrix or the data or image corresponding to it. The filtering may include one or a combination of Gaussian, Metz, Butterworth, Hamming, Hanning, Parzen, ramp, Shepp-Logan, and Wiener filtering. In some embodiments, different scanning areas or different parts of the scanned object may use different filtering; for example, Metz filtering may be used for brain scans, and Gaussian filtering for body scans.

The partitioning unit 520 may store the one or more filtered image matrices in different matrices according to the voxel sizes to which they correspond. In some embodiments, the filtered image matrices placed in the same matrix have the same or similar voxel sizes.

The merging unit 530 may merge the image matrices corresponding to actual regions with different voxel sizes. In some embodiments, merging includes creating a merged matrix whose corresponding region is the largest region among the image matrices to be merged. In some embodiments, the voxel size corresponding to the merged matrix is the smallest voxel size among the image matrices to be merged. In some embodiments, a smaller voxel means a higher resolution. In some embodiments, the merging includes interpolating the image matrices to be merged. Interpolation refers to predicting, through particular algorithms or processing, the voxels of the high-resolution image that have no values when a low-resolution image is converted into a high-resolution one. In some embodiments, the algorithms and processing may include one or a combination of bilinear interpolation, bicubic interpolation, fractal interpolation, natural-neighbor interpolation, nearest-neighbor interpolation, minimum curvature, local polynomial methods, and the like.
FIGS. 6-A and 6-B are flowcharts of post-processing according to some embodiments of the present application. In some embodiments, the post-processing may be implemented by the post-processing module 230. As shown in FIG. 6-A, after the image matrices are reconstructed, filtering may first be performed in step 602; the filtering may include one or a combination of Gaussian, Metz, Butterworth, Hamming, Hanning, Parzen, ramp, Shepp-Logan, Wiener filtering, and the like. In some embodiments, different scanning areas or different parts of the scanned object may use different filtering; for example, Metz filtering may be used for brain scans, and Gaussian filtering for body scans.

As shown in step 604, after the image matrices, or their corresponding data or images, have been filtered, the images may be divided into different levels, for example according to the voxel sizes corresponding to the image matrices. In some embodiments, after being divided into levels, the image matrices may be written into a DICOM file, which may record the level information of the images as well as the image matrices and their corresponding voxel size information. In some embodiments, the voxel size information may also serve as level information, with larger voxels representing lower levels.

As shown in step 606, the image matrices corresponding to different image levels are merged. The level information of the different images may be contained in the DICOM file mentioned above. The merging includes storing the images of different levels in different matrices and filling the images into one final image matrix according to their levels. In some embodiments, the merging steps may be as shown in FIG. 6-B.

As shown in step 608, the post-processing module 230 may store images with different voxel sizes in different matrices to be merged according to the level information. In some embodiments, a given actual region may correspond to multiple matrices to be merged, whose voxel sizes differ. The images corresponding to two or more matrices to be merged may overlap, or may not overlap at all.

As shown in step 610, the post-processing module 230 may build a merged matrix M. The actual region corresponding to the merged matrix M is the largest actual region among the matrices to be merged. In some embodiments, the voxel size corresponding to the merged matrix M is the smallest voxel size among the matrices to be merged. In some embodiments, a smaller voxel means a higher resolution.

As shown in step 612, after the merged matrix M is built and its actual region and corresponding voxel size are determined, the post-processing module 230 may zero-fill the matrices to be merged whose actual regions are smaller than the largest actual region, generating the final image matrix. In some embodiments, the post-processing module 230 may interpolate the matrices to be merged whose voxel size is larger than the smallest voxel size. Interpolation refers to predicting, through certain algorithms or processing, the voxels of the high-resolution image that have no values when a low-resolution image is converted into a high-resolution one. In some embodiments, the algorithms and processing may include one or a combination of bilinear interpolation, bicubic interpolation, fractal interpolation, natural-neighbor interpolation, nearest-neighbor interpolation, minimum curvature, local polynomial methods, and the like. The post-processing module 230 may merge the zero-filled and interpolated matrices into the final matrix M. In some embodiments, the image matrices of different levels may be filled into the merged matrix M in order: for example, the lower-level (larger-voxel) image matrices may be filled in first, followed by the higher-level (smaller-voxel) ones. Where a higher-level image matrix and a lower-level image do not overlap, the element values of the lower-level and higher-level image matrices are each filled into the corresponding places of the final matrix. Where a higher-level image matrix and a lower-level image matrix do overlap, the element values of the higher-level image matrix overwrite those of the lower-level one, i.e., the voxel values in the overlapping image region are filled in according to the voxel values of the higher-level image matrix. A minimal sketch of this merging follows.
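The sketch below walks through steps 608-612 in two dimensions for brevity; the matrix sizes, the factor-of-two voxel ratio, and the sub-region placement are assumptions for the example.

```python
import numpy as np

coarse = np.random.rand(8, 8)       # M0: large voxels, whole region
fine = np.random.rand(8, 8)         # M1: half-size voxels, sub-region
scale = 2                           # one coarse voxel = 2x2 fine voxels

# Step 610: merged matrix at the finest voxel size over the largest region.
merged = np.zeros((8 * scale, 8 * scale))

# Step 612: fill the lower level first, upsampled by nearest-neighbor
# interpolation (each coarse voxel repeated over its 2x2 footprint).
merged[:, :] = np.kron(coarse, np.ones((scale, scale)))

# Then the higher level overwrites its overlapping region (the fine
# sub-region is assumed to sit at fine-grid offset (4, 4)).
r0, c0 = 4, 4
merged[r0:r0 + 8, c0:c0 + 8] = fine
```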
FIG. 7 is a schematic diagram of voxel-to-matrix correspondence according to some embodiments of the present application. In some embodiments, a lookup table may record the correspondence between image matrices and voxels. As shown in FIG. 7, M0 and M1 represent two image matrices, where the voxels corresponding to M1 are smaller than those corresponding to M0. In some embodiments, a region 730 may be covered simultaneously by the regions corresponding to M0 and M1. When calculating the contribution of the corresponding voxel 740 of M0 to the counts on a response line i, a lookup table (LUT) can be used to find the 8 voxels 720 of M1 that correspond to voxel 740. The contribution of voxel 740 of M0 to the counts on response line i can then be obtained by calculating the respective contributions of the corresponding 8 voxels 720 of M1 to the counts on response line i. In some embodiments, the lookup table contains the correspondences between one or more image matrices and their voxels. For example, the lookup table may record that the voxel M0(X, Y, Z) of matrix M0 corresponds to the 8 voxels M1(X1, Y1, Z1), M1(X1, Y2, Z1), M1(X2, Y1, Z1), M1(X2, Y2, Z1), M1(X1, Y1, Z2), M1(X1, Y2, Z2), M1(X2, Y1, Z2), and M1(X2, Y2, Z2) of matrix M1. In some embodiments, the correspondence between image matrices of different levels in the lookup table is determined by the positional relationship of the image regions corresponding to the respective matrices. In some embodiments, according to the contents of other embodiments of this application, the lookup table may also contain the positions and directions of the translations needed when an image matrix is rearranged; for example, the lookup table may record the correspondence between the compressed and/or rearranged voxels and the elements of the image matrix M0. A minimal sketch of such a coarse-to-fine lookup appears below.
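The sketch below builds such a coarse-to-fine lookup table in the spirit of FIG. 7; the grid sizes and the factor-of-two voxel ratio are assumptions for the example.

```python
import numpy as np

def fine_indices(ix, iy, iz, scale=2):
    """Return the M1 index triples covered by M0 voxel (ix, iy, iz)."""
    return [(ix * scale + dx, iy * scale + dy, iz * scale + dz)
            for dx in range(scale)
            for dy in range(scale)
            for dz in range(scale)]

# Lookup table: each of the 4x4x4 M0 voxels maps to its 2x2x2 M1 block.
lut = {(x, y, z): fine_indices(x, y, z)
       for x in range(4) for y in range(4) for z in range(4)}

# Contribution of an M0 voxel to a response line = sum of the
# contributions of its 8 M1 voxels (unit sensitivities assumed here).
m1 = np.random.rand(8, 8, 8)
contribution = sum(m1[idx] for idx in lut[(1, 2, 3)])
```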
FIG. 8 is a schematic diagram of module pairing according to some embodiments of the present application. According to the descriptions of other embodiments of this application, the imaging device 110 may include one or more imaging modules; further, the detectors of the one or more imaging modules are placed continuously around the target. Merely by way of example, an imaging module here may correspond to one PET detector; for the positional relationship between detectors, see the description of FIG. 10. As shown in FIG. 8, the imaging device 110 may consist of 6 imaging modules, which can be paired two by two to form 21 module pairings (as shown in FIG. 8, including pairing 810, pairing 820, and pairing 830). For example, pairing 810 may represent the pairing of the sixth imaging module with itself, i.e., response lines received only by the detectors on the two sides of the sixth imaging module; pairing 820 may represent the pairing of the first imaging module with the sixth imaging module, i.e., response lines receivable by the detectors corresponding to the first and sixth imaging modules; and pairing 830 may represent the pairing of the first imaging module with the fourth imaging module, i.e., response lines receivable by the detectors corresponding to the first and fourth imaging modules. In some embodiments, the calculation for each module pairing can be performed by the secondary computing nodes described in other embodiments, and the primary computing node can integrate and tally the results of all of the secondary computing nodes. In some embodiments, the black-line parts of the figure (the parts shaped like an "x" or a "-" inside the rectangular boxes) indicate the elements of the image matrix that need to be modified in the computation of the corresponding module pairing; the details are described in FIG. 10. In some embodiments, each module pairing can compress and rearrange its matrix according to the elements that need to be modified, reducing storage and computation. For example, pairing 810 can remove the elements below the black-line part; as another example, pairing 820 can first translate and gather the black-line parts together and then remove the elements outside them, thereby compressing the matrix.
FIG. 9 is a schematic diagram of an image matrix processing unit according to some embodiments of the present application. The image matrix processing unit 350 may include one or more image matrix compression sub-units 910, one or more image matrix rearrangement sub-units 920, one or more image matrix inverse rearrangement sub-units 930, one or more image matrix decompression sub-units 940, and one or more lookup table generation units 950.

The image matrix compression sub-unit 910 may compress image matrices. In some embodiments, compressing an image matrix may include removing some of its elements; in some embodiments, the removed elements may be empty. In PET-related systems, an empty element here may correspond to a voxel that is not traversed by any response line, or a voxel that contributes nothing to the detector counts during the image reconstruction process (for example, forward projection and back projection) or part of it. In some embodiments, the removed elements may be those satisfying certain conditions, such as being below a threshold or occupying certain positions in a matrix, for example positions that do not affect the image reconstruction and subsequent steps.

The image matrix rearrangement sub-unit 920 may translate some or all of the elements of an image matrix from a first position to a second position in the image matrix. In some embodiments, the elements at the second position before the translation are removed after the translation. Optionally, the translation may include swapping the positions of some or all of the elements at the first and second positions. In some embodiments, rearranging a matrix may include translating elements of a certain category or characteristic to a particular position. In some embodiments, rearranging a matrix may include translating the non-zero elements of the matrix and gathering them together.

The image matrix inverse rearrangement sub-unit 930 may translate some or all of the translated elements from the second position back to the first position. In some embodiments, the values of the elements that are rearranged or inversely rearranged in the image matrix may remain unchanged.

The image matrix decompression sub-unit 940 may add elements back into parts of an image matrix. In some embodiments, decompressing a matrix may include adding the elements removed during compression back to their original positions. In some embodiments, the values of the elements removed from and re-added to the image matrix may not change during compression and decompression.

The lookup table generation unit 950 may generate a lookup table. In some embodiments, the lookup table may include the positions and directions of the translations needed when an image matrix is rearranged. In some embodiments, the lookup table may include the conversion relationships between the elements of one or more image matrices; for example, the lookup table may include the image matrices of different levels described in FIG. 7 and the positional relationships of the image regions corresponding to the elements of those matrices.

The above description is only a specific embodiment of the present invention and should not be regarded as the only embodiment. Obviously, for those skilled in the art, after understanding the content and principle of the present invention, various modifications and changes in form and detail may be made without departing from the principle and structure of the present invention, but such modifications and changes remain within the protection scope of the claims of the present invention. For example, the lookup table generation unit 950 may be combined with the image matrix rearrangement sub-unit 920 into one sub-unit that implements the functions of both the lookup table generation unit 950 and the image matrix rearrangement sub-unit 920 described above.
FIG. 10 is a schematic diagram of image matrix processing according to some embodiments of the present application. As shown in FIG. 10, the image matrix 1010 corresponds to a scanning area jointly determined by the first imaging module 1011, the second imaging module 1012, the third imaging module 1013, and the fourth imaging module 1014. The first imaging module 1011 can be paired with the fourth imaging module 1014. The first imaging module 1011 and the fourth imaging module 1014 correspond to the first detector and the fourth detector, respectively. According to the descriptions of other embodiments of this application, a secondary computing node is assigned to the first imaging module and the fourth imaging module; this secondary computing node handles the response lines receivable by the detectors of the first imaging module and the fourth imaging module. As shown in FIG. 10, the shaded portion of the image matrix 1010 contains the elements that need to be updated and calculated during reconstruction after the first imaging module 1011 and the fourth imaging module 1014 are paired. The values of the elements corresponding to the other portions of the image matrix 1010 may not change during the reconstruction process.

In some embodiments, the image matrix 1010 can be compressed into the image matrix 1020, i.e., the elements in the upper and lower parts of the image matrix 1010 that do not change during this reconstruction can be removed. For example, the elements of the image matrix 1010 whose Z coordinates are Z1, Z2, Z3, Z18, Z19, or Z20 can be removed, compressing it into the image matrix 1020. Further, the image matrix 1020 can be rearranged and compressed into the image matrix 1030, i.e., the elements of the image matrix 1020 whose values may change during reconstruction are translated and gathered. More specifically, each T column of the image matrix 1020 can be translated; for example, the elements Z9, Z10, Z11, and Z12 in column T1 are removed, and the remaining elements in column T1 are translated. In some embodiments, the positions of the removed elements and the positions and directions of the element translations can be obtained by querying a lookup table.

As shown in FIG. 10, the image matrix 1010 (20x10) is compressed and rearranged into the image matrix 1030 (10x10) without affecting the reconstruction, reducing the storage space and the amount of computation. In some embodiments, the compressed and rearranged image matrix is stored in the storage module 250. The lookup table may record the information used to compress and rearrange the image matrix, and this information may also be stored in the storage module 250.
FIG. 11 is a flowchart of image matrix reconstruction according to some embodiments of the present application. In some embodiments, the image matrix reconstruction may be implemented by the reconstruction module 220. As shown in FIG. 11, the reconstruction module may first determine the primary computing node and the secondary computing nodes in step 1102. According to the descriptions in other embodiments of this application, a secondary computing node can calculate a sub-image matrix, which corresponds to a sub-scanning area. In some embodiments, a sub-scanning area is formed by one or more detectors. In some embodiments, a secondary computing node may calculate the values of the elements of the sub-image matrix corresponding to a sub-scanning area according to the counts of the detectors corresponding to that sub-scanning area. In some embodiments, a secondary computing node handles the calculation of the image matrix corresponding to one pair of paired imaging modules. In some embodiments, the primary computing node may merge and integrate the computation results of the secondary computing nodes.

As shown in step 1104, image matrices can be assigned to the secondary computing nodes. In some embodiments, each secondary computing node handles the calculation of the image matrix corresponding to one pair of paired imaging modules.

In step 1106 and step 1108, the image matrices corresponding to the paired imaging modules are compressed and rearranged. For the methods of compression and rearrangement, refer to the descriptions in other embodiments of this application. It is worth noting that the paired imaging modules corresponding to different secondary computing nodes may differ, and the required compression and rearrangement methods may differ accordingly. For example, the secondary computing node in FIG. 10 calculates the image matrix corresponding to the first imaging module 1011 and the fourth imaging module 1014; that image matrix needs to be compressed and rearranged, and the method of compression and rearrangement is determined by the shaded portion between the first imaging module 1011 and the fourth imaging module 1014. In some embodiments, a secondary computing node may calculate the image matrix corresponding to the first imaging module 1011 alone (i.e., the region delimited by the first detector, which appears as a rectangle, not marked in FIG. 10); that secondary computing node only needs to compress the image matrix, i.e., it only calculates the values of the voxels corresponding to the region within the first imaging module 1011.

In step 1110, the forward/back projection results under a single subset are calculated. In some embodiments, the forward projection result refers to the counts of the detectors of a pair of paired imaging modules corresponding to the response lines, calculated from the image matrix being reconstructed. In some embodiments, the back projection result refers to the values of the elements of the image matrix, calculated and reconstructed from the counts of the detectors of the pair of paired imaging modules corresponding to the response lines. During forward/back projection, coordinate transformation can be applied to the rearranged image matrix through a lookup table. In some embodiments, all of the projection data can be divided into multiple groups, and one or more groups can form a subset; for example, the projection data can be grouped by projection direction. In some embodiments, the image to be reconstructed contains image matrices of different levels; as described in FIG. 4, image matrices of different levels correspond to different voxel sizes. Since a response line can pass through the regions corresponding to the image matrices of one or more levels, the forward/back projection can, according to the information about the image matrices of different levels recorded in a lookup table, calculate separately the contributions of one or more element sizes to that response line. In some embodiments, the lookup table contains the conversion relationships between the elements of one or more image matrices. After the image matrix has been reconstructed, i.e., the values of its elements calculated, the reconstruction module 220 may inversely rearrange the image matrix in step 1112. Inverse rearrangement restores the rearranged image matrix to the image matrix corresponding to the actual image region.

In step 1114, it is determined whether the back projection results of all angles under a single subset have been accumulated, i.e., whether the back projection results for all angles between the paired imaging modules have been calculated and accumulated. If not, the matrix must be re-compressed, rearranged, back-projected, and inversely rearranged for the next angle (i.e., steps 1106-1112). If so, the reconstruction module can decompress the image matrix in step 1116; the size of the decompressed image matrix equals its size before compression.

In step 1118, the primary computing node accumulates the back projection results of all the secondary computing nodes. In some embodiments, the decompressed image matrices obtained at different angles in step 1116 may be of the same size, and the primary computing node may add, position by position, the element values of the decompressed image matrices at the different angles to obtain an accumulated image matrix.

After the accumulation is completed, in step 1120 the primary computing node may update the image matrix according to the accumulated results and process the next subset. Once the update is completed, one subset of the reconstruction of the image matrix is considered done. In some embodiments, the reconstruction module 220 may reconstruct the next subset of the image matrix and update the image matrix according to the reconstruction results until all subsets have been traversed. If all subsets have been traversed, the subsequent steps are carried out; if other subsets remain, the flow returns to step 1110 to recalculate the forward/back projection results of the secondary nodes under a single subset. According to the descriptions of other embodiments of this application, the image matrix can be reconstructed by the ordered subset expectation maximization (OSEM) method. After all of the above subsets have been traversed, a reconstructed image matrix is obtained and one iteration is completed.

In step 1124, it is determined whether the iteration stop condition is satisfied; if so, the reconstruction process ends. If not, the flow returns to step 1104, the next iteration begins, and the image matrices are re-assigned to the secondary computing nodes. The iteration stop condition may be related to the image matrix reconstructed in the current iteration, or it may be set manually. In some embodiments, the stop condition may be that the difference between the image matrix reconstructed in this iteration and that of the previous iteration is smaller than a threshold, or that the image matrix reconstructed in this iteration directly satisfies certain conditions. In still other embodiments, the stop condition may be the completion of a certain number of iterations.

The above description is only a specific embodiment of the present invention and should not be regarded as the only embodiment. Obviously, for those skilled in the art, after understanding the content and principle of the present invention, various modifications and changes in form and detail may be made without departing from the principle and structure of the present invention, but such modifications and changes remain within the protection scope of the claims of the present invention. For example, optionally, a forward point spread function model may be introduced before the image matrix is rearranged, and a reverse point spread model may be introduced before the matrix is inversely rearranged, to correct the image reconstruction process.
FIG. 12 is a flowchart of image matrix processing according to some embodiments of the present application. In some embodiments, the image matrix processing may be implemented by the image matrix processing unit 350. In some embodiments, the process 1200 may correspond to steps 1106 through 1114 shown in FIG. 11. According to the descriptions in other embodiments of this application, the imaging device may be assembled from one or more imaging modules; for example, the detectors of the one or more imaging modules are placed continuously around the target. In some embodiments, the calculation for each module pairing can be performed by the secondary computing nodes described in other embodiments, and the primary computing node can integrate and tally the results of all of the secondary computing nodes. In some embodiments, each pair of paired imaging modules may correspond to one image matrix. In step 1202, the image matrix processing unit 350 may module-compress the image matrix corresponding to a module pairing to form a third image matrix.

The third image matrix can be rotated in the forward direction in step 1204. According to the information about the paired imaging modules corresponding to the third image matrix, the image matrix processing unit 350 can calculate a reference layer position and an effective matrix range, which may indicate the positions and directions by which the elements of the third image matrix need to be translated in the subsequent rearrangement step. In some embodiments, the image matrix processing unit 350 may store, in a lookup table, the positions and directions by which the respective elements need to be translated in the subsequent rearrangement step.

As shown in step 1210, the image matrix processing unit 350 may generate a fourth image matrix based on the lookup table and the third image matrix. In some embodiments, the fourth image matrix can be obtained from the third image matrix by the matrix rearrangement method described in other embodiments of this application.

In step 1212, the fourth image matrix is forward-projected and a projection matrix is generated. From the result of the forward projection, the image matrix processing unit 350 can calculate a correction coefficient, which may be the ratio of the counts measured on a response line to the forward projection of the reconstructed image along that response line. As shown in step 1216, the projection matrix can be back-projected to produce a fifth image matrix. In some embodiments, the generation of the fifth image matrix may be based on the correction coefficients.

The image matrix processing unit 350 may inversely rearrange the fifth image matrix in step 1218 to generate a sixth image matrix. Further, the sixth image matrix may be reversely rotated in step 1220. In some embodiments, the orientation and size of the third image matrix are consistent with those of the sixth image matrix.
The basic concepts have been described above. Obviously, for those skilled in the art, the above disclosure of the invention is merely an example and does not constitute a limitation of the present application. Although not explicitly stated here, those skilled in the art may make various modifications, improvements, and corrections to this application; such modifications, improvements, and corrections are suggested in this application and remain within the spirit and scope of its exemplary embodiments.

Meanwhile, this application uses specific words to describe its embodiments. Terms such as "one embodiment," "an embodiment," and/or "some embodiments" mean a feature, structure, or characteristic related to at least one embodiment of this application. Therefore, it should be emphasized and noted that "an embodiment," "one embodiment," or "an alternative embodiment" mentioned two or more times in different places in this specification does not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of this application may be combined as appropriate.

Furthermore, those skilled in the art will appreciate that the aspects of this application can be illustrated and described through a number of patentable categories or contexts, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, the various aspects of this application may be executed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software. The above hardware or software may all be referred to as a "data block," "module," "engine," "unit," "component," or "system." In addition, aspects of this application may take the form of a computer product located in one or more computer-readable media, the product including computer-readable program code.

A computer-readable signal medium may contain a propagated data signal with computer program code embodied therein, for example, in baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic form, optical form, and the like, or a suitable combination. A computer-readable signal medium may be any computer-readable medium other than a computer-readable storage medium that can communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer-readable signal medium may be propagated through any suitable medium, including radio, cable, fiber-optic cable, RF, or similar media, or any combination of the above media.

The computer program code required for the operation of each part of this application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may run entirely on the user's computer, as a stand-alone software package on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, via the Internet), or used in a cloud computing environment, or as a service, such as software as a service (SaaS).

Furthermore, unless explicitly stated in the claims, the order of the processing elements and sequences described in this application, the use of alphanumeric labels, or the use of other names is not intended to limit the order of the procedures and methods of this application. Although the above disclosure discusses, through various examples, some embodiments of the invention currently considered useful, it should be understood that such details serve only the purpose of illustration, and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent combinations that conform to the substance and scope of the embodiments of this application. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by a software-only solution, such as installing the described system on an existing server or mobile device.

Similarly, it should be noted that, in order to simplify the presentation of this disclosure and thereby aid the understanding of one or more embodiments of the invention, the foregoing description of the embodiments of this application sometimes groups multiple features into one embodiment, one drawing, or the description thereof. This method of disclosure, however, does not mean that the subject matter of this application requires more features than are mentioned in the claims. In fact, the features of an embodiment are fewer than all of the features of a single embodiment disclosed above.

Some embodiments use numbers describing the quantities of components and attributes; it should be understood that such numbers used in the description of embodiments are, in some examples, modified by the words "about," "approximately," or "substantially." Unless otherwise stated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may change depending on the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and adopt a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this application are approximations, in specific embodiments such values are set as precisely as practicable.

For each patent, patent application, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, documents, and the like, the entire contents are hereby incorporated into this application by reference. Application history documents that are inconsistent with or conflict with the contents of this application are excluded, as are documents (currently or subsequently attached to this application) that limit the broadest scope of the claims of this application. It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the accompanying materials of this application and the contents of this application, the descriptions, definitions, and/or use of terms in this application shall prevail.

Finally, it should be understood that the embodiments described in this application are merely illustrative of the principles of the embodiments of this application. Other variations may also fall within the scope of this application. Accordingly, by way of example and not limitation, alternative configurations of the embodiments of this application may be regarded as consistent with the teachings of this application. Accordingly, the embodiments of this application are not limited to the embodiments explicitly introduced and described in this application.

Claims (30)

  1. An image reconstruction method, comprising:
    determining a first region of an object;
    setting the size of first voxels corresponding to the first region;
    determining a second region of the object;
    setting the size of second voxels corresponding to the second region;
    acquiring scan data of the object; and
    reconstructing, from the scan data, a first region image, including forward projection of the first voxels and the second voxels, and back projection of the first voxels.
  2. The method of claim 1, further comprising: reconstructing, from the scan data, a second region image, including forward projection of the first voxels and the second voxels, and back projection of the second voxels.
  3. The method of claim 1, wherein the second region is spatially contiguous with the first region.
  4. The method of claim 1, wherein the second region is spatially non-contiguous with the first region.
  5. The method of claim 1, wherein reconstructing the first region image includes applying a first filtering process to the first region image, and reconstructing the second region image includes applying a second filtering process to the second region image.
  6. The method of claim 2, wherein reconstructing the first region image comprises iteratively reconstructing the first region image from the scan data, and reconstructing the second region image comprises iteratively reconstructing the second region image from the scan data.
  7. The method of claim 6, wherein the number of iterations for reconstructing the first region image differs from the number of iterations for reconstructing the second region image.
  8. The method of claim 2, wherein the iterative reconstruction of the first region image or the iterative reconstruction of the second region image is based on the ordered subset expectation maximization method.
  9. The method of claim 1, wherein the forward projection of the first voxels and the second voxels comprises forward-projecting the first voxels and the second voxels along one response line.
  10. The method of claim 2, comprising: correcting the first region image and the second region image.
  11. The method of claim 1, comprising: acquiring structural information of the object; and determining the first region and the second region according to the structural information.
  12. The method of claim 2, comprising:
    determining a first image matrix, the first voxels being stored in the first image matrix, wherein reconstructing the first region image comprises reconstructing the first image matrix; and
    determining a second image matrix, the second voxels being stored in the second image matrix, wherein reconstructing the second region image comprises reconstructing the second image matrix.
  13. The method of claim 12, comprising: generating a lookup table that records the correspondence between the first image matrix and the first voxels, or the correspondence between the second image matrix and the second voxels.
  14. The method of claim 13, wherein the correspondence between the first image matrix and the first voxels includes storing the first voxels in the first image matrix after rearrangement.
  15. The method of claim 13, wherein the correspondence between the first image matrix and the first voxels includes storing the first voxels in the first image matrix after compression and rearrangement.
  16. The method of claim 12, comprising:
    generating a merged matrix, the voxel size corresponding to the merged matrix being the smaller of the first voxel and the second voxel; and
    filling the first image matrix and the second image matrix into the merged matrix, generating the final image matrix corresponding to the final image.
  17. An image reconstruction method, comprising:
    determining an image matrix, the image matrix corresponding to a scanning area and to a certain number of voxels;
    dividing the image matrix into multiple sub-image matrices, at least one of the multiple sub-image matrices corresponding to a sub-scanning area of the scanning area and containing part of the voxels;
    transforming the at least one sub-image matrix, generating at least one transformed matrix;
    reconstructing the at least one sub-image matrix based on the transformed matrix; and
    reconstructing the image matrix based on the reconstructed at least one sub-image matrix.
  18. The method of claim 17, wherein the transformation comprises translating at least some of the elements of the sub-image matrix.
  19. The method of claim 17, wherein the transformation comprises compressing or rearranging the sub-image matrix.
  20. The method of claim 19, comprising building a lookup table of the image matrix and the sub-image matrices, the lookup table recording the manner of the compression or the manner of the rearrangement of the sub-image matrices.
  21. The method of claim 20, wherein the transformation comprises decompressing the sub-image matrices according to the lookup table.
  22. An image reconstruction system, comprising:
    an imaging device configured to generate scan data of an object; and
    an image processor comprising:
    a receiving module configured to acquire a first region of the object and the size of first voxels corresponding to the first region, and to acquire a second region of the object and the size of second voxels corresponding to the second region; and
    a reconstruction module configured to reconstruct a first region image, including forward projection of the first voxels and the second voxels, and back projection of the first voxels.
  23. The system of claim 22, wherein the reconstruction module is further configured to reconstruct a second region image, including forward projection of the first voxels and the second voxels, and back projection of the second voxels.
  24. The system of claim 23, comprising a post-processing module configured to obtain the first region image and the second region image through post-processing, the post-processing including filtering, noise reduction, merging, or partitioning.
  25. The system of claim 23, wherein the reconstruction module further comprises an image matrix generation unit configured to:
    determine a first image matrix, the first voxels being stored in the first image matrix, wherein reconstructing the first region image comprises reconstructing the first image matrix; and
    determine a second image matrix, the second voxels being stored in the second image matrix, wherein reconstructing the second region image comprises reconstructing the second image matrix.
  26. The system of claim 25, wherein the reconstruction module further comprises an image matrix processing unit configured to perform at least one of rotation, compression and decompression, rearrangement and inverse rearrangement, filling, decomposition, and merging on the first image matrix and the second image matrix.
  27. The system of claim 25, wherein the image matrix processing unit further comprises a lookup table generation unit configured to generate a lookup table, the lookup table recording the correspondence between the first image matrix and the first voxels, and the correspondence between the second image matrix and the second voxels.
  28. The system of claim 27, wherein the correspondence between the first image matrix and the first voxels includes storing the first voxels in the first image matrix after compression or rearrangement.
  29. The system of claim 25, wherein the post-processing module comprises:
    a merging unit configured to generate a merged matrix, fill the first image matrix and the second image matrix into the merged matrix, and generate the final image matrix corresponding to the final image;
    wherein the voxel size corresponding to the merged matrix is the smaller of the first voxel and the second voxel.
  30. An image reconstruction system, comprising:
    an image matrix generation unit configured to generate an image matrix, the image matrix corresponding to a scanning area and to a certain number of voxels; and
    an image matrix processing unit configured to divide the image matrix into multiple sub-image matrices, compress at least one of the multiple sub-image matrices, reconstruct the at least one sub-image matrix based on the compressed sub-image matrix, and reconstruct the image matrix based on the reconstructed at least one sub-image matrix.
PCT/CN2016/092881 2016-08-02 2016-08-02 Image reconstruction method and system WO2018023380A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2016/092881 WO2018023380A1 (zh) 2016-08-02 2016-08-02 Image reconstruction method and system
US15/394,633 US10347014B2 (en) 2016-08-02 2016-12-29 System and method for image reconstruction
US16/448,052 US11308662B2 (en) 2016-08-02 2019-06-21 System and method for image reconstruction
US17/659,660 US11869120B2 (en) 2016-08-02 2022-04-18 System and method for image reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/092881 WO2018023380A1 (zh) 2016-08-02 2016-08-02 Image reconstruction method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/394,633 Continuation US10347014B2 (en) 2016-08-02 2016-12-29 System and method for image reconstruction

Publications (1)

Publication Number Publication Date
WO2018023380A1 true WO2018023380A1 (zh) 2018-02-08

Family

ID=61072344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/092881 WO2018023380A1 (zh) 2016-08-02 2016-08-02 Image reconstruction method and system

Country Status (1)

Country Link
WO (1) WO2018023380A1 (zh)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610719A (zh) * 2007-02-07 2009-12-23 Koninklijke Philips Electronics N.V. Motion estimation in treatment planning
WO2013177661A1 (en) * 2012-05-29 2013-12-05 University Of Manitoba Systems and methods for improving the quality of images in a pet scan
CN104751499A (zh) * 2013-12-31 2015-07-01 Shanghai United Imaging Healthcare Co., Ltd. PET two-dimensional image reconstruction method and device
CN105389788A (zh) * 2015-10-13 2016-03-09 Shenyang Neusoft Medical Systems Co., Ltd. Method and device for reconstructing, and method and device for merging, multi-bed PET images
CN105741303A (zh) * 2016-02-26 2016-07-06 Shanghai United Imaging Healthcare Co., Ltd. Method for acquiring medical images
CN106296765A (zh) * 2016-08-02 2017-01-04 Shanghai United Imaging Healthcare Co., Ltd. Image reconstruction method and system
CN106296764A (zh) * 2016-08-02 2017-01-04 Shanghai United Imaging Healthcare Co., Ltd. Image reconstruction method and system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017228A (zh) * 2019-05-31 2020-12-01 Huawei Technologies Co., Ltd. Method for three-dimensional reconstruction of an object and related device

Similar Documents

Publication Publication Date Title
CN106296764B (zh) Image reconstruction method and system
CN106296765B (zh) Image reconstruction method and system
JP5543448B2 (ja) High-efficiency computed tomography
CN103514615B (zh) Method and apparatus for iterative reconstruction
CN103514629B (zh) Method and device for iterative reconstruction
US11869120B2 (en) System and method for image reconstruction
US11244481B2 (en) Multi-scale image reconstruction of three-dimensional objects
JP7463317B2 (ja) ホモグラフィ再サンプリング変換による再投影および逆投影のためのシステムおよび方法
CN107016672B (zh) Reconstruction method and device for medical scan images, and medical imaging system
WO2010011676A2 (en) Incorporation of mathematical constraints in methods for dose reduction and image enhancement in tomography
JP2003529423A (ja) Fast hierarchical reprojection algorithm for tomography
WO2018023380A1 (zh) Image reconstruction method and system
WO2023156242A1 (en) Learned invertible reconstruction
US10347014B2 (en) System and method for image reconstruction
Galve et al. Super-iterative image reconstruction in PET
JP7139397B2 (ja) ディープニューラルネットワークおよび測定データの再帰的間引きを使用して医用画像を再構築するためのシステムおよび方法
CN115205415A (zh) CT mean image generation method, apparatus, system and computer device
JP7459243B2 (ja) 1以上のニューラルネットワークとしての画像形成のモデル化による画像再構成
US20240185484A1 (en) System and method for image reconstruction
WO2024078049A1 (en) System and method for near real-time and unsupervised coordinate projection network for computed tomography images reconstruction
WO2018133003A1 (zh) CT three-dimensional reconstruction method and system
US11398064B2 (en) Real time reconstruction-native image element resampling for high definition image generation and processing
CN118034598A (zh) Data storage method and apparatus, and computer device
Der Sarkissian et al. Rotations in the Mojette space
Zheng et al. Reconstruction and visualization of model-based volume representations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16910978

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16910978

Country of ref document: EP

Kind code of ref document: A1