WO2022213321A1 - Wavelet fusion-based pet image reconstruction method - Google Patents

Wavelet fusion-based PET image reconstruction method

Info

Publication number
WO2022213321A1
WO2022213321A1 (PCT/CN2021/085939)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pet
frequency
wavelet
low
Prior art date
Application number
PCT/CN2021/085939
Other languages
French (fr)
Chinese (zh)
Inventor
万丽雯
李彦明
郑海荣
刘新
任溥弦
Original Assignee
深圳高性能医疗器械国家研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳高性能医疗器械国家研究院有限公司 filed Critical 深圳高性能医疗器械国家研究院有限公司
Priority to PCT/CN2021/085939 priority Critical patent/WO2022213321A1/en
Publication of WO2022213321A1 publication Critical patent/WO2022213321A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Definitions

  • the invention relates to the technical field of medical image processing, and more particularly, to a PET image reconstruction method based on wavelet fusion.
  • PET Positron emission tomography
  • ALARA As Low As Reasonably Achievable
  • the purpose of the present invention is to overcome the above-mentioned defects of the prior art and to provide a PET image reconstruction method based on wavelet fusion. Based on the wavelet transform, the images reconstructed by Kernel EM and by EM are decomposed separately, features are extracted, and frame-by-frame fusion is performed, so that PET image reconstruction yields relatively clear images at reduced doses.
  • the technical solution of the present invention is to provide a PET image reconstruction method based on wavelet fusion.
  • the method includes the following steps:
  • within each frequency channel, the wavelet coefficients are processed according to the set fusion criteria, and an inverse wavelet transform is applied to the processed new wavelet coefficients to obtain the fused image;
  • the fused image is evaluated and the optimal solution is obtained iteratively, and the obtained fused image that meets the set evaluation criteria is used as the reconstructed image.
  • the advantage of the present invention is that, in order to solve the technical problem of improving low-dose image quality, the present invention builds on the reconstruction algorithms and exploits their respective strengths: wavelet decomposition converts the images from the time domain to the frequency domain, where the advantages of the respective reconstruction results are extracted in the form of weights and fused, improving the quality of the reconstructed image through wavelet-based extraction and fusion of high- and low-frequency features.
  • FIG. 1 is a flowchart of a method for reconstructing a PET image based on wavelet fusion according to an embodiment of the present invention
  • FIG. 2 is a schematic process diagram of a method for reconstructing a PET image based on wavelet fusion according to an embodiment of the present invention
  • FIG. 3 is a frequency distribution diagram after performing three-layer wavelet transform on an image according to an embodiment of the present invention
  • FIG. 4 is a schematic process diagram of a method for reconstructing a PET image based on wavelet fusion according to another embodiment of the present invention.
  • the PET image reconstruction method based on wavelet fusion includes the following steps.
  • Step S1 reconstruct the PET image to obtain a reconstructed image.
  • H_ij is a constant element of the system response matrix, expressed as the probability that a gamma photon emitted from the jth pixel is detected by the ith LOR
  • X is a one-dimensional array representing the two-dimensional image.
  • const represents terms independent of X.
  • the original random variable A_ij is replaced by its expectation, computed under the condition that X, the maximum-likelihood estimate of the expected number of photons emitted from all pixels in all directions, is known, so as to achieve iterative updating.
  • step S2 the dynamic frame in the dynamic PET is fused into the forward projection model as prior information to obtain a reconstructed image.
  • represents the coefficient vector of K
  • P represents the system matrix
  • r represents scattered photons and random events
  • K is the key quantity
  • K encodes, via the KNN method, the relationship between each pixel f_i and its neighbors f_j.
  • KNN k-nearest neighbor algorithm
  • Step S1 is used as the basis for reconstructing the image, and the time frame is divided into three time periods to be accumulated to obtain features, which are used as prior information to reconstruct the PET image.
  • bP(a) is the penalty function
  • b is its regularization parameter
  • the reconstructed image x has been regularized by K, so when estimating the maximum likelihood function, the regularization parameter b can be set to 0, and the simplified formula is:
  • similar to step S1, after performing the expectation calculation and logarithmic simplification, the iterative formula based on K can be obtained:
  • the present invention reconstructs the dynamic PET images to obtain the slice images corresponding to a first group of time frames, and reconstructs the PET images with the kernel-based method to obtain the slice images corresponding to a second group of time frames, where the first group and the second group of reconstruction results are PET reconstructed images obtained in different ways.
  • step S3 the images obtained in step S1 and step S2 are extracted frame by frame using wavelet transform for fusion.
  • The wavelet transform decomposes the image into sub-images of the different frequency bands in the frequency domain, which represent the feature components of the original image. That is, after wavelet decomposition the image is split into a low-frequency image and high-frequency images, and the low-frequency and high-frequency coefficients are obtained at the same time.
  • the low-frequency image contains low-frequency components
  • the high-frequency image contains high-frequency components.
  • the high frequency components are the details of the image.
  • different fusion criteria are used to process the wavelet coefficients according to the differences among the wavelet coefficients.
  • the processed new wavelet coefficients preserve more frequency band features.
  • inverse wavelet transform is performed on the new wavelet coefficients to obtain the fused image.
  • the image is decomposed into four frequency bands: LL, HL, LH, and HH.
  • this band maintains the content information of the original image; the energy of the image is concentrated in this band
  • this band maintains the high frequency information of the image in the diagonal direction:
  • f(k,l) represents the details of the image projected at each scale.
  • (m,n) represents the pixels of the image, and j represents the number of layers of low-pass and high-pass decomposition coefficients.
  • for the low-frequency information L1(j,k) and L2(j,k) of the two images, the low-frequency information L(j,k) of the fused image is obtained by averaging.
  • step S4 the fusion result is evaluated and an optimal solution is obtained iteratively.
  • the mean value and the standard deviation are used to judge the fusion result.
  • the mean value can reflect the average brightness of the image
  • the standard deviation reflects the degree of dispersion of the image gray level relative to the mean value.
  • a larger standard deviation means that most values are more different from their mean; a smaller standard deviation means that these values are closer to the mean. That is, the smaller the standard deviation, the more accurate the data.
  • the fusion results are evaluated using the information entropy of the image, which defines the average amount of information obtained when observing the output of a single source symbol.
  • the maximum entropy occurs when the probability of occurrence of each symbol of the source is equal, and the source provides the maximum possible average information amount of the source symbol at this time. The greater the information entropy, the greater the amount of information contained in the image, that is, the better the fusion effect.
  • the average gradient can be used to represent an improvement in the sharpness of the image.
  • Image gradients can also reflect small detail contrast and texture transformation characteristics in the image:
  • a and B represent the size of the image
  • ⁇ x f(i, j) and ⁇ y f(i, j) are the first-order differences of the pixel (i, j) in the x and y directions, respectively
  • Each frame of dynamic PET imaging is iteratively fused to obtain a wavelet-fused dynamic PET image.
  • besides the images obtained in step S1 and step S2, the present invention, after suitable modification, can also fuse the results of other reconstruction algorithms according to their reconstruction quality, for example the filtered back-projection reconstruction method, least-squares based PET image reconstruction, and so on.
  • the present invention performs frame-by-frame fusion for details of high-frequency components and contours of low-frequency components by considering the differences in the results of different PET image reconstruction algorithms, thereby reducing image noise after image reconstruction and improving image details.
  • the reconstructed images based on low-dose images can preserve the contours and details well, and improve the fitting degree between the net inflow of the tumor area and the true value.
  • the present invention may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present invention.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read only memory (CD-ROM), digital versatile disk (DVD), memory sticks, floppy disks, and mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above.
  • RAM random access memory
  • ROM read only memory
  • EPROM erasable programmable read only memory
  • SRAM static random access memory
  • CD-ROM compact disk read only memory
  • DVD digital versatile disk
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • the computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, and Python, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • LAN local area network
  • WAN wide area network
  • custom electronic circuits such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs)
  • FPGAs field programmable gate arrays
  • PLAs programmable logic arrays
  • Computer readable program instructions are executed to implement various aspects of the present invention.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
  • These computer readable program instructions can also be stored in a computer readable storage medium and can direct a computer, programmable data processing apparatus and/or other equipment to operate in a specific manner, so that the computer readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
  • Computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, thereby causing the instructions executing on the computer, other programmable data processing apparatus, or other device to implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Disclosed in the present invention is a wavelet fusion-based PET image reconstruction method. The method comprises: performing wavelet decomposition on a first PET image to obtain a corresponding first high-frequency image and a corresponding first low-frequency image, and performing wavelet decomposition on a second PET image to obtain a corresponding second high-frequency image and a corresponding second low-frequency image, wherein the first PET image and the second PET image are PET reconstructed images obtained by means of different modes; within different frequency channels, processing a wavelet coefficient by using a set fusion rule according to the wavelet coefficient, and performing inverse wavelet transformation on a processed new wavelet coefficient to obtain a fused image; and evaluating the fused image and iterating to obtain an optimal solution, and using the obtained fused image that meets a set evaluation criterion as a reconstructed image. By means of the reconstructed image obtained in the present invention, the contour and details of an image can be well preserved.

Description

A PET Image Reconstruction Method Based on Wavelet Fusion

TECHNICAL FIELD

The invention relates to the technical field of medical image processing, and more particularly, to a PET image reconstruction method based on wavelet fusion.

BACKGROUND ART

Positron emission tomography (PET) is a highly sensitive molecular imaging method for multifunctional metabolic imaging and is widely used clinically. PET images can reveal tumors at an early stage and help detect cardiovascular and neurological diseases. However, PET scanning requires injecting a tracer into the patient, and although long dynamic scans and high tracer doses make PET images clearer, they increase the burden on the patient. It is therefore essential, while still satisfying the needs of clinical diagnosis, to apply the ALARA (As Low As Reasonably Achievable) principle and keep the radiation dose to the patient as low as possible. However, as the dose decreases, PET image reconstruction yields poorer image quality, which also affects the accuracy of the Patlak plot and other derived data. Research and development of new low-dose PET image reconstruction methods that reduce the injected dose while preserving the quality of the reconstructed images and of the Patlak plot therefore have promising clinical prospects: such methods are more acceptable to patients and reduce the risks associated with high doses.

In the prior art there are many PET image reconstruction algorithms, each with its own advantages and disadvantages. For parametric image reconstruction, EM (expectation maximization) can reconstruct the image well, but the result is noisy and requires high counts, while MR-guided Kernel EM can reduce noise and reconstruct the image at lower counts (the counts being the numbers of oppositely directed photon pairs produced by positron-electron annihilation), but its details are not good enough, and the quantity of interest in the parametric image, the net influx rate Ki, is underestimated relative to the true value.
SUMMARY OF THE INVENTION

The purpose of the present invention is to overcome the above-mentioned defects of the prior art and to provide a PET image reconstruction method based on wavelet fusion, in which the images reconstructed by Kernel EM and by EM are decomposed separately with the wavelet transform, features are extracted, and frame-by-frame fusion is performed, so that PET image reconstruction can produce relatively clear images at reduced doses.

The technical solution of the present invention is to provide a PET image reconstruction method based on wavelet fusion. The method includes the following steps:

performing wavelet decomposition on a first PET image to obtain a corresponding first high-frequency image and first low-frequency image, and performing wavelet decomposition on a second PET image to obtain a corresponding second high-frequency image and second low-frequency image, where the first PET image and the second PET image are PET reconstructed images obtained in different ways;

within the different frequency channels, processing the wavelet coefficients according to the set fusion criteria, and applying the inverse wavelet transform to the processed new wavelet coefficients to obtain the fused image;

evaluating the fused image and iterating to obtain the optimal solution, and taking the fused image that satisfies the set evaluation criteria as the reconstructed image.

Compared with the prior art, the advantage of the present invention is that, to solve the technical problem of improving low-dose image quality, the invention builds on existing reconstruction algorithms and exploits their respective strengths: wavelet decomposition converts the images from the time domain to the frequency domain, where the advantages of the respective reconstruction results are extracted in the form of weights and fused, improving the quality of the reconstructed image. Wavelets are used to extract high- and low-frequency information for feature fusion, and the fusion decisions draw on the strengths of the different image reconstruction methods to improve image quality.
Other features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.

DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.

FIG. 1 is a flowchart of a wavelet fusion-based PET image reconstruction method according to an embodiment of the present invention;

FIG. 2 is a schematic process diagram of a wavelet fusion-based PET image reconstruction method according to an embodiment of the present invention;

FIG. 3 is a frequency distribution diagram after a three-level wavelet transform of an image according to an embodiment of the present invention;

FIG. 4 is a schematic process diagram of a wavelet fusion-based PET image reconstruction method according to another embodiment of the present invention.

DETAILED DESCRIPTION

Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the invention.

The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.

Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and apparatus should be considered part of the specification.

In all examples shown and discussed herein, any specific values should be construed as illustrative only and not as limiting. Accordingly, other instances of the exemplary embodiments may have different values.

It should be noted that similar reference numerals and letters refer to similar items in the following figures; therefore, once an item is defined in one figure, it need not be discussed further in subsequent figures.
As shown in FIG. 1 and FIG. 2, the wavelet fusion-based PET image reconstruction method provided by the present invention includes the following steps.

Step S1: reconstruct the PET image to obtain a reconstructed image.
Specifically, the likelihood function constructed from the data set A is expressed as:

$$L(A\mid X)=\prod_{ij} p\big(A_{ij}\mid \beta = H_{ij}X_j\big) \qquad (1)$$

where p(·) is the probability that a Poisson-distributed variable with expectation

$$E(\beta)=H_{ij}X_j \qquad (2)$$

takes the value A_ij. Here H_ij is a constant element of the system response matrix, expressed as the probability that a gamma photon emitted from the jth pixel is detected by the ith LOR, and X is a one-dimensional array representing the two-dimensional image.

The probability distribution function of A_ij is then expressed as:

$$p\big(A_{ij}\mid \beta\big)=\frac{e^{-\beta}\,\beta^{A_{ij}}}{A_{ij}!} \qquad (3)$$

After taking the logarithm and simplifying, one obtains

$$L(A\mid X)=\sum_{ij}\big(A_{ij}\ln\left(H_{ij}X_j\right)-H_{ij}X_j\big)+const \qquad (4)$$

where const represents terms independent of X.

In order to eliminate the random variable A_ij in the above expression, A_ij is replaced by its expectation, computed under the condition that X, the maximum-likelihood estimate of the expected number of photons emitted from all pixels in all directions, is known, thereby achieving iterative updating.

Because iterative updates are needed to reconstruct the image, in each iteration the expectation of the current A_ij is used as the A_ij of the next iteration. X^current is the current image, abbreviated with the superscript c.

That is,

$$\hat{A}_{ij}=E\big(A_{ij}\mid P_i,\,X^{c}\big) \qquad (5)$$

where, conditioned on the measured count P_i of the ith LOR, A_ij follows the binomial distribution

$$A_{ij}\mid P_i \sim B\!\left(P_i,\ \frac{H_{ij}X_j^{c}}{\sum_{k} H_{ik}X_k^{c}}\right) \qquad (6)$$

By the expectation formula of the binomial distribution, if r ~ B(n, p) then E(r) = np; here n is P_i, p is

$$p=\frac{H_{ij}X_j^{c}}{\sum_{k} H_{ik}X_k^{c}}$$

and β_ij = H_ij X_j.

Therefore one obtains

$$\hat{A}_{ij}=P_i\,\frac{H_{ij}X_j^{c}}{\sum_{k} H_{ik}X_k^{c}} \qquad (7)$$

Taking the partial derivative of the previously obtained log-simplified expression L(A|X) = Σ_ij (A_ij ln(H_ij X_j) − H_ij X_j) to find its extremum:

$$\frac{\partial L(A\mid X)}{\partial X_j}=\sum_{i}\left(\frac{A_{ij}}{X_j}-H_{ij}\right) \qquad (8)$$

Substituting the obtained Â_ij and setting the partial derivative to zero to find the extremum:

$$\sum_{i}\left(\frac{\hat{A}_{ij}}{X_j}-H_{ij}\right)=0 \qquad (9)$$

$$X_j=\frac{\sum_{i}\hat{A}_{ij}}{\sum_{i}H_{ij}} \qquad (10)$$

Each time the current X_j is used as X^current, an updated X_j is obtained, which gives the iterative formula:

$$X_j^{new}=\frac{X_j^{c}}{\sum_{i}H_{ij}}\sum_{i}H_{ij}\,\frac{P_i}{\sum_{k}H_{ik}X_k^{c}} \qquad (11)$$
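For reference, the EM iteration of formula (11) can be sketched as follows in Python. This is a minimal illustration rather than the patent's implementation: the system matrix H, the measured sinogram P, the uniform initialization, and the iteration count are placeholder choices for the example.

```python
import numpy as np

def mlem(H, P, n_iter=50, eps=1e-12):
    """ML-EM image update of formula (11).
    H : (n_lor, n_pixel) system matrix, H[i, j] = probability that a photon
        emitted from pixel j is detected on LOR i.
    P : (n_lor,) measured counts per LOR.
    Returns the image estimate X of length n_pixel."""
    X = np.ones(H.shape[1])                # uniform initial image
    sens = H.sum(axis=0)                   # sensitivity image, sum_i H_ij
    for _ in range(n_iter):
        proj = H @ X                       # forward projection, sum_k H_ik X_k^c
        ratio = P / np.maximum(proj, eps)  # measured / estimated counts
        X = X / np.maximum(sens, eps) * (H.T @ ratio)
    return X
```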
Step S2: fuse the dynamic frames of the dynamic PET into the forward projection model as prior information to obtain a reconstructed image.

Specifically, the forward projection model of the raw data is

$$y=PK\alpha+r \qquad (12)$$

where α represents the coefficient vector of K, P represents the system matrix, r represents scattered and random events, and K is the key quantity: K encodes, via the KNN method, the relationship between each pixel f_i and its neighbors f_j,

$$K_{ij}=K(f_i,f_j) \qquad (13)$$

The KNN (k-nearest neighbor) search is carried out based on the Euclidean distance, and the dynamic frames of the dynamic PET are fused into the forward projection model as prior information, thereby changing the PET image reconstruction for low-count scans. Step S1 serves as the basis for reconstructing the image, and the time frames are divided into three time periods whose data are accumulated to obtain features, which are used as prior information for reconstructing the PET image.
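As an illustration of how such a kernel matrix can be formed, the sketch below builds K from the k nearest neighbors of per-pixel feature vectors (for instance the three accumulated time-period images). The choice of k, the Gaussian weighting, and the row normalization are assumptions made for the example, not details specified by the patent.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_kernel_matrix(features, k=48, sigma=1.0):
    """features: (n_pixels, n_features) array, e.g. the three accumulated
    time-period prior images stacked per pixel.
    Returns a dense kernel matrix K with row-normalized Gaussian KNN weights."""
    n = features.shape[0]
    nn = NearestNeighbors(n_neighbors=k).fit(features)
    dist, idx = nn.kneighbors(features)          # Euclidean distances to k neighbors

    K = np.zeros((n, n))
    w = np.exp(-dist ** 2 / (2.0 * sigma ** 2))  # Gaussian kernel weights
    for i in range(n):
        K[i, idx[i]] = w[i]
    K /= K.sum(axis=1, keepdims=True)            # row normalization

    return K

# Example: a 32x32 image with three prior feature values per pixel.
K = build_kernel_matrix(np.random.rand(32 * 32, 3))
```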
Similar to step S1, the projection model

$$y=PK\alpha+r \qquad (14)$$

is combined with the Poisson likelihood function to estimate the coefficient image of the kernel:

$$\hat{\alpha}=\arg\max_{\alpha}\ L\big(y\mid PK\alpha\big)-bP(\alpha) \qquad (15)$$

In the above formula, bP(α) is the penalty function and b is its regularization parameter. After the kernel-based coefficient image is obtained, the reconstructed image is obtained from the projection model:

$$\hat{x}=K\hat{\alpha} \qquad (16)$$

The reconstructed image x has already been regularized through K, so when estimating the maximum likelihood function the regularization parameter b can be set to 0, simplifying the above formula to:

$$\hat{\alpha}=\arg\max_{\alpha}\ L\big(y\mid PK\alpha\big) \qquad (17)$$

As in step S1, after performing the expectation calculation and logarithmic simplification, the K-based iterative formula is obtained:

$$\alpha^{n+1}=\frac{\alpha^{n}}{K^{T}P^{T}\mathbf{1}}\;\Big(K^{T}P^{T}\frac{y}{PK\alpha^{n}+r}\Big) \qquad (18)$$

where the multiplications and divisions between vectors are element-wise.
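Under the same placeholder conventions, the K-based iteration of formula (18) might be sketched as follows; the kernel matrix is assumed to come from a construction such as the KNN sketch above, and r is the estimated scatter-plus-randoms sinogram.

```python
import numpy as np

def kernel_em(Pmat, K, y, r, n_iter=50, eps=1e-12):
    """Kernelized EM update of formula (18).
    Pmat : (n_lor, n_pixel) system matrix, K : (n_pixel, n_pixel) kernel matrix,
    y : (n_lor,) measured sinogram, r : (n_lor,) scatter + randoms estimate.
    Returns the reconstructed image x = K @ alpha (formula (16))."""
    alpha = np.ones(K.shape[1])
    PK = Pmat @ K
    norm = PK.sum(axis=0)                      # equals K^T P^T 1
    for _ in range(n_iter):
        proj = PK @ alpha + r                  # forward projection of the current estimate
        alpha = alpha / np.maximum(norm, eps) * (PK.T @ (y / np.maximum(proj, eps)))
    return K @ alpha
```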
In summary, the present invention reconstructs the dynamic PET images to obtain the slice images corresponding to a first group of time frames, and reconstructs the PET images with the kernel-based method to obtain the slice images corresponding to a second group of time frames, where the first group and the second group of reconstruction results are PET reconstructed images obtained in different ways.

Step S3: use the wavelet transform to extract, frame by frame, the images obtained in step S1 and step S2 and fuse them.

The wavelet transform decomposes the image into sub-images of the different frequency bands in the frequency domain, which represent the feature components of the original image. That is, after wavelet decomposition the image is split into a low-frequency image and high-frequency images, and the low-frequency and high-frequency coefficients are obtained at the same time. The low-frequency image contains the low-frequency components and the high-frequency images contain the high-frequency components, where the low-frequency components correspond to the contours of the image and the high-frequency components correspond to its details. Then, in the different frequency channels, different fusion criteria are applied to the wavelet coefficients according to the coefficients themselves, and the processed new wavelet coefficients preserve more of the frequency-band features. Finally, the inverse wavelet transform is applied to the new wavelet coefficients to obtain the fused image.
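For illustration, the sub-band decomposition just described can be sketched with the PyWavelets library; the wavelet family ('haar') and the random test image are arbitrary choices made for the example.

```python
import numpy as np
import pywt

# Example image standing in for one reconstructed PET frame (random data here).
image = np.random.rand(128, 128)

# One level of the 2-D discrete wavelet transform. cA is the low-frequency (LL)
# approximation; cH, cV, cD are the three high-frequency detail sub-bands
# corresponding to the HL / LH / HH bands described in the text.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(cA.shape, cH.shape, cV.shape, cD.shape)   # each sub-band is half the size

# The image is recovered (up to numerical error) by the inverse transform.
restored = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
assert np.allclose(restored, image)
```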
Specifically, after each wavelet transform the image is decomposed into four frequency bands: LL, HL, LH, and HH.

LL band: this band preserves the content information of the original image, and the energy of the image is concentrated in this band:

$$f_{j+1}^{LL}(m,n)=\sum_{k}\sum_{l} h(k-2m)\,h(l-2n)\,f_j(k,l) \qquad (19)$$

HL band: this band preserves the high-frequency information of the image in the horizontal direction:

$$f_{j+1}^{HL}(m,n)=\sum_{k}\sum_{l} g(k-2m)\,h(l-2n)\,f_j(k,l) \qquad (20)$$

LH band: this band preserves the high-frequency information of the image in the vertical direction:

$$f_{j+1}^{LH}(m,n)=\sum_{k}\sum_{l} h(k-2m)\,g(l-2n)\,f_j(k,l) \qquad (21)$$

HH band: this band preserves the high-frequency information of the image in the diagonal direction:

$$f_{j+1}^{HH}(m,n)=\sum_{k}\sum_{l} g(k-2m)\,g(l-2n)\,f_j(k,l) \qquad (22)$$

where f_j(k,l) represents the detail of the image projected at scale j, h and g denote the low-pass and high-pass decomposition filters, (m,n) indexes the pixels of the image, and j denotes the level of the low-pass and high-pass decomposition coefficients.
For example, when a three-level wavelet transform is applied to the image, the resulting frequency distribution is shown in FIG. 3. It should be understood that more levels of wavelet decomposition may also be used.

In one embodiment, taking the three-level wavelet transform as an example, for the high-frequency sub-bands H1(j,k) and H2(j,k) of the two images, their maximum is taken as the high-frequency information H(j,k) of the fused image, with j = 1, 2, 3 denoting the decomposition level. For the low-frequency information L1(j,k) and L2(j,k) of the two images, the low-frequency information L(j,k) of the fused image is obtained by averaging.
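The fusion rule of this embodiment (maximum for the high-frequency coefficients, mean for the low-frequency coefficients) might be sketched as follows with PyWavelets. The wavelet family, the number of levels, the magnitude-based reading of the "maximum" rule, and the dummy inputs are assumptions made for the example; in practice the two inputs would be the reconstructions from steps S1 and S2.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet='haar', levels=3):
    """Fuse two reconstructions of the same frame: average the low-frequency
    band, keep the larger-magnitude coefficient in every high-frequency band."""
    c1 = pywt.wavedec2(img_a, wavelet, level=levels)
    c2 = pywt.wavedec2(img_b, wavelet, level=levels)

    # c[0] is the coarsest approximation (low-frequency band): average it.
    fused = [(c1[0] + c2[0]) / 2.0]

    # c[1:] are per-level tuples of detail bands: apply the maximum rule.
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(b1) >= np.abs(b2), b1, b2)
                           for b1, b2 in zip(d1, d2)))

    return pywt.waverec2(fused, wavelet)

# Usage with two dummy 128x128 images standing in for the EM and Kernel EM results.
fused_image = wavelet_fuse(np.random.rand(128, 128), np.random.rand(128, 128))
```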
Step S4: evaluate the fusion result and iterate to obtain the optimal solution.

For example, as shown in FIG. 4, the obtained fused image is judged using its mean value and standard deviation. The mean value reflects the average brightness of the image, while the standard deviation reflects how dispersed the image gray levels are relative to the mean. A larger standard deviation means that most values differ more from the mean; a smaller standard deviation means that the values are closer to the mean, that is, the smaller the standard deviation, the more accurate the data.

For example, the fusion result is also evaluated with the information entropy of the image, which is defined as the average amount of information obtained when observing the output of a single source symbol. The entropy reaches its maximum when all source symbols occur with equal probability, in which case the source provides the largest possible average information per symbol. The larger the information entropy, the more information the image contains, that is, the better the fusion effect.
For example, the average gradient can be used to represent the improvement in the sharpness of the image. The image gradient also reflects the contrast of fine details and the texture variation characteristics of the image:

$$\bar{g}=\frac{1}{A\times B}\sum_{i=1}^{A}\sum_{j=1}^{B}\sqrt{\frac{\Delta_x f(i,j)^{2}+\Delta_y f(i,j)^{2}}{2}} \qquad (23)$$

where A and B denote the dimensions of the image, and Δ_x f(i,j) and Δ_y f(i,j) are the first-order differences of the pixel (i,j) in the x and y directions, respectively:

$$\Delta_x f(i,j)=f(i+1,j)-f(i,j) \qquad (24)$$

$$\Delta_y f(i,j)=f(i,j+1)-f(i,j) \qquad (25)$$
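A compact sketch of these evaluation measures (mean, standard deviation, information entropy, and average gradient) is given below; the 256-bin histogram used for the entropy and the way the image borders are handled in the differences are choices made for the example, not values taken from the patent.

```python
import numpy as np

def fusion_metrics(img):
    """Mean, standard deviation, information entropy, and average gradient
    of a 2-D image, as used to judge the fusion result."""
    mean = img.mean()
    std = img.std()

    # Information entropy over a 256-bin gray-level histogram.
    hist, _ = np.histogram(img, bins=256)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))

    # Average gradient from the first-order differences in x and y (formula (23)).
    dx = img[1:, :-1] - img[:-1, :-1]
    dy = img[:-1, 1:] - img[:-1, :-1]
    avg_grad = np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

    return mean, std, entropy, avg_grad

# Example: score a candidate fused image.
metrics = fusion_metrics(np.random.rand(128, 128))
```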
Each frame of the dynamic PET acquisition is fused in this iterative manner to obtain a wavelet-fused dynamic PET image.

It should be noted that, besides fusing the images obtained in step S1 and step S2, the present invention, after suitable modification, may also fuse the results of other reconstruction algorithms according to their reconstruction quality, for example the filtered back-projection reconstruction method, least-squares based PET image reconstruction, and so on.

To sum up, by considering the differences between the results of different PET image reconstruction algorithms, the present invention performs frame-by-frame fusion of the details carried by the high-frequency components and the contours carried by the low-frequency components, which reduces the image noise after reconstruction and improves the image details. In addition, on the basis of the reduced counts of the PET reconstruction, the images reconstructed from low-dose data preserve contours and details well and improve the agreement between the net influx of the tumor region and the true value.
The present invention may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present invention.

A computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above. Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through wires.

The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to the respective computing/processing devices, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.

The computer program instructions for carrying out the operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++ and Python, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, custom electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), may be personalized by utilizing the state information of the computer-readable program instructions, and such electronic circuits may execute the computer-readable program instructions, thereby implementing various aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium and may direct a computer, a programmable data processing apparatus and/or other devices to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.

The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or other devices so that a series of operational steps is performed on the computer, the other programmable data processing apparatus, or the other devices to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable data processing apparatus, or the other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.

The flowchart and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or acts, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation in a combination of software and hardware are all equivalent.

Various embodiments of the present invention have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application or the technical improvement over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

  1. A PET image reconstruction method based on wavelet fusion, comprising the following steps:

    step S11, performing wavelet decomposition on a first PET image to obtain a corresponding first high-frequency image and first low-frequency image, and performing wavelet decomposition on a second PET image to obtain a corresponding second high-frequency image and second low-frequency image, wherein the first PET image and the second PET image are PET reconstructed images obtained in different ways;

    step S12, within different frequency channels, processing the wavelet coefficients according to the set fusion criteria, and performing an inverse wavelet transform on the processed new wavelet coefficients to obtain a fused image;

    step S13, evaluating the fused image and iterating to obtain an optimal solution, and taking the obtained fused image that satisfies the set evaluation criteria as the reconstructed image.
  2. The method according to claim 1, wherein step S11 comprises:

    performing multi-level wavelet decomposition on the first PET image and the second PET image respectively, and denoting the obtained first high-frequency image as H1(j,k), the first low-frequency image as L1(j,k), the second high-frequency image as H2(j,k), and the second low-frequency image as L2(j,k), wherein j denotes the level index of the wavelet decomposition.
  3. The method according to claim 2, wherein step S12 comprises:

    for the first high-frequency image H1(j,k) and the second high-frequency image H2(j,k), taking the maximum value as the high-frequency image information H(j,k) of the fused image;

    for the first low-frequency image L1(j,k) and the second low-frequency image L2(j,k), obtaining the low-frequency image information L(j,k) of the fused image by averaging.
  4. The method according to claim 1, wherein the fused image is evaluated using one or more of the mean value, the standard deviation and the average gradient, wherein the mean value reflects the average brightness of the fused image, the standard deviation reflects how dispersed the gray levels of the fused image are relative to the mean, and the average gradient reflects the contrast of fine details and the texture variation characteristics of the fused image.

  5. The method according to claim 4, wherein the average gradient is expressed as:

    $$\bar{g}=\frac{1}{A\times B}\sum_{i=1}^{A}\sum_{j=1}^{B}\sqrt{\frac{\Delta_x f(i,j)^{2}+\Delta_y f(i,j)^{2}}{2}}$$

    wherein A and B denote the dimensions of the image, and Δ_x f(i,j) and Δ_y f(i,j) are the first-order differences of the pixel (i,j) in the x and y directions, respectively.
  6. The method according to claim 1, wherein the first PET image is a PET reconstructed image obtained with the Kernel EM algorithm.

  7. The method according to claim 1, wherein the second PET image is obtained according to the following steps:

    combining the projection model y = PKα + r with the Poisson likelihood function to estimate the coefficient image of the kernel:

    $$\hat{\alpha}=\arg\max_{\alpha}\ L\big(y\mid PK\alpha\big)-bP(\alpha)$$

    after the kernel-based coefficient image is obtained, obtaining the reconstructed image from the projection model, expressed as:

    $$\hat{x}=K\hat{\alpha}$$

    setting the regularization parameter b to 0, so that the maximum likelihood estimation is expressed as:

    $$\hat{\alpha}=\arg\max_{\alpha}\ L\big(y\mid PK\alpha\big)$$

    after performing the expectation calculation and logarithmic simplification, obtaining the K-based iterative formula:

    $$\alpha^{n+1}=\frac{\alpha^{n}}{K^{T}P^{T}\mathbf{1}}\;\Big(K^{T}P^{T}\frac{y}{PK\alpha^{n}+r}\Big)$$

    wherein α denotes the coefficient vector of K, P denotes the system matrix, r denotes scattered and random events, K encodes the relationship between each pixel f_i and its neighbors f_j, bP(α) is the penalty function, b is its regularization parameter, and X_current denotes the current image.
  8. The method according to claim 2, wherein three-level wavelet decomposition is performed on the first PET image and the second PET image respectively.

  9. A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the steps of the method according to any one of claims 1 to 8 are implemented.

  10. A computer device comprising a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2021/085939 2021-04-08 2021-04-08 Wavelet fusion-based pet image reconstruction method WO2022213321A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/085939 WO2022213321A1 (en) 2021-04-08 2021-04-08 Wavelet fusion-based pet image reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/085939 WO2022213321A1 (en) 2021-04-08 2021-04-08 Wavelet fusion-based pet image reconstruction method

Publications (1)

Publication Number Publication Date
WO2022213321A1 true WO2022213321A1 (en) 2022-10-13

Family

ID=83545945

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/085939 WO2022213321A1 (en) 2021-04-08 2021-04-08 Wavelet fusion-based pet image reconstruction method

Country Status (1)

Country Link
WO (1) WO2022213321A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844606A (en) * 2016-03-22 2016-08-10 博康智能网络科技股份有限公司 Wavelet transform-based image fusion method and system thereof
CN107527359A (en) * 2017-08-07 2017-12-29 沈阳东软医疗系统有限公司 A kind of PET image reconstruction method and PET imaging devices
CN109300096A (en) * 2018-08-07 2019-02-01 北京智脉识别科技有限公司 A kind of multi-focus image fusing method and device
US20210035338A1 (en) * 2019-07-31 2021-02-04 Z2Sky Technologies Inc. Unified Dual-Domain Network for Medical Image Formation, Recovery, and Analysis
CN112488952A (en) * 2020-12-07 2021-03-12 深圳先进技术研究院 Reconstruction method and reconstruction terminal for PET image and computer readable storage medium


Similar Documents

Publication Publication Date Title
Wu et al. Computationally efficient deep neural network for computed tomography image reconstruction
Sutour et al. Adaptive regularization of the NL-means: Application to image and video denoising
Salmon et al. Poisson noise reduction with non-local PCA
CN111709897B (en) Domain transformation-based positron emission tomography image reconstruction method
Marais et al. Proximal-gradient methods for poisson image reconstruction with bm3d-based regularization
Gong et al. The evolution of image reconstruction in PET: from filtered back-projection to artificial intelligence
Mohsin et al. Iterative shrinkage algorithm for patch-smoothness regularized medical image recovery
WO2022226886A1 (en) Image processing method based on transform domain denoising autoencoder as a priori
Rai et al. An unsupervised deep learning framework for medical image denoising
Ma et al. Generalized Gibbs priors based positron emission tomography reconstruction
Rai et al. Augmented noise learning framework for enhancing medical image denoising
Lyu et al. Iterative reconstruction for low dose CT using Plug-and-Play alternating direction method of multipliers (ADMM) framework
WO2023279316A1 (en) Pet reconstruction method based on denoising score matching network
Unal et al. An unsupervised reconstruction method for low-dose CT using deep generative regularization prior
Yin et al. Unpaired low-dose CT denoising via an improved cycle-consistent adversarial network with attention ensemble
Muhammad et al. Frequency component vectorisation for image dehazing
Teodoro et al. Image restoration with locally selected class-adapted models
Wang et al. Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion
Li et al. Neural KEM: A kernel method with deep coefficient prior for PET image reconstruction
Fermanian et al. PnP-ReG: Learned regularizing gradient for plug-and-play gradient descent
Habring et al. Neural-network-based regularization methods for inverse problems in imaging
Salvadeo et al. Nonlocal Markovian models for image denoising
Khan et al. Multi‐scale GAN with residual image learning for removing heterogeneous blur
WO2022213321A1 (en) Wavelet fusion-based pet image reconstruction method
Du et al. DRGAN: a deep residual generative adversarial network for PET image reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21935551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE