WO2024031643A1 - PS-OCT visibility improvement method and system based on polarization multi-parameter fusion - Google Patents

PS-OCT visibility improvement method and system based on polarization multi-parameter fusion

Info

Publication number
WO2024031643A1
WO2024031643A1 (PCT/CN2022/112138; CN2022112138W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fusion
oct
images
polarization
Prior art date
Application number
PCT/CN2022/112138
Other languages
English (en)
French (fr)
Inventor
周翔
武西宁
赵士勇
Original Assignee
天津恒宇医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 天津恒宇医疗科技有限公司
Publication of WO2024031643A1 publication Critical patent/WO2024031643A1/zh

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 11/00: 2D [Two Dimensional] image generation
            • G06T 11/003: Reconstruction from projections, e.g. tomography
              • G06T 11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
          • G06T 5/00: Image enhancement or restoration
            • G06T 5/10: using non-spatial domain filtering
            • G06T 5/20: using local operators
              • G06T 5/30: Erosion or dilatation, e.g. thinning
            • G06T 5/50: using two or more images, e.g. averaging or subtraction
            • G06T 5/70: Denoising; Smoothing
          • G06T 7/00: Image analysis
            • G06T 7/40: Analysis of texture
              • G06T 7/41: based on statistical description of texture
                • G06T 7/44: using image operators, e.g. filters, edge density metrics or local histograms
          • G06T 2207/00: Indexing scheme for image analysis or image enhancement
            • G06T 2207/10: Image acquisition modality
              • G06T 2207/10072: Tomographic images
                • G06T 2207/10101: Optical tomography; Optical coherence tomography [OCT]
            • G06T 2207/20: Special algorithmic details
              • G06T 2207/20024: Filtering details
                • G06T 2207/20032: Median filtering
              • G06T 2207/20048: Transform domain processing
                • G06T 2207/20056: Discrete and fast Fourier transform [DFT, FFT]
              • G06T 2207/20212: Image combination
                • G06T 2207/20221: Image fusion; Image merging

Definitions

  • the invention relates to the technical field of optical coherence tomography, and in particular to a PS-OCT visibility improvement method and system based on polarization multi-parameter fusion.
  • Optical coherence tomography (OCT) is currently recognized as the highest-resolution imaging modality in the field of intracavity imaging.
  • the traditional OCT system based on tissue intensity imaging has shortcomings in analyzing tissue characteristics, which makes it difficult for manual or AI image analysis to distinguish different biological tissues such as different membrane structures and plaques.
  • the image results obtained with the intensity OCT currently used clinically are blurred.
  • light carries other additional characteristics.
  • the additional characteristics carried by light can be used to analyze or quantitatively measure different tissues to improve visibility.
  • the principle of polarization-sensitive OCT (PS-OCT) is based on this.
  • various biological tissues or samples can modulate the polarization of the input light, changing its polarization state; this provides an additional way to characterize the visibility of biological tissues or samples and yields feature information beyond intensity.
  • PS-OCT technology is based on the propagation and reflection of polarized light incident on the sample tissue through the sample medium, causing the polarization state of the input polarized light to change.
  • by demodulating the polarization state of the back-reflected light, the polarization characteristic information of the sample can be obtained, achieving depth-resolved imaging of tissue birefringence.
  • this special property is particularly important in samples and biological tissues: proteins or biological macromolecules with isotropic tissue structure in blood vessels, such as collagen and actin, can change the polarization state of incident light, inducing form birefringence and producing a measurable optical signal.
  • by measuring the polarization state of light back-reflected or back-scattered from the sample or biological tissue, depth-resolved polarization information of the sample, such as phase retardation and optical axis orientation, can be obtained.
  • the purpose of the present invention is to provide a PS-OCT visibility improvement method and system based on polarization multi-parameter fusion, so as to obtain color images of the different tissue structures of a sample, help doctors make diagnoses more intuitively, and reduce the learning difficulty of existing intracavity imaging products.
  • the present invention provides a PS-OCT visibility improvement method based on polarization multi-parameter fusion, which includes the following steps:
  • the preprocessing includes the following steps:
  • the fusion formula is as follows:
  • Stru_total is the fused image; pH1 and pH2 are the upper and lower images of the H channel respectively; pV1 and pV2 are the upper and lower images of the V channel respectively.
  • the polarization state image adopts the following method:
  • the Stokes matrix is normalized to form a QUV three-dimensional array.
  • the contour of the sample to be tested is used to filter the QUV three-dimensional array.
  • the pixels outside the contour are set to 0, and the QUV three-dimensional array is rendered in RGB to obtain the polarization-state color image of the sample contour.
  • the Poincaré sphere is used to extract the spatial binormal vector Bn of the PS-OCT image plane, and the second and third dimensions of Bn are swapped to obtain an x*y*3 matrix, where x and y are the numbers of rows and columns of image pixels; after filtering with the contour of the sample to be tested, the optical axis image is obtained.
  • the local phase delay is calculated using the following formula:
  • ⁇ n is the local phase delay
  • N n is the normal vector in the nth intimate plane
  • N n-1 is the normal vector of the n-1th intimate plane.
  • the local optical axis image is calculated using the following formula:
  • An = Rn-1(-δn-1; An-1) Rn-2(-δn-2; An-2) … R1(-δ1; A1) Bn
  • An is the local optical axis
  • Bn represents the optical axis with the birefringence effects of tissue at different depths superimposed
  • Rn is the 3*3 rotation matrix from the (n-1)th optical axis to the nth optical axis
  • δn is the phase retardation of the nth osculating plane
  • An(x), An(y), and An(z) are the three dimensions of the three-dimensional local optical axis array An.
  • the average gradient fusion of multiple images includes the following method:
  • the polarization state image, the local optical axis image and the local phase delay image are normalized; the gradient features and adjustable fusion weight coefficients are used to fuse the three image calculation results.
  • the weighted fusion of multiple images includes the following methods:
  • di is the fusion coefficient, with d1 = 0.4, d2 = 0.2, d3 = 0.4
  • Fusimage_i represents the images after grayscale feature fusion, shape feature fusion and texture feature fusion
  • the grayscale feature fusion includes the following process:
  • extract grayscale feature values from the grayscale images, where the grayscale feature values include mean, variance, energy, skewness, and kurtosis;
  • fuse the original three images by weighted fusion into five maps based on the grayscale feature values, computing a fused image for each feature value;
  • the five grayscale-feature fused images are then fused again into a single grayscale fusion image.
  • the shape feature fusion includes: extracting shape features from the three grayscale images and normalizing the central moments; obtaining 7 invariant-moment shape features, using the 7 shape features as a shape feature vector to form a shape feature matrix, and performing shape fusion with the shape feature matrix to obtain the shape fusion image.
  • the texture feature fusion includes: extracting texture features from the three grayscale images, where the texture features include energy, entropy, contrast and correlation; constructing texture feature vectors from the texture features, and fusing the three grayscale images according to the four constructed texture feature vectors to form four texture feature images; the four texture feature images are then fused with equal weights to form the texture fusion image.
  • the present invention also provides a PS-OCT visibility improvement system based on polarization multi-parameter fusion, which is used to implement the above-mentioned PS-OCT visibility improvement method based on polarization multi-parameter fusion, including an image acquisition module, an image processing module and an image fusion module;
  • the image acquisition module is used to acquire the original PS-OCT image, preprocess the original PS-OCT image, and obtain the outline of the sample to be tested;
  • the image processing module is used to filter, using the contour of the sample to be tested, the QUV three-dimensional array constructed from the Stokes matrix to obtain a polarization state image, and, based on the calculated polarization state, to use the Poincaré sphere to calculate the local optical axis image and the local phase retardation image;
  • the image fusion module is used to perform average gradient fusion or weighted fusion on the multiple images obtained above, and obtain the final PS-OCT image after fusion.
  • the PS-OCT visibility improvement method and system based on polarization multi-parameter fusion provided by this application require only a single input polarization state when acquiring PS-OCT and do not require mutually coherent polarization inputs, placing low demands on system complexity. They have a wide range of applications and can be used for both endoscope-based PS-OCT and galvanometer-based planar-scanning PS-OCT.
  • the PS-OCT visibility improvement method and system based on polarization multi-parameter fusion use multiple methods to resolve and fuse the polarization information, achieving better visibility than existing techniques.
  • Figure 1 is a flow chart of the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention.
  • Figure 2 is a schematic diagram of the PS-OCT image interface in the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention.
  • Figure 3 is a schematic diagram of the Poincaré sphere of the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention.
  • Figure 4 is a schematic diagram of the local optical axis of the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention
  • Figure 5 is a schematic structural diagram of the PS-OCT visibility improvement system based on polarization multi-parameter fusion provided by the present invention.
  • an embodiment of the present invention provides a PS-OCT visibility improvement method based on polarization multi-parameter fusion, which includes the following steps:
  • the preprocessing process includes the following steps
  • Stru total is the fused image
  • pH 1 and pH 2 are the upper and lower images of the H channel respectively
  • pV 1 and pV 2 are the upper and lower images of the V channel respectively.
  • in the specific implementation, the system acquires two orthogonal polarization-state A-scan data channels H and V, subtracts the reference plane, multiplies by a cosine taper window for shaping, adds dispersion compensation, and then performs an FFT to obtain the Fourier-domain data IMG_H and IMG_V of the H and V channels. These are x*y*4 complex matrices, so when displayed as images their absolute values are taken and averaged along the third dimension to give the original images. Because the birefringence effect of polarization produces phase retardation, the H and V channels each have an upper and a lower image, four images in total, as shown in Figure 2:
  • fusing these four images yields the structural image of the sample to be tested.
  • the fusion formula is as follows:
  • pH1, pH2, pV1 and pV2 are the upper and lower images of the H and V channels respectively. A brightness threshold Thr is also set: pixels where Stru_total is below the threshold are treated as noise, giving the contour Msk_Thr of the sample to be tested. Data preprocessing is then complete.
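The preprocessing described above (fusing the four magnitude images and thresholding to obtain Msk_Thr) can be sketched as follows. The patent's actual fusion formula is only an image placeholder in the source, so a plain average of the four magnitude images is assumed here; the function name `preprocess` and the averaging rule are illustrative, not the patent's exact definition.

```python
import numpy as np

def preprocess(pH1, pH2, pV1, pV2, thr):
    """Fuse the four channel magnitude images and threshold to get the
    sample contour mask Msk_Thr. A plain average of the four magnitudes
    is assumed in place of the patent's (image-only) fusion formula."""
    stru_total = (np.abs(pH1) + np.abs(pH2) + np.abs(pV1) + np.abs(pV2)) / 4.0
    msk_thr = stru_total >= thr  # pixels below Thr are treated as noise
    return stru_total, msk_thr
```

The mask is later reused to zero out pixels outside the sample contour in every derived polarization image.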
  • the polarization state image adopts the following method:
  • the Stokes matrix is normalized to form a QUV three-dimensional array.
  • the contour of the sample to be tested is used to filter the QUV three-dimensional array.
  • the pixels outside the contour are set to 0, and the QUV three-dimensional array is rendered in RGB to obtain the polarization-state color image of the sample contour.
  • the following is the calculation process of polarization state:
  • θ is the phase difference between the H and V channels; IMG_V* is the complex conjugate of IMG_V; imag denotes the imaginary part of a complex number, and real denotes the real part.
  • Step 3: Filtering.
  • the filtering method may be, but is not limited to: median filtering, Gaussian filtering, mean filtering, imbox filtering, Wiener filtering, dilation and erosion, etc.
  • Step 4: Construct a three-dimensional array. Build Q, U, and V into an x*y*3 three-dimensional array Stokes, filter the Stokes array with the contour array Msk_Thr, set the pixel values outside the contour to 0, and finally render the resulting three-dimensional array in RGB to obtain the polarization-state color image of the sample contour.
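Steps 1-4 above can be sketched in NumPy. The exact expression for θ is an image in the source; it is assumed here to be the argument of IMG_H·conj(IMG_V), consistent with the imag/real description in the text. The helper name `stokes_quv` is hypothetical.

```python
import numpy as np

def stokes_quv(img_h, img_v, msk_thr):
    """Compute the normalized Q, U, V array from complex H/V channel images.

    theta is taken as the phase of IMG_H * conj(IMG_V) (an assumption; the
    patent's formula is an image, described only via imag/real parts)."""
    ah, av = np.abs(img_h), np.abs(img_v)
    cross = img_h * np.conj(img_v)
    theta = np.arctan2(np.imag(cross), np.real(cross))
    s0 = ah**2 + av**2                       # Stokes parameters per the text
    s1 = ah**2 - av**2
    s2 = 2 * ah * av * np.cos(-theta)
    s3 = 2 * ah * av * np.sin(-theta)
    s0 = np.where(s0 == 0, 1e-12, s0)        # avoid division by zero
    quv = np.stack([s1 / s0, s2 / s0, s3 / s0], axis=-1)  # x*y*3 array
    quv[~msk_thr] = 0                         # zero pixels outside the contour
    return quv
```

Rendering `quv` directly as an RGB image (after mapping [-1, 1] to [0, 1]) gives the polarization-state color image of the sample contour.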
  • the Poincaré sphere is used to extract the spatial binormal vector Bn of the PS-OCT image plane, and the second and third dimensions of Bn are swapped to obtain an x*y*3 matrix, where x and y are the numbers of rows and columns of image pixels; after filtering with the contour of the sample to be tested, the optical axis image is obtained.
  • P1, P2, and P3 are three polarization states represented by Stokes parameters (S1, S2, S3), lying on the sphere; plane a is the plane fitted through the three points P1, P2, and P3; A1 is the normal vector of plane a, i.e. the optical axis sought here.
  • P1 is the incident or input polarization state. Since P1 strikes the sample surface and is directly reflected by the surface into the output polarization state without any change of polarization information, P1 is also an output polarization state.
  • P2 and P3 are obtained by rotating P1 by certain angles around the optical axis A1 of the sample.
  • P1, P2, and P3 are the output polarization states received by the balanced detector.
  • Nn = Bn × Tn
  • Tn is the unit tangent vector formed by Pn+1 and Pn; Tn-1 and Tn span an osculating plane
  • Bn is the spatial binormal vector of that plane, used to calculate the optical axis
  • Nn is the normal vector in that plane, used to calculate the phase retardation.
  • the second and third dimensions of Bn are swapped to obtain an x*y*3 matrix (x and y are the numbers of rows and columns of image pixels), and after Msk_Thr filtering the optical axis image is obtained.
  • ⁇ n is the local phase delay
  • N n is the normal vector in the nth close plane
  • N n-1 is the normal vector of the n-1th close plane
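The osculating-plane construction (unit tangents Tn, binormals Bn, in-plane normals Nn) and the local phase retardation can be sketched for a single depth trace on the Poincaré sphere. The retardation formula itself is an image in the source; the angle between consecutive in-plane normals Nn and Nn-1 is used here as an assumed stand-in consistent with the surrounding text.

```python
import numpy as np

def local_phase_delay(points):
    """Per-depth local phase retardation from a Poincare-sphere trace.

    points: (n, 3) Stokes states P_1..P_n along depth. T_n are unit
    tangents, B_n = T_{n-1} x T_n the binormal (used for the optical axis),
    N_n = B_n x T_n the in-plane normal. delta_n is taken as the angle
    between consecutive N vectors (assumption: exact formula is an image)."""
    diffs = np.diff(points, axis=0)
    T = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)   # unit tangents
    B = np.cross(T[:-1], T[1:])                                # binormals
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    N = np.cross(B, T[1:])                                     # in-plane normals
    cosd = np.sum(N[1:] * N[:-1], axis=1).clip(-1.0, 1.0)
    return np.arccos(cosd)                                     # delta_n per depth
```

For a trace that rotates at a constant rate about a fixed axis, every returned retardation equals the per-step rotation angle.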
  • the optical axis represented by Bn superimposes the birefringence effects of tissue at different depths. Deep in the tissue, the accumulated birefringence distorts the result, so the birefringence accumulated with depth must be removed to recover the true optical axis information at depth.
  • this true optical axis information, i.e. the local optical axis, is A2 in the figure below.
  • An = Rn-1(-δn-1; An-1) Rn-2(-δn-2; An-2) … R1(-δ1; A1) Bn
  • A2 = R1(-δ1; A1) B2
  • E is the third-order standard identity matrix
  • R n is the 3*3 rotation matrix from the n-1th optical axis to the nth optical axis
  • ⁇ n is the phase delay of the nth close plane
  • a n (x) , A n (y), and A n (z) are respectively the three dimensions of the local optical axis An of the three-dimensional array.
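The unwinding formula An = Rn-1(-δn-1; An-1) … R1(-δ1; A1) Bn can be sketched with an explicit rotation matrix. The patent gives Rn only as an image; the standard Rodrigues axis-angle rotation, which reduces to the identity matrix E when δ = 0, is assumed here.

```python
import numpy as np

def rotation_matrix(axis, delta):
    """Rodrigues 3x3 rotation by angle delta about a unit axis (assumed
    form of the patent's R_n, whose expression is an image in the source)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(delta) * K + (1 - np.cos(delta)) * (K @ K)

def local_axis(B, deltas):
    """Unwind accumulated birefringence:
    A_n = R_{n-1}(-d_{n-1}; A_{n-1}) ... R_1(-d_1; A_1) B_n.
    B: measured axes B_1..B_N; deltas: retardations d_1..d_{N-1}."""
    A = [np.asarray(B[0], float)]          # A_1 = B_1 at the surface
    for i in range(1, len(B)):
        v = np.asarray(B[i], float)
        for k in range(i):                 # apply R_1 first, then R_2, ...
            v = rotation_matrix(A[k], -deltas[k]) @ v
        A.append(v)
    return np.array(A)
```

Each depth's measured axis Bn is rotated back through all shallower layers, so distortion accumulated along the beam path is removed layer by layer.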
  • the average gradient fusion of multiple images includes the following methods:
  • the gradient features and adjustable fusion weight coefficients are used to fuse the three image calculation results.
  • P represents the six images of polarization state, local phase retardation, and local optical axis, all of which are normalized by the above formula.
  • M and N represent the width and height of the image respectively
  • ⁇ x P (x, y) and ⁇ y P (x, y) respectively represent the difference calculation of the image P (x, y) in the x, y direction, Calculated as follows:
  • the gradient features are used to fuse the three solution results.
  • the fusion formula is as follows:
  • F_P = a1*G_Stokes*Stokes′ + a2*G_LocDP*LocDP′ + a3*G_LocOptAxis*LocOptAxis′
  • a1 + a2 + a3 = 1; these are the fusion weight coefficients.
  • in this way a clearer fused image can be obtained (likewise below); images with greater information content receive greater weights.
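A minimal sketch of the average-gradient fusion F_P = a1·G_Stokes·Stokes′ + a2·G_LocDP·LocDP′ + a3·G_LocOptAxis·LocOptAxis′, assuming the common definition of average gradient as the mean absolute x/y finite difference (the patent's exact formula is an image in the source):

```python
import numpy as np

def avg_gradient(img):
    """Average gradient: mean absolute finite difference in x and y
    (assumed form; the patent's formula is an image in the source)."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return (np.mean(np.abs(gx)) + np.mean(np.abs(gy))) / 2.0

def gradient_fusion(stokes, loc_dp, loc_axis, a=(1/3, 1/3, 1/3)):
    """F_P = a1*G_Stokes*Stokes' + a2*G_LocDP*LocDP' + a3*G_LocOptAxis*LocOptAxis',
    with the three inputs min-max normalized first and sum(a) == 1."""
    imgs = [(im - im.min()) / (np.ptp(im) + 1e-12)
            for im in (stokes, loc_dp, loc_axis)]
    G = [avg_gradient(im) for im in imgs]
    return sum(ai * gi * im for ai, gi, im in zip(a, G, imgs))
```

Because each image is scaled by its own average gradient, the image carrying more edge information contributes more to the fused result, matching the weighting rationale above.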
  • weighted fusion of multiple images includes the following methods:
  • di is the fusion coefficient, with d1 = 0.4, d2 = 0.2, d3 = 0.4
  • Fusimage_i represents the images after grayscale feature fusion, shape feature fusion and texture feature fusion
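Under the stated coefficients, the final weighted fusion is a weighted sum of the three feature-fusion results (the formula itself is an image in the source, so the weighted-sum form is an assumption):

```python
import numpy as np

def weighted_fusion(fus_gray, fus_shape, fus_texture, d=(0.4, 0.2, 0.4)):
    """Final image as sum(d_i * Fusimage_i) over the grayscale-, shape-
    and texture-feature fusion results (weighted-sum form assumed; the
    exact formula is an image in the source)."""
    return d[0] * fus_gray + d[1] * fus_shape + d[2] * fus_texture
```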
  • the grayscale feature fusion includes the following process:
  • extract grayscale feature values from the grayscale images, where the grayscale feature values include mean, variance, energy, skewness, and kurtosis;
  • the five grayscale-feature fused images are then fused again into a single grayscale fusion image.
  • the shape feature fusion includes: extracting shape features from the three grayscale images and normalizing the central moments; obtaining 7 invariant-moment shape features, using the 7 shape features as a shape feature vector to form a shape feature matrix, and performing shape fusion with the shape feature matrix to obtain the shape fusion image.
  • the texture feature fusion includes: extracting texture features from the three grayscale images, where the texture features include energy, entropy, contrast and correlation; constructing texture feature vectors from the texture features, and fusing the three grayscale images according to the four constructed texture feature vectors; four texture feature images are formed, and the four texture feature images are then fused with equal weights to form the texture fusion image.
  • Grayscale features include five statistics: mean m, variance v², energy e, skewness s, and kurtosis u. The meanings and calculation formulas of these five statistics are as follows:
  • T represents the number of gray levels
  • di represents the number of pixels with gray value i
  • D represents the total number of pixels in the image.
  • Mean m represents the average gray value of the image.
  • the calculation formula is as follows:
  • Variance v² describes the spread of the image gray values; the calculation formula is as follows:
  • Energy e represents the distribution of image grayscale. When the distribution is more uniform, the energy is greater.
  • the calculation formula is as follows:
  • Skewness s measures the asymmetry of the image histogram distribution. The larger the skewness, the more asymmetric the histogram distribution.
  • the calculation formula is as follows:
  • Kurtosis u measures how closely the grayscale distribution of the image clusters around its mean; it is used to analyze whether the grayscale distribution converges around the mean. The smaller the kurtosis value, the more concentrated the distribution is at the mean. The calculation formula is as follows:
  • the results of the three solving methods are fused into five feature value maps based on five grayscale features using weighted fusion.
  • the characteristic vectors of the three solution results are represented as h 1 , h 2 , h 3 , respectively, and the weights of each characteristic quantity are as follows:
  • the five fused feature quantities are fused again to obtain an image with grayscale features.
  • the formula is as follows:
  • b1 + b2 + b3 + b4 + b5 = 1; in the experiment they are 0.2, 0.2, 0.3, 0.3, and 0 respectively.
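The five histogram statistics can be computed as follows, using the pi = di/D probabilities implied by the text. Normalizing skewness and kurtosis by the standard deviation is an assumption, since the exact formulas are images in the source.

```python
import numpy as np

def gray_features(img, levels=256):
    """Histogram-based mean, variance, energy, skewness, kurtosis of a
    grayscale image, built on p_i = d_i / D (standardized third and
    fourth moments are assumed for skewness and kurtosis)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    p = hist / hist.sum()                       # p_i = d_i / D
    i = np.arange(levels)
    m = np.sum(i * p)                           # mean gray value
    v2 = np.sum((i - m) ** 2 * p)               # variance
    e = np.sum(p ** 2)                          # energy (uniformity)
    sd = np.sqrt(v2) + 1e-12
    s = np.sum(((i - m) / sd) ** 3 * p)         # skewness (asymmetry)
    u = np.sum(((i - m) / sd) ** 4 * p)         # kurtosis (peakedness)
    return m, v2, e, s, u
```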
  • X and Y represent the width and height of the image
  • x and y represent the x-th and y-th pixels in the width and height directions, that is, the coordinates in the image.
  • the central moment is normalized according to the following formula:
  • ⁇ 3 (n 30 -3n 12 ) 2 + (3n 21 -n 03 ) 2
  • ⁇ 5 ( ⁇ 30 -3 ⁇ 12 )( ⁇ 32 + ⁇ 12 ) ⁇ [( ⁇ 30 + ⁇ 12 ) 2 -3( ⁇ 21 + ⁇ 23 ) 2 ]
  • ⁇ 6 ( ⁇ 20 - ⁇ 02 ) ⁇ [( ⁇ 30 + ⁇ 12 ) 2 -( ⁇ 21 + ⁇ 03 ) 2 ]+4 ⁇ 11 ( ⁇ 30 + ⁇ 12 )( ⁇ 21 + ⁇ 03 )
  • ⁇ 7 (3n 21 -n 03 ) (n 30 +n 12 ) ⁇ [(n 30 +n 12 ) 2 -3 (n 21 +n 03 ) 2 ]
  • η with numerical subscripts p and q denotes the normalized central moment of order (p, q).
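A sketch of the normalized central moments η_pq and one of the seven invariant-moment shape features (φ3, as listed above); the helper names are illustrative.

```python
import numpy as np

def normalized_central_moments(img, max_order=3):
    """eta_pq = mu_pq / mu_00^(1 + (p+q)/2), the normalized central moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    eta = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if 2 <= p + q <= max_order:
                mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
                eta[(p, q)] = mu / m00 ** (1 + (p + q) / 2.0)
    return eta

def hu_phi3(eta):
    """phi3 = (eta30 - 3*eta12)^2 + (3*eta21 - eta03)^2."""
    return ((eta[(3, 0)] - 3 * eta[(1, 2)]) ** 2
            + (3 * eta[(2, 1)] - eta[(0, 3)]) ** 2)
```

For a shape symmetric about its centroid, all odd-order central moments vanish and φ3 is zero; rotating or scaling the shape leaves the invariants unchanged, which is what makes them usable as shape features for fusion.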
  • the extraction of texture features is based on the gray level co-occurrence matrix.
  • the gray level histogram can only directly describe the gray level distribution of one pixel, but the gray level co-occurrence matrix can describe the combined gray level distribution of two pixels.
  • for a pixel pair starting at (x, y), its grayscale value pair is (gx, gy); moving (x, y) by (i, j) gives the point (x+i, y+j) and a corresponding pair (gx′, gy′).
  • counting how often each grayscale-value pair appears in the image and arranging all the counts into a square matrix, the resulting matrix of pair probabilities is called the gray level co-occurrence matrix.
  • the normalized formula of the gray level co-occurrence matrix is as follows: where Z represents the width and height of the square matrix image, that is, the image size is Z ⁇ Z:
  • the energy value can not only describe the uniformity of grayscale distribution, but also express the texture thickness to a certain extent.
  • when all parameter values in the gray-level co-occurrence matrix P(gx, gy) are equal, E is relatively small; when the parameter values differ markedly, E becomes large.
  • when the parameters in the gray-level co-occurrence matrix are concentrated near the center, E is larger, indicating that the image texture is uniform and changes regularly.
  • the calculation method is to square each element of the gray level co-occurrence matrix P(gx, gy) and then sum, as follows:
  • Entropy is used to represent the complexity of the image texture. When the values of the co-occurrence matrix are relatively uniform, the entropy is larger. The calculation formula is as follows:
  • Contrast I_Contrast is used to express the clarity of an image. The smaller the contrast, the shallower the grooves expressing the unevenness of the object's surface, and the lower the clarity of the image.
  • the calculation formula is as follows:
  • Correlation is used to represent the degree to which the parameters of the gray-level co-occurrence matrix P(gx,gy) are the same in the horizontal and vertical directions.
  • if the correlation value in a certain direction is greater than in other directions, the texture features in that direction are pronounced; correlation can therefore be used to find directions in which the texture is relatively strong.
  • the calculation formula is as follows:
  • ⁇ x , ⁇ y , ⁇ y , and ⁇ x are intermediate variables, and their calculation formulas are as follows:
  • the feature vectors of the three solution result images are Y 1 , Y 2 , Y 3 respectively, and the weight
  • the value calculation formula is as follows:
  • Pi(x, y) represents the result images of the three solution methods.
  • FIj respectively denote the four feature images after fusion.
  • d i is the fusion coefficient.
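The four GLCM texture features can be sketched as follows; the quantization to a small number of gray levels and the (dx, dy) displacement convention are assumptions, since the patent shows the formulas only as images.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=8):
    """Energy, entropy, contrast, correlation from a normalized gray-level
    co-occurrence matrix for displacement (dx, dy)."""
    g = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    h, w = g.shape
    for yy in range(h - dy):                 # count co-occurring gray pairs
        for xx in range(w - dx):
            P[g[yy, xx], g[yy + dy, xx + dx]] += 1
    P /= P.sum()                             # normalize to probabilities
    i, j = np.mgrid[:levels, :levels]
    energy = np.sum(P ** 2)
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))
    contrast = np.sum((i - j) ** 2 * P)
    mx, my = np.sum(i * P), np.sum(j * P)
    sx = np.sqrt(np.sum((i - mx) ** 2 * P))
    sy = np.sqrt(np.sum((j - my) ** 2 * P))
    corr = np.sum((i - mx) * (j - my) * P) / (sx * sy + 1e-12)
    return energy, entropy, contrast, corr
```

Evaluating the features for several displacements, e.g. (1, 0), (0, 1), (1, 1), gives the directional correlation comparison described above.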
  • the present invention also provides a PS-OCT visibility improvement system based on polarization multi-parameter fusion, which is used to implement the above-mentioned PS-OCT visibility improvement method based on polarization multi-parameter fusion, including an image acquisition module, an image processing module and an image fusion module;
  • the image acquisition module is used to acquire the original PS-OCT image, preprocess the original PS-OCT image, and obtain the outline of the sample to be tested;
  • the image processing module is used to filter, using the contour of the sample to be tested, the QUV three-dimensional array constructed from the Stokes matrix, obtaining the polarization state image, the degree-of-polarization DOPU image and the phase retardation DPPR image; from the calculated polarization state, the Poincaré sphere is used to calculate the optical axis image, the local optical axis image and the local phase retardation image;
  • the image fusion module is used to perform average gradient fusion or weighted fusion on the multiple images obtained above, and obtain the final PS-OCT image after fusion.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Endoscopes (AREA)

Abstract

The invention discloses a PS-OCT visibility improvement method and system based on polarization multi-parameter fusion. The method comprises: acquiring an original PS-OCT image and preprocessing it to obtain the contour of the sample to be tested; using the sample contour to filter a QUV three-dimensional array constructed from the Stokes matrix, obtaining a polarization state image; using the Poincaré sphere to compute a local optical axis image and a local phase retardation image from the calculated polarization states; and performing average gradient fusion or weighted fusion on the resulting images to obtain the final PS-OCT image. The method and system require only a single input polarization state, do not require mutually coherent polarization inputs, and place low demands on system complexity. They are widely applicable and can be used for both endoscope-based PS-OCT and galvanometer-based planar-scanning PS-OCT.

Description

PS-OCT Visibility Improvement Method and System Based on Polarization Multi-Parameter Fusion

Technical Field

The present invention relates to the technical field of optical coherence tomography, and in particular to a PS-OCT visibility improvement method and system based on polarization multi-parameter fusion.

Background Art

Optical coherence tomography (OCT) is currently recognized as the highest-resolution imaging modality in the field of intracavity imaging. However, traditional OCT systems based on tissue intensity imaging fall short in analyzing tissue characteristics, so that distinguishing different biological tissues such as different membrane structures and plaques is difficult during manual or AI image analysis, and the image results obtained with the intensity OCT currently used clinically are blurred. Beyond intensity, however, light carries other additional characteristics, which can be used to analyze or quantitatively measure different tissues and improve visibility. The principle of polarization-sensitive OCT (PS-OCT) is based on this: many kinds of biological tissue or samples can modulate the polarization of the input light and change its polarization state, providing an additional way to characterize the visibility of biological tissues or samples and yielding feature information beyond intensity.

In PS-OCT, polarized light incident on the sample tissue propagates and reflects within the sample medium, which changes the polarization state of the input polarized light; by demodulating the polarization state of the back-reflected light, the polarization characteristic information of the sample can be obtained, achieving depth-resolved imaging of tissue birefringence. This special property is particularly important in samples and biological tissues: proteins or biological macromolecules with isotropic tissue structure inside blood vessels, such as collagen and actin, can change the polarization state of incident light, inducing form birefringence and producing a measurable optical signal. By measuring the polarization state of light back-reflected or back-scattered from the sample or biological tissue, depth-resolved polarization information of the sample, such as phase retardation and optical axis orientation, can be obtained.

Most existing techniques for resolving PS-OCT polarization information are phase retardation and degree-of-polarization algorithms based on the Jones or Mueller matrix. Because the system exhibits noticeable diattenuation and depolarization effects, the polarization visibility resolved by current algorithms is low. A method based on multi-parameter fusion analysis is therefore proposed to resolve the polarization feature information and improve the visibility of polarization images.

Summary of the Invention

Therefore, the purpose of the present invention is to provide a PS-OCT visibility improvement method and system based on polarization multi-parameter fusion, so as to obtain color images of the different tissue structures of a sample, help doctors make diagnoses more intuitively, and reduce the learning difficulty of existing intracavity imaging products.
To achieve the above purpose, the present invention provides a PS-OCT visibility improvement method based on polarization multi-parameter fusion, comprising the following steps:

S1. Acquire an original PS-OCT image, preprocess the original PS-OCT image, and obtain the contour of the sample to be tested;

S2. Using the contour of the sample to be tested, filter the QUV three-dimensional array constructed from the Stokes matrix to obtain a polarization state image;

S3. From the calculated polarization states, use the Poincaré sphere to compute a local optical axis image and a local phase retardation image;

S4. Perform average gradient fusion or weighted fusion on the images obtained in S2 and S3 to obtain the final PS-OCT image after fusion.

Further preferably, in S1, the preprocessing comprises the following steps:

S101. Multiply the H-channel and V-channel data of the two polarization states in the original PS-OCT image by a cosine taper window for shaping;

S102. Apply a Fourier transform to the shaped data to obtain the Fourier-domain matrices of the H and V channels;

S103. Average the Fourier-domain matrices of the H and V channels to form the original images of the H and V channels, and fuse the original H- and V-channel images;

S104. Filter noise from the fused image according to a set threshold to obtain the contour of the sample to be tested.

Further preferably, in S103, the original H- and V-channel images are fused with the following formula:

Figure PCTCN2022112138-appb-000001

where Stru_total is the fused image, pH1 and pH2 are the upper and lower images of the H channel, and pV1 and pV2 are the upper and lower images of the V channel.

Further preferably, in S2, the polarization state image is obtained as follows: normalize the Stokes matrix to form a QUV three-dimensional array; using the contour of the sample to be tested, filter the QUV array and set the pixels outside the contour to 0; then render the QUV array in RGB to obtain the polarization-state color image of the sample contour.
Further preferably, in S3, computing the local optical axis image further comprises: using the Poincaré sphere, extract the spatial binormal vector Bn of the PS-OCT image plane, and swap the second and third dimensions of Bn to obtain an x*y*3 matrix, where x and y are the numbers of rows and columns of image pixels; after filtering with the contour of the sample to be tested, the optical axis image is obtained.

Further preferably, the local phase retardation is computed with the following formula:

Figure PCTCN2022112138-appb-000002

where δn is the local phase retardation, Nn is the normal vector in the nth osculating plane, and Nn-1 is the normal vector of the (n-1)th osculating plane.

Further preferably, the local optical axis image is computed with the following formula:

An = Rn-1(-δn-1; An-1) Rn-2(-δn-2; An-2) … R1(-δ1; A1) Bn

where An is the local optical axis, Bn represents the optical axis with the birefringence effects of tissue at different depths superimposed, Rn is the 3*3 rotation matrix from the (n-1)th optical axis to the nth optical axis, δn is the phase retardation of the nth osculating plane, and An(x), An(y), An(z) are the three dimensions of the three-dimensional local optical axis array An.
Further preferably, in S4, average gradient fusion of the images comprises the following method: normalize the three images, namely the polarization state image, the local optical axis image and the local phase retardation image; then fuse the three solution results using gradient features and adjustable fusion weight coefficients.

Further preferably, in S4, weighted fusion of the images comprises the following method:

S401. Convert the three images, namely the polarization state image, the local optical axis image and the local phase retardation image, into grayscale images;

S402. Perform grayscale feature fusion, shape feature fusion and texture feature fusion on the three grayscale images, then fuse the results again to obtain the final fused image; the final PS-OCT image is obtained according to the following fusion formula:

Figure PCTCN2022112138-appb-000003

where di is the fusion coefficient, d1 = 0.4, d2 = 0.2, d3 = 0.4, and Fusimage_i represents the images after grayscale feature fusion, shape feature fusion and texture feature fusion;

Further preferably, in S402, the grayscale feature fusion comprises the following process: extract grayscale feature values from the grayscale images, where the grayscale feature values include mean, variance, energy, skewness and kurtosis; fuse the original three images by weighted fusion into five maps based on the grayscale feature values, computing the fused image for each of the five feature values; then fuse the five grayscale-feature fused images again into a single grayscale fusion image.

Further preferably, in S402, the shape feature fusion comprises: extracting shape features from the three grayscale images and normalizing the central moments; obtaining 7 invariant-moment shape features, using the 7 shape features as a shape feature vector to form a shape feature matrix, and performing shape fusion with the shape feature matrix to obtain the shape fusion image.

Further preferably, in S402, the texture feature fusion comprises: extracting texture features from the three grayscale images, where the texture features include energy, entropy, contrast and correlation; constructing texture feature vectors from the texture features, and fusing the three grayscale images according to the four constructed texture feature vectors to form four texture feature images; the four texture feature images are then fused with equal weights to form the texture fusion image.
The present invention also provides a PS-OCT visibility improvement system based on polarization multi-parameter fusion, used to implement the above PS-OCT visibility improvement method based on polarization multi-parameter fusion, comprising an image acquisition module, an image processing module and an image fusion module;

the image acquisition module is used to acquire the original PS-OCT image, preprocess the original PS-OCT image, and obtain the contour of the sample to be tested;

the image processing module is used to filter, using the contour of the sample to be tested, the QUV three-dimensional array constructed from the Stokes matrix to obtain the polarization state image, and to compute the local optical axis image and the local phase retardation image from the calculated polarization states using the Poincaré sphere;

the image fusion module is used to perform average gradient fusion or weighted fusion on the images obtained above, obtaining the final PS-OCT image after fusion.

Compared with the prior art, the PS-OCT visibility improvement method and system based on polarization multi-parameter fusion disclosed in this application have at least the following advantages:

1. When acquiring PS-OCT, only a single input polarization state is needed and mutually coherent polarization inputs are not required, placing low demands on system complexity. The method and system have a wide range of applications and can be used for both endoscope-based PS-OCT and galvanometer-based planar-scanning PS-OCT.

2. The method and system resolve the polarization information in multiple ways and fuse the results, achieving better visibility than existing techniques.
Brief Description of the Drawings

Figure 1 is a flow chart of the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention;

Figure 2 is a schematic diagram of the PS-OCT image interface in the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention;

Figure 3 is a schematic diagram of the Poincaré sphere in the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention;

Figure 4 is a schematic diagram of the local optical axis in the PS-OCT visibility improvement method based on polarization multi-parameter fusion provided by the present invention;

Figure 5 is a schematic structural diagram of the PS-OCT visibility improvement system based on polarization multi-parameter fusion provided by the present invention.
Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the drawings and specific embodiments.

As shown in Figure 1, an embodiment of one aspect of the present invention provides a PS-OCT visibility improvement method based on polarization multi-parameter fusion, comprising the following steps:

S1. Acquire an original PS-OCT image, preprocess the original PS-OCT image, and obtain the contour of the sample to be tested;

S2. Using the contour of the sample to be tested, filter the QUV three-dimensional array constructed from the Stokes matrix to obtain a polarization state image;

S3. From the calculated polarization states, use the Poincaré sphere to compute a local optical axis image and a local phase retardation image;

S4. Perform average gradient fusion or weighted fusion on the images obtained in S2 and S3 to obtain the final PS-OCT image after fusion.
在本申请的一个实施例中,预处理过程包括如下步骤
S101、将原始的PS-OCT图像中的两个偏振态H通道数据和V通道数据,乘以余弦锥型窗进行整形;
S102、对整形后的数据进行傅里叶变换,得到H通道和V通道的傅里叶域矩阵;
S103、对H通道和V通道的傅里叶域矩阵,取平均值后,作为H通道和V通道的原始图像,将H通道和V通道的原始图像进行融合;
S104、对融合后的图像按照设定的阈值滤除噪声,得到待测样品的轮廓。
在S103中,将H通道和V通道的原始图像进行融合时,融合公式如下所示:
Figure PCTCN2022112138-appb-000004
其中,Stru total为融合后的图像,pH 1和pH 2分别是H通道的上、下两个图像,pV 1和pV 2分别是V通道的上、下两个图像。
在具体实施过程中,系统采集得到正交的两个偏振态A-scan数据H和V,减去参考面,乘以一个余弦锥型窗进行整形,加上色散补偿,然后做FFT,得到H和V通道的傅里叶域IMG_H和IMG_V,它们是x*y*4的复矩阵,因此作为图像显示时取其绝对值,并沿第三个维度取平均,可得到H和V通道的原始图像。因为偏振的双折射效应产生相位延迟,所以H和V通道均有上下两幅图像,共4幅图像,如图2所示:
这4幅图像融合可得出待测样品的结构图,融合公式如下:
Figure PCTCN2022112138-appb-000005
其中,pH1、pH2、pV1和pV2分别是H和V通道的上、下图像。还包括设定亮度的阈值Thr,Stru total低于阈值的像素点视为噪声,得到待测样品的轮廓 Msk_Thr,至此数据预处理完毕。
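上述加窗、FFT、图像融合与阈值滤噪的预处理流程,可用如下Python代码示意(基于NumPy;其中余弦锥形窗的具体形状,以及用两通道幅值均值代替四幅子图像融合,均为示例性假设,并非本发明限定的实现):

```python
import numpy as np

def preprocess(h_raw, v_raw, thr):
    """示意性预处理: 加余弦锥形窗 -> FFT -> 取幅值融合 -> 阈值得到轮廓掩膜。
    h_raw, v_raw: (采样点数, A-line数)的实数A-scan数据(示例假设)。"""
    n = h_raw.shape[0]
    window = np.cos(np.linspace(-np.pi / 2, np.pi / 2, n)) ** 2  # 余弦锥形窗(示例参数)
    img_h = np.fft.fft(h_raw * window[:, None], axis=0)
    img_v = np.fft.fft(v_raw * window[:, None], axis=0)
    # 简化假设: 以两通道幅值的均值作为结构图(专利中为四幅子图像按公式融合)
    stru = (np.abs(img_h) + np.abs(img_v)) / 2
    msk_thr = stru >= thr  # 低于阈值Thr的像素视为噪声
    return stru, msk_thr

rng = np.random.default_rng(0)
stru, msk = preprocess(rng.standard_normal((64, 8)), rng.standard_normal((64, 8)), 5.0)
```

得到的掩膜msk即对应文中的轮廓Msk_Thr,可供后续各步滤波使用。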
在S2中,所述偏振态图像采用如下方法:
对Stokes矩阵进行归一化后形成QUV三维数组,利用待测样品轮廓,对QUV三维数组进行滤波,将轮廓外的像素点置0,并将QUV三维数组以RGB的方式描绘出来,得到待测样品轮廓的偏振态彩色图像。
用Stokes参数表示偏振态:S 0=e x 2+e y 2;S 1=e x 2-e y 2;S 2=2e xe ycosθ;S 3=2e xe ysinθ,其中e x、e y分别为电场在x、y方向分量的振幅,θ为二者的相位差。显然S 0 2=S 1 2+S 2 2+S 3 2,所以式中只有3个变量是独立的;理想情况下(即在无损传输时)有S 0=常数,所以S 1、S 2、S 3所表示的是一个球面。S 0=1的球称为庞加莱球,球面上各点与光的全偏振态一一对应。以下是偏振态的计算过程:
第一步:
S 0=|IMG_H| 2+|IMG_V| 2
S 1=|IMG_H| 2-|IMG_V| 2
S 2=2*|IMG_H|*|IMG_V|*cos(-θ)
S 3=2*|IMG_H|*|IMG_V|*sin(-θ)
θ=arctan[imag(IMG_H·IMG_V *)/real(IMG_H·IMG_V *)]
θ是H和V通道之间的相位差,IMG_V *是IMG_V的共轭复数,imag表示复数的虚部,real表示复数的实部。
第二步:归一化;
Q=S 1/S 0;U=S 2/S 0;V=S 3/S 0;Q、U、V为归一化后的坐标。
第三步:滤波。滤波方式可以为以下但不限于:中值滤波、高斯滤波、均值滤波、imbox滤波、wiener滤波、膨胀和腐蚀等。
第四步:构建三维数组。将Q、U、V构建成x*y*3的三维数组Stokes,再通过轮廓Msk_Thr数组将Stokes滤波,将轮廓之外的像素点值置0,最后将得到的三维数组以RGB的方式描绘出来,得到待测样品轮廓的偏振态彩色图像。
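偏振态图像的解算步骤(Stokes参量计算、归一化、轮廓掩膜滤波)可用如下NumPy代码示意(θ的具体取法为示例性假设,第三步的空间滤波从略):

```python
import numpy as np

def stokes_quv(img_h, img_v, msk):
    """由H/V通道复值图像计算归一化Stokes参量,并按轮廓掩膜将轮廓外像素置0。"""
    theta = np.angle(img_h * np.conj(img_v))  # H、V通道相位差(示例取法)
    s0 = np.abs(img_h) ** 2 + np.abs(img_v) ** 2
    s1 = np.abs(img_h) ** 2 - np.abs(img_v) ** 2
    s2 = 2 * np.abs(img_h) * np.abs(img_v) * np.cos(-theta)
    s3 = 2 * np.abs(img_h) * np.abs(img_v) * np.sin(-theta)
    s0 = np.where(s0 == 0, 1.0, s0)  # 避免除零
    quv = np.stack([s1 / s0, s2 / s0, s3 / s0], axis=-1)  # x*y*3三维数组
    return quv * msk[..., None]  # 轮廓外像素点值置0

h = np.array([[1 + 1j, 2]])
v = np.array([[1 - 1j, 0]])
quv = stokes_quv(h, v, np.array([[True, True]]))
```

归一化后每个像素的(Q,U,V)落在庞加莱球面上,可直接按RGB三通道显示。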
如图2-3所示,在S3中,在计算局部光轴图像时,还包括:
利用庞加莱球提取PS-OCT图像平面的空间副法向量Bn,将Bn的第二维和第三维进行对调,得到x*y*3的矩阵,其中,x和y代表图像像素的行数和列数;经过待测样品的轮廓滤波,得到光轴图像。
利用上述偏振态的结果Stokes,其可以在庞加莱球上表示。如图3所示,P1、P2、P3是三个由斯托克斯参数(S1、S2、S3)表示的偏振态,位于球面上;平面a是由P1、P2、P3三点拟合的平面;A1是a的平面法向量,即这里要求的光轴。P1为入射偏振态或输入偏振态,由于P1入射到样品表层后被表层直接反射成输出偏振态且偏振信息未改变,故P1也是输出偏振态。P2、P3是由P1围绕样品光轴A1旋转某角度得到的。P1、P2、P3均是平衡探测器接收到的输出偏振态。
T n=(P n+1-P n)/|P n+1-P n|
B n=(T n-1×T n)/|T n-1×T n|
N n=B n×T n
其中,T n为P n+1与P n所构成的单位切向量,T n-1与T n组成一个密切平面,B n为该平面的空间副法向量,用于计算光轴,N n为该平面内的法向量,用于计算相位延迟。
基于B n计算光轴的方法如下:
首先将Bn的第二维和第三维进行对调,得到x*y*3的矩阵(x和y代表图像像素的行数和列数),然后经过Msk_Thr滤波,即可得到光轴图像。
进一步,局部相位延迟采用如下公式计算:
δ n=arccos[(N n-1·N n)/(|N n-1|·|N n|)]
其中,δ n为局部相位延迟,N n为第n个密切平面内的法向量,N n-1为第n-1个密切平面的法向量。然后将δ n转换为RGB的三维数组,再经过Msk_Thr滤波,即可得到样品的局部相位延迟LocDP图像。
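局部相位延迟的计算可按"相邻密切平面法向量的夹角"这一描述示意如下(假设δ n=arccos[N n-1·N n/(|N n-1||N n|)],为示例性实现):

```python
import numpy as np

def local_retardation(n_prev, n_cur):
    """局部相位延迟: 相邻两个密切平面法向量N_{n-1}与N_n的夹角(弧度)。"""
    cos_d = np.dot(n_prev, n_cur) / (np.linalg.norm(n_prev) * np.linalg.norm(n_cur))
    return np.arccos(np.clip(cos_d, -1.0, 1.0))  # clip防止浮点误差越过[-1,1]

d = local_retardation(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```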
Bn所代表的光轴为叠加了不同深度组织双折射效应的结果,在组织深处,累积的双折射效应会导致结果的失真,所以要去除随深度累积的双折射效应,才能还原组织深处真实的光轴信息,即局部光轴,如图4中的A 2所示。
局部光轴A n计算过程如下:
A n=R n-1(-δ n-1;A n-1)R n-2(-δ n-2;A n-2)…R 1(-δ 1;A 1)B n
A 1=B 1
A 2=R 1(-δ 1;A 1)B 2
R n(δ n;A n)=cosδ n·E+(1-cosδ n)·A nA n T+sinδ n·[A n] ×,其中[A n] ×为由A n(x)、A n(y)、A n(z)构成的叉乘矩阵
其中,E为三阶标准单位矩阵,R n是第n-1个光轴到第n个光轴的3*3旋转矩阵,δ n为第n个密切平面的相位延迟,A n(x)、A n(y)、A n(z)分别为三维数组局部光轴An的三个维度。将A n通过待测样品的轮廓滤波,即可得到待测样品的局部光轴。
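上述逐层去除累积双折射效应、求取局部光轴的过程可示意如下(采用Rodrigues旋转公式构造3*3旋转矩阵,函数划分与参数形式均为示例性假设):

```python
import numpy as np

def rotation_matrix(delta, axis):
    """绕单位光轴axis旋转角度delta的3*3旋转矩阵(Rodrigues公式,示例实现)。"""
    a = axis / np.linalg.norm(axis)
    k = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])  # 叉乘矩阵[a]x
    return np.eye(3) + np.sin(delta) * k + (1 - np.cos(delta)) * (k @ k)

def local_axis(b_list, delta_list):
    """按A_1=B_1, A_n=R_{n-1}(-δ_{n-1};A_{n-1})…R_1(-δ_1;A_1)·B_n逐层还原局部光轴。"""
    axes = [b_list[0]]
    acc = np.eye(3)  # 累积旋转矩阵R_{n-1}…R_1
    for n in range(1, len(b_list)):
        acc = rotation_matrix(-delta_list[n - 1], axes[n - 1]) @ acc
        axes.append(acc @ b_list[n])
    return axes

v_rot = rotation_matrix(np.pi / 2, np.array([0.0, 0.0, 1.0])) @ np.array([1.0, 0.0, 0.0])
axes = local_axis([np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])], [np.pi / 2])
```

绕z轴旋转不改变z方向矢量,故上例中A 2与B 2一致,可作为实现的自检。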
在S4中,将多种图像进行平均梯度融合包括如下方法:
将偏振态图像、局部光轴图像和局部相位延迟图像,3种图像进行归一化;
利用梯度特征和可调的融合权值系数,将3种图像解算结果进行融合。
首先,将3种解算方法的结果进行归一化,在同一范围内计算。
P′=(P-min(P))/(max(P)-min(P))
式中,P代表偏振态、局部相位延迟、局部光轴这3种图像,均通过上式进行归一化。
梯度特征向量G计算公式如下:
G=[1/(M·N)]·∑ x=1 M∑ y=1 N sqrt{[Δ xP(x,y) 2+Δ yP(x,y) 2]/2}
式中,M和N分别代表图像的宽度和高度,Δ xP(x,y)和Δ yP(x,y)分别代表图像P(x,y)在x,y方向上的差分计算,计算公式如下:
Δ xP(x,y)=P(x+1,y)-P(x,y)
Δ yP(x,y)=P(x,y+1)-P(x,y)
利用梯度特征将3种解算结果进行融合,融合公式如下:
F P=a 1*G Stokes*Stokes′+a 2*G LocDP*LocDP′+a 3*G LocOptAxis*LocOptAxis′
式中,a 1+a 2+a 3=1,为融合权值系数。通过选取不同的融合权值系数,可以得到更清晰的融合图像,下同。信息量更大的图像权值更大,在具体实验中取a 1=0.4,a 2=0.3,a 3=0.3。
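平均梯度融合的计算流程(归一化、平均梯度特征、加权融合)可示意如下(平均梯度采用常用的差分定义,作为示例实现):

```python
import numpy as np

def avg_gradient(img):
    """平均梯度特征: G = (1/MN)·ΣΣ sqrt((Δx²+Δy²)/2) (常用定义,此处作示例)。"""
    dx = np.diff(img, axis=0)[:, :-1]  # x方向差分
    dy = np.diff(img, axis=1)[:-1, :]  # y方向差分
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2))

def gradient_fusion(imgs, weights):
    """F_P = Σ a_i·G_i·P_i': 图像先归一化到[0,1],再按梯度特征和权值系数加权融合。"""
    fused = np.zeros_like(imgs[0], dtype=float)
    for img, a in zip(imgs, weights):
        p = (img - img.min()) / (img.max() - img.min() + 1e-12)  # 归一化
        fused += a * avg_gradient(p) * p
    return fused

f = gradient_fusion([np.array([[0.0, 1.0], [2.0, 3.0]])] * 3, [0.4, 0.3, 0.3])
```

权值a 1、a 2、a 3可调,文中实验取0.4、0.3、0.3。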
在S4中,将多种图像进行加权融合包括如下方法:
S401、将偏振态图像、局部光轴图像和局部相位延迟图像,3种图像转换为灰度图像;
S402、对3种灰度图像分别进行灰度特征融合、形状特征融合和纹理特征融合后再次进行融合,得到最终的融合图像,按照如下融合公式得到最终PS-OCT图像:
F=d 1·Fusimage 1+d 2·Fusimage 2+d 3·Fusimage 3
其中,d i为融合系数,d 1=0.4,d 2=0.2,d 3=0.4,Fusimage i表示灰度特征融合、形状特征融合和纹理特征融合后的图像;
在S402中,所述灰度特征融合包括以下过程:
对灰度图像进行灰度特征值提取,所述灰度特征值包括均值、方差、能量、倾斜度、峰态;
将原来3种图像采用加权融合的方式融合成5张基于所述灰度特征值的图; 分别计算五个灰度特征值的融合图像;
对融合后的五张灰度特征值的融合图像再次进行融合,最终成为一张灰度融合图像。
在S402中,所述形状特征融合包括:对3种灰度图像进行形状特征提取,并对中心矩进行归一化;得出7个不变矩的形状特征,将7个形状特征作为形状特征矢量,形成一个形状特征矩阵,利用形状特征矩阵进行形状融合,得到形状融合图像。
在S402中,所述纹理特征融合包括:对3种灰度图像进行纹理特征提取,所述纹理特征包括:能量、熵、对比度和相关性;利用纹理特征构建纹理特征矢量,将3种灰度图像按照构建的4种纹理特征矢量,进行融合;形成4种纹理特征图像,再将所述4种纹理特征图像按照等权重进行融合,形成纹理融合图像。
1)灰度特征
a.灰度特征提取
灰度特征包括均值m、方差v 2、能量e、倾斜度s、峰态u五种统计量,这五种统计量的含义和计算公式如下:
首先给出灰度直方图的定义:
h(i)=d i/D
其中,i=0,1,2,……,T-1,T代表灰度级,d i代表灰度值为i时的像素数量,D代表图像的总像素数。
均值m:代表图像能量的平均值,计算公式如下:
m=∑ i=0 T-1 i·h(i)
方差v 2:代表图像灰度值的分布,计算公式如下:
v 2=∑ i=0 T-1 (i-m) 2·h(i)
能量e:代表图像灰度的分布,当分布越均匀时,能量越大,计算公式如下:
e=∑ i=0 T-1 h(i) 2
倾斜度s:表示图像直方图分布时的不对称性,倾斜度越高,直方图分布越不对称,计算公式如下:
s=(1/v 3)·∑ i=0 T-1 (i-m) 3·h(i)
峰态u:代表图像的灰度分布靠近于其均值,用于分析灰度分布是否汇聚在其均值周围,峰态值越小,则分布越集中于均值,计算公式如下:
u=(1/v 4)·∑ i=0 T-1 (i-m) 4·h(i)
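五种灰度统计量均可由灰度直方图直接计算,示意如下(倾斜度、峰态按标准化三阶/四阶矩的教科书定义实现,为示例性假设):

```python
import numpy as np

def gray_features(img, levels=256):
    """由灰度直方图h(i)=d_i/D计算均值、方差、能量、倾斜度、峰态。"""
    hist = np.bincount(img.ravel(), minlength=levels) / img.size
    i = np.arange(levels)
    m = np.sum(i * hist)                                     # 均值
    v2 = np.sum((i - m) ** 2 * hist)                         # 方差
    e = np.sum(hist ** 2)                                    # 能量
    s = np.sum((i - m) ** 3 * hist) / (v2 ** 1.5 + 1e-12)    # 倾斜度
    u = np.sum((i - m) ** 4 * hist) / (v2 ** 2 + 1e-12)      # 峰态
    return m, v2, e, s, u

img = np.array([[0, 0], [2, 2]], dtype=np.uint8)
m, v2, e, s, u = gray_features(img)
```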
b.灰度特征融合
将3种解算方法的结果基于五种灰度特征采用加权融合的方式分别融合成5张特征值的图,用h来表示特征矢量,h=(m,v 2,e,s,u),3种解算结果的特征矢量分别表示为h 1、h 2、h 3,各特征量的权值如下:
ω ij=h i(j)/[h 1(j)+h 2(j)+h 3(j)]
式中:i=1,2,3,j=1,2,3,4,5。
分别计算五个特征量的融合图像F j,公式如下:
F j=ω 1j·X 1+ω 2j·X 2+ω 3j·X 3
其中,X i(i=1,2,3)分别对应3种解算结果,F j(j=1,2,3,4,5)分别表示五种特征量的融合图像。
对融合后的五种特征量再次进行融合,得到一张灰度特征的图像,公式如下:
Fusimage 1=b 1F 1+b 2F 2+b 3F 3+b 4F 4+b 5F 5
其中b 1+b 2+b 3+b 4+b 5=1,实验中分别为0.2、0.2、0.3、0.3、0。
2)形状特征
a.形状特征提取
对于离散的数字图像P(x,y),图像的p+q阶标准矩表示为:
m pq=∑ x=1 X∑ y=1 Y x^p·y^q·P(x,y)
X、Y表示图像的宽度和高度,x,y表示宽度和高度方向的第x个和第y个像素,即在图像中的坐标。
p+q阶中心矩表示为:
μ pq=∑ x=1 X∑ y=1 Y (x-x̄)^p·(y-ȳ)^q·P(x,y)
其中,x̄和ȳ代表图像的重心,计算如下:
x̄=m 10/m 00
ȳ=m 01/m 00
按照下式对中心矩进行归一化:
η pq=μ pq/μ 00^ρ,其中ρ=(p+q)/2+1
得出七个不变矩的形状特征:
Ф 1=η 20+η 02
Ф 2=(η 20-η 02) 2+4η 11 2
Ф 3=(η 30-3η 12) 2+(3η 21-η 03) 2
Ф 4=(η 30+η 12) 2+(η 21+η 03) 2
Ф 5=(η 30-3η 12)(η 30+η 12)×[(η 30+η 12) 2-3(η 21+η 03) 2]+(3η 21-η 03)(η 21+η 03)×[3(η 30+η 12) 2-(η 21+η 03) 2]
Ф 6=(η 20-η 02)×[(η 30+η 12) 2-(η 21+η 03) 2]+4η 11(η 30+η 12)(η 21+η 03)
Ф 7=(3η 21-η 03)(η 30+η 12)×[(η 30+η 12) 2-3(η 21+η 03) 2]+(3η 12-η 30)(η 21+η 03)×[3(η 30+η 12) 2-(η 21+η 03) 2]
式中带数字下标的η值表示对应p和q阶的归一化中心矩。
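由归一化中心矩计算不变矩的过程可示意如下(此处仅演示Ф 1、Ф 2,其余五个不变矩同理;网格与重心的离散实现为示例):

```python
import numpy as np

def hu_moments(img):
    """由归一化中心矩η_pq计算前两个不变矩Φ1、Φ2(其余五个按正文公式同理)。"""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xb, yb = (x * img).sum() / m00, (y * img).sum() / m00  # 图像重心
    def eta(p, q):
        mu = ((x - xb) ** p * (y - yb) ** q * img).sum()   # p+q阶中心矩
        return mu / m00 ** ((p + q) / 2 + 1)               # 归一化
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

p1, p2 = hu_moments(np.ones((2, 2)))
a = np.array([[1.0, 2.0], [3.0, 4.0]])
inv = np.allclose(hu_moments(a), hu_moments(np.rot90(a)))  # 旋转不变性自检
```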
b.形状特征融合
形状特征矢量表示为M,M=(Φ 1234567)。
用M 1、M 2、M 3表示3种解算结果的形状特征矢量,组成一个矩阵表示为:
Figure PCTCN2022112138-appb-000030
求出每一列(即3幅图像)的最大值m j,且m j对应的初始图像为P j,得到形状特征的融合公式为:
Figure PCTCN2022112138-appb-000031
3)纹理特征
a.纹理特征提取
纹理特征的提取基于灰度共生矩阵。灰度直方图只能描述单个像素的灰度分布情况,而灰度共生矩阵能够描述两个像素的联合灰度分布情况。设图像中某一点(x,y)的灰度值为gx,当(x,y)按给定偏移量(i,j)移动到(x+i,y+j)时,对应点的灰度值为gy,二者构成灰度对(gx,gy)。统计整张图像中各灰度对出现的次数,把所有灰度对的计数组成一个方阵,再将每个灰度对出现的次数与总次数进行归一化,得到其出现概率P(gx,gy),称为灰度共生矩阵。灰度共生矩阵的归一化公式如下(其中Z表示方阵图像的宽度和高度,即图像大小为Z×Z):
Figure PCTCN2022112138-appb-000032
只利用灰度共生矩阵无法全面表示图像的纹理特征,因此引入能量、熵、对比度和相关性四个标量来辅助表示图像纹理特征。这四个标量的含义和计算方法如下:
能量E:能量数值不仅能描述灰度分布的均匀程度,而且在一定程度上表达了纹理粗细的情况。当灰度共生矩阵P(gx,gy)中全部参数数值相等时,E相对较小;当参数数值大小区分较明显时,E变大。当灰度共生矩阵中的参数靠近中心时,E较大,说明该图像纹理均匀性好且变化有规律。计算方法为把灰度共生矩阵P(gx,gy)中的元素各个参数先平方再求和,如下式:
E=∑ gx∑ gy P(gx,gy) 2
熵S:熵用来表示图像纹理的复杂度,当共生矩阵的值相对均匀时,熵较大。其计算公式如下:
S=-∑ gx∑ gy P(gx,gy)·logP(gx,gy)
对比度I:对比度是用来表达图像的清晰程度,对比度越小,用来表达物体表面凹凸程度的沟纹越浅,图像清晰程度越低。其计算公式如下:
I=∑ gx∑ gy (gx-gy) 2·P(gx,gy)
相关性R:相关性用来表示灰度共生矩阵P(gx,gy)的参数在水平和竖直方向上的相同程度,当某个方向的相关性值大于其他方向时,该方向的纹理特征明显,因此相关性可以寻找纹理相对强的方向。其计算公式如下:
R=∑ gx∑ gy (gx-μ x)(gy-μ y)·P(gx,gy)/(δ x·δ y)
其中μ x、μ y、δ y、δ x为中间变量,它们的计算公式如下:
μ x=∑ gx∑ gy gx·P(gx,gy)
μ y=∑ gx∑ gy gy·P(gx,gy)
δ x 2=∑ gx∑ gy (gx-μ x) 2·P(gx,gy)
δ y 2=∑ gx∑ gy (gy-μ y) 2·P(gx,gy)
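灰度共生矩阵及其四个标量特征的计算可示意如下(此处固定取水平方向偏移(1,0),偏移方向与灰度级数均为示例性假设):

```python
import numpy as np

def glcm_features(img, levels, dx=1, dy=0):
    """构建灰度共生矩阵并计算能量E、熵S、对比度I、相关性R(示例实现)。"""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1  # 统计灰度对(gx,gy)出现次数
    p = glcm / glcm.sum()  # 归一化为联合概率P(gx,gy)
    gx, gy = np.mgrid[:levels, :levels]
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    contrast = np.sum((gx - gy) ** 2 * p)
    mx, my = np.sum(gx * p), np.sum(gy * p)
    sx = np.sqrt(np.sum((gx - mx) ** 2 * p))
    sy = np.sqrt(np.sum((gy - my) ** 2 * p))
    corr = np.sum((gx - mx) * (gy - my) * p) / (sx * sy + 1e-12)
    return energy, entropy, contrast, corr

e, s, c, r = glcm_features(np.array([[0, 1], [0, 1]]), levels=2)
```

实际应用中可对多个偏移方向分别统计,以寻找纹理相对强的方向。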
b.纹理特征融合
将能量E、熵S、对比度I、相关性R构建纹理特征矢量Y=(E,S,I,R),3种解算结果图像的特征矢量分别为Y 1、Y 2、Y 3,权值计算公式如下:
ω ij=Y i(j)/[Y 1(j)+Y 2(j)+Y 3(j)]
先分别对3幅图像的不同纹理特征进行融合,得到四种融合后的特征图像,融合公式如下:
FI j=ω 1j·P 1(x,y)+ω 2j·P 2(x,y)+ω 3j·P 3(x,y)
P i(x,y)代表3种解算方式的结果图像,FI j分别表示融合后的四种特征图像。
最后再将四种特征图像进行融合,融合公式如下:
Fusimage 3=c 1FI 1+c 2FI 2+c 3FI 3+c 4FI 4
实验中采用平均融合,即c 1=c 2=c 3=c 4=0.25。
4)特征加权融合
通过以上三种特征分别得到了灰度、形状、纹理的融合结果,将这三种特征的结果再次进行融合,得到最终的融合图像。融合公式如下:
F=d 1·Fusimage 1+d 2·Fusimage 2+d 3·Fusimage 3
d i为融合系数,实验中取d 1=0.4,d 2=0.2,d 3=0.4,即:
F=0.4Fusimage 1+0.2Fusimage 2+0.4Fusimage 3
最后再将F转换为RGB的图像,即为本算法最终得到的图像处理结果。
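最终的特征加权融合及RGB转换可示意如下(此处以灰度复制三通道的方式生成伪RGB图像,仅为显示示例,并非本发明限定的着色方式):

```python
import numpy as np

def final_fusion(fus_gray, fus_shape, fus_texture, d=(0.4, 0.2, 0.4)):
    """F = d1*Fusimage1 + d2*Fusimage2 + d3*Fusimage3,再映射为8位伪RGB图像。"""
    f = d[0] * fus_gray + d[1] * fus_shape + d[2] * fus_texture
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)  # 归一化到[0,1]
    return (np.stack([f, f, f], axis=-1) * 255).astype(np.uint8)  # 伪RGB(示例)

a = np.array([[0.0, 1.0], [0.0, 1.0]])
out = final_fusion(a, a, a)
```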
如图5所示,本发明还提供一种基于偏振多参量融合的PS-OCT可视度提升系统,用于实施上述基于偏振多参量融合的PS-OCT可视度提升方法,包括图像获取模块,图像处理模块和图像融合模块;
所述图像获取模块,用于获取原始的PS-OCT图像,并对原始的PS-OCT图像进行预处理,获取待测样品的轮廓;
所述图像处理模块,用于利用待测样品轮廓,对基于Stokes矩阵,构建的QUV三维数组进行滤波,得到偏振态图像、偏振度DOPU图像和相位延迟DPPR图像;根据计算得出的偏振态,利用庞加莱球计算光轴图像、局部光轴图像和局部相位延迟图像;
所述图像融合模块,用于将上述得出的多种图像进行平均梯度融合或加权融合,融合后得到最终PS-OCT图像。
显然,上述实施例仅是为清楚地说明所作的举例,而并非对实施方式的限定。对于所属领域的普通技术人员来说,在上述说明的基础上还可以做出其它不同形式的变化或变动。这里无需也无法对所有的实施方式予以穷举。而由此所引伸出的显而易见的变化或变动仍处于本发明创造的保护范围之中。

Claims (13)

  1. 一种基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,包括以下步骤:
    S1、获取原始的PS-OCT图像,并对原始的PS-OCT图像进行预处理,获取待测样品的轮廓;
    S2、利用待测样品轮廓,对基于Stokes矩阵,构建的QUV三维数组进行滤波,得到偏振态图像;
    S3、根据计算得出的偏振态,利用庞加莱球计算局部光轴图像和局部相位延迟图像;
    S4、将S2和S3中得出的多种图像进行平均梯度融合或加权融合,融合后得到最终PS-OCT图像。
  2. 根据权利要求1所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S1中,所述预处理包括如下步骤:
    S101、将原始的PS-OCT图像中的两个偏振态H通道数据和V通道数据,乘以余弦锥型窗进行整形;
    S102、对整形后的数据进行傅里叶变换,得到H通道和V通道的傅里叶域矩阵;
    S103、对H通道和V通道的傅里叶域矩阵,取平均值后,作为H通道和V通道的原始图像,将H通道和V通道的原始图像进行融合;
    S104、对融合后的图像按照设定的阈值滤除噪声,得到待测样品的轮廓。
  3. 根据权利要求2所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S103中,将H通道和V通道的原始图像进行融合时,融合公式如下所示:
    Figure PCTCN2022112138-appb-100001
    其中,Stru total为融合后的图像,pH 1和pH 2分别是H通道的上、下两个图像,pV 1和pV 2分别是V通道的上、下两个图像。
  4. 根据权利要求1所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S2中,所述偏振态图像采用如下方法:
    对Stokes矩阵进行归一化后形成QUV三维数组,利用待测样品轮廓,对QUV三维数组进行滤波,将轮廓外的像素点置0,并将QUV三维数组以RGB的方式描绘出来,得到待测样品轮廓的偏振态彩色图像。
  5. 根据权利要求1所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S3中,在计算局部光轴图像时,还包括
    利用庞加莱球提取PS-OCT图像平面的空间副法向量Bn,将Bn的第二维和第三维进行对调,得到x*y*3的矩阵,其中,x和y代表图像像素的行数和列数;经过待测样品的轮廓滤波,得到光轴图像。
  6. 根据权利要求5所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,所述局部相位延迟采用如下公式计算:
    δ n=arccos[(N n-1·N n)/(|N n-1|·|N n|)]
    其中,δ n为局部相位延迟,N n为第n个密切平面内的法向量,N n-1为第n-1个密切平面的法向量。
  7. 根据权利要求6所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,所述局部光轴图像采用如下公式计算:
    A n=R n-1(-δ n-1;A n-1)R n-2(-δ n-2;A n-2)…R 1(-δ 1;A 1)B n
    其中,A n为局部光轴,B n代表叠加了不同深度组织双折射效应的光轴,R n是第n-1个光轴到第n个光轴的3*3旋转矩阵,δ n为第n个密切平面的相位延迟,A n(x)、A n(y)、A n(z)分别为三维数组局部光轴An的三个维度。
  8. 根据权利要求1所述的基于偏振多参量融合的PS-OCT可视度提升方法, 其特征在于,在S4中,将多种图像进行平均梯度融合包括如下方法:
    将偏振态图像、局部光轴图像和局部相位延迟图像,3种图像进行归一化;利用梯度特征和可调的融合权值系数,将3种图像解算结果进行融合。
  9. 根据权利要求1所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S4中,将多种图像进行加权融合包括如下方法:
    S401、将偏振态图像、局部光轴图像和局部相位延迟图像,3种图像转换为灰度图像;
    S402、对3种灰度图像分别进行灰度特征融合、形状特征融合和纹理特征融合后再次进行融合,得到最终的融合图像,按照如下融合公式得到最终PS-OCT图像:
    F=d 1·Fusimage 1+d 2·Fusimage 2+d 3·Fusimage 3
    其中,d i为融合系数,d 1=0.4,d 2=0.2,d 3=0.4,Fusimage i表示灰度特征融合、形状特征融合和纹理特征融合后的图像;
  10. 根据权利要求9所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S402中,所述灰度特征融合包括以下过程:
    对灰度图像进行灰度特征值提取,所述灰度特征值包括均值、方差、能量、倾斜度、峰态;
    将原来3种图像采用加权融合的方式融合成5张基于所述灰度特征值的图;分别计算五个灰度特征值的融合图像;
    对融合后的五张灰度特征值的融合图像再次进行融合,最终成为一张灰度融合图像。
  11. 根据权利要求9所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S402中,所述形状特征融合包括:对3种灰度图像进行形状特征提取,并对中心矩进行归一化;得出7个不变矩的形状特征,将7个形状特征作为形状特征矢量,形成一个形状特征矩阵,利用形状特征矩阵进行形状融合,得到形状融合图像。
  12. 根据权利要求9所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,在S402中,所述纹理特征融合包括:对3种灰度图像进行纹理特征提取,所述纹理特征包括:能量、熵、对比度和相关性;利用纹理特征构建纹理特征矢量,将3种灰度图像按照构建的4种纹理特征矢量,进行融合;形成4种纹理特征图像,再将所述4种纹理特征图像按照等权重进行融合,形成纹理融合图像。
  13. 一种基于偏振多参量融合的PS-OCT可视度提升系统,用于实施上述权利要求1-12中任意一项所述的基于偏振多参量融合的PS-OCT可视度提升方法,其特征在于,包括图像获取模块,图像处理模块和图像融合模块;
    所述图像获取模块,用于获取原始的PS-OCT图像,并对原始的PS-OCT图像进行预处理,获取待测样品的轮廓;
    所述图像处理模块,用于利用待测样品轮廓,对基于Stokes矩阵,构建的QUV三维数组进行滤波,得到偏振态图像,根据计算得出的偏振态,利用庞加莱球计算局部光轴图像和局部相位延迟图像;
    所述图像融合模块,用于将上述得出的多种图像进行平均梯度融合或加权融合,融合后得到最终PS-OCT图像。
PCT/CN2022/112138 2022-08-10 2022-08-12 基于偏振多参量融合的ps-oct可视度提升方法及系统 WO2024031643A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210953003.1 2022-08-10
CN202210953003.1A CN115035210B (zh) 2022-08-10 2022-08-10 基于偏振多参量融合的ps-oct可视度提升方法及系统

Publications (1)

Publication Number Publication Date
WO2024031643A1 true WO2024031643A1 (zh) 2024-02-15

Family

ID=83130328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112138 WO2024031643A1 (zh) 2022-08-10 2022-08-12 基于偏振多参量融合的ps-oct可视度提升方法及系统

Country Status (2)

Country Link
CN (1) CN115035210B (zh)
WO (1) WO2024031643A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090021480A (ko) * 2007-08-27 2009-03-04 연세대학교 산학협력단 자궁경부 진단용 편광감도-광간섭 영상시스템
CN102124299A (zh) * 2008-08-20 2011-07-13 国立大学法人东北大学 形状、倾斜度检测和/或计测光学装置和方法及其关联装置
CN102749600A (zh) * 2012-05-30 2012-10-24 苏州安科医疗系统有限公司 一种磁共振多通道图像的合成方法
CN105139367A (zh) * 2015-07-27 2015-12-09 中国科学院光电技术研究所 一种基于非下采样剪切波的可见光偏振图像融合方法
CN107909112A (zh) * 2017-11-27 2018-04-13 中北大学 一种红外光强与偏振图像多类变元组合的融合方法
CN109410160A (zh) * 2018-10-09 2019-03-01 湖南源信光电科技股份有限公司 基于多特征和特征差异驱动的红外偏振图像融合方法
CN109919949A (zh) * 2019-03-06 2019-06-21 中国科学院自动化研究所 图像精细化阴影区域分割系统、方法、装置
US20200396397A1 (en) * 2019-06-13 2020-12-17 Apple Inc. Multispectral Image Processing System and Method
CN114129139A (zh) * 2021-12-29 2022-03-04 南开大学 一种基于功能性光学相干层析术的斑马鱼活体成像系统及成像方法

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4378533B2 (ja) * 2005-10-04 2009-12-09 国立大学法人 筑波大学 光コヒーレンストモグラフィーの構成機器の較正方法
CN105606217B (zh) * 2016-01-08 2017-10-20 西安交通大学 一种图像、光谱、偏振态一体化获取装置及方法
CN107228712B (zh) * 2017-06-12 2019-04-12 哈尔滨工程大学 一种基于双窗口共路干涉成像的偏振态参量测量装置与方法
CN109164048B (zh) * 2018-09-18 2021-01-05 天津大学 一种对导管偏振敏感光学相干层析成像偏振解调方法
CN109886883A (zh) * 2019-01-21 2019-06-14 吉林大学 实时偏振透雾成像图像增强处理方法
CN110742584A (zh) * 2019-10-09 2020-02-04 南京沃福曼医疗科技有限公司 一种导管偏振敏感光学相干层析成像解调方法用偏振解算方法
CN110584613A (zh) * 2019-10-09 2019-12-20 南京沃福曼医疗科技有限公司 一种导管偏振敏感光学相干层析成像系统及解调方法
CN113888478B (zh) * 2021-09-15 2022-11-11 天津大学 一种对血管内导管偏振敏感相干层析成像优化去消偏方法
CN114755837B (zh) * 2022-06-15 2022-09-02 苏州大学 一种全庞加莱球偏振阵列光束的产生方法及装置


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI SHI-WEI; HUANG DAN-FEI; LIU HUI;WANG HUI-MIN;: "Secondary fusion algorithm for polarization images based on the multi-scale edge representation", LASER & INFRARED, vol. 48, no. 1, 31 January 2018 (2018-01-31), CN , pages 113 - 118, XP009552577, ISSN: 1001-5078 *

Also Published As

Publication number Publication date
CN115035210B (zh) 2022-11-11
CN115035210A (zh) 2022-09-09

Similar Documents

Publication Publication Date Title
Yu et al. Ultrasound speckle reduction by a SUSAN-controlled anisotropic diffusion method
Liebling et al. Four-dimensional cardiac imaging in living embryos<? xpp qa?> via postacquisition synchronization of nongated<? xpp qa?> slice sequences
CN113960908B (zh) 用于表征样本中的颗粒的全息方法
Li et al. Nonnegative mixed-norm preconditioning for microscopy image segmentation
CN109345469A (zh) 一种基于条件生成对抗网络的oct成像中散斑去噪方法
CN113689374B (zh) 一种植物叶片表面粗糙度确定方法及系统
CN108765476A (zh) 一种偏振图像配准方法
CN105842642A (zh) 基于峰度张量分数各向异性微结构特征提取方法与装置
CN111062978B (zh) 基于频域滤波技术的时空图像测流的纹理识别方法
CN112017130B (zh) 基于自适应各向异性全变分正则化的图像复原方法
Ramaswamy et al. Pre‐processing, registration and selection of adaptive optics corrected retinal images
CN117391955A (zh) 基于多帧光学相干层析图像的凸集投影超分辨率重建方法
CN111292256A (zh) 一种基于显微高光谱成像的纹理增强算法
WO2024031643A1 (zh) 基于偏振多参量融合的ps-oct可视度提升方法及系统
Izadi et al. Whitenner-blind image denoising via noise whiteness priors
CN106469450A (zh) 一种印刷品墨斑的检测方法及装置
Shi et al. A novel underwater sonar image enhancement algorithm based on approximation spaces of random sets
CN113935906B (zh) 傅里叶域光学相干层析成像的强反射条纹噪声去除方法
CN115147364A (zh) 一种适用于测量人眼睫状肌和巩膜结构多指标的方法
CN115511814A (zh) 一种基于兴趣区域多纹理特征融合的图像质量评价方法
Tournemenne et al. 3D shape analysis using overcomplete spherical wavelets: Application to BLEB detection in cell biology
Guryleva et al. Application Software for Visualization and Quantification of Capillary Networks
CN108648202A (zh) 一种带补偿策略的火山口形sar图像边缘检测方法
CN109949279A (zh) 一种基于皮肤图像的年龄分析系统
CN114549353B (zh) 一种用于核磁共振图像的去噪方法与系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22954602

Country of ref document: EP

Kind code of ref document: A1