CN116740216B - Ophthalmic optical coherence tomography image restoration method - Google Patents

Info

Publication number
CN116740216B
CN116740216B (application CN202310995767.1A)
Authority
CN
China
Prior art keywords
image
artifact
scale
under
thumbnail
Prior art date
Legal status
Active
Application number
CN202310995767.1A
Other languages
Chinese (zh)
Other versions
CN116740216A (en)
Inventor
Ling Yuye
Tang Yaoqi
Zhang Jie
Current Assignee
Always Wuxi Medical Technology Co ltd
Original Assignee
Always Wuxi Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Always Wuxi Medical Technology Co., Ltd.
Priority to CN202310995767.1A
Publication of CN116740216A
Application granted
Publication of CN116740216B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30041: Eye; Retina; Ophthalmic
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing


Abstract

The application relates to the technical field of image processing and discloses an ophthalmic optical coherence tomography image restoration method comprising the following steps: construct an artifact position detection module, and input an image with artifacts into it to obtain the width and starting position of each artifact in the image. By combining a sparse representation technique with a deep learning method, the application repairs artifacts in images at different scales from bottom to top, and integrates multi-scale image information through a super-resolution neural network and an attention weight matrix, so that a better artifact repair effect is obtained within limited computing resources and training time, solving the problem of traditional methods whose repair quality degrades rapidly as the artifact width grows.

Description

Ophthalmic optical coherence tomography image restoration method
Technical Field
The application relates to the technical field of image processing, in particular to an ophthalmic optical coherence tomography image restoration method.
Background
Optical coherence tomography (OCT) is a non-invasive, high-resolution imaging method widely used in ophthalmic diagnosis and treatment and in intravascular imaging to obtain structural information across the full depth of biological tissue. Ophthalmic OCT images are widely used for clinical diagnosis and computer-aided analysis, but because blood vessels inside the retina scatter and occlude the incident light, artifacts of varying widths inevitably appear in the images; these reduce image quality and obscure deep tissue structures. To address the artifact problem, image restoration techniques based on both traditional methods and deep learning have developed rapidly alongside image processing technology. In practice, however, the repair quality of traditional techniques depends heavily on image quality and is only suitable for small artifact regions, while deep learning repair techniques often require substantial computing resources and long offline training, and place additional demands on data volume and expert annotation.
Both traditional and deep learning repair techniques thus have practical shortcomings and cannot effectively repair vascular artifacts in optical coherence tomography (OCT) images, especially wide artifacts. As a result, the contrast of the occluded image region is reduced, the tissue structure is unclear, and the accuracy of subsequent clinical diagnosis is affected.
For the problems in the related art, no effective solution has been proposed at present.
Disclosure of Invention
In view of the shortcomings of the prior art, the application provides an ophthalmic optical coherence tomography image restoration method with the following advantages: it adopts a multi-scale artifact repair scheme that preserves image information at different scales; it uses a label-free pre-trained-dictionary sparse representation technique to extract image information at different scales from a learned dictionary; and it uses a deep learning method for fusion and super-resolution, restoring shadows in ophthalmic OCT images and improving image quality. This addresses the problems that traditional and deep learning repair techniques cannot effectively repair vascular artifacts in ophthalmic OCT images, especially wide artifacts, which reduces the contrast of the occluded region, blurs the tissue structure, and degrades the accuracy of subsequent clinical diagnosis.
To solve the technical problems that traditional and deep learning repair techniques cannot effectively repair vascular artifacts in OCT images, especially wide artifacts, thereby reducing the contrast of the occluded region, blurring the tissue structure, and affecting the accuracy of subsequent clinical diagnosis, the application provides the following technical scheme:
an ophthalmic optical coherence tomography image restoration method, comprising the following steps:
step one: constructing an artifact position detection module, and inputting an image with artifacts into it to obtain the width w and starting position x of each artifact in the image;
step two: based on the artifact width information, downsampling the image with artifacts at magnifications of 2^n to obtain T levels of thumbnails, the T levels corresponding to the scales 2^n respectively, where T is a natural number greater than 0 equal to the number of magnifications used;
wherein N = [log2([max(w)/p] + 1)], and max(w) is the width of the widest artifact in the artifact-bearing image;
the image block size p is set to 8, the image downsampling scales range over [1, 2, 2^2, …, 2^N], and T = N + 1;
Step three: acquiring a pre-trained sparse dictionary and a super-resolution neural network;
s1, constructing the training sets for the sparse dictionary and the super-resolution neural network at each scale 2^n;
s2, pre-training the sparse dictionary and the super-resolution neural network at each scale 2^n;
step four: at level T, repairing the artifacts in the thumbnail at scale 2^N using the sparse dictionary pre-trained at scale 2^N, obtaining the repaired image at scale 2^N;
step five: taking the repaired image at scale 2^N as the input of the super-resolution neural network at scale 2^N, obtaining the level-T image from the network output, and computing the attention weight matrix at scale 2^N from that image;
step six: using the attention weight matrix at scale 2^N and the repaired image at scale 2^N to guide artifact repair of the level-(T−1) thumbnail with the sparse dictionary pre-trained at scale 2^(N−1), obtaining the repaired image at scale 2^(N−1) corresponding to level T−1; taking this repaired image and the level-T image as the inputs of the fusion module at level T−1, obtaining a candidate image from the fusion module output, and then entering step seven;
step seven: judging whether the scale corresponding to the candidate image of step six is 2^0; if so, the candidate image is the target output image; if not, repeating steps five and six.
Preferably, the method of constructing the artifact position detection module and inputting the image with artifacts into it to obtain the width and position information of the artifacts is as follows:
it is based on prior information about retinal OCT images, namely that artifacts reduce the brightness of Bruch's membrane. For each column, the five brightest points are averaged to estimate the depth of Bruch's membrane; a curve is then fitted to the depths along the row direction by cubic spline interpolation. Outliers of the curve mark artifact positions: the start of a run of consecutive outliers is the artifact's starting position x, and the difference between the end and start of the run is the artifact's width w.
Preferably, constructing the training set at scale 2^i comprises the following steps:
acquire complete images with the OCT system, select images without artifacts, and downsample each image by bilinear interpolation at magnification 2^i to obtain the thumbnail Y_i; all Y_i form the sparse dictionary training set V_i;
downsample the image by bilinear interpolation at magnification 2^(i−1) to obtain the thumbnail Y_(i−1); all thumbnail pairs Y_i and Y_(i−1) form the super-resolution neural network training set S_i;
where i = 1, 2, …, N.
Preferably, the method of pre-training the sparse dictionary and the super-resolution neural network at scale 2^N is as follows: each image Y_i in the sparse dictionary training set V_i is randomly cut into J overlapping 8 × 8 image blocks {y_i^(j)}, where j = 1, 2, …, J, and the K-SVD algorithm is applied to the image blocks to obtain the pre-trained sparse dictionary D_i.
The super-resolution neural network takes the downsampled thumbnail Y_i of the training set S_i as input and outputs a higher-resolution image through the EDSR architecture; the L1 loss between this image and the thumbnail Y_(i−1) is computed, and the network is trained for 100 epochs by gradient backpropagation with the AdamW optimizer.
Preferably, the process of repairing artifacts in the thumbnail at scale 2^N with the sparse dictionary pre-trained at scale 2^N is as follows:
for the test image Y_N downsampled by 2^N, divide out, in an overlapping manner, all image blocks {y_N} containing artifact pixels, and compute the sparse coefficients α_N of {y_N} over the dictionary D_N at this scale:
α_N = argmin_α ‖y_N − D_N·α‖₂² subject to ‖α‖₀ ≤ L,
where the sparsity parameter L is set to 4. From the obtained coefficients, the repaired image block is computed as y_Nt = D_N·α_N. Each pixel in the columns occupied by the artifact is covered by p repaired image blocks ŷ; the similarity β between each repaired block ŷ and the artifact-free region of the original image at the corresponding position is computed, the obtained {β_1, β_2, …, β_p} are normalized, and the final pixel value is the weighted average of the pixel values of the p repaired blocks, yielding the repaired image Ŷ_N.
Preferably, regarding taking the repaired image at scale 2^N as the input of the super-resolution neural network at scale 2^N, obtaining the level-T image from the network output, and computing the attention weight matrix at scale 2^N from that image, the attention weight matrix at scale 2^N is obtained as follows:
the repaired image Ŷ_N at scale 2^N is passed through the super-resolution neural network to obtain an upsampled image; guided by the downsampled thumbnail Y_(N−1) at scale 2^(N−1), all image blocks containing artifacts that correspond to Y_(N−1) are extracted from the upsampled image, and for each of them the weight matrix centered on it is computed: V is the x × p² matrix obtained by flattening the x image blocks within a set window, and the weight matrix represents the similarity of the current image block to the other image blocks within the set window.
Preferably, the process of using the attention weight matrix at scale 2^N and the repaired image at scale 2^N to guide artifact repair of the level-(T−1) thumbnail with the sparse dictionary pre-trained at scale 2^(N−1) is as follows:
extract from the thumbnail Y_(N−1) at scale 2^(N−1) all image blocks {y_(N−1)} containing artifacts to be repaired, and compute the sparse coefficients of the image blocks at the corresponding positions;
the sparse coefficients of all image blocks within the window V are computed and multiplied by the attention weight matrix within the window to obtain the final repair coefficients α_(N−1,t); the repair result is computed as y_(N−1,t) = D_(N−1)·α_(N−1,t), finally yielding the repaired image Ŷ_(N−1) at scale 2^(N−1).
Preferably, regarding taking the repaired image at scale 2^(N−1) corresponding to level T−1 and the level-T image as the inputs of the fusion module at level T−1 and obtaining the candidate image from its output before entering step seven, the fusion process of the fusion module is as follows: the two artifact repair results are averaged and used as the input of the super-resolution neural network at scale 2^(N−2).
An ophthalmic optical coherence tomography image restoration system comprises an artifact position detection module, an artifact repair module, and a multi-scale information fusion module. The artifact position detection module locates artifact positions in the image; the artifact repair module then performs multi-scale repair from bottom to top; and the multi-scale information fusion module fuses multi-scale image information while guiding upper-layer artifact repair, obtaining the final target output image.
Compared with the prior art, the application provides an ophthalmic optical coherence tomography image restoration method, which has the following beneficial effects:
1. By combining a traditional method with a deep learning method, and addressing the inability of traditional algorithms to handle wide artifacts or their trade-off between repair quality and image quality, the application adopts a multi-scale artifact repair scheme that preserves image information at different scales. To relieve the deep learning method's demands on datasets and expert annotation, the application adopts a label-free pre-trained-dictionary sparse representation technique, extracts image information at different scales from the learned dictionary, and uses the deep learning method for fusion and super-resolution, restoring shadows in ophthalmic OCT images and improving image quality.
2. According to the application, by combining a sparse representation technology and a deep learning method, artifacts in images of different scales are repaired from bottom to top, and meanwhile, multi-scale image information is integrated through a super-resolution neural network and an attention moment array, so that a better artifact repairing effect is obtained in limited computing resources and training time, and the problem that the repairing effect is rapidly reduced along with the increase of the artifact width in the traditional method is solved.
Drawings
FIG. 1 is a flow chart of the present application;
FIG. 2 is a schematic diagram of the effect of the synthetic narrow artifact repair of the present application;
FIG. 3 is a schematic diagram of a synthetic wide artifact repair effect according to the present application;
fig. 4 is a schematic diagram of the effect of repairing the real artifacts of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As described in the background art, the present application provides an ophthalmic optical coherence tomography image restoration method for solving the above technical problems.
Referring to fig. 1-4, an ophthalmic optical coherence tomographic image restoration method includes:
step one: constructing an artifact position detection module, and inputting an image with an artifact into the artifact position detection module to obtain the width w and the starting position x of the artifact in the image;
It is based on prior information about retinal OCT images, namely that artifacts reduce the brightness of Bruch's membrane. For each column, the five brightest points are averaged to estimate the depth of Bruch's membrane; a curve is then fitted to the depths along the row direction by cubic spline interpolation. Outliers of the curve mark artifact positions: the start of a run of consecutive outliers is the artifact's starting position x, and the difference between the end and start of the run is the artifact's width w.
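The detection step above can be sketched in code. This is a minimal illustration assuming a grayscale B-scan stored as a NumPy array; for simplicity it replaces the row-wise cubic-spline fit of the original method with a global median baseline, so it is a hedged simplification rather than the patented procedure.

```python
import numpy as np

def detect_artifact(img, k=5, thresh=10.0):
    """Locate a vertical shadow artifact in a B-scan.

    For each column, average the depths (row indices) of the k brightest
    pixels as a crude Bruch's-membrane depth estimate, compare each
    column's estimate to a global median baseline (a stand-in for the
    cubic-spline fit), and return (start_x, width) of the widest run of
    outlier columns, or (None, 0) if none is found.
    """
    h, w_cols = img.shape
    depth = np.empty(w_cols)
    for c in range(w_cols):
        brightest = np.argsort(img[:, c])[-k:]   # k brightest rows
        depth[c] = brightest.mean()
    baseline = np.median(depth)
    flagged = np.abs(depth - baseline) > thresh  # outlier columns
    runs, start = [], None
    for c in range(w_cols):
        if flagged[c] and start is None:
            start = c
        elif not flagged[c] and start is not None:
            runs.append((start, c - start))
            start = None
    if start is not None:
        runs.append((start, w_cols - start))
    return max(runs, key=lambda r: r[1]) if runs else (None, 0)
```

On a synthetic B-scan with a bright membrane band interrupted by a shadow, the function returns the shadow's starting column and width, matching the roles of x and w in the text.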
Step two: 2 for an image with artifacts based on the width information of the artifacts N Sampling under multiplying power to obtain thumbnail images of T grades, wherein the T grades respectively correspond to 2 N A multiple scale, wherein T is a natural number greater than 0, and the value of T is the same as the number of multiplying power adopted;
wherein n= [ log ] 2 ([max(w)/p]+1)]Max (w) refers to the width of the widest artifact in the artifact-bearing image;
the image block size p is set to 8, and the range of the image downsampling scale is [1,2 2 ,…,2 N ],T=N+1;
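The scale selection above can be made concrete with a small sketch, under the assumption that the square brackets in the formula denote flooring (the patent's notation leaves this implicit):

```python
import math

def pyramid_levels(max_w, p=8):
    """Number of downsampling levels for an artifact of maximum width max_w.

    N = floor(log2(floor(max_w / p) + 1)); there are T = N + 1 levels
    with scales 1, 2, 4, ..., 2**N.
    """
    N = int(math.log2(max_w // p + 1))
    return N, N + 1, [2 ** n for n in range(N + 1)]
```

For example, a widest artifact of 30 pixels with p = 8 gives N = 2, so three levels at scales 1, 2, and 4; an artifact narrower than p gives a single level at full resolution.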
Step three: acquiring a pre-trained sparse dictionary and a super-resolution neural network;
the SR module is a super-resolution neural network;
S1: construct the training sets for the sparse dictionary and the super-resolution neural network at each scale 2^n;
Acquire complete images with the OCT system, select higher-quality images that contain no artifacts, and downsample each image by bilinear interpolation at magnification 2^i to obtain the thumbnail Y_i; all Y_i form the sparse dictionary training set V_i.
Downsample the image by bilinear interpolation at magnification 2^(i−1) to obtain the thumbnail Y_(i−1); all thumbnail pairs Y_i and Y_(i−1) form the super-resolution neural network training set S_i,
where i = 1, 2, …, N;
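The training-set construction above can be sketched as a simple image pyramid. This sketch substitutes 2 × 2 average pooling for the bilinear interpolation named in the text (an assumption made to keep the example dependency-free); the set layout mirrors V_i and S_i from the description.

```python
import numpy as np

def downsample2x(img):
    """2x downsample by 2x2 block averaging (a simple stand-in for the
    bilinear interpolation used in the patent)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_training_sets(images, N):
    """V[i]: thumbnails at scale 2**i (sparse-dictionary training set);
    S[i]: (Y_i, Y_{i-1}) pairs (super-resolution training set), i = 1..N."""
    V = {i: [] for i in range(1, N + 1)}
    S = {i: [] for i in range(1, N + 1)}
    for img in images:
        pyr = [img]                       # pyr[i] is the scale-2**i thumbnail
        for _ in range(N):
            pyr.append(downsample2x(pyr[-1]))
        for i in range(1, N + 1):
            V[i].append(pyr[i])
            S[i].append((pyr[i], pyr[i - 1]))
    return V, S
```

Each S_i pair holds the coarser thumbnail as network input and the next-finer thumbnail as the super-resolution target.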
S2: pre-train the sparse dictionary and the super-resolution neural network at scale 2^N;
Each image Y_i in the sparse dictionary training set V_i is randomly cut into J overlapping 8 × 8 image blocks {y_i^(j)}, where j = 1, 2, …, J, and the K-SVD algorithm is applied to the image blocks to obtain the pre-trained sparse dictionary D_i.
Preferably, J is 30000.
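The random patch extraction feeding K-SVD can be sketched as follows; this illustrates only the cropping-and-flattening step (the K-SVD dictionary update itself is not reproduced here), with the patch count J reduced for illustration.

```python
import numpy as np

def extract_patches(img, p=8, J=1000, seed=0):
    """Randomly crop J overlapping p x p patches from img and flatten each
    to a column of the returned (p*p, J) matrix -- the data-matrix format
    typically expected by K-SVD dictionary training."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    rows = rng.integers(0, h - p + 1, size=J)
    cols = rng.integers(0, w - p + 1, size=J)
    return np.stack(
        [img[r:r + p, c:c + p].ravel() for r, c in zip(rows, cols)],
        axis=1,
    )
```

With p = 8 the result has 64 rows, one per pixel of an 8 × 8 block, matching the dictionary atom length used in the text.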
The super-resolution neural network takes the downsampled thumbnail Y_i of the training set S_i as input and outputs a higher-resolution image through the EDSR architecture; the mean absolute error (L1 loss) between this image and the thumbnail Y_(i−1) is computed, and the network is trained for 100 epochs by gradient backpropagation with the AdamW optimizer;
Step four: at level T, repair the artifacts in the thumbnail at scale 2^N using the sparse dictionary pre-trained at scale 2^N, obtaining the repaired image at scale 2^N;
For the test image Y_N downsampled by 2^N, divide out, in an overlapping manner, all image blocks {y_N} containing artifact pixels, and compute the sparse coefficients α_N of {y_N} over the dictionary D_N at this scale:
α_N = argmin_α ‖y_N − D_N·α‖₂² subject to ‖α‖₀ ≤ L,
where the sparsity parameter L is set to 4. From the obtained coefficients, the repaired image block is computed as y_Nt = D_N·α_N. Each pixel in the columns occupied by the artifact is covered by p repaired image blocks ŷ; the similarity β between each repaired block ŷ and the artifact-free region of the original image at the corresponding position is computed, the obtained {β_1, β_2, …, β_p} are normalized, and the final pixel value is the weighted average of the pixel values of the p repaired blocks, yielding the repaired image Ŷ_N.
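Sparse coding with an L0 constraint of this kind is typically solved with a greedy pursuit; orthogonal matching pursuit (OMP) is the standard coding stage paired with K-SVD. The sketch below is an illustrative stand-in, not necessarily the exact solver used by the patent.

```python
import numpy as np

def omp(D, y, L=4):
    """Orthogonal matching pursuit: find alpha with at most L nonzeros
    such that D @ alpha approximates y (columns of D assumed unit-norm).

    Each iteration picks the atom most correlated with the residual,
    then re-solves the least-squares fit over the chosen support.
    """
    resid = y.astype(float).copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(L):
        j = int(np.argmax(np.abs(D.T @ resid)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    alpha[support] = coef
    return alpha
```

With an orthonormal dictionary and a signal built from two atoms, OMP with L = 2 recovers the signal exactly, mirroring the role of α_N = argmin ‖y_N − D_N·α‖₂² s.t. ‖α‖₀ ≤ L in the text.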
Step five: will 2 N The restored image at the double scale is taken as 2 N The input of super-resolution neural network under multiple scale, the output of the super-resolution neural network is used to obtain a T-level image, and the image is used to calculate 2 N Attention weight matrix under the multiple scale;
The repaired image Ŷ_N at scale 2^N is passed through the super-resolution neural network to obtain an upsampled image; guided by the downsampled thumbnail Y_(N−1) at scale 2^(N−1), all image blocks containing artifacts that correspond to Y_(N−1) are extracted from the upsampled image, and for each of them the weight matrix centered on it is computed: V is the x × p² matrix obtained by flattening the x image blocks within a set window, and the weight matrix represents the similarity of the current image block to the other image blocks within the set window.
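The explicit expression for the weight matrix is garbled in the source text; a plausible reconstruction is a softmax over negative squared distances between the flattened center block and its neighbors, sketched below as a labeled assumption rather than the patent's exact formula.

```python
import numpy as np

def attention_weights(center, neighbors):
    """Normalized similarity weights between a flattened center patch and
    the x flattened patches of its window (rows of `neighbors`).

    Weights are a softmax over negative squared Euclidean distances -- an
    assumed form, since the original expression is not recoverable.
    """
    d2 = ((neighbors - center) ** 2).sum(axis=1)  # squared distance per patch
    w = np.exp(-d2)
    return w / w.sum()
```

A patch identical to the center receives the largest weight, and the weights sum to one, which is what is needed to blend the sparse coefficients of neighboring blocks in the next step.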
Step six: will 2 N Attention weight matrix at multiple scale and 2 N The restored image at the double scale is used for guiding the thumbnail at the T-1 level according to 2 N-1 Artifact repair is carried out on the pre-trained sparse dictionary under multiple scales to obtain 2 corresponding to the T-1 grade N-1 The repaired image under the multiple scale is subjected to the treatment of 2 corresponding to the T-1 grade N-1 Taking the repaired image under the multiple scale and the T-level image corresponding to the T level as the input of a fusion module under the T-1 level, obtaining a suspected image through the output of the fusion module, and then entering a step seven;
The process of using the attention weight matrix at scale 2^N and the repaired image at scale 2^N to guide artifact repair of the level-(T−1) thumbnail with the sparse dictionary pre-trained at scale 2^(N−1) is as follows:
extract from the thumbnail Y_(N−1) at scale 2^(N−1) all image blocks {y_(N−1)} containing artifacts to be repaired, and compute the sparse coefficients of the image blocks at the corresponding positions;
the sparse coefficients of all image blocks within the window V are computed and multiplied by the attention weight matrix within the window to obtain the final repair coefficients α_(N−1,t); the repair result is computed as y_(N−1,t) = D_(N−1)·α_(N−1,t), finally yielding the repaired image Ŷ_(N−1) at scale 2^(N−1).
Regarding taking the repaired image at scale 2^(N−1) corresponding to level T−1 and the level-T image as the inputs of the fusion module at level T−1 and obtaining the candidate image from its output before entering step seven, the fusion process of the fusion module is as follows: the two artifact repair results are averaged and used as the input of the super-resolution neural network at scale 2^(N−2).
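Since the fusion function φ is set to averaging in the embodiment, the fusion module reduces to a one-line operation on the two repair results (assumed here to be arrays of equal shape):

```python
def fuse(repaired_up, sr_image):
    """Fusion module with phi = 'average': the upsampled repair result and
    the super-resolution output are averaged elementwise."""
    return 0.5 * (repaired_up + sr_image)
```

The averaged result is then passed to the super-resolution network at the next-finer scale, as described above.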
Step seven: judging whether the scale corresponding to the suspected image in the step six is 2 0 The scale is multiplied, if yes, the suspected image is the target output image; if not, the fifth step and the sixth step are circulated.
Further, the operating parameters are: J = 30000; image block size p = 8; dictionary atom count k = 128; sparsity L = 2; attention weight matrix window size x = 3; fusion function φ set to averaging. For training the EDSR network, 800 retinal OCT images were randomly selected from the OCTA-500 dataset to pre-train the super-resolution network, with image block size 48 × 48, batch size 16, initial learning rate 1 × 10⁻⁴, a training period of 200 epochs, and the Adam optimizer. All experiments were trained on an NVIDIA GeForce RTX 3090.
An ophthalmic optical coherence tomography image restoration system comprises an artifact position detection module, an artifact repair module, and a multi-scale information fusion module. The artifact position detection module locates artifact positions in the image; the artifact repair module then performs multi-scale repair from bottom to top; and the multi-scale information fusion module fuses multi-scale image information while guiding upper-layer artifact repair, obtaining the final target output image.
Case description:
The performance of the application was evaluated using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPS); experiments were performed on synthetic artifacts, and the results are shown in Table 1.
TABLE 1
TV is detailed in: Getreuer P. Total variation inpainting using split Bregman [J]. Image Processing On Line, 2012, 2: 147-157.
RN is detailed in: Yu T, Guo Z, Jin X, et al. Region normalization for image inpainting [C] // Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(07): 12733-12740.
RFR is detailed in: Li J, Wang N, Zhang L, et al. Recurrent feature reasoning for image inpainting [C] // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 7760-7768.
MSR is detailed in: Tang Y, Li Y, Liu H, et al. Multi-scale sparse representation-based shadow inpainting for retinal OCT images [C] // Medical Imaging 2022: Image Processing. SPIE, 2022, 12032: 9-17.
Compared with the traditional TV method, the application achieves a PSNR improvement of 2.82 dB, and on the structural similarity metric SSIM it also obtains an improvement of 0.78. Compared with the deep learning method RFR, its PSNR and SSIM are 0.23 dB and 0.0221 higher, respectively, and its LPIPS is 0.32 lower than that of MSR. Compared with MSR, which combines deep learning with traditional methods, the application also achieves a PSNR improvement of 0.31 dB.
The visual effect of synthetic-artifact repair is shown in fig. 2 and 3. Fig. 2 shows the case of a smaller artifact width; the experimental results show that the repaired image obtained by the method contains fewer artifacts and restores the multi-layer tissue structure of the retina. Fig. 3 further illustrates the performance of the proposed scheme when the artifact width is large: it still achieves a good repair effect, whereas the conventional algorithms degrade significantly in quality.
The repair of real artifacts is shown in Fig. 4; the proposed scheme also obtains the best repair results on real artifacts and recovers the continuous structure of the retina.
Although embodiments of the present application have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made hereto without departing from the spirit and principles of the present application.

Claims (8)

1. An ophthalmic optical coherence tomography image restoration method, characterized by comprising the following steps:
step one: constructing an artifact position detection module, and inputting an artifact-bearing image into the artifact position detection module to obtain the width w and the starting position x of the artifact in the image;
step two: based on the width information of the artifacts, downsampling the artifact-bearing image at power-of-two magnifications to obtain thumbnails of T levels, the T levels respectively corresponding to the 2^N scales, wherein T is a natural number greater than 0 and its value equals the number of magnifications used;
wherein N = [log2([max(w)/p] + 1)], and max(w) refers to the width of the widest artifact in the artifact-bearing image;
the image block size p is set to 8, the image downsampling scales range over [1, 2, 2^2, …, 2^N], and T = N + 1;
step three: acquiring the pre-trained sparse dictionaries and super-resolution neural networks;
S1, constructing the sparse dictionary and super-resolution neural network training sets at the 2^N scale;
S2, pre-training the sparse dictionary and the super-resolution neural network at the 2^N scale;
step four: at the T-th level, performing artifact repair on the thumbnail at the 2^N scale according to the sparse dictionary pre-trained at the 2^N scale, to obtain the repaired image at the 2^N scale;
step five: taking the repaired image at the 2^N scale as the input of the super-resolution neural network at the 2^N scale, obtaining the T-level image from the output of the super-resolution neural network, and using this image to calculate the attention weight matrix at the 2^N scale;
passing the repaired image at the 2^N scale through the super-resolution network to obtain the corresponding image at the next finer scale; according to the downsampled thumbnail Y_{N-1} at the 2^{N-1} scale, extracting all artifact-containing image blocks of Y_{N-1} that correspond to it, and calculating, for each such block, the weight matrix centered on that block, where V is the x × p^2 matrix obtained by flattening the x image blocks in a set window; the weight matrix represents the similarity between the current image block and the other image blocks in the set window;
step six: using the attention weight matrix at the 2^N scale and the repaired image at the 2^N scale to guide the thumbnail at the T-1 level, performing artifact repair according to the sparse dictionary pre-trained at the 2^{N-1} scale, to obtain the repaired image at the 2^{N-1} scale corresponding to the T-1 level; taking the repaired image at the 2^{N-1} scale corresponding to the T-1 level and the T-level image corresponding to the T level as the input of the fusion module at the T-1 level, obtaining a candidate image from the output of the fusion module, and then entering step seven;
step seven: judging whether the scale corresponding to the candidate image of step six is the 2^0 scale; if yes, the candidate image is the target output image; if not, repeating steps five and six.
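Steps two through seven of claim 1 can be sketched end-to-end as below. This is an illustrative numpy skeleton under stated assumptions: the ceiling roundings in the N formula are guesses at the claim's bracket notation, average pooling stands in for bilinear downsampling, and `repair`, `super_resolve` and `fuse` are hypothetical stand-ins for the dictionary repair, the EDSR network and the fusion module.

```python
import math
import numpy as np

def num_scales(artifact_widths, p: int = 8) -> int:
    """N = log2([max(w)/p] + 1) of step two; ceiling rounding is assumed."""
    return math.ceil(math.log2(math.ceil(max(artifact_widths) / p) + 1))

def downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Power-of-two average pooling, a stand-in for bilinear downsampling."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def coarse_to_fine_repair(image, n, repair, super_resolve, fuse):
    """Steps four to seven: repair the coarsest thumbnail first, then let each
    super-resolved result guide the repair one level finer, down to scale 2^0."""
    thumbs = [downsample(image, 2 ** i) for i in range(n + 1)]   # T = N+1 levels
    current = repair(thumbs[-1], guidance=None)                  # step four
    for level in range(n - 1, -1, -1):                           # steps five-seven
        upsampled = super_resolve(current)                       # step five
        repaired = repair(thumbs[level], guidance=upsampled)     # step six
        current = fuse(repaired, upsampled)
    return current                                               # target output

n = num_scales([5, 30, 17])        # widest artifact 30 px, p = 8  ->  N = 3
out = coarse_to_fine_repair(
    np.zeros((32, 32)), n,
    repair=lambda t, guidance: t,                         # identity stand-ins,
    super_resolve=lambda x: np.kron(x, np.ones((2, 2))),  # just to exercise flow
    fuse=lambda a, b: (a + b) / 2,
)
print(n, out.shape)
```

With identity stand-ins the loop simply walks the pyramid back up to full resolution; the real method replaces each lambda with the trained component at that scale.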
2. The ophthalmic optical coherence tomography image restoration method according to claim 1, wherein the method of constructing the artifact position detection module and inputting the artifact-bearing image into it to obtain the width information and position information of the artifact in the image comprises:
exploiting prior information of retinal OCT images, namely that artifacts reduce the brightness of the Bruch's membrane: for each column, the five brightest points are averaged to obtain the depth of the Bruch's membrane; curve fitting is then performed on the depths row-wise using cubic spline interpolation; outliers of the fitted curve mark the artifact positions, the start of a run of consecutive outliers is the starting position x of the artifact, and the difference between the end and start positions of the run is the width w of the artifact.
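The detection heuristic of claim 2 can be sketched as follows. This is a numpy illustration under assumptions: a robust median baseline stands in for the row-wise cubic-spline fit of the claim, and the 3-sigma outlier threshold is an invention for the sketch.

```python
import numpy as np

def detect_artifact(bscan: np.ndarray, k: int = 5, thresh: float = 3.0):
    """Average the row indices of the k brightest pixels per column to estimate
    the Bruch's membrane depth, then flag columns whose depth deviates from a
    smooth baseline. A median baseline stands in for the claim's cubic-spline
    fit; the threshold is an assumption."""
    depth = np.array([np.argsort(bscan[:, c])[-k:].mean()
                      for c in range(bscan.shape[1])])
    resid = np.abs(depth - np.median(depth))
    cols = np.flatnonzero(resid > thresh * (resid.std() + 1e-9))
    if cols.size == 0:
        return None, 0
    return int(cols[0]), int(cols[-1] - cols[0] + 1)   # start x, width w

# Synthetic B-scan: bright membrane band at rows 38-42, shadow over columns 20-27
img = np.zeros((64, 100))
img[38:43, :] = 100.0
img[38:43, 20:28] = 0.0        # the artifact suppresses the membrane brightness
print(detect_artifact(img))    # (20, 8)
```

In the shadowed columns the k brightest pixels no longer sit on the membrane, so the estimated depth jumps and those columns are flagged as one run of consecutive outliers.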
3. The ophthalmic optical coherence tomography image restoration method according to claim 2, wherein constructing the training set at the 2^N scale comprises the following steps:
complete images are acquired by the OCT system and images without artifacts are selected; each image is downsampled by bilinear interpolation at 2^i magnification to obtain thumbnail Y_i, and all Y_i form the sparse dictionary training set V_i;
each image is downsampled by bilinear interpolation at 2^{i-1} magnification to obtain thumbnail Y_{i-1}, and all thumbnail pairs Y_i and Y_{i-1} constitute the super-resolution neural network training set S_i;
where i = 1, 2, …, N.
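The pairing of training sets V_i and S_i can be sketched as below; 2x2 average pooling is a stand-in for the bilinear-interpolation downsampling named in the claim.

```python
import numpy as np

def downsample2x(img: np.ndarray) -> np.ndarray:
    """2x2 average pooling, a simple stand-in for bilinear downsampling
    (both halve each dimension)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_training_sets(clean_images, n: int):
    """For i = 1..N: all thumbnails Y_i (factor 2^i) form the dictionary
    training set V_i, and the (Y_i, Y_{i-1}) pairs form the super-resolution
    training set S_i, pairing each coarse input with its finer target."""
    v, s = {}, {}
    for img in clean_images:
        pyramid = [np.asarray(img, dtype=float)]        # Y_0: full resolution
        for _ in range(n):
            pyramid.append(downsample2x(pyramid[-1]))
        for i in range(1, n + 1):
            v.setdefault(i, []).append(pyramid[i])
            s.setdefault(i, []).append((pyramid[i], pyramid[i - 1]))
    return v, s

v, s = build_training_sets([np.ones((32, 32))], n=2)
print(v[2][0].shape, s[1][0][1].shape)   # (8, 8) (32, 32)
```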
4. The ophthalmic optical coherence tomography image restoration method according to claim 3, wherein the method of pre-training the sparse dictionary and the super-resolution neural network at the 2^N scale comprises the following steps: each image Y_i of the sparse dictionary training set V_i is randomly cut into J overlapping 8×8 image blocks {y_i^j}, where j = 1, 2, …, J, and the K-SVD algorithm is applied to the image blocks to obtain the pre-trained sparse dictionary D_i;
the super-resolution neural network takes the downsampled thumbnail Y_i of training set S_i as input and outputs a higher-resolution image through the EDSR architecture; the L1 loss between this image and the thumbnail Y_{i-1} is calculated, and the network is trained for 100 epochs by gradient back-propagation using the AdamW optimizer.
5. The ophthalmic optical coherence tomography image restoration method according to claim 4, wherein artifact repair of the thumbnail at the 2^N scale according to the sparse dictionary pre-trained at the 2^N scale proceeds as follows:
the downsampled 2^N test image Y_N is divided, with overlap, into all image blocks {y_N} containing artifact pixels; based on the dictionary D_N at this scale, the sparse coefficients α_N of {y_N} are calculated as:
α_N = argmin_α ||y_N − D_N·α||_2^2 subject to ||α||_0 ≤ L;
wherein the sparsity parameter L is set to 4; with the obtained sparse coefficients α_N, the repaired image block y_{N,t} is calculated by y_{N,t} = D_N·α_N; for each pixel point of the rows where the artifact is located, the p repaired image blocks covering it are obtained, and the similarity between each repaired image block y_{N,t} and the corresponding artifact-free positions of the original image is calculated; the obtained {β_1, β_2, …, β_p} are normalized, and the final pixel value is the weighted average of the corresponding pixel values of the p repaired image blocks, yielding the repaired image at the 2^N scale.
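The sparse-coding step of claim 5 can be sketched with a plain orthogonal matching pursuit; the claim does not name the pursuit algorithm, so OMP with sparsity L = 4 is an assumption consistent with common K-SVD practice.

```python
import numpy as np

def omp(D: np.ndarray, y: np.ndarray, L: int = 4) -> np.ndarray:
    """Greedy orthogonal matching pursuit: pick at most L atoms (columns of D,
    assumed unit-norm) and least-squares refit the block after each pick."""
    support, residual = [], y.astype(float).copy()
    alpha = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(L):
        if np.linalg.norm(residual) < 1e-12:
            break                                    # block already explained
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    if support:
        alpha[np.array(support)] = coef
    return alpha

# Toy check: with an orthonormal dictionary a 2-sparse block is recovered exactly;
# the repaired block is then D @ alpha, the claim's y_{N,t} = D_N * alpha_N
D = np.eye(4)
y = np.array([0.0, 3.0, 0.0, -2.0])
alpha = omp(D, y)
print(np.allclose(D @ alpha, y))   # True
```

The p overlapping repaired blocks covering an artifact pixel are then blended with the normalized similarity weights β, as the claim describes.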
6. The ophthalmic optical coherence tomography image restoration method according to claim 5, wherein the process of using the attention weight matrix at the 2^N scale and the repaired image at the 2^N scale to guide the thumbnail at the T-1 level, with artifact repair according to the sparse dictionary pre-trained at the 2^{N-1} scale, is as follows:
all artifact-containing image blocks {y_{N-1}} to be repaired are extracted from the thumbnail Y_{N-1} at the 2^{N-1} scale, and the sparse coefficients of the image blocks at the corresponding positions are calculated;
the sparse coefficients of all image blocks within the window V are calculated, the sparse coefficients within the window are multiplied by the attention weight matrix to obtain the final repair coefficients α_{N-1,t}, the repair result is calculated as y_{N-1,t} = D_{N-1}·α_{N-1,t}, and the repaired image at the 2^{N-1} scale is finally obtained.
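The block-similarity weights used in claims 1 and 6 can be sketched as below. The softmax-over-distances form is an assumption; the claims only state that the matrix encodes the similarity between the current block and the other blocks in the window.

```python
import numpy as np

def attention_weights(query_block: np.ndarray, window_blocks) -> np.ndarray:
    """Similarity of the current block to the x blocks in a search window:
    the blocks are flattened into V (x rows of p*p pixels) and a softmax over
    negative squared distances yields the weight vector."""
    v = np.stack([b.reshape(-1) for b in window_blocks])     # V: x × p^2
    s = -((v - query_block.reshape(-1)) ** 2).sum(axis=1)
    w = np.exp(s - s.max())                                  # stable softmax
    return w / w.sum()

# An identical block gets (almost) all the weight; a dissimilar one almost none
q = np.full((8, 8), 5.0)
w = attention_weights(q, [np.full((8, 8), 5.0), np.zeros((8, 8))])
print(w[0] > 0.99, round(float(w.sum()), 6))
```

The guided coefficients α_{N-1,t} of claim 6 then follow by combining the window's sparse coefficient vectors with these weights before reconstructing y_{N-1,t} = D_{N-1}·α_{N-1,t}.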
7. The ophthalmic optical coherence tomography image restoration method according to claim 6, wherein, with the repaired image at the 2^{N-1} scale corresponding to the T-1 level and the T-level image corresponding to the T level taken as the input of the fusion module at the T-1 level and the candidate image obtained from the output of the fusion module, the fusion process of the fusion module in step seven is as follows: the two artifact repair results are averaged and used as the input of the super-resolution neural network at the 2^{N-2} scale.
8. An ophthalmic optical coherence tomography image restoration system using the ophthalmic optical coherence tomography image restoration method according to claim 7, characterized by comprising an artifact position detection module, an artifact repair module and a multi-scale information fusion module, wherein the artifact position detection module locates the artifact position in the image, the artifact repair module then performs multi-scale repair from bottom to top, and the multi-scale information fusion module fuses the multi-scale image information while guiding the upper-layer artifact repair, so as to obtain the final target output image.
CN202310995767.1A 2023-08-09 2023-08-09 Ophthalmic optical coherence tomography image restoration method Active CN116740216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310995767.1A CN116740216B (en) 2023-08-09 2023-08-09 Ophthalmic optical coherence tomography image restoration method


Publications (2)

Publication Number Publication Date
CN116740216A CN116740216A (en) 2023-09-12
CN116740216B true CN116740216B (en) 2023-11-07

Family

ID=87909878



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550988A (en) * 2015-12-07 2016-05-04 天津大学 Super-resolution reconstruction algorithm based on improved neighborhood embedding and structure self-similarity
CN112862734A (en) * 2021-01-27 2021-05-28 四川警察学院 Multi-focus image fusion method using convolution analysis operator learning
CN114418915A (en) * 2022-01-21 2022-04-29 佛山科学技术学院 Eye fundus retina OCTA image fusion method and system
WO2023047118A1 (en) * 2021-09-23 2023-03-30 UCL Business Ltd. A computer-implemented method of enhancing object detection in a digital image of known underlying structure, and corresponding module, data processing apparatus and computer program
CN116563189A (en) * 2023-07-06 2023-08-08 长沙微妙医疗科技有限公司 Medical image cross-contrast synthesis method and system based on deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7840086B2 (en) * 2005-10-12 2010-11-23 The Regents Of The University Of California Method for inpainting of images
US8699790B2 (en) * 2011-11-18 2014-04-15 Mitsubishi Electric Research Laboratories, Inc. Method for pan-sharpening panchromatic and multispectral images using wavelet dictionaries
KR20220047141A (en) * 2020-10-08 2022-04-15 에스케이텔레콤 주식회사 Method and Apparatus for Video Inpainting


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Inpainting for saturation artifacts in optical coherence tomography using dictionary-based sparse representation; H Liu et al.; IEEE Photonics Journal; full text *
Multi-scale Sparse Representation-Based Shadow Inpainting for Retinal OCT Images; Yaoqi Tang et al.; Medical Imaging 2022: Image Processing; full text *
JPGNet: Joint predictive filtering and generative network for image inpainting; Q Guo et al.; Proceedings of the 29th; full text *
Research on super-resolution reconstruction and image inpainting based on sparse representation; Li Min; CNKI; full text *
Locality-sensitive sparse image inpainting based on feature clustering; Xue Juntao et al.; Infrared and Laser Engineering; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant