CN115775210A - Wide-area fundus camera image fusion method, system and storage medium - Google Patents

Wide-area fundus camera image fusion method, system and storage medium

Info

Publication number
CN115775210A
CN115775210A (application CN202211097419.4A)
Authority
CN
China
Prior art keywords
image
wide
area
fusion
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211097419.4A
Other languages
Chinese (zh)
Inventor
何明鑫
刘洋
张瀚文
沈小厚
刘建坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tongxin Huitu Medical Technology Co ltd
Original Assignee
Nanjing Tongxin Huitu Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tongxin Huitu Medical Technology Co ltd filed Critical Nanjing Tongxin Huitu Medical Technology Co ltd
Priority to CN202211097419.4A
Publication of CN115775210A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a wide-area fundus camera image fusion method, system and storage medium, belonging to the field of image processing. It addresses the problem that wide-area fundus cameras, especially those for newborns, suffer from strong fog-effect images because of the design limitations of a simplified optical path. The input wide-area fundus image group first undergoes image preprocessing to improve image quality. The color and brightness of the two preprocessed images are then normalized by a Poisson fusion method to reduce color and brightness differences. Next, high-quality regions of the two images are selected for fusion based on a template mask. Finally, the low-brightness fused image is enhanced to further improve overall image quality. The system can obtain a high-quality wide-area fundus image with a relatively complete fundus structure, providing a basis for doctors to diagnose fundus diseases of neonates.

Description

Wide-area fundus camera image fusion method, system and storage medium
Technical Field
The present invention relates to the field of image processing, and more particularly, to a wide area fundus camera image fusion method, system, and storage medium.
Background
For the image fusion problem, in order to retain the original precise information as much as possible, pixel-level image fusion algorithms can be applied, and can be classified into two types: spatial domain based image fusion and transform domain based image fusion.
Image fusion based on the spatial domain usually operates directly on the gray-scale space of the image pixels. The most direct methods select each output pixel from the two input images using a maximum-value or weighted-average rule; these operate directly on the target pixel without considering the correlation between adjacent pixels.
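For illustration, a minimal sketch of these two pixel-level rules (a sketch only; it assumes two pre-aligned single-channel images, and the file names are placeholders):

```python
import cv2
import numpy as np

# Two pre-aligned source images (placeholder file names).
img_a = cv2.imread("source_a.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
img_b = cv2.imread("source_b.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Maximum-value rule: keep the brighter of the two pixels at each position.
fused_max = np.maximum(img_a, img_b)

# Weighted-average rule: a fixed blend (50/50 here). Both rules act on
# the target pixel alone, ignoring correlation between adjacent pixels.
fused_avg = 0.5 * img_a + 0.5 * img_b

cv2.imwrite("fused_max.png", fused_max.astype(np.uint8))
cv2.imwrite("fused_avg.png", fused_avg.astype(np.uint8))
```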
Typical transform-domain methods for fundus image fusion include pyramid-transform-based and wavelet-transform-based fusion. Pyramid-based fusion algorithms extract image detail at different decomposition scales and achieve a good fusion effect. However, after pyramid decomposition the data in the decomposed layers are redundant, and high-frequency information may be severely lost. Wavelet-based fusion can extract not only the low-frequency information of an image but also high-frequency detail. However, because the wavelet transform uses row and column downsampling, it is not shift-invariant, which easily causes distortion in the fused image.
Unlike conventional image fusion, fusion of wide-area fundus images must not only take into account the correlation between pixels but also discard the low-quality areas near the light source where retinal structures are obscured. Paul et al. proposed an image fusion algorithm in which a mask generated through spectral analysis scores the visibility of each pixel in the source images and each output pixel takes the corresponding source pixel with the highest score; however, the transmission region derived from the mask is large, and the method handles strong fog-effect images poorly.
Disclosure of Invention
1. Technical problem to be solved
The invention provides a wide-area fundus camera image fusion method, system and storage medium that can obtain a high-quality wide-area fundus image with a relatively complete fundus structure and provide a basis for doctors to diagnose fundus diseases of newborns.
2. Technical scheme
The purpose of the invention is realized by the following technical scheme.
A wide area fundus camera image fusion method comprises the following steps,
for an input wide-area fundus image group, image preprocessing is first performed; then the colors and brightness of the two preprocessed images are normalized by a Poisson fusion method to reduce color and brightness differences; next, high-quality areas of the two images are selected for fusion based on a template mask; and finally the low-brightness fused image is enhanced to obtain a high-quality image.
Further, the wide-area fundus image preprocessing includes the following specific steps,
image defogging
First, a dark channel image I_dark1 is computed as the per-pixel minimum over the three channels of the input image. The dark channel image is eroded to obtain I_dark2. Guided filtering is then applied, with the original dark channel image as the guide image and the eroded dark channel image as the image to be filtered, to obtain I_dark. The pixel value of the brightest pixel of the original input image is taken as the atmospheric light value. Finally, the image transmittance is estimated from the dark channel image I_dark and the atmospheric light value, and a fog-free image is recovered through the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where I(x) is the image to be defogged, J(x) is the defogged image, A is the atmospheric light value, and t(x) is the transmittance;
image alignment
First, brightness adjustment and contrast-limited adaptive histogram equalization are applied to the image to highlight retinal details and optimize the feature detection area. The efficiency of feature point detection is improved through scale mapping: SIFT-based feature point detection is performed on the reduced image, and the detected feature points are mapped back to the original size. Finally, feature point matching is performed, optimized matching pairs are screened, and the image is transformed using the optimal homography matrix;
ROI extraction
A circular retinal field of view is extracted. When extracting the effective field of view of a wide-area fundus image, the image is first enhanced by scaling each pixel proportionally, which strengthens the retinal edge. The enhanced wide-area fundus image is converted to a grayscale map and filtered, and Hough circle detection is applied to the processed fundus image. The detection yields several circles that may correspond to the retinal edge; candidate circles are selected, the distance from each circle center to the image center is computed, and circles meeting the requirement are retained. The averaged center coordinates and radius of all retained circles are applied to the original image to extract the effective retinal field of view;
to extract the FOV using the same circle in both images, the average of the separately detected FOV centers and the average region-of-interest size of the two images are taken as the final FOV position and shape parameters, and the ROIs of the two images are extracted accordingly.
Further, the image defogging method may be replaced by histogram equalization, the Retinex algorithm, wavelet transform or homomorphic filtering.
Further, the image alignment method is replaced with a SURF feature or ORB feature method.
Further, the Poisson-fusion-based color and brightness normalization specifically comprises regarding the two images respectively as foreground and background and adjusting the color of the background image toward the color of the foreground image through Poisson fusion.
Furthermore, selecting the high-quality areas of the two images for fusion based on the template mask specifically uses an image fusion algorithm based on a diagonal mask: according to the distribution of stray light, the wide-area fundus image is divided into four parts along its diagonals; the areas without stray light are the high-quality areas and are retained, while the other, low-quality areas are discarded because of strong stray light.
Furthermore, a weighted-average method adjusts the weight of the pixel gray levels of the two images' overlapping area in the fused image to eliminate the boundary effect: following the weight change curves of the two images in the overlapping area, the weight of one image decreases while the weight of the other increases, so that the boundary transition is smooth.
Furthermore, the specific method of adaptive brightness adjustment is as follows: the average pixel value of the fused image in the central area of the Y channel in YUV color space is defined as the brightness. When the brightness of the fused image is smaller than a threshold, the image brightness is low; all pixels of the three RGB channels of the fused image are then sorted by gray value, and the values at the Q1-th and Q2-th percentiles are taken as the minimum value Pmin and maximum value Pmax respectively. Pixel values greater than Pmax in the fused image are set to Pmax and values smaller than Pmin are set to Pmin, removing over-large and over-small pixel values; finally the image is stretched to obtain the image enhancement result. Q1 and Q2 are percentage values.
A wide area fundus camera image fusion system includes a fundus camera and a control system, the control system being disposed inside or outside the fundus camera, the control system performing the method as described above.
A readable storage medium, storing a computer program comprising program instructions, which when executed by a processor, cause the processor to perform the method as described above.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
1) By preprocessing the fundus images, the interference of invalid background areas in the images is effectively removed, the color and brightness difference between the images is weakened, and the method plays an important role in subsequent image fusion work.
2) Aiming at the imaging characteristics of the wide-area fundus image, a wide-area fundus image fusion algorithm based on a space complementary mask pair is provided.
3) The image enhancement method suitable for brightness self-adaptive adjustment of the wide-area fundus image is provided, the image quality is effectively improved, a clear and high-quality neonate wide-area fundus image is finally output, and a doctor is assisted in medical diagnosis.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a flow chart of image normalization;
FIG. 3 is a schematic diagram of a region division method for a wide area fundus image and a fused image;
FIG. 4 is a schematic diagram of the overlap region set for fusion;
FIG. 5 is a weight change curve diagram at the overlap region boundary S1;
FIG. 6 is a schematic diagram of the brightness adaptive adjustment method.
Detailed Description
The invention is described in detail below with reference to the drawings and specific examples.
Example 1
The wide-area fundus camera for neonates cannot obtain a usable fundus image in a single shot because of the design limitations of its simplified optical path. In a typical design, while the two fundus retina images of one set are taken, two groups of illumination beams (upper/lower and left/right) are lit in turn, which causes strong stray light near the light source positions and interferes with the retinal image in those areas. Based on the imaging principle and characteristics of the wide-area fundus image, the invention fully exploits the spatial complementarity of the two exposures of the same subject and designs a dedicated image fusion method and system for the neonatal wide-area fundus camera, obtaining a high-quality wide-area fundus image with a relatively complete fundus structure and providing a basis for doctors to diagnose neonatal fundus diseases.
As shown in fig. 1, the method comprises four parts: wide-area fundus image preprocessing, Poisson-fusion-based color and brightness normalization, template-mask-based image fusion, and adaptive brightness adjustment. For the input wide-area fundus image group, image preprocessing is first performed to improve image quality. The color and brightness of the two preprocessed images are then normalized by a Poisson fusion method to reduce color and brightness differences. Next, high-quality regions of the two images are selected for fusion based on the template mask. Finally, the low-brightness fused image is enhanced to further improve overall image quality.
Wide area fundus image pre-processing
Image defogging and alignment
Because of the imaging characteristics of the wide-area fundus camera, a stray-light phenomenon (similar to fog) appears in the image, and the difference in capture time between the two wide-area fundus images may introduce a pixel offset. Therefore, dark-channel-prior image defogging and SIFT-feature-based image registration are performed on the images first.
Image defogging
First, a dark channel image I_dark1 is computed as the per-pixel minimum over the three channels of the input image. The dark channel image is eroded to obtain I_dark2. Guided filtering is then applied, with the original dark channel image as the guide image and the eroded dark channel image as the image to be filtered, to obtain I_dark. The pixel value of the brightest pixel of the original input image is taken as the atmospheric light value. Finally, the image transmittance is estimated from the dark channel image I_dark and the atmospheric light value, and a fog-free image is recovered through the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where I(x) is the image to be defogged, J(x) is the defogged image, A is the atmospheric light value, and t(x) is the transmittance. Preferably, image defogging can also use other methods, such as histogram equalization, the Retinex algorithm, wavelet transform, or homomorphic filtering.
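A minimal sketch of this defogging pipeline follows (assumptions: the erosion patch size, guided-filter radius/eps, the omega factor and the transmittance floor t0 are illustrative choices, not values from the patent; the transmittance uses the standard dark-channel estimate, computed here without the per-channel division by A for brevity; cv2.ximgproc requires the opencv-contrib-python package):

```python
import cv2
import numpy as np

def dehaze_dark_channel(img_bgr, patch=15, omega=0.95, t0=0.1):
    """Dark-channel-prior defogging sketch; parameter values are illustrative."""
    img = img_bgr.astype(np.float32) / 255.0
    # I_dark1: per-pixel minimum over the three colour channels.
    dark1 = img.min(axis=2)
    # I_dark2: erode the dark channel (a local minimum filter).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark2 = cv2.erode(dark1, kernel)
    # I_dark: guided filtering, with the original dark channel as the
    # guide image and the eroded dark channel as the image to filter.
    dark = cv2.ximgproc.guidedFilter(dark1, dark2, radius=40, eps=1e-3)
    # Atmospheric light A: value of the brightest pixel of the input.
    flat = img.reshape(-1, 3)
    A = flat[flat.sum(axis=1).argmax()]
    # Transmittance estimate, then invert the atmospheric scattering
    # model I(x) = J(x) t(x) + A (1 - t(x))  =>  J = (I - A) / t + A.
    t = np.clip(1.0 - omega * dark, t0, 1.0)[..., None]
    J = (img - A) / t + A
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)
```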
Image alignment
First, brightness adjustment and contrast-limited adaptive histogram equalization are applied to the image to highlight retinal details and optimize the feature detection area. The efficiency of feature point detection is improved through scale mapping: SIFT-based feature point detection is performed on the reduced image, and the detected feature points are mapped back to the original size. Finally, feature point matching is performed, optimized matching pairs are screened using random sample consensus (RANSAC), and the image is transformed using the optimal homography matrix.
Image alignment ensures the center consistency of the two images, effectively reduces the pixel error at the seam after fusion, and improves splicing precision. Preferably, other methods such as SURF (Speeded-Up Robust Features) or ORB (Oriented FAST and Rotated BRIEF) features may also be used for image alignment.
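A sketch of this registration step with OpenCV follows (the CLAHE settings, the 0.5 downscale factor and the Lowe-ratio/RANSAC thresholds are illustrative assumptions):

```python
import cv2
import numpy as np

def align(moving, fixed, scale=0.5):
    """Register `moving` onto `fixed` via SIFT + RANSAC homography."""
    def prep(img):
        # Contrast-limited adaptive histogram equalization to
        # highlight retinal details before feature detection.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

    # Scale mapping: detect features on reduced images for efficiency.
    small_m = cv2.resize(prep(moving), None, fx=scale, fy=scale)
    small_f = cv2.resize(prep(fixed), None, fx=scale, fy=scale)

    sift = cv2.SIFT_create()
    kp_m, des_m = sift.detectAndCompute(small_m, None)
    kp_f, des_f = sift.detectAndCompute(small_f, None)

    # Lowe ratio test, then map the points back to the original size.
    matches = cv2.BFMatcher().knnMatch(des_m, des_f, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp_m[m.queryIdx].pt for m in good]) / scale
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]) / scale

    # RANSAC screens the matching pairs and yields the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```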
ROI extraction
The effective retinal area of the wide-area fundus image is approximately circular. To avoid interference from extraneous regions outside the retina, a circular retinal Field of View (FOV) is extracted by the Hough circle detection algorithm.
The wide-area fundus image has low brightness and an insufficiently clear retinal edge, so the image is first enhanced when extracting the effective field of view: each pixel is scaled proportionally, which strengthens the retinal edge. The enhanced wide-area fundus image is converted to grayscale and, because the Hough circle transform is sensitive to noise, filtered with a median filter. Hough circle detection is then applied to the processed fundus image, yielding several circles that may correspond to the retinal edge. Candidate circles are selected, the distance from each circle center to the image center is computed, and circles whose distance is too large are discarded. The center coordinates and radii of all retained circles are averaged and applied to the original image to extract the effective retinal field of view.
To guarantee registration accuracy between the two images, the FOVs must be extracted using the same circle; therefore the average of the separately detected FOV centers and the average region of interest (ROI) size of the two images are taken as the final FOV position and shape parameters, and the ROIs of the two images are extracted accordingly.
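The FOV detection can be sketched as below (the enhancement gain, median kernel, Hough parameters and the centre-distance limit are illustrative assumptions; the final shared FOV would then be the mean of the two images' detected circles, as described above):

```python
import cv2
import numpy as np

def detect_fov(img_bgr, gain=1.5, dist_limit=0.2):
    """Estimate the circular retinal field of view of one image."""
    # Enhance: scale every pixel proportionally to strengthen the rim.
    enhanced = cv2.convertScaleAbs(img_bgr, alpha=gain, beta=0)
    gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
    # Median filtering, since the Hough transform is noise-sensitive.
    gray = cv2.medianBlur(gray, 5)
    h, w = gray.shape
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=50, param1=100, param2=40,
                               minRadius=h // 4, maxRadius=h // 2)
    if circles is None:
        return None
    centre = np.array([w / 2.0, h / 2.0])
    # Discard circles whose centre lies too far from the image centre,
    # then average the centres and radii of the surviving circles.
    keep = [c for c in circles[0]
            if np.linalg.norm(c[:2] - centre) < dist_limit * min(h, w)]
    if not keep:
        return None
    cx, cy, r = np.mean(keep, axis=0)
    return int(cx), int(cy), int(r)
```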
Poisson fusion based color and brightness normalization
As shown in fig. 2, to reduce the influence of color and brightness differences on the fusion result, the color and brightness of the two images must be normalized. The two images are regarded as foreground and background respectively, and the color of the background image is adjusted toward the color of the foreground image through Poisson fusion. If the 2nd image is taken as the foreground and the 1st image as the background, the upper and lower regions of the 2nd image (a left-and-right light source image) are the high-quality regions and serve as the regions of interest during fusion; the mask is shown in fig. 2(c) and the Poisson fusion result in fig. 2(d). When the 1st image is taken as the foreground and the 2nd image as the background, the mask is shown in fig. 2(g) and the Poisson fusion result in fig. 2(h). Performing Poisson fusion with each wide-area fundus image taken in turn as foreground and background adjusts the images' color and brightness and weakens the boundary effect during image fusion.
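OpenCV's seamlessClone implements Poisson image editing and can serve as a sketch of this normalization step (the mask below, keeping the top and bottom thirds of the foreground, is only an illustrative reading of fig. 2; the file names are placeholders, and in practice the mask may need a margin from the image border):

```python
import cv2
import numpy as np

# img1 and img2 are the two aligned, ROI-extracted fundus images.
img1 = cv2.imread("fundus_1.png")   # background (placeholder names)
img2 = cv2.imread("fundus_2.png")   # foreground

# Region of interest: the stray-light-free upper and lower regions
# of the foreground image (illustrative top/bottom thirds).
h = img2.shape[0]
mask = np.zeros(img2.shape[:2], np.uint8)
mask[: h // 3, :] = 255
mask[-(h // 3):, :] = 255

# Poisson-blend the foreground regions into the background, pulling
# the background's colour and brightness toward the foreground's.
centre = (img1.shape[1] // 2, img1.shape[0] // 2)
adjusted = cv2.seamlessClone(img2, img1, mask, centre, cv2.NORMAL_CLONE)
```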
Template mask based image fusion
As shown in fig. 3, two fundus images of the same patient generally exhibit spatial complementarity: low-quality areas that must be discarded near the light source have relatively good quality at the corresponding position of the other image. The invention uses custom masks to segment and fuse the images according to the profile of the stray light, and proposes an image fusion algorithm based on diagonal masks. Based on the mask template, high-quality regions in the original images are selected for fusion. As shown in fig. 3, the wide-area fundus image is divided into four parts along its diagonals according to the distribution of stray light. The upper and lower regions (A1, A2) in diagram a and the left and right regions (B3, B4) in diagram b are regions without stray light and are retained; the other, low-quality regions are discarded because of strong stray light. The overlapping area width is w = 100, and other settings may be used as needed.
If fusion were performed directly in this way, there would be an obvious boundary effect along the diagonals of the fused image. As shown in figs. 4 and 5, to further improve image quality, a weighted-average method adjusts the weights of the two images' pixel gray levels in the overlapping area of the fused image, eliminating the boundary effect. The weight change curves of the two images in the overlapping region are shown in fig. 5: along the arrow direction, the weight of image A decreases from 1 to 0 while the weight of image B increases from 0 to 1, so that the transition across boundary S1 is smooth. Smooth transitions at boundaries S2, S3 and S4 are achieved in the same way.
A mask is generated according to this method, with the same size as the images to be fused: elements at positions corresponding to the high-quality areas to be retained have value 1, elements corresponding to the low-quality areas to be discarded have value 0, and elements in the boundary transition area take values between 0 and 1. Each input image is multiplied by its corresponding mask to select the active areas, and the results are combined to obtain a fused image with the complete retinal structure.
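A sketch of the diagonal-mask fusion with a smooth transition band is given below (the exact geometry of the band and the soft combination of the two ramps are an illustrative reading of figs. 3-5):

```python
import numpy as np

def diagonal_masks(h, w, overlap=100):
    """Complementary diagonal masks with a linear transition band of
    geometric width `overlap` (w = 100 in the text). Mask A keeps the
    top/bottom triangles, mask B the left/right ones."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Signed distances from the two diagonals through the image centre;
    # their sign pattern splits the image into four triangular parts.
    d1 = (xs - w / 2) * h + (ys - h / 2) * w   # main diagonal
    d2 = (xs - w / 2) * h - (ys - h / 2) * w   # anti-diagonal
    band = overlap * np.hypot(h, w)            # ramp over `overlap` pixels
    # Each weight ramps 0 -> 1 across its boundary, so mask values are
    # 1 in kept regions, 0 in discarded ones, and 0..1 in the band.
    w1 = np.clip(d1 / band + 0.5, 0.0, 1.0)
    w2 = np.clip(d2 / band + 0.5, 0.0, 1.0)
    mask_a = w1 * (1.0 - w2) + (1.0 - w1) * w2  # top & bottom triangles
    return mask_a, 1.0 - mask_a

def fuse(img_a, img_b, overlap=100):
    """Weighted combination of the two masked images."""
    m_a, m_b = diagonal_masks(img_a.shape[0], img_a.shape[1], overlap)
    m_a, m_b = m_a[..., None], m_b[..., None]
    return (img_a * m_a + img_b * m_b).astype(img_a.dtype)
```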
Brightness adaptive adjustment
As shown in fig. 6, the brightness of a wide-area fundus image is generally rather low, and the low-brightness fused image requires enhancement. The average pixel value of the fused image in the central area of the Y channel of YUV color space is defined as the brightness. When the brightness of the fused image is below the threshold, the image brightness is low: all pixels of the three RGB channels of the fused image are sorted by gray value, and the values at the Q1-th and Q2-th percentiles are taken as the minimum value Pmin and maximum value Pmax respectively. Pixel values greater than Pmax are then set to Pmax and values smaller than Pmin are set to Pmin, removing over-large and over-small pixel values; finally the image is stretched to 0-255 to obtain the enhancement result. Here Q1 = 1% and Q2 = 99%, and these can be adjusted as needed.
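A sketch of this percentile stretch follows (the brightness threshold of 60 and the "central region" taken as the middle half of the Y channel are assumptions: the patent states only that the Y-channel central mean is compared against a threshold; channels are stretched independently, so BGR versus RGB ordering does not matter here):

```python
import cv2
import numpy as np

def adaptive_brightness(fused_bgr, thresh=60.0, q1=1.0, q2=99.0):
    """Percentile stretch for low-brightness fused images
    (Q1 = 1%, Q2 = 99% as in the text)."""
    yuv = cv2.cvtColor(fused_bgr, cv2.COLOR_BGR2YUV)
    h, w = yuv.shape[:2]
    centre = yuv[h // 4: 3 * h // 4, w // 4: 3 * w // 4, 0]
    if centre.mean() >= thresh:
        return fused_bgr  # bright enough, leave unchanged
    out = np.empty_like(fused_bgr)
    for c in range(3):  # stretch each colour channel independently
        ch = fused_bgr[:, :, c].astype(np.float32)
        pmin, pmax = np.percentile(ch, [q1, q2])
        ch = np.clip(ch, pmin, pmax)  # remove extreme pixel values
        out[:, :, c] = ((ch - pmin) / max(pmax - pmin, 1e-6)
                        * 255.0).astype(np.uint8)
    return out
```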
To verify the effectiveness of the invention, three experts scored the fused images on three aspects (image authenticity, image sharpness and overall image quality) and the results were compared with the work of Paul et al.; each criterion is scored from 0 to 10. The results are shown in Table 1.
TABLE 1 expert evaluation of the final results (average of 30 groups of data)
[Table 1 is reproduced only as an image (BDA0003838707030000061) in the source document.]
As can be seen from Table 1, the fused images produced by the proposed algorithm score slightly lower than those of Paul et al. in image authenticity, but higher in image sharpness and overall image quality. The proposed method is therefore more effective.
The invention and its embodiments have been described above schematically, and the description is not limiting; the invention can be embodied in other specific forms without departing from its spirit or essential characteristics. The representation in the drawings is only one embodiment of the invention, the actual construction is not limited thereto, and any reference signs in the claims shall not limit the claims concerned. Therefore, structures and embodiments similar to the above technical solution that a person skilled in the art derives from the teachings of the present invention without inventive design fall within the protection scope of this patent. Furthermore, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" preceding an element does not exclude a plurality of such elements. Several of the elements recited in the product claims may also be implemented by one element in software or hardware. The terms first, second, etc. are used to denote names and not any particular order.

Claims (10)

1. A wide area fundus camera image fusion method comprises the following steps,
for an input wide-area fundus image group, image preprocessing is first performed; then the colors and brightness of the two preprocessed images are normalized by a Poisson fusion method to reduce color and brightness differences; next, high-quality areas of the two images are selected for fusion based on a template mask; and finally the low-brightness fused image is enhanced to obtain a high-quality image.
2. The wide-area fundus camera image fusion method of claim 1, wherein the wide-area fundus camera image pre-processing comprises the specific steps of,
image defogging
First, a dark channel image I_dark1 is calculated as the per-pixel minimum over the three channels of the input image; the dark channel image is eroded to obtain I_dark2; guided filtering is applied with the original dark channel image as the guide image and the eroded dark channel image as the image to be filtered, obtaining I_dark; the pixel value of the brightest pixel of the original input image is calculated as the atmospheric light value; finally, the image transmittance is estimated from the dark channel image I_dark and the atmospheric light value, and a fog-free image is recovered through the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where I(x) is the image to be defogged, J(x) is the defogged image, A is the atmospheric light value, and t(x) is the transmittance;
image alignment
Firstly, brightness adjustment and contrast limiting self-adaptive histogram equalization are carried out on an image, retina details are highlighted, a feature detection area is optimized, the efficiency of feature point detection is optimized through scale mapping, feature point detection based on an SIFT algorithm is carried out on the reduced image, the detected feature points are mapped back to the original size, finally feature point matching is carried out, optimization matching pairs are screened, and image transformation is carried out by using an optimal homography matrix;
ROI extraction
Extracting a circular retinal field, firstly enhancing an image when extracting an effective visual field area of a wide-area fundus image, amplifying each pixel point in the image in proportion, enhancing the retinal edge of the image, converting the enhanced wide-area fundus image into a gray-scale image, filtering the image, detecting Hough circles of the processed fundus image, detecting a plurality of circles which are possibly the retinal edge in the wide-area fundus image through the Hough circle detection, selecting a plurality of circles, calculating the distance from the circle center to the image center, screening the circles meeting the requirement, and applying the circle center coordinates and the radius average values of all the screened circles meeting the requirement to an original image so as to extract the effective visual field area of the retina;
the FOVs are extracted using the same circle, so the average of the separately detected FOV centers and the average region-of-interest size of the two images are taken as the final FOV position and shape parameters, and the ROIs of the two images are extracted accordingly.
3. The wide-area fundus camera image fusion method of claim 2, wherein the image defogging method is replaced by histogram equalization, retinex algorithm, wavelet transform or homomorphic filtering.
4. The wide area fundus camera image fusion method of claim 2, in which the image alignment method is replaced with a SURF feature or ORB feature method.
5. The wide-area fundus camera image fusion method according to claim 1, characterized in that the Poisson-fusion-based color and brightness normalization specifically comprises regarding the two images respectively as foreground and background and adjusting the color of the background image toward the color of the foreground image through Poisson fusion.
6. The wide-area fundus camera image fusion method according to claim 1, characterized in that selecting the high-quality regions of the two images for fusion based on the template mask specifically comprises selecting the high-quality regions in the original image for fusion with a diagonal-mask image fusion algorithm: according to the distribution of stray light, the wide-area fundus image is divided into four parts along its diagonals; the regions without stray light are the high-quality regions and are retained, while the other, low-quality regions are discarded because of strong stray light.
7. The wide-area fundus camera image fusion method according to claim 6, characterized in that a weighted-average method adjusts the weight of the pixel gray levels of the two images' overlapping area in the fused image to eliminate the boundary effect, the weight of one image being reduced and the weight of the other increased according to the weight change curves of the two images in the overlapping area, so that the boundary transition is smooth.
8. The wide-area fundus camera image fusion method according to claim 1, characterized in that the adaptive brightness adjustment comprises: defining the average pixel value of the fused image in the central area of the Y channel of YUV color space as the brightness; when the brightness of the fused image is less than a threshold, indicating that the image brightness is low, sorting all pixels of the three RGB channels of the fused image by gray value and taking the values at the Q1-th and Q2-th percentiles as the minimum value Pmin and maximum value Pmax respectively; then setting pixel values greater than Pmax in the fused image to Pmax and values smaller than Pmin to Pmin, removing over-large and over-small pixel values; and finally stretching the image to obtain the image enhancement result, Q1 and Q2 being percentage values.
9. A wide area fundus camera image fusion system comprising a fundus camera and a control system, the control system being located either internally or externally to the fundus camera, the control system performing the method of any one of claims 1 to 8.
10. A readable storage medium, characterized in that the storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-8.
CN202211097419.4A 2022-09-08 2022-09-08 Wide-area fundus camera image fusion method, system and storage medium Pending CN115775210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211097419.4A CN115775210A (en) 2022-09-08 2022-09-08 Wide-area fundus camera image fusion method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211097419.4A CN115775210A (en) 2022-09-08 2022-09-08 Wide-area fundus camera image fusion method, system and storage medium

Publications (1)

Publication Number Publication Date
CN115775210A true CN115775210A (en) 2023-03-10

Family

ID=85388476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211097419.4A Pending CN115775210A (en) 2022-09-08 2022-09-08 Wide-area fundus camera image fusion method, system and storage medium

Country Status (1)

Country Link
CN (1) CN115775210A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152132A (en) * 2023-04-19 2023-05-23 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN116152132B (en) * 2023-04-19 2023-08-04 山东仕达思医疗科技有限公司 Depth of field superposition method, device and equipment for microscope image
CN117854700A (en) * 2024-01-19 2024-04-09 首都医科大学宣武医院 Postoperative management method and system based on wearable monitoring equipment
CN117649347A (en) * 2024-01-30 2024-03-05 宁乡爱尔眼科医院有限公司 Remote eye examination method and system based on ultra-wide-angle fundus imaging
CN117649347B (en) * 2024-01-30 2024-04-19 宁乡爱尔眼科医院有限公司 Remote eye examination method and system based on ultra-wide-angle fundus imaging

Similar Documents

Publication Publication Date Title
Bhalla et al. A fuzzy convolutional neural network for enhancing multi-focus image fusion
CN110276356B (en) Fundus image microaneurysm identification method based on R-CNN
CN115775210A (en) Wide-area fundus camera image fusion method, system and storage medium
Xu et al. Ffu-net: Feature fusion u-net for lesion segmentation of diabetic retinopathy
CN106683080B (en) A kind of retinal fundus images preprocess method
Li et al. Robust retinal image enhancement via dual-tree complex wavelet transform and morphology-based method
Dash et al. An unsupervised approach for extraction of blood vessels from fundus images
CN105761258A (en) Retinal fundus image bleeding detection method
CN111507932B (en) High-specificity diabetic retinopathy characteristic detection method and storage device
Chen et al. Blood vessel enhancement via multi-dictionary and sparse coding: Application to retinal vessel enhancing
Savelli et al. Illumination correction by dehazing for retinal vessel segmentation
Cao et al. Enhancement of blurry retinal image based on non-uniform contrast stretching and intensity transfer
CN114708258B (en) Eye fundus image detection method and system based on dynamic weighted attention mechanism
Lyu et al. Deep tessellated retinal image detection using Convolutional Neural Networks
Yang et al. Retinal image enhancement with artifact reduction and structure retention
Wisaeng et al. Automatic detection of exudates in retinal images based on threshold moving average models
CN111881924B (en) Dark-light vehicle illumination identification method combining illumination invariance and short-exposure illumination enhancement
Khan et al. Thin vessel detection and thick vessel edge enhancement to boost performance of retinal vessel extraction methods
Qin et al. A review of retinal vessel segmentation for fundus image analysis
Devi et al. Dehazing buried tissues in retinal fundus images using a multiple radiance pre-processing with deep learning based multiple feature-fusion
CN113139929A (en) Gastrointestinal tract endoscope image preprocessing method comprising information screening and fusion repairing
CN115829851A (en) Portable fundus camera image defect eliminating method and system and storage medium
Zhang et al. A fundus image enhancer based on illumination-guided attention and optic disc perception GAN
CN114022879A (en) Squamous cell structure enhancement method based on optical fiber endomicroscopy image
CN111652805A (en) Image preprocessing method for fundus image splicing

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination