CN111563839B - Fundus image conversion method and device - Google Patents

Fundus image conversion method and device

Info

Publication number: CN111563839B
Application number: CN202010401356.1A
Authority: CN (China)
Prior art keywords: fundus image, fundus, image, processor, images
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111563839A
Inventors: 张伊凡, 付萌, 郭子扬, 熊健皓, 戈宗元, 赵昕, 和超, 张大磊
Current and original assignee: Shanghai Eaglevision Medical Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Filing and priority date: 2020-05-13 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Shanghai Eaglevision Medical Technology Co Ltd
Publication of application CN111563839A: 2020-08-21
Grant and publication of CN111563839B: 2024-03-22

Classifications

    • G06T3/04
    • G06N3/045 Combinations of networks (G Physics > G06 Computing; Calculating or Counting > G06N Computing arrangements based on specific computational models > G06N3/00 Computing arrangements based on biological models > G06N3/02 Neural networks > G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/08 Learning methods (under the same G06N3/02 Neural networks hierarchy)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fundus image conversion method and device. The model training method comprises: acquiring a plurality of training data, each comprising a first fundus image and a second fundus image, where the first and second fundus images are retinal images of the same eyeball photographed by different fundus cameras and the first fundus image has been desensitized to remove the influence of fundus camera attributes on the image content; and training a neural network with the plurality of training data so that it generates, from the first fundus image, a fundus image resembling the second fundus image.

Description

Fundus image conversion method and device
Technical Field
The invention relates to the field of medical image processing, and in particular to a fundus image conversion method and device.
Background
With ongoing development, machine learning techniques such as deep learning have come into wide use in the medical imaging field. Improving the accuracy and training efficiency of machine learning models nevertheless remains a very challenging task. In the prior art, optimizing the model structure and preprocessing the fundus images have yielded some improvement in the training efficiency and recognition accuracy of such models.
In the field of fundus image recognition in particular, a large number of fundus image samples are required as training data, whether for a neural network that recognizes types of fundus lesions or for one that segments abnormal regions. However, because of differences in camera structure, parameters and even imaging principles, the fundus images photographed by different fundus cameras differ markedly in visual appearance. By the working principle of neural networks, if sample images captured by one particular camera are always used as training data while, at deployment, the network is applied to recognize images captured by another fundus camera, its accuracy may degrade. The network should therefore be trained on images photographed by as many different fundus cameras as possible to improve its adaptability, and the numbers of the various fundus images should be kept as balanced as possible.
In reality, however, fundus images are rarely available as public data sets, so it is difficult for a person skilled in the art to acquire fundus images captured by a variety of cameras, and harder still to acquire fundus images that exhibit diverse lesion characteristics and were captured by different cameras. The shortage of training data has long been one of the difficulties in the field of fundus image recognition.
Disclosure of Invention
In view of the above, the present invention provides a fundus image conversion model training method, including:
acquiring a plurality of training data, wherein each training datum comprises a first fundus image and a second fundus image, the first and second fundus images being retinal images of the same eyeball photographed by different fundus cameras, and the first fundus image having been desensitized to remove the influence of fundus camera attributes on the image content;
training a neural network with the plurality of training data so that it generates a fundus image similar to the second fundus image from the first fundus image.
Optionally, the neural network is a generative adversarial network comprising a generator and a discriminator; the generator generates a fundus image from the first fundus image, the discriminator judges whether the generated fundus image is a real image bearing the fundus camera domain characteristics of the second image, and the parameters of the generator and the discriminator are optimized according to a loss function during training.
Optionally, the loss function comprises three parts: a first part coordinates the generator and the discriminator so that they improve in step, a second part ensures that the generated fundus image corresponds to the second fundus image, and a third part ensures that the information in key regions of the fundus image is not modified.
Optionally, the generator comprises a neural network whose layers are linked by skip connections; it extracts feature data from the first fundus image and concatenates the feature data extracted at layers of different depths, so as to generate a fundus image from the concatenated feature data.
Optionally, the discriminator divides the generated fundus image into image blocks and judges each image block separately in order to determine whether the generated fundus image is a real image bearing the fundus camera domain characteristics of the second image.
Optionally, acquiring training data includes:
acquiring original fundus images of the same eyeball shot by two different fundus cameras;
identifying the same target in each of the two original fundus images;
aligning the positions of the two fundus images based on the target;
desensitizing one of the fundus images to remove the influence of the fundus camera attributes on the image content.
The invention also provides a fundus image conversion method, which comprises the following steps:
acquiring a fundus image;
desensitizing the fundus image to remove the influence of fundus camera attributes on the image content;
processing the desensitized fundus image with a neural network trained by the above method to obtain a converted fundus image.
Optionally, the neural network is a generative adversarial network, and the converted fundus image is generated from the desensitized fundus image by the generator of the generative adversarial network.
Correspondingly, the invention also provides a fundus image conversion model training device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image conversion model training method described above.
Accordingly, the present invention also provides a fundus image conversion device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image conversion method described above.
According to the fundus image conversion model training method and device provided by the invention, a neural network is trained on retinal images of the same eyeball photographed by two different fundus cameras. The network thereby learns to convert the color attributes of a fundus image photographed by one camera into those of another camera while preserving all lines and contours in the fundus. A model trained by this scheme can effectively alleviate the scarcity of training data.
According to the fundus image conversion method and device provided by the invention, a fundus image of arbitrary characteristics photographed by some camera is first desensitized to remove the influence of camera attributes on the image content; the processed image is then used as input to the trained neural network, yielding a fundus image with converted color attributes. The converted image retains the lines and contours of the input fundus image and can be regarded as an image photographed by another camera. The scheme therefore serves as a highly effective data augmentation technique in this field, optimizing the performance of fundus image recognition models.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below show only some embodiments of the present invention; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a set of training data used in an embodiment of the present invention;
FIG. 2 is a comparison of fundus images before and after desensitization in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a preferred neural network model in an embodiment of the present invention;
FIG. 4 is a schematic diagram of preprocessing a fundus image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of converting a fundus image in an embodiment of the present invention.
Detailed Description
The following describes the technical solutions of the present invention clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not conflict.
An embodiment of the invention provides a fundus image conversion model training method, which can be executed by an electronic device such as a computer or a server; it uses training data to train a neural network, which serves as the model, for processing fundus images.
First, a plurality of training data are acquired. Each training datum comprises two fundus images, retinal images of the same eyeball photographed by different fundus cameras; the fundus cameras may differ in model, manufacturer, and so on.
In the training data of this embodiment shown in FIG. 1, on the left is an image (the first fundus image) taken by a Canon fundus camera and then desensitized, i.e., the influence of fundus camera attributes on the image content, or the fundus camera domain features, has been removed. The desensitization described in this application has various optional embodiments: for example, the original fundus image may be converted directly into a grayscale image, or a fundus image may be reconstructed from calculations on the channel values of the original fundus image, so that a specific functional relationship holds between the first fundus image of the training data and the original fundus image.
The desensitization aims to remove, as far as possible, the influence of the camera's various factors and characteristics on imaging while retaining the most original retinal information. FIG. 2 shows the original images photographed by four fundus cameras (top) and the corresponding desensitization results (bottom). Although the original images differ markedly in color, brightness and so on (the drawings of this application are grayscale, so the color differences are not clearly visible; the actual images are in color and the differences are more pronounced), the desensitized images are already very close in color, brightness, etc. The desensitization method adopted in this embodiment is to first blur the fundus image and then take the difference between the original fundus image and the blurred image as the desensitization result.
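A minimal sketch of this blur-then-subtract desensitization, assuming OpenCV; the Gaussian sigma and the mid-gray offset of 128 are illustrative choices rather than values specified by the embodiment:

```python
import cv2
import numpy as np

def desensitize(img: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    """Blur-and-subtract desensitization: keep the difference between the
    original fundus image and a heavily blurred copy, shifted to mid-gray."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)        # low-frequency camera "look"
    return cv2.addWeighted(img, 1.0, blurred, -1.0, 128)  # img - blurred + 128, saturated

original = cv2.imread("fundus.jpg")   # hypothetical input path
cv2.imwrite("fundus_desensitized.jpg", desensitize(original))
```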
On the right in FIG. 1 is an image (the second fundus image) taken by a fundus camera from Minda Medical. Since the two images were taken of the same eyeball of the same person, the textures and contours of the various tissues in them are identical, but the second fundus image is kept as it is, that is, it still carries the influence of the fundus camera attributes on the image content. The two fundus images therefore differ in their color attributes, a term this application uses in a broad sense: all effects of camera characteristics on the image content, such as hue, contrast and brightness, count as color attributes here.
The purpose of training the model of this embodiment is to enable it to convert the color attributes of one fundus image into those of another without modifying features such as the textures and contours of tissues and lesions in the fundus image, for example blood vessels, the optic disc and the macula. To this end, a neural network with an encoder-decoder structure may be adopted and trained on the plurality of training data so that it learns the knowledge about color attributes contained in fundus images. The encoder extracts feature data from the first fundus image of a training datum, the decoder reconstructs a fundus image from the feature data, and the network parameters are optimized over a large amount of data through a loss function so that the color attributes of the reconstructed fundus image come as close as possible to those of the second fundus image. Since the two fundus images have identical textures and contours of tissues and lesions, this drives the neural network to change only the color attributes of the fundus image, not its textures and contours.
According to the fundus image conversion model training method provided by this embodiment of the invention, the neural network is trained on retinal images of the same eyeball photographed by two different fundus cameras, so that it learns how to convert the color attributes of a fundus image photographed by one camera into those of another camera while preserving all lines and contours in the fundus.
To obtain better conversion results, this embodiment provides a preferred neural network as the model. As shown in FIG. 3, the neural network of this embodiment is a generative adversarial network (GAN) comprising a Generator 21 and a Discriminator 22. The generator 21 generates a fundus image from the first fundus image in the training data; illustrated with the images of FIG. 1, the generator 21 produces a fundus image G(x) from the left fundus image x. Concretely, the generator 21 extracts feature data from the fundus image x and then reconstructs the fundus image G(x) from that feature data together with given information (such as random noise z).
The discriminator 22 judges whether the generated fundus image G(x) is a real image bearing the fundus camera domain characteristics of the second image. Illustrated with FIG. 1, both the left image (fundus image x) and the right image (fundus image y) are input to the discriminator 22. Training on a large amount of training data reduces the loss of the generator 21 so that the generated images fit the distribution of real images, until the discriminator 22 can no longer effectively distinguish the generated fundus image G(x) from the fundus image y.
The parameters of the generator 21 and the discriminator 22 are optimized during training according to the loss function. Specifically, the loss function of this embodiment is expressed as $\arg\min_G \max_D L_{GAN}(G,D) + \lambda_1 L_{l1}(G) + \lambda_2 L_{l2}(G)$, where $\lambda_1$ and $\lambda_2$ are weights. The first part is $L_{GAN}(G,D) = E_{x,y}[\log D(x,y)] + E_{x,z}[\log(1-D(x,G(x,z)))]$; $L_{GAN}(G,D)$ is the loss by which the generator 21 and the discriminator 22 improve in coordination, $G(x,z)$ denotes the generated fundus image, and $D(x,G(x,z))$ and $D(x,y)$ denote the discrimination of the generated fundus image and of the real fundus image, respectively. When the discriminator 22 is stronger than the generator 21, $E_{x,y}[\log D(x,y)]$ is smaller and $E_{x,z}[\log(1-D(x,G(x,z)))]$ is larger, so the generator 21 receives a larger loss and is updated to a greater extent, and vice versa.
The above function alone cannot ensure a strict correspondence between the generated fundus image and the real fundus image, so the second part of the loss, $L_{l1}(G)$, is introduced; it is defined as:

$L_{l1}(G) = E_{x,y,z}\left[\lVert y - G(x,z) \rVert_1\right]$
that is, the fundus image y and the generated image G(x,z) are required to have minimal pixel-wise difference over the three color channels. Furthermore, different regions of a medical image have specific medical significance, and the complete medical information must be retained in particular regions; key regions or lesions of special medical significance, such as the macular area, the optic disc area, blood vessels, bleeding points and exudates, must not be modified or erased. This embodiment therefore introduces a third part $L_{l2}(G)$ that constrains the model as an additional penalty term:
$L_{l2}(G) = E_{x,y,z}\left[\lVert w \cdot y - w \cdot G(x,z) \rVert_1\right]$
where w is the weight given to the key regions of medical significance in the fundus image. The key regions of the fundus image, namely the macula, the optic disc area, blood vessels, bleeding points and exudates, all receive higher weights to ensure that this information is not modified by the model. The weights w may be obtained from manually segmented labels: these regions are marked with masks whose value is set to 1, while other regions are set to 0.
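As a sketch only, the three-part objective can be written in PyTorch as below, assuming a conditional generator producing fake = G(x, z), a discriminator D(x, ·) that outputs probabilities in (0, 1), and a mask tensor w broadcastable over the image; the weight values lam1 and lam2 are illustrative placeholders, not values fixed by this embodiment:

```python
import torch
import torch.nn.functional as F

def generator_loss(D, x, y, fake, w, lam1=100.0, lam2=100.0):
    """argmin_G part of the objective: adversarial term + L_l1 + mask-weighted L_l2."""
    pred_fake = D(x, fake)                                  # discriminator score(s) in (0, 1)
    adv = F.binary_cross_entropy(pred_fake, torch.ones_like(pred_fake))
    l1 = F.l1_loss(fake, y)                                 # L_l1: pixel-wise correspondence
    l2 = F.l1_loss(w * fake, w * y)                         # L_l2: key regions, w=1 inside mask
    return adv + lam1 * l1 + lam2 * l2

def discriminator_loss(D, x, y, fake):
    """max_D part: real pairs pushed toward 1, generated pairs toward 0."""
    pred_real, pred_fake = D(x, y), D(x, fake.detach())
    return (F.binary_cross_entropy(pred_real, torch.ones_like(pred_real))
            + F.binary_cross_entropy(pred_fake, torch.zeros_like(pred_fake)))
```

In training, the two losses are minimized alternately, one optimizer step for the discriminator followed by one for the generator.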
Further, as a preferred embodiment, the generator 21 comprises a convolutional neural network with skip connections between layers, which extracts feature data from the fundus image x and concatenates the feature data extracted at layers of different depths, so that the fundus image is generated from the concatenated feature data. For example, the generator 21 may adopt a U-Net structure: through the skip connections, corresponding encoder feature maps are concatenated channel-wise with the decoded feature maps of the same size, and the effect of this structure on improving the details of the fundus image is very pronounced.
Compared with an ordinary Encoder-Decoder network, which first downsamples to a low dimension and then upsamples back to the original resolution, the generator of this embodiment preserves pixel-level detail at different resolutions; the gain in detail is evident, and the generated fundus image G(x) is clearer.
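A minimal U-Net-style generator sketch, under the assumption of three downsampling levels and illustrative channel counts (the embodiment does not fix the depth): encoder feature maps are concatenated channel-wise, via the skip connections, with decoder feature maps of the same spatial size.

```python
import torch
import torch.nn as nn

def down(cin, cout):   # stride-2 conv halves the resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout):     # transposed conv doubles the resolution
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.BatchNorm2d(cout), nn.ReLU())

class UNetGenerator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.d1, self.d2, self.d3 = down(channels, 64), down(64, 128), down(128, 256)
        self.u1 = up(256, 128)
        self.u2 = up(128 + 128, 64)   # input includes the skip from d2
        self.u3 = nn.Sequential(nn.ConvTranspose2d(64 + 64, channels, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.d1(x)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        y = self.u1(e3)
        y = self.u2(torch.cat([y, e2], dim=1))   # skip connection: concatenate by channel
        return self.u3(torch.cat([y, e1], dim=1))
```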
Since the generative adversarial network of this embodiment works on the high-frequency components of the fundus image, that is, the generator 21 only has to construct the high-frequency information of the fundus image G(x), the discriminator 22 need not take the entire fundus image as input. Instead, the generated fundus image G(x) is divided into smaller image blocks, the blocks are fed to the discriminator 22, a recognition result is obtained for each block, and the results of all blocks are averaged as the final discriminator output. Because the dimensionality of the input to the discriminator 22 is greatly reduced, it has fewer parameters and runs faster than one that takes the whole fundus image directly.
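A sketch of such a block-wise discriminator, in the spirit of PatchGAN: a small fully convolutional network yields one score per receptive-field-sized block, and the mean of the block scores is returned as the final output. The layer sizes and the use of a concatenated (x, y) pair as input are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Judges small image blocks rather than the whole fundus image; the mean of
    the per-block scores is the final output."""
    def __init__(self, channels=6):                    # x and G(x) (or y) concatenated
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 1), nn.Sigmoid(),  # one score per receptive-field block
        )

    def forward(self, x, y):
        scores = self.net(torch.cat([x, y], dim=1))    # N x 1 x h x w grid of block verdicts
        return scores.flatten(1).mean(dim=1)           # average over all blocks
```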
In practical applications, in view of the problem of registering the two fundus images, some preprocessing may be performed before training the model. In an optional embodiment, fundus images of the same eyeball captured by two different fundus cameras are first acquired; the overall sizes of the two images may differ, and the positions of the fundus regions may be misaligned as well. The same target, such as the optic disc center or the macula center, is identified in each of the two fundus images, and the positions of the two fundus images are then aligned on that target to form the training data.
As shown in FIG. 4, the fundus image is rotated so that the optic disc (center point) lies on a set horizontal line; the left side of FIG. 4 shows the fundus image before adjustment, the right side the fundus image after rotation. All fundus images are adjusted in this way, their black backgrounds are removed, they are resized to the same dimensions, and the images are desensitized; the result is pairs of highly matched fundus images, which improves the training efficiency of the model and keeps the lines and contours of the generated fundus images consistent with those of real fundus images.
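One possible realization of the rotation step with OpenCV, assuming the optic-disc and macula centers have already been located by an upstream detector (not shown): the image is rotated about the disc center until the disc-macula axis lies on the horizontal.

```python
import cv2
import numpy as np

def align_to_horizontal(img, disc_xy, macula_xy):
    """Rotate about the optic-disc center so the disc-macula line is horizontal."""
    dx = macula_xy[0] - disc_xy[0]
    dy = macula_xy[1] - disc_xy[1]
    angle = np.degrees(np.arctan2(dy, dx))          # current tilt of the disc-macula axis
    M = cv2.getRotationMatrix2D(disc_xy, angle, 1.0)
    h, w = img.shape[:2]
    return cv2.warpAffine(img, M, (w, h))

# Hypothetical landmark coordinates from a separate detection model:
aligned = align_to_horizontal(cv2.imread("fundus.jpg"), disc_xy=(512, 400), macula_xy=(760, 430))
```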
Once training brings the model's performance to the expected level, the model can be used to convert fundus images. This embodiment provides a fundus image conversion method that uses the image-generating part extracted from the model, such as the encoder and decoder, or the generator 21 of the generative adversarial network. The method comprises the following steps:
s1, acquiring a fundus image to be converted. As shown in fig. 5, the present embodiment takes a fundus image 41 photographed by a Canon camera as an image to be converted;
s2, desensitizing the fundus image to remove influence of fundus camera attributes on image content, and obtaining a desensitized fundus image 42.
S3, processing the fundus image 42 with the generator 21, which extracts feature data and reconstructs a fundus image 43 as the conversion result.
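Chaining the steps at inference time might look like the sketch below, reusing the desensitize helper and UNetGenerator sketched above; the checkpoint filename and the [-1, 1] tensor scaling are illustrative assumptions.

```python
import cv2
import torch

generator = UNetGenerator()                             # sketched earlier
generator.load_state_dict(torch.load("fundus_g.pt"))    # hypothetical checkpoint path
generator.eval()

img = desensitize(cv2.imread("canon_fundus.jpg"))       # S2: strip camera attributes
x = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0) / 127.5 - 1.0
with torch.no_grad():
    converted = generator(x)                            # S3: reconstruct in target camera domain
out = ((converted[0].permute(1, 2, 0).numpy() + 1.0) * 127.5).clip(0, 255).astype("uint8")
cv2.imwrite("converted_fundus.jpg", out)
```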
According to the fundus image conversion scheme provided by this embodiment of the invention, a fundus image of arbitrary characteristics photographed by some camera is first desensitized to remove the influence of camera attributes on the image content, and the processed image is then used as input to the trained neural network, yielding a fundus image with converted color attributes. The converted image retains the lines and contours of the input fundus image and can be regarded as an image photographed by another camera. This scheme thus serves as a highly effective data augmentation technique in the field, optimizing the performance of fundus image recognition models.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above examples are clearly given by way of illustration only and do not limit the embodiments. Other variations or modifications of the above will be apparent to persons of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (6)

1. A fundus image transformation model training method, comprising:
acquiring a plurality of training data, wherein each training datum comprises a first fundus image and a second fundus image, the first and second fundus images being retinal images of the same eyeball photographed by different fundus cameras, and the first fundus image having been desensitized to remove the influence of fundus camera attributes on the image content;
training a neural network by using the plurality of training data to generate a fundus image similar to the second fundus image according to the first fundus image;
the neural network is used for generating an countermeasure network and comprises a generator and a discriminator, wherein the generator comprises the neural network with layers connected in a jumping way and is used for extracting characteristic data from the first fundus image and splicing the characteristic data extracted from the layers with different depths to generate a fundus image according to the spliced characteristic data, and the discriminator is used for dividing the generated fundus image into image blocks and judging whether the generated fundus image is a real image belonging to the fundus camera domain characteristics of the second fundus image by judging each image block; .
2. The method of claim 1, wherein the loss function comprises three parts: a first part for coordinating the generator and the discriminator so that they improve in step, a second part for ensuring that the generated fundus image corresponds to the second fundus image, and a third part for ensuring that the information of key regions of the fundus image is not modified.
3. The method of claim 1, wherein obtaining training data comprises:
acquiring original fundus images of the same eyeball shot by two different fundus cameras;
identifying the same target in each of the two original fundus images;
aligning the positions of the two fundus images based on the target;
desensitizing one of the fundus images to remove the influence of the fundus camera attributes on the image content.
4. A fundus image conversion method, comprising:
acquiring a fundus image;
desensitizing the fundus image to remove the influence of fundus camera attributes on the image content;
processing the desensitized fundus image using a neural network trained by the method of any of claims 1-3 to obtain a converted fundus image; wherein the neural network is a generative adversarial network, and the converted fundus image is generated from the desensitized fundus image by a generator of the generative adversarial network.
5. A fundus image conversion model training device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image conversion model training method of any of claims 1-3.
6. A fundus image conversion device, characterized by comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image conversion method of claim 4.
CN202010401356.1A (filed 2020-05-13, priority 2020-05-13): Fundus image conversion method and device. Status: Active. Granted as CN111563839B.

Priority Applications (1)

Application Number: CN202010401356.1A; Priority Date: 2020-05-13; Filing Date: 2020-05-13; Title: Fundus image conversion method and device

Publications (2)

Publication Number Publication Date
CN111563839A CN111563839A (en) 2020-08-21
CN111563839B (en) 2024-03-22

Family

ID=72073443

Family Applications (1)

Application Number: CN202010401356.1A (Active; granted as CN111563839B); Priority Date: 2020-05-13; Filing Date: 2020-05-13; Title: Fundus image conversion method and device

Country Status (1)

Country Link
CN (1) CN111563839B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950737B (en) * 2021-03-17 2024-02-02 中国科学院苏州生物医学工程技术研究所 Fundus fluorescence contrast image generation method based on deep learning


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10405739B2 (en) * 2015-10-23 2019-09-10 International Business Machines Corporation Automatically detecting eye type in retinal fundus images

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN109919831A (en) * 2019-02-13 2019-06-21 广州视源电子科技股份有限公司 A kind of method for migrating retinal fundus images in different images domain, electronic equipment and computer readable storage medium

Non-Patent Citations (1)

Title
康莉; 江静婉; 黄建军; 黄德渠; 张体江. Synthesis of retinal fundus images based on a step-by-step generative model. 中国体视学与图像分析 (Chinese Journal of Stereology and Image Analysis), 2019, (04). *

Also Published As

Publication number Publication date
CN111563839A (en) 2020-08-21


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant