CN111563839A - Fundus image conversion method and device - Google Patents

Fundus image conversion method and device

Info

Publication number
CN111563839A
CN111563839A (application number CN202010401356.1A)
Authority
CN
China
Prior art keywords
fundus
fundus image
image
processor
images
Prior art date
Legal status
Granted
Application number
CN202010401356.1A
Other languages
Chinese (zh)
Other versions
CN111563839B (en)
Inventor
张伊凡
付萌
郭子扬
熊健皓
戈宗元
赵昕
和超
张大磊
Current Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202010401356.1A
Publication of CN111563839A
Application granted
Publication of CN111563839B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fundus image conversion method and device. The model training method involved comprises: acquiring a plurality of training data, each comprising a first fundus image and a second fundus image, the first and second fundus images being retinal images of the same eyeball taken by different fundus cameras, the first fundus image having been desensitized to remove the influence of fundus camera attributes on the image content; and training a neural network with the plurality of training data to generate, from the first fundus image, a fundus image similar to the second fundus image.

Description

Fundus image conversion method and device
Technical Field
The invention relates to the field of medical image processing, in particular to a fundus image conversion method and device.
Background
With the development of machine learning, techniques represented in particular by deep learning are widely used in the field of medical imaging. How to improve the accuracy and training efficiency of machine learning models nevertheless remains a challenging task. In the prior art, optimizing the model structure and preprocessing the fundus images have produced some improvement in the training efficiency and recognition accuracy of models.
In the field of fundus image recognition in particular, a large number of fundus image samples is required as training data, whether the neural network is used to recognize the type of fundus lesion or to segment abnormal regions. However, different fundus cameras have different structures, different parameters, and even different imaging principles, so the visual appearance of the photographed fundus images differs markedly. By the principles of neural networks, if sample images taken by one camera are used exclusively as training data and the trained network is then applied to images taken by another fundus camera, its accuracy may degrade. The network should therefore be trained with images taken by as many different fundus cameras as possible, and the numbers of the various fundus images should be balanced as far as possible, to improve its adaptability.
In reality, however, fundus images do not form a public data set, so it is difficult for a person skilled in the art to acquire fundus images taken by a variety of cameras, let alone fundus images with varied lesion characteristics taken by different cameras. The lack of training data is thus one of the problems to be overcome in the field of fundus image recognition.
Disclosure of Invention
In view of the above, the present invention provides a fundus image conversion model training method, which comprises:
acquiring a plurality of training data, wherein the training data comprises a first fundus image and a second fundus image, the first fundus image and the second fundus image are retina images of the same eyeball taken by different fundus cameras, and the first fundus image is subjected to desensitization processing to remove the influence of the attributes of the fundus cameras on the image content;
training a neural network using the plurality of training data to generate a fundus image from the first fundus image that is similar to the second fundus image.
Optionally, the neural network is a generative adversarial network comprising a generator and a discriminator. The generator is configured to generate a fundus image from the first fundus image; the discriminator is configured to judge whether the generated fundus image is a real image bearing the fundus camera domain features of the second image; and the parameters of the generator and the discriminator are optimized according to a loss function during training.
Optionally, the loss function comprises three parts, a first part for coordinating the generator and the discriminator to synchronously improve, a second part for ensuring that the generated fundus image has a corresponding relationship with the second fundus image, and a third part for ensuring that information of a critical area of the fundus image is not modified.
Optionally, the generator comprises a neural network having skip-connected layers, configured to extract feature data from the first fundus image and to concatenate feature data extracted at layers of different depths, generating a fundus image from the concatenated feature data.
Optionally, the discriminator is configured to divide the generated fundus image into image blocks and to judge each image block separately, thereby determining whether the generated fundus image is a real image bearing the fundus camera domain features of the second image.
Optionally, the obtaining training data comprises:
acquiring original fundus images of the same eyeball shot by two different fundus cameras;
identifying the same target in the two original fundus images respectively;
aligning the positions of the two fundus images based on the target;
desensitizing one of the fundus images to remove the influence of fundus camera attributes on the image content.
The invention also provides a fundus image conversion method, which comprises the following steps:
acquiring a fundus image;
desensitizing the fundus image to remove the influence of the fundus camera attribute on the image content;
processing the desensitized fundus image with a neural network trained by the above method to obtain a converted fundus image.
Optionally, the neural network is a generative adversarial network, and the converted fundus image is generated from the desensitized fundus image by the generator in the generative adversarial network.
Correspondingly, the invention also provides a fundus image conversion model training device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above fundus image conversion model training method.
Accordingly, the present invention also provides a fundus image conversion device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above fundus image conversion method.
According to the fundus image conversion model training method and device provided by the invention, the neural network is trained on retinal images of the same eyeball taken by two different fundus cameras, so the network learns to convert the color attributes of a fundus image taken by one camera into those of the other camera while preserving all lines and contours in the fundus.
According to the fundus image conversion method and device provided by the invention, a fundus image taken by a given camera, with any characteristics, is first desensitized to remove the influence of camera attributes on the image content; the processed image is then fed to a trained neural network to obtain a fundus image with converted color attributes. The converted image preserves the lines and contours of the input fundus image and can be regarded as an image taken by another camera. This scheme can therefore serve as a very effective data augmentation means in the field, optimizing the performance of fundus image recognition models.
Drawings
To illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a set of training data used in an embodiment of the present invention;
FIG. 2 is a comparison diagram of desensitization of fundus images in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a preferred neural network model in an embodiment of the present invention;
FIG. 4 is a schematic diagram of preprocessing a fundus image in an embodiment of the present invention;
FIG. 5 is a schematic view of converting a fundus image in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention provides a fundus image conversion model training method that can be executed by an electronic device such as a computer or a server; it uses training data to train a neural network, which serves as the model for processing fundus images.
First, a plurality of training data are acquired. Each training datum comprises two fundus images, which are retinal images of the same eyeball taken by different fundus cameras; the cameras may differ in model, manufacturer, and so on. Because their lens structures, light sources, and various parameters differ, and their imaging principles may even differ, the captured fundus images usually show some difference in color, or in contrast, brightness, and the like. This can be interpreted as the attributes of the fundus camera influencing the image content, i.e., the fundus image carries the features of a fundus camera domain.
In the training data of this embodiment, shown in fig. 1, the image taken by a Canon fundus camera (the first fundus image) is on the left, with the influence of fundus camera attributes on the image content removed by desensitization processing, also referred to as removing the fundus camera domain features. The desensitization processing described in this application has many alternative embodiments: for example, the original fundus image may be converted directly into a grayscale image, or a calculation may be performed on the values of each channel of the original fundus image and a fundus image restored from the result. In other words, a specific functional relationship exists between the first fundus image in the training data and its original image.
This desensitization is intended to remove as many camera-specific factors or features from the image as possible, leaving the most primitive retinal information. Fig. 2 shows the original images taken by four fundus cameras (top) and the corresponding desensitization results (bottom). Although the original images differ markedly in color, brightness, and so on (the difference is less apparent here because the drawings of this application are grayscale), the desensitized results are largely consistent. The desensitization method adopted in this embodiment is to blur the fundus image first and then take the difference between the original fundus image and the blurred image as the result.
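This blur-and-subtract step admits a compact implementation. Below is a minimal sketch assuming OpenCV-style BGR input; the function name, the Gaussian sigma, and the mid-gray offset are illustrative choices rather than values specified in this embodiment.

```python
import cv2
import numpy as np

def desensitize(fundus_bgr: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Blur-and-subtract desensitization: keep the high-frequency retinal
    structure, discard camera-dependent color and illumination."""
    # Kernel size (0, 0) lets OpenCV derive the kernel from sigma.
    blurred = cv2.GaussianBlur(fundus_bgr, (0, 0), sigmaX=sigma)
    # Difference between the original and its blurred version, shifted to
    # mid-gray (128) so negative differences stay visible in 8-bit range.
    diff = fundus_bgr.astype(np.int16) - blurred.astype(np.int16)
    return np.clip(diff + 128, 0, 255).astype(np.uint8)
```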
On the right of fig. 1 is the image taken by a Crystalvue fundus camera (the second fundus image). Since the two images capture the same eyeball of the same person, the lines and contours of the various tissues are consistent, but the second fundus image remains in its original form, i.e., it retains the influence of fundus camera attributes on the image content. For convenience, the difference between the two images is described as a difference in color attributes, but "color attributes" in this application should be understood broadly: every influence of camera characteristics on the image content, such as hue, contrast, and brightness, belongs to the color attributes described here.
The purpose of training the model in this embodiment is to enable it to convert the color attributes of one fundus image into those of another without changing the lines and contours of tissues or lesions such as blood vessels, the optic disc, and the macula. To this end, a neural network with an encoder-decoder structure can be adopted and trained with the plurality of training data so that it learns the color-attribute knowledge in fundus images. The encoder extracts feature data from the first fundus image in the training data, the decoder reconstructs a fundus image from the feature data, and the network parameters are optimized with a large amount of data and a loss function so that the color attributes of the reconstructed fundus image approach those of the second fundus image as closely as possible. Because the lines and contours of tissue and lesions in the two fundus images are consistent, the neural network changes only the color attributes, not the lines and contours.
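As a rough illustration of this encoder-decoder idea (a minimal sketch, not the architecture of the embodiment), the PyTorch snippet below pairs a small convolutional encoder with a transposed-convolution decoder and pulls the reconstruction toward the second fundus image with an L1 loss; all layer sizes and optimizer settings are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderDecoder(nn.Module):
    """Minimal encoder-decoder sketch: the encoder extracts feature data
    from the desensitized first image, the decoder reconstructs a fundus
    image whose color attributes should approach the second image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Illustrative training step with stand-in tensors for (x, y) pairs.
model = EncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
x, y = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
loss = F.l1_loss(model(x), y)   # pull reconstruction toward second image
opt.zero_grad()
loss.backward()
opt.step()
```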
According to the fundus image conversion model training method provided by the embodiment of the invention, the neural network is trained on retinal images of the same eyeball taken by two different fundus cameras, so the network learns to convert the color attributes of a fundus image taken by one camera into those of the other camera while preserving all lines and contours in the fundus.
To obtain better conversion results, this embodiment provides a preferred neural network as the model. As shown in fig. 3, the neural network of this embodiment is a generative adversarial network (GAN) comprising a generator 21 and a discriminator 22. The generator 21 generates a fundus image from the first fundus image in the training data; with reference to fig. 1, for example, the generator 21 generates a fundus image G(x) from the left fundus image x. In practice, the generator 21 extracts feature data from the fundus image x and reconstructs the fundus image G(x) from the feature data and given information (such as random noise z).
The discriminator 22 judges whether the generated fundus image G(x) is a real image bearing the fundus camera domain features of the second image. With reference to fig. 1, both the left image (fundus image x) and the right image (fundus image y) serve as input data to the discriminator 22. After training on a large amount of data, the loss of the generator 21 decreases, the generated images fit the distribution of the real images, and the discriminator 22 can no longer effectively distinguish the generated image from the real fundus image y.
The parameters of the generator 21 and the discriminator 22 are optimized according to a loss function during training. Specifically, the loss function of this embodiment is expressed as

$\arg\min_G \max_D \; L_{GAN}(G,D) + \lambda_1 L_{l1}(G) + \lambda_2 L_{l2}(G)$,

where $\lambda_1$ and $\lambda_2$ are weights. The first part is

$L_{GAN}(G,D) = E_{x,y}[\log D(x,y)] + E_{x,z}[\log(1 - D(x, G(x,z)))]$,

the loss that coordinates the generator 21 and the discriminator 22 so that they improve together; $G(x,z)$ denotes a generated fundus image, and $D(x, G(x,z))$ and $D(x,y)$ denote the discrimination of the generated fundus image and of the real fundus image, respectively. When the discriminator 22 is stronger than the generator 21, $E_{x,y}[\log D(x,y)]$ is smaller and $E_{x,z}[\log(1 - D(x,G(x,z)))]$ is larger, so the generator 21 receives a larger loss and is updated by a larger magnitude, and vice versa.
This function alone cannot ensure a strict correspondence between the generated fundus image and the real fundus image, so a second loss term $L_{l1}(G)$ is introduced, defined as:

$L_{l1}(G) = E_{x,y,z}\left\lVert y - G(x,z) \right\rVert$
that is, the difference in the pixels of the fundus image y and the generated image G (x, z) on the three color channels is required to be minimum. Furthermore, different regions of a medical image have special medical significance, requiring that complete medical information be retained in the special regions. Such as macular areas, optic disc areas, blood vessels, bleeding spots, oozing out of these critical areas or lesions of particular medical interest cannot be modified or removed. The present embodiment thus introduces a third portion Ll2(G) As an additional penalty term, the model is constrained:
$L_{l2}(G) = E_{x,y,z}\left\lVert w\,y - w\,G(x,z) \right\rVert$
where w is the weight assigned to the medically significant key regions of the fundus image. Key areas such as the macula, the optic disc area, blood vessels, bleeding points, and exudates are all weighted higher to ensure this information is not modified by the model. w can be obtained from manual segmentation labels: the key regions are marked with a mask whose value is 1 inside the regions and 0 elsewhere.
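In code, the three parts can be combined into a single generator objective. The PyTorch sketch below assumes a discriminator D that returns probabilities for an (input, candidate) pair and a precomputed key-region mask w; the weights lam1 and lam2 are illustrative values, not ones given in this embodiment.

```python
import torch
import torch.nn.functional as F

def generator_loss(D, x, y, g_out, w_mask, lam1=100.0, lam2=100.0):
    """Three-part loss from the formulas above: adversarial term, L1
    correspondence term L_l1, and mask-weighted key-region term L_l2."""
    # Adversarial part: the generator wants D(x, G(x, z)) judged real.
    pred_fake = D(x, g_out)
    adv = F.binary_cross_entropy(pred_fake, torch.ones_like(pred_fake))
    # L_l1: pixel-wise difference to the real second image y.
    l1 = F.l1_loss(g_out, y)
    # L_l2: the same difference, reweighted by the key-region mask w
    # (1 inside macula/optic disc/vessels/lesions, 0 elsewhere).
    l2 = F.l1_loss(w_mask * g_out, w_mask * y)
    return adv + lam1 * l1 + lam2 * l2
```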
Further, as a preferred embodiment, the generator 21 comprises a convolutional neural network with skip-connected layers, which extracts feature data from the fundus image x and concatenates feature data extracted at layers of different depths, generating a fundus image from the concatenated feature data. For example, the generator 21 may adopt a U-Net structure: through the skip connections, corresponding encoder feature maps are concatenated channel-wise with decoder feature maps of the same size, which is very effective for enhancing the details of the fundus image.
Compared with an ordinary encoder-decoder network, which first downsamples to a low dimension and then upsamples back to the original resolution, this generator retains pixel-level detail at different resolutions, improves detail markedly, and makes the generated fundus image G(x) clearer.
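By way of illustration only, a toy version of such a skip-connected generator is sketched below in PyTorch; the two-level depth and channel counts are assumptions for brevity, not the architecture specified in this embodiment.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """U-Net style sketch: encoder feature maps are concatenated
    channel-wise with same-size decoder feature maps (skip connections)."""
    def __init__(self):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        # Input is up1 output concatenated with down1 output: 32 + 32 channels.
        self.up2 = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)    # H/2 resolution
        d2 = self.down2(d1)   # H/4 resolution
        u1 = self.up1(d2)     # back to H/2
        return self.up2(torch.cat([u1, d1], dim=1))  # skip connection
```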
Since the generative adversarial network of this embodiment deals with the high-frequency components of the fundus image, i.e., the generator 21 is used to construct the high-frequency information of the fundus image G(x), the discriminator 22 need not take the whole fundus image as input. Instead, the generated fundus image G(x) can be divided into smaller image blocks, each block fed to the discriminator 22 to obtain a per-block result, and the results of all blocks averaged as the final discriminator output. Because the dimension of the discriminator's input is greatly reduced, it has fewer parameters and runs faster than feeding in the whole fundus image directly.
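Such a block-wise discriminator is commonly realized as a fully convolutional "PatchGAN" that yields one score per receptive-field patch and averages them, rather than physically cropping the blocks. The sketch below assumes the discriminator sees the input image and the candidate image stacked on the channel axis; all channel counts are illustrative.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style sketch: a small convolution stack produces a grid
    of per-patch real/fake probabilities, averaged into one decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, 2, 1), nn.LeakyReLU(0.2),   # x and G(x) stacked
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 1), nn.Sigmoid(),        # one score per patch
        )

    def forward(self, x, candidate):
        patch_scores = self.net(torch.cat([x, candidate], dim=1))
        return patch_scores.mean(dim=(1, 2, 3))  # average over all patches
```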
In practical applications, considering the matching of the two fundus images, some preprocessing can be performed before training the model. In an optional embodiment, fundus images of the same eyeball taken by two different fundus cameras are first acquired; the overall sizes of the two images may differ, and the positions of the fundus regions may be mismatched. The same target, for example the optic disc center or the macular center, can be identified in the two fundus images, and the positions of the two images are then aligned on that target to form the training data.
As shown in fig. 4, the fundus image is rotated to bring the optic disc (center point) onto a set horizontal line; in fig. 4, the fundus image before adjustment is on the left and the rotated result on the right. All fundus images are adjusted in this way, their black backgrounds are removed, and they are resized to the same dimensions before desensitization. This yields highly matched fundus image pairs, which improves the training efficiency of the model and keeps the lines and contours of the generated fundus images consistent with those of real fundus images.
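A rough sketch of this alignment step is given below; it assumes the optic-disc center has already been located by a separate landmark detector (not shown), and the background threshold is an illustrative choice.

```python
import cv2
import numpy as np

def align_fundus(img: np.ndarray, disc_xy: tuple) -> np.ndarray:
    """Rotate so the image-center-to-optic-disc direction is horizontal,
    then crop the black background around the circular fundus region."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Angle of the center -> disc vector; sign conventions may need
    # flipping depending on the image coordinate system in use.
    angle = np.degrees(np.arctan2(disc_xy[1] - cy, disc_xy[0] - cx))
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h))
    # Crop the black background around the fundus region.
    gray = cv2.cvtColor(rotated, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > 10)   # 10: illustrative brightness threshold
    if ys.size == 0:               # nothing above threshold; skip cropping
        return rotated
    return rotated[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```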
After training brings the model performance to the expected level, the model can be used to convert fundus images. This embodiment provides a fundus image conversion method that uses the image-generating part of the model, such as the encoder and decoder, or the generator 21 of the generative adversarial network. The method comprises the following steps:
s1, a fundus image to be converted is acquired. As shown in fig. 5, the present embodiment takes a fundus image 41 taken by a canon company camera as an image to be converted;
s2, desensitizing the fundus image to remove the influence of the fundus camera attribute on the image content, and obtaining a desensitized fundus image 42.
S3, processing the fundus image 42 with the generator 21, which extracts feature data and reconstructs the fundus image 43 as the conversion result.
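Putting steps S1-S3 together, the conversion pipeline reduces to a few lines. The sketch below reuses the hypothetical desensitize() helper from the earlier sketch and assumes the generator expects inputs normalized to [-1, 1].

```python
import cv2
import torch

def convert_fundus(path: str, generator: torch.nn.Module) -> torch.Tensor:
    img = cv2.imread(path)             # S1: acquire the image to convert
    img = desensitize(img)             # S2: remove camera attributes
    x = torch.from_numpy(img).permute(2, 0, 1).float()
    x = x.unsqueeze(0) / 127.5 - 1.0   # assumed [-1, 1] normalization
    generator.eval()
    with torch.no_grad():              # S3: reconstruct via the generator
        return generator(x)
```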
According to the fundus image conversion scheme provided by the embodiment of the invention, a fundus image taken by any camera, with any characteristics, is first desensitized to remove the influence of camera attributes on the image content; the processed image is then fed to the trained neural network to obtain a fundus image with converted color attributes. The converted image preserves the lines and contours of the input fundus image and can be regarded as an image taken by another camera. This scheme therefore serves as a very effective data augmentation means in the field and can be used to optimize the performance of fundus image recognition models.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively, and obvious variations or modifications derived from them remain within the protection scope of the invention.

Claims (10)

1. A fundus image conversion model training method, characterized by comprising the following steps:
acquiring a plurality of training data, wherein the training data comprises a first fundus image and a second fundus image, the first fundus image and the second fundus image are retina images of the same eyeball taken by different fundus cameras, and the first fundus image is subjected to desensitization processing to remove the influence of the attributes of the fundus cameras on the image content;
training a neural network using the plurality of training data to generate a fundus image from the first fundus image that is similar to the second fundus image.
2. The method according to claim 1, wherein the neural network is a generative adversarial network comprising a generator and a discriminator, the generator is used for generating a fundus image from the first fundus image, the discriminator is used for judging whether the generated fundus image is a real image bearing the fundus camera domain features of the second image, and parameters of the generator and the discriminator are optimized according to a loss function during training.
3. The method of claim 2, wherein the loss function comprises three parts, a first part for coordinating the generator and the discriminator to improve synchronously, a second part for ensuring that the generated fundus image has correspondence with the second fundus image, and a third part for ensuring that information of critical areas of the fundus image is not modified.
4. The method of claim 2, wherein the generator comprises a neural network having skip-connected layers for extracting feature data from the first fundus image and concatenating feature data extracted at layers of different depths, to generate a fundus image from the concatenated feature data.
5. The method according to claim 2 or 3, wherein the discriminator is configured to divide the generated fundus image into image blocks and to judge each image block separately, thereby determining whether the generated fundus image is a real image bearing the fundus camera domain features of the second image.
6. The method of claim 1, wherein obtaining training data comprises:
acquiring original fundus images of the same eyeball shot by two different fundus cameras;
identifying the same target in the two original fundus images respectively;
aligning the positions of the two fundus images based on the target;
desensitizing one of the fundus images to remove the influence of fundus camera attributes on the image content.
7. A fundus image conversion method, comprising:
acquiring a fundus image;
desensitizing the fundus image to remove the influence of the fundus camera attribute on the image content;
processing the desensitized fundus image using a neural network trained by the method of any one of claims 1-6 to obtain a converted fundus image.
8. The method of claim 7, wherein the neural network is a generative adversarial network, and the converted fundus image is generated from the desensitized fundus image by the generator in the generative adversarial network.
9. A fundus image conversion model training device, characterized by comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image conversion model training method of any one of claims 1-6.
10. A fundus image conversion device, characterized by comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image conversion method of claim 7 or 8.
CN202010401356.1A 2020-05-13 2020-05-13 Fundus image conversion method and device Active CN111563839B

Priority Applications (1)

Application Number: CN202010401356.1A
Priority Date / Filing Date: 2020-05-13
Title: Fundus image conversion method and device

Publications (2)

Publication Number: CN111563839A, published 2020-08-21
Publication Number: CN111563839B (granted), published 2024-03-22

Family

ID=72073443

Family Applications (1)

Application Number: CN202010401356.1A (Active)
Priority Date / Filing Date: 2020-05-13
Title: Fundus image conversion method and device

Country Status (1)

CN: CN111563839B

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200000331A1 (en) * 2015-10-23 2020-01-02 International Business Machines Corporation Automatically detecting eye type in retinal fundus images
CN109919831A (en) * 2019-02-13 2019-06-21 广州视源电子科技股份有限公司 A kind of method for migrating retinal fundus images in different images domain, electronic equipment and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
康莉; 江静婉; 黄建军; 黄德渠; 张体江: "Retinal fundus image synthesis based on a step-by-step generative model" (基于分步生成模型的视网膜眼底图像合成)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950737A (en) * 2021-03-17 2021-06-11 中国科学院苏州生物医学工程技术研究所 Fundus fluorescence radiography image generation method based on deep learning
CN112950737B (en) * 2021-03-17 2024-02-02 中国科学院苏州生物医学工程技术研究所 Fundus fluorescence contrast image generation method based on deep learning

Also Published As

Publication number Publication date
CN111563839B 2024-03-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant