CN110298850B - Segmentation method and device for fundus image


Info

Publication number: CN110298850B
Authority: CN (China)
Prior art keywords: image, cup, optic, detected, disc
Legal status: Active (granted)
Application number: CN201910590552.5A
Other languages: Chinese (zh)
Other versions: CN110298850A
Inventors: 孙钦佩, 杨叶辉, 王磊, 许言午, 黄艳
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Publication of application: CN110298850A
Publication of grant: CN110298850B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30041 - Eye; Retina; Ophthalmic

Abstract

The embodiment of the disclosure discloses a segmentation method and device for a fundus image. One embodiment of the method comprises: acquiring a fundus image to be detected; inputting the fundus image to be detected into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected; and fitting a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected. This embodiment can accurately segment the optic disc region and the optic cup region in the fundus image.

Description

Segmentation method and device for fundus image
Embodiments of the disclosure relate to the field of computer technology, in particular to image processing, and more particularly to a segmentation method and a segmentation device for fundus images.
Background
Currently, with the development of computer technology, various image segmentation techniques are emerging. Image segmentation techniques can solve many practical problems. One typical application is the segmentation of medical images to locate key sites in the image and to aid diagnosis and treatment.
In image processing of a fundus image, it is desirable to segment the optic cup region and the optic disc region using an image segmentation technique. Most current fundus image segmentation techniques perform threshold segmentation based on the pixel values of the optic cup and the optic disc in the image.
Disclosure of Invention
The embodiments of the disclosure provide a segmentation method and device for a fundus image.
In a first aspect, an embodiment of the present disclosure provides a segmentation method for a fundus image, the method including: acquiring a fundus image to be detected; inputting the fundus image to be detected into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, wherein the cup and disc mask image corresponding to the fundus image to be detected represents the difference region between the optic cup and the optic disc in the fundus image to be detected; and fitting a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected.
In some embodiments, fitting a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected includes: fitting the inner boundary and the outer boundary of the cup and disc mask image corresponding to the fundus image to be detected by an ellipse fitting method to obtain the mask image of the optic cup region and the mask image of the optic disc region.
In some embodiments, the method further comprises: determining boundary information of an optic cup area and an optic disc area in the fundus image to be detected based on the mask image of the optic cup area and the mask image of the optic disc area; controlling a display device to display a fundus image to be detected containing boundary information.
In some embodiments, the image generation model is generated as follows: acquiring a sample set, wherein a sample in the sample set comprises a fundus image and a sample mask image corresponding to the fundus image, and the sample mask image represents the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample; acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network; selecting samples from the sample set, and performing the following training steps: predicting the difference region between the optic cup and the optic disc in the fundus image of a selected sample by using the generation network to obtain a prediction mask image corresponding to the fundus image of the sample; inputting the prediction mask image and the selected sample mask image into the discrimination network to obtain a class discrimination result for the sample mask image and the corresponding prediction mask image; comparing the class discrimination result with a preset expected class discrimination result; determining whether training of the generation network is complete according to the comparison result; and, in response to determining that training of the generation network is complete, determining the generation network to be the image generation model.
In some embodiments, the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows: noise is superimposed on the fundus image of the sample, which is then input to the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
In a second aspect, an embodiment of the present disclosure provides an apparatus for segmenting a fundus image, the apparatus including: an acquisition unit configured to acquire a fundus image to be detected; a generation unit configured to input the fundus image to be detected into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, wherein the cup and disc mask image corresponding to the fundus image to be detected represents the difference region between the optic cup and the optic disc in the fundus image to be detected; and a fitting unit configured to fit a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected.
In some embodiments, the fitting unit is further configured to fit the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected as follows: fitting the inner boundary and the outer boundary of the cup and disc mask image corresponding to the fundus image to be detected by an ellipse fitting method to obtain the mask image of the optic cup region and the mask image of the optic disc region.
In some embodiments, the apparatus further comprises: a determination unit configured to determine boundary information of the optic cup region and the optic disc region in the fundus image to be detected based on the mask image of the optic cup region and the mask image of the optic disc region; and a display unit configured to control a display device to display the fundus image to be detected containing the boundary information.
In some embodiments, the image generation model is generated as follows: acquiring a sample set, wherein a sample in the sample set comprises a fundus image and a sample mask image corresponding to the fundus image, and the sample mask image represents the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample; acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network; selecting samples from the sample set, and performing the following training steps: predicting the difference region between the optic cup and the optic disc in the fundus image of a selected sample by using the generation network to obtain a prediction mask image corresponding to the fundus image of the sample; inputting the prediction mask image and the selected sample mask image into the discrimination network to obtain a class discrimination result for the sample mask image and the corresponding prediction mask image; comparing the class discrimination result with a preset expected class discrimination result; determining whether training of the generation network is complete according to the comparison result; and, in response to determining that training of the generation network is complete, determining the generation network to be the image generation model.
In some embodiments, the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows: noise is superimposed on the fundus image of the sample, which is then input to the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in any of the implementations of the first aspect.
According to the segmentation method and device for fundus images provided by the embodiments of the disclosure, a fundus image to be detected is acquired; the fundus image to be detected is input into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, the cup and disc mask image representing the difference region between the optic cup and the optic disc in the fundus image to be detected; and finally, a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected are fitted based on the cup and disc mask image. An image segmentation method is thereby obtained that achieves accurate and fast segmentation of image regions.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
fig. 2 is a flowchart of one embodiment of a segmentation method of a fundus image according to the present disclosure;
fig. 3 is a schematic diagram of one application scenario of a segmentation method of a fundus image according to an embodiment of the present disclosure;
FIG. 4 is a flow diagram of one implementation of a method of generating an image generation model as described above;
fig. 5a, 5b are example diagrams of a sample fundus image and a corresponding sample mask image according to embodiments of the present disclosure;
fig. 6 is a schematic structural diagram of an embodiment of a segmentation apparatus for fundus images according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary architecture 100 to which the segmentation method of a fundus image or the segmentation apparatus of a fundus image of the present disclosure can be applied.
As shown in fig. 1, the system architecture 100 may include terminals 101, 102, a network 103, a database server 104, and a server 105. The network 103 serves as a medium for providing communication links between the terminals 101, 102, the database server 104 and the server 105. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user 110 may use the terminals 101, 102 to interact with the server 105 over the network 103 to receive or send messages or the like. The terminals 101 and 102 may have various client applications installed thereon, such as a model training application, an image processing application, a shopping application, a payment application, a web browser, an instant messenger, and the like.
Here, the terminals 101 and 102 may be hardware or software. When the terminals 101 and 102 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), laptop portable computers, desktop computers, and the like. When the terminals 101 and 102 are software, they can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
Database server 104 may be a database server that provides various services. In some application scenarios, for example, a sample set may be stored in database server 104. The sample set contains a large number of samples. Wherein the sample may include a fundus image and a sample mask image corresponding to the fundus image. In this way, the user 110 may also select training samples from the sample set stored by the database server 104 via the terminals 101, 102.
The server 105 may also be a server providing various services, such as a background server providing support for various applications running on the terminals 101, 102. The background server may process a received fundus image to be detected and feed back the processing result (a cup and disc mask image corresponding to the fundus image to be detected) to the terminals 101 and 102. In some application scenarios, the background server may also train an initial generative adversarial network using samples in the sample set sent by the terminals 101 and 102, and may send the training result (e.g., the generated image generation model) to the terminals 101 and 102. In this way, the end user can use the generated image generation model to obtain a mask image representing the difference region between the optic cup and the optic disc in a fundus image to be detected.
Here, the database server 104 and the server 105 may be hardware or software. When they are hardware, they can be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When they are software, they may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
Note that the segmentation method for fundus images provided by the embodiments of the present disclosure is generally performed by the server 105. Accordingly, the segmentation apparatus for fundus images is also generally provided in the server 105.
It is noted that database server 104 may not be provided in system architecture 100, as server 105 may perform the relevant functions of database server 104.
It should be understood that the number of terminals, networks, database servers, and servers in fig. 1 are merely illustrative. There may be any number of terminals, networks, database servers, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method of segmentation of a fundus image according to the present disclosure is shown. The segmentation method of the fundus image may include the steps of:
step 201, acquiring a fundus image to be detected.
In the present embodiment, the execution body of the segmentation method (e.g., the server 105 shown in fig. 1) can acquire a fundus image to be detected in various ways. For example, the execution body may acquire a fundus image to be detected stored in a database server (e.g., database server 104 shown in fig. 1). As another example, the execution body may receive a fundus image to be detected captured by a terminal (e.g., terminals 101 and 102 shown in fig. 1) or by another eye examination device.
Here, a fundus image generally refers to an image containing an optic cup region and an optic disc region. It may be a color image (e.g., an RGB (Red, Green, Blue) photograph) or a grayscale image. The format of the image is not limited in the present application and may be, for example, JPG (Joint Photographic Experts Group), BMP (Bitmap image file format), or RAW (raw image format), as long as the execution body can read and recognize it.
Step 202, inputting the fundus image to be detected into the image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, wherein the cup and disc mask image corresponding to the fundus image to be detected represents the difference region between the optic cup and the optic disc in the fundus image to be detected.
In this embodiment, the image generation model may be an artificial neural network. Based on the fundus image to be detected acquired in step 201, the execution body may input the fundus image to be detected into a pre-trained artificial neural network to determine the cup and disc mask image corresponding to the fundus image to be detected. Here, the cup and disc mask image corresponding to the fundus image to be detected represents the difference region between the optic cup and the optic disc in the fundus image to be detected. The optic disc, also called the optic papilla, is a light red discoid structure with a diameter of about 1.5 mm and clear boundaries, located about 3 mm to the nasal side of the macula. The optic cup is a whitish cup-shaped region in the center of the optic disc. The image generation model may be trained using labeled difference regions between the optic cup and the optic disc of fundus images. It is to be understood that the difference region between the optic cup and the optic disc may be expressed in various forms; for example, the position of the difference region on the fundus image may be expressed using the boundary coordinate values of the difference region between the optic cup and the optic disc.
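As a concrete illustration only, the inference step might be sketched as follows. The framework (PyTorch), the file name generator.pt, the 512x512 input size, and the 0.5 threshold are all assumptions made for the sketch; the disclosure does not prescribe them.

```python
import cv2
import numpy as np
import torch

# Load a trained generation network (hypothetical file name; the disclosure
# does not fix a framework or serialization format).
generator = torch.jit.load("generator.pt").eval()

# Read the fundus image to be detected and normalize it to [0, 1].
image = cv2.imread("fundus.jpg").astype(np.float32) / 255.0
image = cv2.resize(image, (512, 512))  # assumed model input size

# HWC -> NCHW, run the generation network, and threshold its output into a
# binary cup and disc mask (the ring-shaped cup/disc difference region).
with torch.no_grad():
    x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)
    mask = (generator(x).squeeze().numpy() > 0.5).astype(np.uint8) * 255

cv2.imwrite("cup_disc_mask.png", mask)
```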
In some alternative implementations of the present embodiment, the image generation model may be generated using the method described in the implementation of fig. 4; for the specific generation process, refer to the related description of that implementation.
And step 203, fitting a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected.
In the present embodiment, since the boundary of the optic cup and the boundary of the optic disc are generally elliptical, the execution body can use this characteristic to locate the boundaries of the optic cup and the optic disc in the fundus image, and then obtain the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected from the cup and disc mask image by fitting.
In some optional implementations of this embodiment, the method further includes: fitting the inner boundary and the outer boundary of the cup and disc mask image corresponding to the fundus image to be detected by an ellipse fitting method to obtain the mask image of the optic cup region and the mask image of the optic disc region.
In this optional implementation, because the cup and disc mask image corresponding to the fundus image to be detected represents the difference region between the optic cup and the optic disc, the execution body may first perform a pixel value inversion operation on the cup and disc mask image to obtain the connected domain of the optic cup region, and extract the inner boundary of the cup and disc mask image, i.e., the boundary of the optic cup region. Then, the extracted optic cup region is superimposed on the cup and disc mask image corresponding to the fundus image to be detected, and the outer boundary of the cup and disc mask image, i.e., the boundary of the optic disc region, is extracted. Finally, ellipse fitting is applied to the extracted boundary of the optic cup region and the extracted boundary of the optic disc region respectively, obtaining the mask image of the optic cup region and the mask image of the optic disc region.
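A minimal sketch of this fitting pipeline with OpenCV and NumPy is given below. It assumes cup_disc_mask is a binary uint8 image in which the ring-shaped difference region is 255 and all other pixels are 0; the function name is illustrative, not part of the disclosure.

```python
import cv2
import numpy as np

def fit_cup_and_disc(cup_disc_mask):
    """cup_disc_mask: uint8 binary image, 255 on the cup/disc difference ring."""
    # Pixel value inversion: the optic cup interior (the hole of the ring)
    # becomes a white connected domain.
    inverted = cv2.bitwise_not(cup_disc_mask)

    # Keep the connected component enclosed by the ring, i.e. the one that
    # does not touch the image border: this is the optic cup region.
    n, labels = cv2.connectedComponents(inverted)
    cup_region = np.zeros_like(cup_disc_mask)
    for i in range(1, n):
        component = labels == i
        if not (component[0, :].any() or component[-1, :].any()
                or component[:, 0].any() or component[:, -1].any()):
            cup_region[component] = 255

    # Superimposing the extracted cup region on the ring yields the full
    # optic disc region, whose outer boundary is the disc boundary.
    disc_region = cv2.bitwise_or(cup_region, cup_disc_mask)

    # Extract each boundary and fit an ellipse to it (cv2.fitEllipse needs
    # at least 5 contour points), producing filled elliptical mask images.
    cup_mask = np.zeros_like(cup_region)
    disc_mask = np.zeros_like(disc_region)
    for region, out in ((cup_region, cup_mask), (disc_region, disc_mask)):
        contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        ellipse = cv2.fitEllipse(max(contours, key=cv2.contourArea))
        cv2.ellipse(out, ellipse, 255, thickness=-1)
    return cup_mask, disc_mask
```

The border test is only one simple way to single out the enclosed connected domain; any connected-component labeling that identifies the interior of the ring would serve equally well.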
In some optional implementations of this embodiment, the method further includes: determining boundary information of an optic cup area and an optic disc area in the fundus image to be detected based on the mask image of the optic cup area and the mask image of the optic disc area; controlling a display device to display a fundus image to be detected containing boundary information.
In this optional implementation, the boundary information of the optic cup region and the optic disc region in the fundus image to be detected may be obtained from the boundary of the optic cup region and the boundary of the optic disc region extracted from the mask image of the optic cup region and the mask image of the optic disc region. The boundary information here may be the coordinate position information of the boundary of the optic cup region and the boundary of the optic disc region.
In this alternative implementation, the display device may be a device (e.g., terminals 101 and 102 shown in fig. 1) communicatively connected to the execution body and configured to display images transmitted by the execution body. In practice, the execution body may send a control signal to the display device so as to control it to display the fundus image to be detected containing the boundary information. For example, the pixel values at the boundary coordinates of the optic cup and the optic disc may be set to a specified pixel value so that the boundaries of the optic cup and the optic disc are highlighted in the fundus image.
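One hedged way to realize such highlighting is sketched below; the colors and line thickness are illustrative choices, not prescribed by the disclosure.

```python
import cv2

def draw_boundaries(fundus_image, cup_mask, disc_mask):
    """Overlay optic cup/disc boundaries on the fundus image for display."""
    annotated = fundus_image.copy()
    for mask, color in ((disc_mask, (0, 255, 0)), (cup_mask, (0, 0, 255))):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        # Set the pixels along the boundary coordinates to a specified color.
        cv2.drawContours(annotated, contours, -1, color, thickness=2)
    return annotated
```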
In this embodiment, the execution body may determine the boundary information of the optic cup region and the optic disc region in the fundus image to be detected based on the mask image of the optic cup region and the mask image of the optic disc region, and control the display device to display the fundus image to be detected containing the boundary information. On the one hand, the fundus image to be detected containing the boundary information can be displayed directly, making it possible to verify whether the generated segmentation of the regions is accurate. On the other hand, the generated image segments the optic cup region and the optic disc region in a single pass, after which a simple fitting step yields the cup-region image and the disc-region image, improving both the speed and the accuracy of image segmentation.
With continued reference to fig. 3, there is shown a schematic diagram of one application scenario of a segmentation method of a fundus image according to an embodiment of the present disclosure. In the application scenario of fig. 3, a user acquires a fundus image 302 to be detected from a terminal device 301; the fundus image 302 to be detected is processed by a server 303 that provides background support for the image generation model application, yielding a cup and disc mask image 304 corresponding to the fundus image to be detected; finally, the mask image 305 of the optic cup region and the mask image 306 of the optic disc region in the fundus image to be detected are obtained through fitting.
The method for segmenting the fundus image first acquires the fundus image to be detected. Then, the fundus image to be detected is input into the image generation model to obtain the cup and disc mask image corresponding to the fundus image to be detected. Finally, the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected are obtained through fitting. The method achieves accurate segmentation of the optic cup and optic disc regions in the fundus image.
With continued reference to FIG. 4, a flow diagram of one implementation of a method of generating the image generation model described above is shown. The flow 400 of the image generation model generation method may include the steps of:
step 401, a sample set is obtained, where a sample in the sample set includes a fundus image and a sample mask image corresponding to the fundus image, and the sample mask image represents a differential area between an optic cup and an optic disc in the fundus image of the corresponding sample.
In this embodiment, the execution body may obtain an existing training sample set from a database server (e.g., database server 104 shown in fig. 1) in which it is stored. As another example, a user may collect training samples via a terminal (e.g., terminals 101, 102 shown in fig. 1). In this way, the execution body may receive the samples collected by the terminal and store them locally, thereby generating the training sample set.
Here, the sample set may include at least one sample. A sample may include a fundus image and a sample mask image corresponding to the fundus image. The sample mask image here may represent the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample. It is understood that the sample mask image may be obtained in advance, for example by manual labeling: the execution body may obtain the mask image by labeling the positions of the optic cup region and the optic disc region in the fundus image. In digital image processing, a selected image or graphic can be used to occlude (wholly or partially) the image being processed, so as to control the region or process of image processing; this selected image or graphic is called a mask. A mask image may be represented as a two-dimensional matrix array.
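As a toy illustration of this representation (the values and size are arbitrary), a sample mask marking the annular cup/disc difference region could be:

```python
import numpy as np

# 1 marks the annular difference region between optic disc and optic cup,
# 0 marks everything else (the cup interior and the background).
sample_mask = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
], dtype=np.uint8)
```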
Fig. 5a and 5b are example diagrams of a sample fundus image and a corresponding sample mask image according to embodiments of the present disclosure. As shown in fig. 5a and 5b, fig. 5a is a sample fundus image in which an image region 501 is a disc region in the fundus image and an image region 502 is a cup region in the fundus image. Fig. 5b is a sample mask image corresponding to the above sample fundus image, wherein the annular region in the sample mask image may be a differential region between the optic cup and the optic disc in the sample fundus image.
Step 402, acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network.
In this embodiment, the execution body may acquire an initial generative adversarial network, which may include an initial generation network and an initial discrimination network. The execution body may use the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of a selected sample, obtaining a prediction mask image corresponding to the fundus image of the sample. The discrimination network may be used to determine whether the prediction mask image output by the generation network for the fundus image of the sample is the true mask image corresponding to that fundus image.
The generation network may include, but is not limited to, at least one of: a deep neural network model, a Hidden Markov Model (HMM), a naive Bayes model, a Gaussian mixture model. The discrimination network may include, but is not limited to, at least one of: a linear regression model, linear discriminant analysis, a Support Vector Machine (SVM), a neural network. It should be understood that the initial generative adversarial network may be an untrained generative adversarial network whose parameters have just been initialized, or a generative adversarial network that has already been pre-trained.
Step 403, selecting samples from the sample set, and executing the training step.
In this embodiment, the execution body may select samples from the sample set obtained in step 401 and execute training steps 4031 to 4035. The manner of selection and the number of samples are not limited in the present disclosure. For example, the execution body may select at least one sample.
More specifically, the training step comprises the steps of:
step 4031, the difference area between the optic cup and the optic disc in the fundus image of the selected sample is predicted by the generation network, and a prediction mask image corresponding to the fundus image of the sample is obtained.
In this embodiment, the execution body may add preset noise to the fundus image of the selected sample and then input the noise-added fundus image into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample, obtaining a prediction mask image corresponding to the fundus image of the sample. For example, the preset noise may be salt-and-pepper noise or Gaussian noise. The purpose of adding noise is to improve the interference resistance of the generation network and its generalization capability.
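A minimal sketch of such noise injection (Gaussian noise here; the standard deviation is an illustrative choice, and salt-and-pepper noise would work analogously):

```python
import numpy as np

def add_gaussian_noise(fundus_image, sigma=0.05):
    """Superimpose Gaussian noise on a float image in [0, 1] before feeding it
    to the generation network, to improve robustness and generalization."""
    noise = np.random.normal(0.0, sigma, fundus_image.shape).astype(np.float32)
    return np.clip(fundus_image + noise, 0.0, 1.0)
```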
Step 4032, the prediction mask image and the selected sample mask image are input into the discrimination network to obtain a class discrimination result for the sample mask image and the corresponding prediction mask image.
In this embodiment, the execution body may input the prediction mask image obtained from the generation network in step 4031 and the sample mask image corresponding to the fundus image of the selected sample into the discrimination network, which may output a class discrimination result for the sample mask image and the corresponding prediction mask image. In a generative adversarial network, the discrimination network is used to judge whether a synthesized image is consistent with real images. If the discrimination result indicates that the class of the synthesized image produced by the generation network is consistent with that of the real image, or the discrimination network cannot tell which of the two is the real image, the synthesized image produced by the generation network can be considered highly similar to the real image. In this embodiment, the class determined by the discrimination network may be based on whether the synthesized prediction mask image corresponding to the fundus image of the sample is the true sample mask image corresponding to that fundus image. As an example, the class discrimination result may be obtained by judging the two images against the class labels of the prediction mask image and the sample mask image: assuming the image label of the sample mask image is 1, the label assigned to the prediction mask image may be determined as 0 or 1. The image label may also be other preset information and is not limited to the values 1 and 0. The loss function is based on the labels and the class discrimination results of the mask images.
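As a hedged illustration using the 1/0 labels above, the class discrimination can be trained with a conventional binary cross-entropy loss (PyTorch here; the disclosure does not prescribe a specific loss function):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    """d_real: discrimination-network outputs for sample mask images (label 1);
    d_fake: outputs for prediction mask images from the generation network
    (label 0). Both are probabilities in (0, 1)."""
    real_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    fake_loss = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    return real_loss + fake_loss
```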
Step 4033, compare the class discrimination result with the preset expected class discrimination result.
In this embodiment, when the class discrimination result obtained in step 4032 is close or approximately equal to the preset expected class discrimination result, it may be considered to have reached the preset expected class discrimination result. The preset expected class discrimination result may be set in advance by a person skilled in the art according to experience, and may also be adjusted.
As an example, the preset expected class discrimination result may be that the discrimination network cannot distinguish the classes of the prediction mask image and the sample mask image; for instance, the probability predicted by the discrimination network for the class of a mask image produced by the generation network is close to 0.5.
Step 4034, determine whether training of the generation network is complete according to the comparison result.
In this embodiment, the execution body may determine whether training of the generation network is complete according to the comparison result of step 4033. As an example, if multiple samples were selected in step 403, the execution body may consider training of the generation network complete when the class discrimination result of every sample reaches the preset expected class discrimination result. As another example, the execution body may compute the proportion of selected samples whose class discrimination results reach the preset expected class discrimination result; when this proportion reaches a preset threshold (e.g., 95%), training of the generation network can be considered complete. If the execution body determines that training of the generation network is complete, it may continue to step 4035.
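The second completion criterion might be expressed as in the following sketch; the tolerance and the 95% threshold are the illustrative values from the example above.

```python
def training_finished(decisions, expected=0.5, tolerance=0.05, ratio=0.95):
    """decisions: discrimination-network probabilities for generated masks.
    Training is considered complete when a sufficient proportion of them is
    close to the preset expected value."""
    hits = sum(abs(d - expected) <= tolerance for d in decisions)
    return hits / len(decisions) >= ratio
```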
In some optional implementations of this embodiment, if the execution body determines that training of the generation network is not complete, the relevant parameters of the initial generative adversarial network may be adjusted, samples may be selected again from the training sample set, and the training step may be executed again. The parameters may be adjusted, for example, using the back-propagation algorithm. In this way, the initial generative adversarial network can be trained in a loop, and an optimized generative adversarial network is obtained after iterative training.
It should be noted that the manner of selection is not limited in the present disclosure. For example, when there is a large number of samples in the training sample set, the execution body may select samples that have not yet been selected.
Step 4035, in response to determining that training of the generation network is complete, determine the generation network as the image generation model.
In this embodiment, if the execution subject determines that the training of the generation network is completed, the generation network (i.e., the trained generation network) may be used as the image generation model.
Alternatively, the executing entity may store the generated image generation model locally, or may send the generated image generation model to a terminal or a database server.
The method flow 400 trains a model based on a generative adversarial network and uses the trained generation network as the image generation model, thereby obtaining a model capable of generating realistic mask images. During training, the generation network is optimized by continuously pitting it against the discrimination network, so an accurate and reliable image generation model can be obtained. Segmenting fundus images with this image generation model further improves the accuracy of segmentation of the optic cup and optic disc regions.
With continued reference to fig. 6, the present application provides one embodiment of a segmentation apparatus for fundus images as an implementation of the method illustrated in fig. 2 above. The embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus can be applied to various electronic devices.
As shown in fig. 6, the segmentation apparatus 600 for fundus images of the present embodiment may include: an acquisition unit 601 configured to acquire a fundus image to be detected; a generation unit 602 configured to input the fundus image to be detected into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, where the cup and disc mask image corresponding to the fundus image to be detected represents the difference region between the optic cup and the optic disc in the fundus image to be detected; and a fitting unit 603 configured to fit a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected.
In some embodiments, the apparatus 600 may further include: a determination unit (not shown in the figure) configured to determine boundary information of the cup region and the optic disc region in the fundus image to be detected, based on the mask image of the cup region and the mask image of the optic disc region; a display unit (not shown in the figure) configured to control the display device to display the fundus image to be detected containing the boundary information.
In some optional implementations of the present embodiment, the fitting unit 603 is further configured to fit the mask image of the cup region and the mask image of the optic disc region in the fundus image to be detected as follows: and fitting the inner boundary and the outer boundary of the cup and disk mask image corresponding to the fundus image to be detected by adopting an ellipse fitting method to obtain a mask image of the optic cup region and a mask image of the optic disk region.
In some optional implementations of the present embodiment, the image generation model is generated as follows: acquiring a sample set, wherein a sample in the sample set comprises a fundus image and a sample mask image corresponding to the fundus image, and the sample mask image represents the difference region between the optic cup and the optic disc in the fundus image of the corresponding sample; acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network; selecting samples from the sample set, and performing the following training steps: predicting the difference region between the optic cup and the optic disc in the fundus image of a selected sample by using the generation network to obtain a prediction mask image corresponding to the fundus image of the sample; inputting the prediction mask image and the selected sample mask image into the discrimination network to obtain a class discrimination result for the sample mask image and the corresponding prediction mask image; comparing the class discrimination result with a preset expected class discrimination result; determining whether training of the generation network is complete according to the comparison result; and, in response to determining that training of the generation network is complete, determining the generation network to be the image generation model.
In some optional implementations of the present embodiment, the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows: noise is superimposed on the fundus image of the sample, which is then input to the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
It will be understood that the elements described in the apparatus 600 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 600 and the units included therein, and are not described herein again.
Referring now to FIG. 7, a block diagram of an electronic device (e.g., the server of FIG. 1) 700 suitable for use in implementing embodiments of the present disclosure is shown. The server shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage device 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a fundus image to be detected; inputting the fundus image to be detected into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, wherein the cup and disc mask image corresponding to the fundus image to be detected represents a difference area between an optic cup and an optic disc in the fundus image to be detected; and fitting a mask image of an optic cup region and a mask image of an optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described, for example, as: a processor comprising an acquisition unit, a generation unit, and a fitting unit, or as: a processor comprising an image acquisition unit and an image processing unit. The names of these units do not in some cases limit the units themselves; for example, the acquisition unit may also be described as "a unit that acquires a fundus image to be detected".
The foregoing description is only of preferred embodiments of the disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the features described above with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A segmentation method of a fundus image, comprising:
acquiring a fundus image to be detected;
inputting the fundus image to be detected into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, wherein the cup and disc mask image corresponding to the fundus image to be detected represents the difference region between the optic cup and the optic disc in the fundus image to be detected;
fitting a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected;
the method for fitting the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected comprises the following steps:
according to the difference region between the optic cup and the optic disc in the fundus image to be detected represented by the cup and disc mask image corresponding to the fundus image to be detected, performing a pixel value inversion operation on the cup and disc mask image corresponding to the fundus image to be detected to obtain the connected domain of the optic cup region;
extracting the inner boundary of the cup and disc mask image, i.e., the boundary of the optic cup region;
superimposing the extracted optic cup region on the cup and disc mask image corresponding to the fundus image to be detected, and extracting the outer boundary of the cup and disc mask image, i.e., the boundary of the optic disc region;
and fitting the extracted boundary of the optic cup region and the extracted boundary of the optic disc region to obtain a mask image of the optic cup region and a mask image of the optic disc region in the fundus image to be detected.
2. The method according to claim 1, wherein fitting the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected comprises:
and fitting the inner boundary and the outer boundary of the cup and disc mask image corresponding to the fundus image to be detected by an ellipse fitting method to obtain the mask image of the optic cup region and the mask image of the optic disc region.
3. The method of claim 2, wherein the method further comprises:
determining the boundary information of the optic cup area and the optic disc area in the fundus image to be detected based on the mask image of the optic cup area and the mask image of the optic disc area;
controlling a display device to display a fundus image to be detected containing boundary information.
4. The method of claim 1, wherein the image generation model is generated as follows:
acquiring a sample set, wherein samples in the sample set comprise fundus images and sample mask images corresponding to the fundus images, and the sample mask images represent difference regions of an optic cup and an optic disc in the fundus images of the corresponding samples;
acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network;
selecting samples from the sample set, and performing the following training steps: predicting the difference region between the optic cup and the optic disc in the fundus image of the selected sample by using the generation network to obtain a prediction mask image corresponding to the fundus image of the sample; inputting the prediction mask image and the selected sample mask image into the discrimination network to obtain a class discrimination result for the sample mask image and the corresponding prediction mask image; comparing the class discrimination result with a preset expected class discrimination result; determining whether training of the generation network is complete according to the comparison result; and, in response to determining that training of the generation network is complete, determining the generation network to be the image generation model.
5. The method according to claim 4, wherein the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows:
superimposing noise on the fundus image of the sample and inputting the result into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
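A one-line illustration of the noise superposition; Gaussian noise is assumed here, since the claim does not fix the noise distribution.

```python
import torch

def with_noise(fundus: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Superimpose noise on the fundus image before it is input to
    the generation network (noise type and scale are assumptions)."""
    return fundus + sigma * torch.randn_like(fundus)

# pred_mask = G(with_noise(fundus))
```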
6. A segmentation apparatus of a fundus image, comprising:
an acquisition unit configured to acquire a fundus image to be detected;
a generation unit configured to input the fundus image to be detected into an image generation model to obtain a cup and disc mask image corresponding to the fundus image to be detected, wherein the cup and disc mask image corresponding to the fundus image to be detected represents a difference region between the optic cup and the optic disc in the fundus image to be detected;
a fitting unit configured to fit a mask image of an optic cup region and a mask image of an optic disc region in the fundus image to be detected based on the cup and disc mask image corresponding to the fundus image to be detected;
wherein the fitting unit is further configured to:
according to the difference region between the optic cup and the optic disc in the fundus image to be detected represented by the cup and disc mask image corresponding to the fundus image to be detected, perform a pixel value inversion operation on the cup and disc mask image corresponding to the fundus image to be detected to obtain a connected domain of the optic cup region;
extract the inner boundary of the cup and disc mask image, namely the boundary of the optic cup region;
superimpose the extracted optic cup region on the cup and disc mask image corresponding to the fundus image to be detected, and extract the outer boundary of the cup and disc mask image, namely the boundary of the optic disc region; and
fit the extracted boundary of the optic cup region and the extracted boundary of the optic disc region to obtain the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected.
7. The apparatus according to claim 6, wherein the fitting unit is further configured to fit the mask image of the optic cup region and the mask image of the optic disc region in the fundus image to be detected as follows:
fitting the inner boundary and the outer boundary of the cup and disc mask image corresponding to the fundus image to be detected by an ellipse fitting method to obtain the mask image of the optic cup region and the mask image of the optic disc region.
8. The apparatus of claim 7, wherein the apparatus further comprises:
a determination unit configured to determine boundary information of the optic cup region and the optic disc region in the fundus image to be detected based on the mask image of the optic cup region and the mask image of the optic disc region;
a display unit configured to control a display device to display the fundus image to be detected containing the boundary information.
9. The apparatus of claim 6, wherein the image generation model is generated as follows:
acquiring a sample set, wherein samples in the sample set comprise fundus images and sample mask images corresponding to the fundus images, and the sample mask images represent difference regions between the optic cup and the optic disc in the fundus images of the corresponding samples;
acquiring an initial generative adversarial network, wherein the initial generative adversarial network comprises a generation network and a discrimination network;
selecting samples from the sample set, and performing the following training steps: predicting the difference region between the optic cup and the optic disc in the fundus image of a selected sample by using the generation network to obtain a prediction mask image corresponding to the fundus image of the sample; inputting the prediction mask image and the sample mask image of the selected sample into the discrimination network to obtain a class discrimination result for the sample mask image and the corresponding prediction mask image; comparing the class discrimination result with a preset expected class discrimination result; determining, according to the comparison result, whether training of the generation network is finished; and in response to determining that training of the generation network is finished, determining the generation network as the image generation model.
10. The apparatus according to claim 9, wherein the difference region between the optic cup and the optic disc in the fundus image of the selected sample is predicted using the generation network as follows:
superimposing noise on the fundus image of the sample and inputting the result into the generation network to predict the difference region between the optic cup and the optic disc in the fundus image of the sample.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201910590552.5A 2019-07-02 2019-07-02 Segmentation method and device for fundus image Active CN110298850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910590552.5A CN110298850B (en) 2019-07-02 2019-07-02 Segmentation method and device for fundus image


Publications (2)

Publication Number Publication Date
CN110298850A CN110298850A (en) 2019-10-01
CN110298850B true CN110298850B (en) 2022-03-15

Family

ID=68029938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910590552.5A Active CN110298850B (en) 2019-07-02 2019-07-02 Segmentation method and device for fundus image

Country Status (1)

Country Link
CN (1) CN110298850B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969617B * 2019-12-17 2024-03-15 腾讯医疗健康(深圳)有限公司 Method, device, equipment and storage medium for identifying optic cup and optic disc images
CN111311565A (en) * 2020-02-11 2020-06-19 平安科技(深圳)有限公司 Eye OCT image-based detection method and device for positioning points of optic cups and optic discs
CN112001920B (en) * 2020-10-28 2021-02-05 北京至真互联网技术有限公司 Fundus image recognition method, device and equipment
CN113450341A (en) * 2021-07-16 2021-09-28 依未科技(北京)有限公司 Image processing method and device, computer readable storage medium and electronic device


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520522A (en) * 2017-12-31 2018-09-11 南京航空航天大学 Retinal fundus images dividing method based on the full convolutional neural networks of depth
CN109325942A (en) * 2018-09-07 2019-02-12 电子科技大学 Eye fundus image Structural Techniques based on full convolutional neural networks
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method
CN109658385A (en) * 2018-11-23 2019-04-19 上海鹰瞳医疗科技有限公司 Eye fundus image judgment method and equipment
CN109684981A (en) * 2018-12-19 2019-04-26 上海鹰瞳医疗科技有限公司 Glaucoma image-recognizing method, equipment and screening system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Optic Disc and Cup Segmentation Based on Deep Convolutional Generative Adversarial Networks; YUN JIANG et al.; IEEE Access; 2019-05-30; Sections II-III and Figures 1, 4 and 6 *

Also Published As

Publication number Publication date
CN110298850A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110298850B (en) Segmentation method and device for fundus image
CN107622240B (en) Face detection method and device
CN109800732B (en) Method and device for generating cartoon head portrait generation model
CN109816589B (en) Method and apparatus for generating cartoon style conversion model
US10270896B2 (en) Intuitive computing methods and systems
EP2559030B1 (en) Intuitive computing methods and systems
CN109993150B (en) Method and device for identifying age
US9197736B2 (en) Intuitive computing methods and systems
CN108197618B (en) Method and device for generating human face detection model
US11715223B2 (en) Active image depth prediction
CN110070076B (en) Method and device for selecting training samples
US20220207875A1 (en) Machine learning-based selection of a representative video frame within a messaging application
CN108038473B (en) Method and apparatus for outputting information
CN109241930B (en) Method and apparatus for processing eyebrow image
CN112037305B (en) Method, device and storage medium for reconstructing tree-like organization in image
CN110046571B (en) Method and device for identifying age
CN111968030B (en) Information generation method, apparatus, electronic device and computer readable medium
CN110942033B (en) Method, device, electronic equipment and computer medium for pushing information
CN112070022A (en) Face image recognition method and device, electronic equipment and computer readable medium
CN111797931A (en) Image processing method, image processing network training method, device and equipment
CN116309585B (en) Method and system for identifying breast ultrasound image target area based on multitask learning
CN111311616B (en) Method and apparatus for segmenting an image
WO2022146707A1 (en) Selecting representative video frame by machine learning
CN111523412A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant