CN110288610A - Retinal OCT hard exudate segmentation method - Google Patents
Retinal OCT hard exudate segmentation method
- Publication number
- CN110288610A CN110288610A CN201910487189.4A CN201910487189A CN110288610A CN 110288610 A CN110288610 A CN 110288610A CN 201910487189 A CN201910487189 A CN 201910487189A CN 110288610 A CN110288610 A CN 110288610A
- Authority
- CN
- China
- Prior art keywords
- image
- retina oct
- hard exudate
- segmentation method
- net network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a retinal OCT hard exudate segmentation method, comprising: inputting a retinal OCT image into a trained U-net network model and obtaining a corresponding binary image via forward propagation; fusing the binary image with the retinal OCT image by proportional weighting to obtain a feature image; and selecting several feature points in the feature image as seed points, computing the maximum fuzzy connectedness of the seed points using the maximum fuzzy connectedness algorithm, and obtaining an image in which the hard exudate region is segmented. The present invention has the advantages of good robustness and high segmentation accuracy.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a retinal OCT hard exudate segmentation method.
Background technique
Existing retinal OCT hard exudate segmentation approaches rely mainly on the traditional region-growing segmentation algorithm. Its shortcomings are a slow growth speed and a low Dice coefficient of the segmentation result; it is difficult to achieve a good segmentation effect, and the algorithm's robustness is poor.
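The Dice coefficient mentioned above measures the overlap between a predicted segmentation mask and the ground truth (Dice = 2·|A∩B| / (|A|+|B|)). As a minimal illustration, not part of the patent, it can be computed for flat binary masks as:

```python
def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 lists.

    Dice = 2*|A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: 3 of 4 predicted pixels overlap the 4-pixel ground truth.
pred  = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 1, 0]
print(dice_coefficient(pred, truth))  # → 0.75
```

In practice the masks would be flattened images; the toy lists here are just to show the arithmetic.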
Summary of the invention
It is an object of the invention to overcome the deficiencies of the prior art by providing a retinal OCT hard exudate segmentation method with good robustness and high segmentation accuracy.
To achieve the above object, the present invention is realized by the following technical solution:
A retinal OCT hard exudate segmentation method, comprising the following steps:
inputting a retinal OCT image into a trained U-net network model and obtaining a corresponding binary image via forward propagation;
fusing the binary image with the retinal OCT image by proportional weighting to obtain a feature image;
selecting several feature points in the feature image as seed points, computing the maximum fuzzy connectedness of the seed points using the maximum fuzzy connectedness algorithm, and obtaining an image in which the hard exudate region is segmented.
Further, the method of training the U-net network model comprises:
taking retinal OCT images with manually annotated hard exudate regions as the training sample set;
selecting retinal OCT images from the training sample set as input to the U-net network model, and adjusting the parameter information of the U-net network model until the hard exudate region image can be preliminarily extracted.
Further, the method also comprises: performing image size normalization before the retinal OCT image is input to the U-net network model.
Further, the weights selected for proportional weighted fusion are (0.5, 0.5).
Further, the method of segmenting the hard exudate region image using the maximum fuzzy connectedness algorithm comprises:
spreading from the selected seed points to the surrounding pixels and computing the maximum connectedness of each surrounding pixel to the seed point; if the maximum connectedness is greater than a preset fuzzy connectedness threshold, the pixel is assigned to the hard exudate region; otherwise, the pixel is assigned to the background region.
Further, the fuzzy connectedness threshold is 0.23.
Compared with the prior art, the beneficial effects of the present invention are: the retinal OCT hard exudate segmentation algorithm, which combines U-net with fuzzy connectedness, can segment the exudate region more accurately, achieves an ideal Dice coefficient, and is more robust.
Detailed description of the invention
Fig. 1 is the flow chart of the retinal OCT hard exudate segmentation method provided according to an embodiment of the present invention;
Fig. 2 is an example retinal OCT image with hard exudate lesions;
Fig. 3 is an example of a normal retinal OCT image;
Fig. 4 is the original retinal OCT image used by the embodiment of the present invention;
Fig. 5 is the gold standard for Fig. 4;
Fig. 6 is the output image obtained after Fig. 4 is segmented by the trained U-net network model;
Fig. 7 is the image after weighted fusion of Fig. 4 and Fig. 6;
Fig. 8 shows the seed points chosen from Fig. 7 (circled in black);
Fig. 9 is the hard exudate region image obtained by applying the segmentation method provided by the embodiment of the present invention to Fig. 4.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings. The following embodiments are only used to clearly illustrate the technical solution of the present invention and are not intended to limit its protection scope.
As shown in Fig. 1, the retinal OCT hard exudate segmentation method provided by an embodiment of the present invention includes the following steps:
S01: Image preprocessing: resize the original image to 224*224 pixels. Fig. 4 shows the retinal OCT original image after resizing; it contains many lesion regions with varied exudate morphology, which is difficult to segment with traditional methods.
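In practice the resize would be done with an image library (e.g. `cv2.resize`); as a dependency-free sketch of the idea, a nearest-neighbour resize of a grayscale B-scan to the 224*224 network input size might look like:

```python
def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbour resize of a 2-D grayscale image (list of rows)."""
    in_h, in_w = len(img), len(img[0])
    return [
        # Map each output pixel back to its nearest source pixel.
        [img[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
        for i in range(out_h)
    ]

# Resize a small synthetic scan (496x512, a common OCT B-scan size
# used here only as an assumed example) to the 224x224 input size.
scan = [[(r + c) % 256 for c in range(512)] for r in range(496)]
resized = resize_nearest(scan)
print(len(resized), len(resized[0]))  # → 224 224
```

A production pipeline would typically use bilinear or bicubic interpolation; nearest-neighbour is chosen here only to keep the sketch self-contained.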
S02: Input the retinal OCT image into the trained U-net network model and obtain the corresponding binary image via forward propagation.
The deep learning model used in the present invention is a 5-level U-net network, with dropout added between layers to prevent overfitting, which would otherwise degrade the generalization performance of the network.
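A 5-level U-net on a 224*224 input halves the spatial resolution four times (224→112→56→28→14) in the encoder and mirrors this in the decoder via upsampling and skip connections. The patent specifies only the depth and the use of dropout; the channel counts below (64, 128, ...) follow the classic U-Net paper and are an assumption. This sketch merely traces the feature-map shapes:

```python
def unet_shape_trace(size=224, base_channels=64, depth=5):
    """Trace feature-map (spatial size, channels) through a U-Net encoder.

    Each level below the first halves the spatial size (2x2 max-pool)
    and doubles the channels; the decoder mirrors the encoder levels
    in reverse, upsampling and concatenating skip connections.
    """
    encoder = []
    s, c = size, base_channels
    for level in range(depth):
        encoder.append((s, c))
        if level < depth - 1:  # pooling between encoder levels
            s, c = s // 2, c * 2
    # Decoder revisits the encoder resolutions in reverse order.
    decoder = list(reversed(encoder[:-1]))
    return encoder, decoder

enc, dec = unet_shape_trace()
print(enc)  # → [(224, 64), (112, 128), (56, 256), (28, 512), (14, 1024)]
print(dec)  # → [(28, 512), (56, 256), (112, 128), (224, 64)]
```

The trace makes visible why the 224*224 input size from step S01 is convenient: it stays evenly divisible through all four poolings.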
The training method of the U-net network model includes the following steps:
S02-1: Take 100 retinal OCT images with manually annotated hard exudate regions as the training sample set.
Fig. 2 is an example retinal OCT image with hard exudate lesions, and Fig. 3 is a retinal OCT image without hard exudate lesions. In Fig. 2 it can be clearly seen that the pixel values of the boxed local region differ greatly from those of the surrounding region, appearing as a "white block".
S02-2: Select retinal OCT images from the training sample set as input to the U-net network model, and adjust the parameter information of the U-net network model until the hard exudate region image can be preliminarily extracted, so that the corresponding binary image is obtained by forward propagation. As shown in Fig. 6, the result already captures the rough outline of the exudate region, but its details still fall short of the gold standard shown in Fig. 5, so further algorithmic processing is needed.
S03: After the processing of step S02, the retinal OCT original image still contains residual information that the U-net network model failed to learn, yet this information is very helpful for obtaining a good final result. The method of the present invention therefore fuses the binary image obtained in S02 with the retinal OCT original image by proportional weighting to obtain a feature image.
In repeated experiments, the weights (0.5, 0.5) were found to give the best segmentation effect. Fig. 7 shows the fusion of the original image and the network output; the exudate region of this image is clearly visible, laying a good foundation for the next processing step.
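The weighted fusion step can be sketched as a simple pixel-wise blend. The (0.5, 0.5) weights are the ones reported above; the list-of-lists image representation is just for illustration:

```python
def weighted_fuse(binary_img, oct_img, w=(0.5, 0.5)):
    """Pixel-wise weighted fusion of the U-net binary mask and the OCT image.

    Both inputs are 2-D lists of equal shape with values in [0, 255];
    (0.5, 0.5) are the weights the patent reports work best.
    """
    wb, wo = w
    return [
        [wb * b + wo * o for b, o in zip(brow, orow)]
        for brow, orow in zip(binary_img, oct_img)
    ]

# A pixel inside the predicted exudate region (mask=255) is brightened,
# a background pixel (mask=0) is attenuated.
mask = [[255, 0]]
scan = [[200, 200]]
print(weighted_fuse(mask, scan))  # → [[227.5, 100.0]]
```

This is the same operation `cv2.addWeighted` performs on arrays; the effect is that exudate pixels stand out while the original intensity detail the network missed is preserved.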
S04: Select several feature points in the feature image as seed points (i.e., points of interest), compute the maximum fuzzy connectedness of the seed points using the maximum fuzzy connectedness algorithm, and segment the hard exudate region image.
The present invention constructs a neighbourhood from the result obtained in S03, taking 1 to 4 points of interest (the centre points of the larger exudate regions in the image; see the regions circled in black in Fig. 8), spreads from them to the surrounding pixels, and computes the maximum connectedness of each remaining pixel to these points. If the maximum connectedness is greater than the optimal threshold 0.23, the pixel is assigned to the exudate region; if it is less than this threshold, the pixel is assigned to the background region. The image in which the hard exudate region is segmented is finally obtained, as shown in Fig. 9.
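The region-growing step described above can be sketched as follows. The 0.23 threshold comes from the patent; the intensity-based affinity function and the Dijkstra-style max-min propagation are assumptions chosen to illustrate how fuzzy connectedness (maximum over paths of the minimum affinity along each path) is typically computed:

```python
import heapq

def fuzzy_connectedness_segment(img, seeds, threshold=0.23):
    """Segment via maximum fuzzy connectedness from seed points.

    Affinity between 4-neighbours is 1 - |intensity difference| / 255;
    a pixel's connectedness to the seeds is the maximum over all paths
    of the minimum affinity along the path (computed Dijkstra-style on
    a max-min objective). Pixels whose connectedness exceeds
    `threshold` join the region; the rest become background.
    """
    h, w = len(img), len(img[0])
    conn = [[0.0] * w for _ in range(h)]
    heap = []
    for (r, c) in seeds:
        conn[r][c] = 1.0
        heapq.heappush(heap, (-1.0, r, c))  # max-heap via negation
    while heap:
        neg, r, c = heapq.heappop(heap)
        cur = -neg
        if cur < conn[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                affinity = 1.0 - abs(img[r][c] - img[nr][nc]) / 255.0
                cand = min(cur, affinity)  # path strength = weakest link
                if cand > conn[nr][nc]:
                    conn[nr][nc] = cand
                    heapq.heappush(heap, (-cand, nr, nc))
    return [[1 if conn[r][c] > threshold else 0 for c in range(w)]
            for r in range(h)]

# Bright "exudate" patch (240-250) on a dark background (10), one seed
# placed inside the patch, as in Fig. 8.
img = [[10,  10,  10, 10],
       [10, 240, 250, 10],
       [10, 245, 240, 10],
       [10,  10,  10, 10]]
mask = fuzzy_connectedness_segment(img, seeds=[(1, 1)])
print(mask)  # → [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
```

High-contrast boundaries yield low affinities, so the weakest link on any path out of the patch falls below 0.23 and the background is excluded, while the homogeneous bright patch stays connected to the seed.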
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and variations without departing from the technical principles of the invention, and these improvements and variations should also be regarded as falling within the protection scope of the present invention.
Claims (7)
1. A retinal OCT hard exudate segmentation method, characterized in that the method comprises the following steps:
inputting a retinal OCT image into a trained U-net network model and obtaining a corresponding binary image via forward propagation;
fusing the binary image with the retinal OCT image by proportional weighting to obtain a feature image;
selecting several feature points in the feature image as seed points, computing the maximum fuzzy connectedness of the seed points using the maximum fuzzy connectedness algorithm, and obtaining an image in which the hard exudate region is segmented.
2. The retinal OCT hard exudate segmentation method according to claim 1, characterized in that the method of training the U-net network model comprises:
taking retinal OCT images with manually annotated hard exudate regions as the training sample set;
selecting retinal OCT images from the training sample set as input to the U-net network model, and adjusting the parameter information of the U-net network model until the hard exudate region image can be preliminarily extracted.
3. The retinal OCT hard exudate segmentation method according to claim 1 or 2, characterized in that the method further comprises: performing image size normalization before the retinal OCT image is input to the U-net network model.
4. The retinal OCT hard exudate segmentation method according to claim 1, characterized in that the weights selected for proportional weighted fusion are (0.5, 0.5).
5. The retinal OCT hard exudate segmentation method according to claim 1, characterized in that the method of segmenting the hard exudate region image using the maximum fuzzy connectedness algorithm comprises:
spreading from the selected seed points to the surrounding pixels and computing the maximum connectedness of each surrounding pixel to the seed point; if the maximum connectedness is greater than a preset fuzzy connectedness threshold, the pixel is assigned to the hard exudate region; otherwise, the pixel is assigned to the background region.
6. The retinal OCT hard exudate segmentation method according to claim 1, characterized in that the fuzzy connectedness threshold is 0.23.
7. The retinal OCT hard exudate segmentation method according to claim 1, characterized in that the U-net network model uses a 5-level U-net network model, with dropout added between layers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910487189.4A CN110288610A (en) | 2019-06-05 | 2019-06-05 | Retinal OCT hard exudate segmentation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110288610A true CN110288610A (en) | 2019-09-27 |
Family
ID=68003413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910487189.4A Pending CN110288610A (en) | 2019-06-05 | 2019-06-05 | Retinal OCT hard exudate segmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288610A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104331881A (en) * | 2014-10-23 | 2015-02-04 | 中国科学院苏州生物医学工程技术研究所 | A blood vessel lumen segmentation method based on intravascular ultrasound images |
CN105303546A (en) * | 2014-06-20 | 2016-02-03 | 江南大学 | Affinity propagation clustering image segmentation method based on fuzzy connectedness |
CN106408558A (en) * | 2016-09-05 | 2017-02-15 | 南京理工大学 | Analysis method of hard exudates and high-reflection signals in diabetic retinopathy image |
CN108564564A (en) * | 2018-03-09 | 2018-09-21 | 华南理工大学 | Medical image segmentation method based on improved fuzzy connectedness and multiple seed points |
WO2019200740A1 (en) * | 2018-04-20 | 2019-10-24 | 平安科技(深圳)有限公司 | Pulmonary nodule detection method and apparatus, computer device, and storage medium |
Non-Patent Citations (4)
Title |
---|
你听的到、: "Semantic segmentation — some understanding of FCN, U-Net and SegNet", HTTPS://BLOG.CSDN.NET/QQ_34606546/ARTICLE/DETAILS/89434487 * |
Duan Yanhua et al.: "Hard exudate detection algorithm in fundus images", Beijing Biomedical Engineering * |
Gao Weiwei et al.: "Comparison of automatic detection methods for hard exudates in fundus images", Journal of Nanjing University of Aeronautics and Astronautics * |
Huang Jinli: "Research on deep-learning-based optic cup and disc detection for glaucoma", China Master's Theses Full-text Database, Medicine and Health Sciences * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2020-05-22 | TA01 | Transfer of patent application right | Applicant after: Guangzhou bigway Medical Technology Co.,Ltd., No. 411, 412, 413, Building F1, No. 39 Ruihe Road, Huangpu District, Guangzhou City, Guangdong Province, 510000. Applicant before: SUZHOU BIGVISION MEDICAL TECHNOLOGY Co.,Ltd., No. 209 Zhuyuan Road, High-tech Zone, Suzhou City, Jiangsu Province, 215011. |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-09-27 |