CN116128825A - Meibomian gland morphology analysis method based on deep learning - Google Patents

Meibomian gland morphology analysis method based on deep learning

Info

Publication number
CN116128825A
Authority
CN
China
Prior art keywords
meibomian
morphology analysis
model
gland
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211720980.3A
Other languages
Chinese (zh)
Inventor
李钰杰
刘亮为
沈志华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Upyun Technology Co ltd
Original Assignee
Hangzhou Upyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Upyun Technology Co ltd
Priority to CN202211720980.3A
Publication of CN116128825A
Pending legal-status Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a meibomian gland morphology analysis method based on deep learning, which comprises the following steps: an ocular surface picture is enhanced by a picture quality enhancement module to obtain an ocular surface image; a meibomian range and a gland range are marked on the ocular surface image to obtain meibomian coordinates and gland coordinates; data enhancement is performed on the meibomian coordinates and gland coordinates combined with the ocular surface image, and the enhanced meibomian image, enhanced gland image and enhanced ocular surface image are used for training to obtain a preliminary meibomian gland morphology analysis model; a ground-truth picture GT from the data is input into the preliminary meibomian gland morphology analysis model to obtain a model prediction picture P; and the intersection-over-union of the gland regions in the ground-truth picture GT and the model prediction picture P is calculated, the preliminary model is optimized through the intersection-over-union to obtain a meibomian gland morphology analysis model, and meibomian gland morphology is analyzed with the meibomian gland morphology analysis model.

Description

Meibomian gland morphology analysis method based on deep learning
Technical Field
The invention relates to the field of meibomian gland morphology analysis, in particular to a meibomian gland morphology analysis method based on deep learning.
Background
Dry eye is a common disease that occurs when the tears do not provide enough lubrication for the eyes. According to statistics, the detection rate of dry eye in China is about 6.1%-52.4%, and the incidence of dry eye in China remains high as the related risk factors increase. The main cause of dry eye is meibomian gland dysfunction (MGD); in clinical diagnosis, analyzing meibomian gland morphology to diagnose and grade the extent of MGD is an important means of diagnosing dry eye.
The current clinical diagnosis of MGD mainly relies on professional ocular surface photographic analysis instruments: high-definition pictures are taken with the instrument, traditional parameters such as white balance and saturation are adjusted, the gland parts are marked manually, and various indexes are calculated, such as the deformation coefficient (DI) caused by uneven dilation and the gland torsion index (TI). This approach requires manual marking and calculation of many indexes, the workload is high, it depends on the subjective experience of the annotator, and the errors are large; moreover, patients feel pain when the eyelid is photographed, so the images are easily blurred, which affects diagnosis. In addition, indexes between individual glands cannot be extracted and analyzed, and indexes related to the development intensity of the gland picture cannot be obtained accurately.
Disclosure of Invention
The invention provides a meibomian gland morphology analysis method based on deep learning.
A meibomian gland morphology analysis method based on deep learning comprises the following steps:
1) Capturing an ocular surface picture with an ocular surface imaging instrument, and enhancing the ocular surface picture with a picture quality enhancement module to obtain an ocular surface image;
2) Marking a meibomian range and a gland range on the ocular surface image to obtain meibomian coordinates and gland coordinates;
3) Performing data enhancement on the meibomian coordinates and the gland coordinates combined with the ocular surface image obtained in step 1) to obtain an enhanced meibomian image, an enhanced gland image and an enhanced ocular surface image, and training on the enhanced meibomian image, the enhanced gland image and the enhanced ocular surface image to obtain a preliminary meibomian gland morphology analysis model;
4) Inputting a ground-truth picture GT from the data into the preliminary meibomian gland morphology analysis model to obtain a model prediction picture P;
5) Calculating the intersection-over-union (IoU) of the relative positions of the gland regions in the ground-truth picture GT and the model prediction picture P, optimizing the preliminary meibomian gland morphology analysis model through the IoU to obtain the meibomian gland morphology analysis model, and analyzing meibomian gland morphology with the meibomian gland morphology analysis model.
In step 1), the picture quality enhancement module enhances the ocular surface picture, and specifically includes:
a picture downsampling module, used for downsampling the ocular surface picture;
a neural network convolution layer, used for performing convolution processing on the downsampled ocular surface picture;
a 1D LUT color conversion module, used for performing color extraction on the ocular surface data obtained after the convolution processing;
and a VGG network structure, used for performing color mapping on the color data obtained after the color extraction to obtain the ocular surface image.
A LUT itself performs no computation; it simply lists a series of input-output pairs. For the R channel there is R_out = LUT(R_in), and the G and B channels are handled in the same way, so a 1D LUT can be used to change the gamma value, gray level and contrast of an image and to redefine the white point and black point. In a deep learning neural network, the 1D LUT is a discretely sampled version of a complete color transform function, embedded in the network structure as a fully connected operator layer. When the quality enhancement module is trained, the sampled output values T^c[i] (one value per channel c ∈ {R, G, B}) are stored in a three-channel one-dimensional matrix and can be queried by the input color coordinate index i ∈ {0, 1, ..., N_s - 1}, where N_s is the number of sampling coordinates along each of the three channel dimensions. The complete 1D LUT color transform function therefore defines 3 × N_s sampling points in total. During inference, the sampled ocular surface data passes through the fully connected layer: each input pixel finds its nearest sampling points according to its color, and its converted output is computed by interpolation between them. The predicted output values T lie in [0, 1]^(3×N_s); picture quality is enhanced by learning several basis 1D LUTs and fusing them with image-dependent weights predicted from the downsampled input image by the CNN model.
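As a minimal, illustrative sketch of how a learnable per-channel 1D LUT can be applied by sample lookup with interpolation and fused from several basis LUTs using image-adaptive weights (the class name Fused1DLUT, the sampling size n_samples and the basis count n_basis are assumptions, not values from the patent):

import torch
import torch.nn as nn

class Fused1DLUT(nn.Module):
    """Bank of learnable per-channel 1D LUTs, fused by image-adaptive weights."""
    def __init__(self, n_samples: int = 33, n_basis: int = 3):
        super().__init__()
        # Each basis LUT stores 3 channels x n_samples output values, identity-initialised.
        init = torch.linspace(0.0, 1.0, n_samples).repeat(n_basis, 3, 1)
        self.luts = nn.Parameter(init)          # (K, 3, N_s)
        self.n_samples = n_samples

    def forward(self, img: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # img: (B, 3, H, W) in [0, 1]; weights: (B, K), predicted by a small CNN elsewhere.
        lut = torch.einsum("bk,kcn->bcn", weights, self.luts)   # fused LUT per image
        x = img.clamp(0.0, 1.0) * (self.n_samples - 1)          # continuous sample index
        lo = x.floor().long().clamp(max=self.n_samples - 2)     # left neighbouring sample
        frac = x - lo.float()
        out_lo = torch.gather(lut, 2, lo.flatten(2)).view_as(img)
        out_hi = torch.gather(lut, 2, (lo + 1).flatten(2)).view_as(img)
        return (1 - frac) * out_lo + frac * out_hi              # linear interpolation

# Example usage: in practice the weights come from a CNN applied to the downsampled input.
lut = Fused1DLUT()
img = torch.rand(1, 3, 256, 256)
w = torch.softmax(torch.randn(1, 3), dim=1)
enhanced = lut(img, w)          # (1, 3, 256, 256)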
In step 5), the intersection-over-union of the gland regions in the ground-truth picture GT and the model prediction picture P is calculated as follows:

I = (GT ∩ P) / (GT ∪ P)

wherein I is the intersection-over-union.
In step 5), I ranges over [0, 1].
In step 5), optimizing the preliminary meibomian gland morphology analysis model through the intersection-over-union to obtain the meibomian gland morphology analysis model specifically includes:
optimizing the preliminary meibomian gland morphology analysis model through the intersection-over-union, wherein the trained preliminary model with the largest intersection-over-union is taken as the meibomian gland morphology analysis model.
The gland position intersection area of the prediction picture P and the ground-truth picture GT is computed as GT ∩ P = Σ_{i,j} GT(i,j) ∧ P(i,j), and the union area as GT ∪ P = Σ_{i,j} GT(i,j) ∨ P(i,j). Calculating the IoU over the relative areas of the gland pixel positions replaces the original classification metric, so that during training iterations the model with the best gland segmentation performance is preferentially selected rather than the model with the highest classification accuracy. Compared with the prior art, the invention has the following advantages:
the meibomian gland body shape analysis capability provided by the invention completely depends on a deep learning algorithm, and the quality enhancement module can optimize the quality of the eye surface picture in a 1D color space, so that the picture quality is more in line with the viewing of human eyes, the gland characteristics are more obvious, manual labeling of personnel is not needed after model training is completed, and the meibomian gland body shape analysis capability is accurate and objective enough, and can accurately describe the meibomian range and the meibomian gland range.
The invention can calculate the index between single glands, can calculate regional index, and can calculate the gland development intensity related index according to the accurate numerical value of the color space of the photographed original image.
The invention can iterate the version, the more the data is accumulated, the stronger the model capability is, and the better the effect is.
Drawings
Fig. 1 is a schematic diagram of the training part of the present invention.
Fig. 2 is a schematic diagram of the inference part of the present invention.
Fig. 3 is a schematic diagram of the quality enhancement module of the present invention.
Detailed Description
The invention aims to analyze meibomian gland morphology with a deep learning method that can be continuously and iteratively optimized as the amount of data grows, so the method is divided into two parts: training and inference.
As shown in Fig. 1, the training part mainly provides the training of the deep learning CNN models before inference and includes five modules: data enhancement, picture quality enhancement, deep learning meibomian segmentation training, deep learning gland segmentation training, and IoU metric calculation. After the ocular surface image captured by the ophthalmic equipment is obtained, an ocular surface image with higher subjective quality and better gland recognizability is produced by the picture quality enhancement module, and the data are then fed into the data enhancement module to obtain training data amounting to at least ten times the source data (see the augmentation sketch below). Thereafter the pipeline splits into two branches: 1. the data are input into the meibomian segmentation training module to obtain the optimal CNN model, the model effect is judged by calculating the IoU, and training iterations can continue until an ideal result is obtained; 2. the data are input into the gland segmentation training module to obtain the optimal CNN model, the IoU is calculated to judge the model effect, and iteration can likewise continue until an ideal result is reached.
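The patent does not specify which transforms the data enhancement module uses; the sketch below assumes common geometric and photometric augmentations applied jointly to the ocular surface image and its meibomian and gland masks, expanding each sample roughly tenfold. The function name, the flip/brightness choices and the default of ten copies are assumptions made for illustration:

import numpy as np

def augment(image, eyelid_mask, gland_mask, copies=10, rng=None):
    """Generate `copies` augmented (image, eyelid_mask, gland_mask) triples."""
    if rng is None:
        rng = np.random.default_rng()
    out = []
    for _ in range(copies):
        img, m1, m2 = image.copy(), eyelid_mask.copy(), gland_mask.copy()
        if rng.random() < 0.5:                      # horizontal flip (image and both masks)
            img, m1, m2 = img[:, ::-1], m1[:, ::-1], m2[:, ::-1]
        if rng.random() < 0.5:                      # brightness jitter (image only)
            img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255).astype(image.dtype)
        out.append((img, m1, m2))
    return out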
As shown in Fig. 2, the inference part mainly comprises model inference and the index calculations performed after inference, including four modules: picture quality enhancement, meibomian segmentation inference, gland segmentation inference, and index calculation and output. The database ocular surface images are first preprocessed by the quality enhancement module to obtain high-quality ocular surface images; these are input into the meibomian segmentation inference module to obtain meibomian images, which are then input into the gland segmentation inference module to obtain gland segmentation images; finally the meibomian images and gland images are passed to the index calculation module, which computes the various indexes and outputs the results.
The training process inputs not less than ten pictures at a time.
The quality enhancement module is the subjective quality enhancement structure MGSE (Meibomian Gland Subjective Enhancement) proposed by the invention for meibomian glands, as shown in Fig. 3. It takes the image to be enhanced as input and a high-quality image as output, uses the manually adjusted high-quality image as the target image to compute the MSE loss against the output image, and completes an end-to-end supervised learning process to achieve subjective quality enhancement of ocular surface pictures. After a picture is input, the CNN structure downsamples it to a fixed image size and extracts features to provide a global understanding of the image; a fully connected layer based on the 1D LUT then fuses the color coefficients T{x_R, x_G, x_B}; finally, the VGG-8 structure performs color mapping and outputs the processed high-quality image.
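A minimal sketch of the end-to-end supervised training described above, assuming a generic MGSEModel-style network (downsampling CNN, 1D-LUT fusion layer and VGG-style mapping head as in Fig. 3) and a dataset yielding (raw picture, manually adjusted high-quality picture) pairs; the function name, batch size, learning rate and epoch count are placeholders, not settings from the patent:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_mgse(model: nn.Module, dataset, epochs: int = 50, lr: float = 1e-4):
    """Supervised quality enhancement: minimise MSE between Pred and the target image."""
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for raw, gt in loader:                 # raw picture and manually adjusted target
            pred = model(raw)                  # enhanced prediction Pred
            loss = mse(pred, gt)               # MSE loss against the high-quality target
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model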
Deep learning meibomian segmentation is a Mask R-CNN convolutional neural network module that uses ResNet50 as the classification network; instance segmentation separates the mask from the background, i.e., the meibomian portion from the ocular surface portion other than the meibomian region. The loss function is designed as L = L_cls (meibomian vs. other ocular surface) + L_box (position of the bounding box of the meibomian part) + L_mask (two-dimensional position of the meibomian part), and the segmentation task is completed by reducing the loss value through iterative learning.
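By way of illustration only (the patent provides no code), a Mask R-CNN with a ResNet50 backbone can be instantiated from torchvision; in training mode it returns the classification, box-regression and mask loss terms, whose sum corresponds to L = L_cls + L_box + L_mask (torchvision additionally returns RPN losses). The image size, the example box and the two-class setup (background vs. meibomian region) are assumptions for this sketch:

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# num_classes = 2: background + meibomian region (an assumption for this sketch).
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.train()

images = [torch.rand(3, 512, 512)]                          # one ocular surface image
mask = torch.zeros(1, 512, 512, dtype=torch.uint8)
mask[:, 100:300, 50:450] = 1                                # binary eyelid mask inside the box
targets = [{
    "boxes": torch.tensor([[50.0, 100.0, 450.0, 300.0]]),   # eyelid bounding box (x1, y1, x2, y2)
    "labels": torch.tensor([1]),                            # 1 = meibomian region
    "masks": mask,
}]
loss_dict = model(images, targets)    # includes loss_classifier, loss_box_reg, loss_mask
loss = sum(loss_dict.values())        # combined loss minimised by iterative learning
loss.backward()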
Deep learning gland segmentation is a U-Net convolutional neural network module that uses VGG16 as the backbone, exploits the characteristics of the VGG network to extract features in depth, and classifies gland and non-gland pixels following the encoder-decoder idea.
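A compact, illustrative sketch of a U-Net-style segmenter with a VGG16 encoder; the block split points, decoder widths and the two-class head (gland vs. non-gland) are choices made for this sketch, not details given in the patent:

import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGUNet(nn.Module):
    """Encoder-decoder gland segmenter: VGG16 feature blocks plus an upsampling decoder with skips."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        feats = vgg16(weights=None).features
        # Split VGG16 into blocks (64 / 128 / 256 / 512 channels); pools sit at the start of blocks 2-4.
        self.enc1, self.enc2 = feats[:4], feats[4:9]
        self.enc3, self.enc4 = feats[9:16], feats[16:23]
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec3 = nn.Conv2d(512 + 256, 256, 3, padding=1)
        self.dec2 = nn.Conv2d(256 + 128, 128, 3, padding=1)
        self.dec1 = nn.Conv2d(128 + 64, 64, 3, padding=1)
        self.head = nn.Conv2d(64, num_classes, 1)   # gland vs. non-gland logits
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        e1 = self.enc1(x)        # (64,  H,   W)
        e2 = self.enc2(e1)       # (128, H/2, W/2)
        e3 = self.enc3(e2)       # (256, H/4, W/4)
        e4 = self.enc4(e3)       # (512, H/8, W/8)
        d3 = self.act(self.dec3(torch.cat([self.up(e4), e3], dim=1)))
        d2 = self.act(self.dec2(torch.cat([self.up(d3), e2], dim=1)))
        d1 = self.act(self.dec1(torch.cat([self.up(d2), e1], dim=1)))
        return self.head(d1)     # (num_classes, H, W) per image

logits = VGGUNet()(torch.rand(1, 3, 256, 256))   # (1, 2, 256, 256)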
The IoU module calculates the intersection-over-union of the relative areas of the gland positions in the ground-truth picture GT and the model prediction picture P (ranging over [0, 1]), namely:

I = (GT ∩ P) / (GT ∪ P)

The larger the value, the better the model prediction.
The model only needs iterative upgrading: once training is finished it fits the ocular surface picture data, it can be retrained for iterative upgrades, and the more data accumulated, the stronger the model capability.
The index calculation module can calculate indexes for a single gland and can output, in a user-defined way, all indexes related to a single gland, such as total variation, individual gland formation coefficients, and so on.
The index calculation module also outputs a complete visualization result and can synthesize a result picture.
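The patent does not give formulas for the single-gland indexes; purely as an illustration, the sketch below derives a few per-gland geometric quantities (area, length and a crude tortuosity proxy) from a binary gland mask with connected-component analysis. The metric definitions are assumptions, not the patent's:

import numpy as np
from skimage.measure import label, regionprops

def per_gland_metrics(gland_mask: np.ndarray):
    """Toy per-gland indexes from a binary gland mask (illustrative only)."""
    metrics = []
    for region in regionprops(label(gland_mask.astype(np.uint8))):
        minr, minc, maxr, maxc = region.bbox
        height = maxr - minr                        # vertical extent of the gland
        # Crude tortuosity proxy: major axis length relative to the bounding-box height.
        tortuosity = region.major_axis_length / height if height > 0 else 0.0
        metrics.append({
            "area": int(region.area),               # pixel area of the gland
            "length": float(region.major_axis_length),
            "tortuosity": float(tortuosity),
        })
    return metrics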
The inference process is a pipeline: no manual labeling is required, only input and output.
Example of implementing picture quality enhancement:
Referring to Fig. 3, a certain number of high-quality ocular surface pictures (with clearly visible glands) are manually adjusted and selected in advance as GT and fed into the end-to-end input-output model; through iteration the MSE approaches its optimal value so that the color distribution of the predicted picture Pred approaches that of GT, and the model is then fixed and stored. During inference, pictures are input directly and the enhanced Pred is obtained.

Claims (5)

1. A meibomian gland morphology analysis method based on deep learning, characterized by comprising the following steps:
1) capturing an ocular surface picture with an ocular surface imaging instrument, and enhancing the ocular surface picture with a picture quality enhancement module to obtain an ocular surface image;
2) marking a meibomian range and a gland range on the ocular surface image to obtain meibomian coordinates and gland coordinates;
3) performing data enhancement on the meibomian coordinates and the gland coordinates combined with the ocular surface image obtained in step 1) to obtain an enhanced meibomian image, an enhanced gland image and an enhanced ocular surface image, and training on the enhanced meibomian image, the enhanced gland image and the enhanced ocular surface image to obtain a preliminary meibomian gland morphology analysis model;
4) inputting a ground-truth picture GT from the data into the preliminary meibomian gland morphology analysis model to obtain a model prediction picture P;
5) calculating the intersection-over-union of the relative positions of the gland regions in the ground-truth picture GT and the model prediction picture P, optimizing the preliminary meibomian gland morphology analysis model through the intersection-over-union to obtain a meibomian gland morphology analysis model, and analyzing meibomian gland morphology through the meibomian gland morphology analysis model.
2. The deep learning-based meibomian gland morphology analysis method of claim 1, wherein in step 1), the picture quality enhancement module enhances the ocular surface picture, and specifically comprises:
a picture downsampling module, used for downsampling the ocular surface picture;
a neural network convolution layer, used for performing convolution processing on the downsampled ocular surface picture;
a 1D LUT color conversion module, used for performing color extraction on the ocular surface data obtained after the convolution processing;
and a VGG network structure, used for performing color mapping on the color data obtained after the color extraction to obtain the ocular surface image.
3. The deep learning-based meibomian gland morphology analysis method of claim 1, wherein in step 5), the intersection-over-union of the relative positions of the gland regions in the ground-truth picture GT and the model prediction picture P is calculated as follows:

I = (GT ∩ P) / (GT ∪ P)

wherein I is the intersection-over-union.
4. The deep learning-based meibomian gland morphology analysis method of claim 3, wherein in step 5), I ranges over [0, 1].
5. The deep learning-based meibomian gland morphology analysis method of claim 1, wherein in step 5), optimizing the preliminary meibomian gland morphology analysis model through the intersection-over-union to obtain the meibomian gland morphology analysis model specifically comprises:
optimizing the preliminary meibomian gland morphology analysis model through the intersection-over-union, wherein the trained preliminary model with the largest intersection-over-union is taken as the meibomian gland morphology analysis model.
CN202211720980.3A 2022-12-30 2022-12-30 Meibomian gland morphology analysis method based on deep learning Pending CN116128825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211720980.3A CN116128825A (en) 2022-12-30 2022-12-30 Meibomian gland morphology analysis method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211720980.3A CN116128825A (en) 2022-12-30 2022-12-30 Meibomian gland morphology analysis method based on deep learning

Publications (1)

Publication Number Publication Date
CN116128825A true CN116128825A (en) 2023-05-16

Family

ID=86309510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211720980.3A Pending CN116128825A (en) 2022-12-30 2022-12-30 Meibomian gland morphology analysis method based on deep learning

Country Status (1)

Country Link
CN (1) CN116128825A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087310A (en) * 2018-07-24 2018-12-25 深圳大学 Dividing method, system, storage medium and the intelligent terminal of Meibomian gland texture region
CN111127431A (en) * 2019-12-24 2020-05-08 杭州求是创新健康科技有限公司 Dry eye disease grading evaluation system based on regional self-adaptive multitask neural network
CN112885456A (en) * 2021-01-20 2021-06-01 武汉爱尔眼科医院有限公司 Meibomian gland quantitative analysis based on deep learning and application thereof in MGD diagnosis and treatment
CN113962978A (en) * 2021-10-29 2022-01-21 北京富通东方科技有限公司 Eye movement damage detection and film reading method and system
CN114343563A (en) * 2021-12-31 2022-04-15 温州医科大学附属眼视光医院 Method, device and system for assisting dry eye diagnosis and typing through multi-modal fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHU Minying et al.: "Construction of an artificial intelligence analysis system for meibomian gland morphology based on convolutional neural networks", 《浙江医学》 (Zhejiang Medical Journal), vol. 43, no. 18, pages 1946-1952 *
JIN Yang: "Early Safety Warning and Protection of Lithium-Ion Battery Energy Storage Power Stations, 1st Edition", 机械工业出版社 (China Machine Press), page 202 *

Similar Documents

Publication Publication Date Title
Bhalla et al. A fuzzy convolutional neural network for enhancing multi-focus image fusion
CN110287846B (en) Attention mechanism-based face key point detection method
CN109255758A (en) Image enchancing method based on full 1*1 convolutional neural networks
CN111489324A (en) Cervical cancer lesion diagnosis method fusing multi-modal prior pathology depth features
CN110675462A (en) Gray level image colorizing method based on convolutional neural network
CN113420794B (en) Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
CN106157249A (en) Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood
CN118097372B (en) Crop growth visual prediction method based on artificial intelligence
CN113486894A (en) Semantic segmentation method for satellite image feature component
CN118379288B (en) Embryo prokaryotic target counting method based on fuzzy rejection and multi-focus image fusion
CN112508814A (en) Image tone restoration type defogging enhancement method based on unmanned aerial vehicle at low altitude view angle
CN114897742A (en) Image restoration method with texture and structural features fused twice
CN116543386A (en) Agricultural pest image identification method based on convolutional neural network
Zheng et al. Overwater image dehazing via cycle-consistent generative adversarial network
CN110992301A (en) Gas contour identification method
CN113810683A (en) No-reference evaluation method for objectively evaluating underwater video quality
CN113643297A (en) Computer-aided age analysis method based on neural network
CN111881924B (en) Dark-light vehicle illumination identification method combining illumination invariance and short-exposure illumination enhancement
CN113128517A (en) Tone mapping image mixed visual feature extraction model establishment and quality evaluation method
CN115187982B (en) Algae detection method and device and terminal equipment
CN116128825A (en) Meibomian gland morphology analysis method based on deep learning
CN116543414A (en) Tongue color classification and tongue redness and purple quantification method based on multi-model fusion
CN115690047A (en) Prostate ultrasound image segmentation method and device based on abnormal point detection
CN114463192A (en) Infrared video distortion correction method based on deep learning
CN111179224A (en) Joint learning-based reference-free evaluation method for aerial image restoration quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination